text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
JSORB - Javascript Object Request Broker use JSORB; use JSORB::Server::Simple; use JSORB::Dispatcher::Path; use JSORB::Reflector::Package; use JSORB::Client::Compiler::Javascript; # create some code to expose over RPC { package Math::Simple; use Moose; sub add { $_[0] + $_[1] } } # Set up a simple JSORB server JSORB::Server::Simple->new( port => 8080, dispatcher => JSORB::Dispatcher::Path->new( namespace => JSORB::Reflector::Package->new( # tell JSORB to introspect the package introspector => Math::Simple->meta, # add some type information # about the procedure procedure_list => [ { name => 'add', spec => [ ('Int', 'Int') => 'Int' ] } ] )->namespace ) )->run; ## Now you can ... ## go to the URL directly¶ms=[2,2] # and get back a response { "jsonrpc" : "2.0", "result" : 2 } ## compile your own Javascript client ## library for use in a web page ... my $c = JSORB::Client::Compiler::Javascript->new; $c->compile( namespace => $namespace, to => [ 'webroot', 'js', 'MathSimple.js' ] ); # and in your JS var math = new Math.Simple ( '' ); math.add( 2, 2, function (result) { alert(result) } ); ## or use the more low level ## JSORB client library directly var c = new JSORB.Client ({ base_url : '', }) c.call({ method : '/math/simple/add', params : [ 2, 2 ] }, function (result) { alert(result) }); __ __ /\ \ /\_\ ____ ___ _ __ \ \ \____ \/\ \ /',__\ / __`\ /\`'__\ \ \ '__`\ \ \ \ /\__, `\/\ \L\ \\ \ \/ \ \ \L\ \ _\ \ \ \/\____/\ \____/ \ \_\ \ \_,__/ /\ \_\ \ \/___/ \/___/ \/_/ \/___/ \ \____/ \/___/ This is a VERY VERY early release of this module, and while it is quite functional, this module should in no way be seen as complete. You are more then welcome to experiment and play around with this module, but don't come crying to me if it accidently deletes your MP3 collection, kills the neighbors dog and causes the large hadron collider to create a black hole that swallows up all of existence tearing you molecule from molecule along the event horizon for all eternity. The goal of this module is to provide a cleaner and more formalized way to do AJAX programming using JSON-RPC in the flavor of Object Request Brokers. Currently this is more focused on RPC calls between Perl on the server side and Javascript on the client side. Eventually we will have a Perl client and possibly some servers written in other languages as well. The documentation is very sparse and I apologize for that, but the test suite has a lot of good stuff to read and there is a couple neat things in the example directory. If you want to know more about the Javascript end there are tests for that as well. As this module matures more the docs will improve, but as mentioned above, it is still pretty new and we have big plans for its.
http://search.cpan.org/dist/JSORB/lib/JSORB.pm
CC-MAIN-2016-50
refinedweb
439
53.24
Stuck with Android LWUIT port. Someone help me please. Hi fellow developers, I should start by saying that I am new to Android development, but I have already around six months experience developing with LWUIT for JME MIDP 2.0. The application I have been working in is about to be ported into Android and I thought it would be nice if I could keep all those nice forms and sweet looking UI. Therefore I read about Android port in the lwuit-incubator forum. I found this interesting and I want to give it a try. I started by downloading the Ant scripts and demo application provided by thorsten_s in his website So far I have do the following: I downloaded the sources for LWUIT IO and UI directly from the LWUIT SVN repository as well as thorsten_s Android sources from the LWUIT-INCUBATOR SVN. Then I modified the Ant script provided by thorsten_s to create a LWUIT-Android source port. <property name="lwuit.repo" value="${basedir}\lwuit\trunk" /> <property name="lwuit.incubator.repo" value="${basedir}\lwuit-incubator\trunk" /> <copy todir="${mysources}" verbose="true"> <!-- LWUIT sources --> <fileset dir="${lwuit.repo}\IO\src"> <exclude name="**/.svn/**"/> </fileset> <fileset dir="${lwuit.repo}\UI\src"> <exclude name="**/.svn/**"/> </fileset> <!-- ANDROID sources --> <fileset dir="${lwuit.incubator.repo}\thorsten_s\android"> <exclude name="**/.svn/**"/> </fileset> </copy> I then follow up by replacing the com.sun.lwuit.impl.ImplementationFactory class from the LWUIT sources with the one provided by thorsten_s I then create an activity which extend com.sun.lwuit.impl.android.LWUITActivity. and use the onCreate() method to initialize the LWUIT Display. Because at this point I am trying to first run the SearchListApp example provided by thorsten_s in his website my Android activity look as follows. import android.os.Bundle; import com.sun.lwuit.Display; import com.sun.lwuit.impl.android.LWUITActivity; public class SearchListActivity extends LWUITActivity implements BaseClass{ private SearchListActivity activity = null; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); if(activity == null){ activity = this; Display.init(activity); new SearchListApp(activity).startMyApp(); } } public void exitMyApplication() { this.notifyDestroyed(); } public final void notifyDestroyed() { try { if (LWUITActivity.currentActivity != null) { LWUITActivity.currentActivity.finish(); } } catch (Throwable e) { e.printStackTrace(); } } } After that I have modified the AndroidManifest.xml and now it looks like this. <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="" package="net.java.lwuit.demo.searchlist" android:versionCode="1" android: <application android: <activity android:name="SearchListActivity" android: <intent-filter> <action android: <category android: </intent-filter> </activity> </application> </manifest> So far, after performing this step and then compiling the project inside IDE (I am using Netbeans with the Android plugin) i am getting no errors.. I get however some warnings concerning the use of varargs but I guess this is fine. Also there was a complain regarding one overlay method call that I did comment in the AndroidVideo class: public AndroidVideo(String url, SurfaceView lwuitView) { super(url); video = new VideoView(lwuitView.getContext()); // video.setZOrderMediaOverlay(true); // TODO: this line was commented out because compiler complained about that method being missing video.setVideoURI(Uri.parse(url)); if (nativeController) { MediaController mc = new MediaController(lwuitView.getContext()); video.setMediaController(mc); } nativePeer = (AndroidPeer) PeerComponent.create(video); } Since my application is not using the video for anything I guess it's okay if I leave this line out =) Okay as I final step I run against a VirtualDevice running Android 1.6. I am using the emulator and I was hoping to see something but the application just start to boot and I am immmediately thrown back to the application menu, without any warnings or error messages, either in the device, or in the debug/developer console from my IDE. So as I told you already in my post title.. I am stuck at this point. I have no idea what to try next and I would happily accept any advise/help you can provide. Thanks. Find attached my netbeans project. Hi there ts! Thank you.. It works like charm.. You make my day. =) hi, I tried to do pretty much the same thing as you. But I am new to android/netbean/ant, don't know how to do. Would you please give me the steps to build the tipster demo. Thanks Hi Hotskin, I would proceed like this. 1. Download android SDK. You will need to choose the target platform that you want to download and be ready to wait, because this download really takes some time. After downloading the SDK you will need to create a virtual device. Think about it as an instance in which your application will be running. For this step you will find plenty of documentation and howto proceed in the Android Developer's website. 2. After that you need to install Netbeans. Netbeans already comes with the Ant build system, so you don't need to take care about that. Unless you decide you will be running build files outside from the IDE. But since I assume you just want to get it working fast, you can jumpstart with the Netbeans and go straight to download the Android plugin. 3. The plugin I'm talking about is called Netbeans 4 Android or Android 4 Netbeans.. Right now I cannot remember , but the thing is that you need to install the plugin and please don't forget to also "Activate" it because it is not "active" by default.. Anyway you will find information about how to proceed in the Netbeans developer website. Notice that perhaps the plugin is not available for the latest, newest version of the Netbeans. I am using the plugin for Netbeans 6.5 in Netbeans 7 and it is working fine. 4. Finally just download the zip file provided by thornsten in his reply. It is completely uptodate and you just need to go through the ant build file and make some few edits to have it running for you.. Hmm.. remember to create the Netbeans project using the existing sources but as an Android project. Regards, Omar Hi, I believe the app is starting and exiting without error because I recently changed the behavior of the implementation and did not update the sample that you downloaded. I still need to update the complete page. For the time being I uploaded the new example and placed it here: It is the Tipster demo from the LWUIT SVN repository. The big difference is that the implementation no longer uses the MIDlet class and you place your implementation directly in your subclass of LWUITActivity. And the build script has been updated because google moved some tools around, breaking the old one.
https://www.java.net/forum/topic/mobile-embedded/lwuit/stuck-android-lwuit-port-someone-help-me-please-0
CC-MAIN-2015-11
refinedweb
1,097
57.77
Structured Exception Handling (SEH) Buffer Overflow Easy File Sharing FTP Server 3.5 On a beautiful Sunday evening, I was laying down in my bedroom watching the latest music video from my all time favourite Indonesian band. The video depicts the band playing field hockey in a parking lot before having a show at that same place later in the evening. It reminded me about a choice I made a few months ago. Recently I have adopted the OKR system in my personal life. Two of the many objectives I had was to be able to play ice hockey and complete OSCE. I have started taking ice skating lessons for about 2 months until I decided that I was going to pause this for OSCE. Long story short, I failed my 1st attempt on OSCE exam. I felt like I have failed myself by not passing the exam. After having a discussion with my manager, he made me realize that it is not about being certified, but its about what I am going to do with the knowledge. One of the things we agreed on is to continue finding bugs in the related topic. I had to start by re-creating old exploits so that I would really understand the concept and get the hang of it. That is why I chose this topic to be my first post. Hopefully you can benefit from this! On this post, I will be exploiting Easy File Sharing FTP Server 3.5. It will cover from fuzzing, until payload execution. However, this post will not cover the payload generated, that is totally up to you to choose which payload! :) Recreating the Exploit The public exploit can be found here. The vulnerable application is also available for download at that link. I know, we have seen what the exploit looks like and we can just run the code to prove that the exploit is actually working. But where is the fun in that?! Assuming that the only thing we know that would cause the application to crash is the character \x2c, which is a comma ,. We can start out by writing a simple python fuzzer. import socketcharacters = ['!','"','#','$','%','&','\'','(',')','*','+',',','-','.','/',':',';','<','=','>','?','@','[','\\',']','^','_','`','{','|','}','~']for char in characters: for i in range(30): print('Character: ' + char + ' amount: ' + str(i*100)) s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect(("192.168.49.136", 23)) s.recv(1024) s.send("USER anonymous\r\n") s.recv(1024) s.send("PASS " + char * 100 * i + "\r\n") s.recv(1024) s.close() After running it, we found out that it will crash when we send \x2c 600 times. If we examine the crash, our payload is just a few address down the ESP, which indicates a SEH buffer overflow. But when we open the SEH chain, we don’t see anything. To verify our hypothesis, instead of sending 600 characters, let’s try again by sending out 3000 characters and examine the SEH chain. This time its clearer as its trying to access an address that looks similar to our payload. Our payload on the stack also seems closer to ESP than before. Finding the offsets The next step is to find the exact offset which overwrites the SE handler so we can control the execution flow. There is an online pattern generator and offset finder that I’d like to use. You can find it here? For the offset, we should generate a pattern of 3000 characters to use on our payload. Note: I will redact the pattern to save space import socket import structpattern = <3000PatternHere>s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect(("192.168.49.136", 23)) s.recv(1024) s.send("USER anonymous\r\n") s.recv(1024) s.send("PASS " + '\x2c' + pattern + "\r\n" ) s.recv(1024) s.close() After firing up this exploit, SE handler is now 68443468 Let’s make one last modification to the exploit to verify that the offset is right. import socket import structs = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect(("192.168.49.136", 23)) s.recv(1024) s.send("USER anonymous\r\n") s.recv(1024) s.send("PASS " + '\x2c' + 'A'*2563 + 'B'*4 + 'C'*(3000-4-2563-1) + "\r\n" ) s.recv(1024) s.close() In the above snippet, I sent 1 comma, 2563 As, 4 Bs and the remaining with Cs. Hopefully our SE Handler will be overwritten by the 4 Bs. Now it becomes interesting. When the exception is handled, EIP will be overwritten with the address in the SE Handler. Since we can control the value in the handler, we can have it execute our own code What Next? When the first crash occurs, pass exception by pressing shift+f7 and pay attention to the stack once again. Notice that address 0x00F3ADD0 points to the last 4 bytes of our As. To reach that point, we need to pop the stack twice so that 0x00F3ADD0 is on top, and then return to it. We need to find a pop pop ret instruction, preferably at the application DLL so that it is more flexible and not OS dependent. We managed to find that at SSLEAY32.dll at address 0x10017f21. Now it’s time to update the exploit again and examine the crash. This time we will put \xcc replacing the last 4 bytes of our As as a breakpoint replacement. Set a breakpoint at 0x10017f21 and step into the next instructions, we will land at our \xcc. The Final Payload Oh yeah, I know we’ve been waiting for this. For the program to actually jump into our payload, we need to jump from where we are in the code (the \xcc) to the actual payload below that. Let’s set a jump of 20 bytes to be safe, and fill nops (\x90) before our payload. The decimal for 20 is hex14. import socket import structs = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect(("192.168.49.136", 23)) s.recv(1024) s.send("USER anonymous\r\n") s.recv(1024)# 10017F20 5F POP EDI # 10017F21 5E POP ESI # 10017F22 59 POP ECX # 10017F23 C3 RETN#Crash initiator payload = '\x2c' #Junk payload += 'A' * 2559 #Next SEH payload += '\xeb\x14\x90\x90'#SEH payload += '\x21\x7f\x01\x10'#Actual Payload payload += 'C'*(3000-4-4-2559-1)s.send("PASS " + payload + "\r\n" ) s.recv(1024) s.close() Congratulations! You have reached your final payload! This gives about 422 bytes of payload that we can enter, hopefully it is enough with what you want to do! I will not be covering the payloads here. I just want to help you get through the exploit development process! Cheers! References Exploit writing tutorial part 3 : SEH Based Exploits | Corelan Team In the first 2 parts of the exploit writing tutorial series, I have discussed how a classic stack buffer overflow works… Buffer overflow pattern generator With this tool you can generate a string composed of unique pattern that you can use to replace the sequence of A's of… wiremask.eu Offensive Security's Exploit Database Archive usr/bin/env python # Exploit Title: Easy File Sharing FTP Server 3.5 stack buffer overflow # Date: 27 May 2014 #… SHORT Jump Instructions Using SHORT (Two-byte) Relative Jump Instructions Copyright © 2004, 2013 by Daniel B. Sedory NOT to be reproduced in… thestarman.pcministry.com
https://fahrishih.medium.com/structured-exception-handling-seh-buffer-overflow-e809cb7d0e5d?source=post_internal_links---------1-------------------------------
CC-MAIN-2022-05
refinedweb
1,223
74.08
. Although there are versions of Visual Studio Code available for Linux and Mac OS X, the ComponentOne server-side data access assemblies depend on the full .NET Framework. Therefore, the steps in this tutorial will only work on a Windows machine. Windows 10 is assumed, although earlier versions should also work with the appropriate framework downloads. Prerequisites If you haven't already, download and install the Windows version of Visual Studio Code. If you have Visual Studio 2017 installed, you should already have the necessary components for .NET Core as well as reference assemblies for the .NET Framework. If you do not have Visual Studio 2017 installed, do the following: - Visit the download page for .NET Core. Make sure that you choose the SDK download, not the runtime, and click the link for the Windows (x64) Installer. - Determine which version of the .NET Framework you will target. Run the winver command to see which version of Windows 10 you are running. For version 1703, the framework version is 4.7. For version 1607, the framework version is 4.6.2. - Visit and download the appropriate .NET Framework Developer Pack. (This may require a restart first.) Finally, do a command prompt and change the current directory, if desired. Execute the following commands: mkdir CubeDemo cd CubeDemo dotnet new mvc The last line creates a new ASP.NET Core MVC project. The first time you run this command, it may take a few minutes to populate your local package cache. To open the newly created project in Visual Studio Code, run this command: code . When the editor has finished loading, its left pane will display the files in your project. Click the main project file, CubeDemo.csproj, to open it for editing. If this is the first time you opened a .NET Core project in Visual Studio Code, you will see the following prompt: Click Show Recommendations to switch to the Extensions pane: Click the Install button for C#, then click Reload to activate it. Click Reload Window when prompted to complete the installation. By default, the newly created project targets .NET Core 1.1. However, the ComponentOne data access assemblies require the full .NET Framework, so you will need to modify the project file. Go back to the CubeDemo.csproj tab and change the target framework from netcoreapp1.1 to net47 (or net462, depending upon which version of Windows 10 you are running), and then add the win7-x64 runtime identifier: <PropertyGroup> <TargetFramework>net47</TargetFramework> <RuntimeIdentifier>win7-x64</RuntimeIdentifier> </PropertyGroup> Next, open the integrated terminal in Visual Studio Code (Ctrl + `). If this is first time you executed this command, you will see the following prompt: Click OK, Never Show Again to keep using Windows PowerShell, then type the command dotnet restore to update the NuGet packages for the new target framework. Create a Debug Configuration Open the Debug pane by clicking its icon. The dropdown control in the toolbar shows No Configurations, so you will need to create one. Click the gear icon to the right of the dropdown. When prompted, select the .NET Core environment. Edit the file launch.json and replace both occurrences of: ${workspaceRoot}/bin/Debug/<target-framework>/<project-name.dll> with the following, where the last two path fragments must match the contents of CubeDemo.csproj, which you edited earlier: ${workspaceRoot}/bin/Debug/net47/win7-x64/CubeDemo.exe Save all open files, then press F1 to open the command palette. Type the word task, then select the command Tasks: Configure Task Runner. A list of environments will appear. Select .NET Core from the list. Return to the integrated terminal and re-enter the command dotnet restore. On the Debug toolbar, switch the launch target to .NET Core Launch (web), then press F5. If you see the following error message, click Close to dismiss it. After dismissing the error, press F1 to open the command palette, then type debug, then select the command Debug: Download .NET Core Debugger. If you see the following error message, you can click Close and ignore it: Note that the .NET Core debugger only works if the target framework is netcoreapp1.x, so you will not be able to use it with this project. However, it is necessary to download and install it to perform builds. To run your project without debugging, press Ctrl + F5. The default browser will open on the site. Close the browser, then return to Visual Studio Code and click Shift + F5 to stop the web server process. Add NuGet Packages Since Visual Studio Code doesn't provide an integrated NuGet package manager, you will need to edit the project file directly to specify external package references. Open CubeDemo.csproj and add the following lines to the end of the existing <ItemGroup> tag: <PackageReference Include="C1.AspNetCore.Api" Version="1.0.20171.91" /> <PackageReference Include="C1.AspNetCore.Api.DataEngine" Version="1.0.20171.91" /> After you save the project file, you should see the following prompt: Click Restore, or return to the integrated terminal and run the dotnet restore command again. Generate GrapeCity Runtime Licenses Before you can use the ComponentOne packages you added in the previous section, you will need to generate runtime licenses for your application. Visit and enter the email address and password for your GrapeCity account. On the next screen, select Evaluation from the Serial Number dropdown, then enter CubeDemo in the App Name box. Click the Generate button to emit a class declaration that looks something like this: public static class License { public const string Key = "ABgBGAIYB4BDAHUAYgBlAEQAZQBtAG8ApRLl1hi2W7vtfkFDXJboBWYdrfKOfg5G" + "3TsNjAbp0UoNuYaZBgAO57j/Q6f+0aIbGMn7q4V0pdIy2Nr0fy2Y9B7RU782QCbJ" + "xRtMaeu4PUQLj+VaOdMJ3M2IHRMZ4qGl1NG7YXAeUtmMBUZWK6/ySM5TSgruyqlN" + "<<lines deleted>>" + "EpNEiC0XFqTaOG2cIOQPeDuF57WNJ4kFeNx4HnEkQ6bWin5spSGIEg=="; } Copy all code from the multiline text box to the clipboard. Go back to Visual Studio Code and switch to Explorer view. Add a new file named Licenses.cs to the project, and insert the following contents: namespace CubeDemo { // paste generated License class here } Replace the comment line with the contents of the clipboard. Next, open the file Startup.cs and locate the ConfigureServices method. Add these lines to the beginning of the method: C1.Web.Api.LicenseManager.Key = License.Key; C1.Web.Api.DataEngine.LicenseManager.Key = License.Key; At runtime, this code will apply your license key to the appropriate ComponentOne artifacts. Ctrl + F5 to run the application, then enter the following URL in a new tab:_13<< Close the browser and stop debugging by pressing Shift + F5 in Visual Studio Code. At this point, you Ctrl + F5 to run the project, and then enter the following URL in the browser: You should see the pivot panel populated with the available fields, and the pivot grid should display rows of countries and columns of occupations. View Results in Visual Studio Code As an alternative to opening a separate browser window, you can configure your project to display the target web site within Visual Studio Code itself. First, add a new file named test.md to the project folder, and insert the following content: <div style="border:1px solid #777;position:absolute;left:10px;top:10px;bottom:10px;right:10px;"> <iframe src="" width="100%" height="100%" frameborder="0" /> </div> Next, open the Debug pane and use the toolbar to change the debug target to .NET Core Launch (console). Press Ctrl + F5 to start the web server. With test.md open, press Ctrl + Shift + V to open markdown preview in an adjacent tab. If you see a button labeled Scripts have been disabled in this document, click it, then choose Enable script execution in markdown previews for this workspace. Close the preview tab, reopen test.md, then press Ctrl + Shift + V again. Conclusion In this blog post, you learned how to use ASP.NET Core MVC in conjunction with ComponentOne Studio to render SSAS cubes in Wijmo pivot controls. Moreover, you learned how to accomplish this with Microsoft's free, lightweight editor, Visual Studio Code.
https://www.grapecity.com/en/blogs/using-asp-net-core-mvc-web-api-render-ssas-cubes-part-2/
CC-MAIN-2017-43
refinedweb
1,306
57.77
.x Version 8.1 Features Not Supported in Version 10.x Scenarios Not Supported by Upgrade Many of the changes from the version 8.1 to the version 10.x project model are intended to align the model with broadly used Eclipse and Java conventions. If you've used other Eclipse-based IDEs, version 10.x of Workshop should feel familiar. Note: You might also be interested in the reading Key Differences for WebLogic Workshop 8.1 Users. The following list summarizes the most significant differences between version 8.1 and version 10.x project models. One of the largest aspects of this upgrade is the change from the Javadoc-comment style annotations used in version 8.1 to the Java 5 annotations supported for the version 10.x. For more information, see Upgrading Annotations. Most of the version 8.1 annotations have counterparts in version 10.x; for more information, see Relationship Between Version 8.1 and Version 10.x Annotations. These features have updated counterparts in version 10.x and may be deprecated. Impacts: Web services Upgrade strongly recommended. In version 8.1, web service message-level security is managed using WS-Security (WSSE) policy files. In version 10.x.x service control differs from version 8.1. In version 8.1, you specified both security policies and values in a WebLogic Workshop WSSE policy file. In version 10.x, the means for specifying characteristics for these two aspects of security has been split into multiple locations. For information on where security characteristics are now set, see Configuring Run-Time Message-Level Security Via the Service Control. In the course of upgrading your version 8.1 applications, you might find that some of your application's security-related characteristics differ between its behavior on the domain shipped with Workshop and the upgraded domain to which you redeploy it. This is because the "new" 10.x domain included with Workshop is not backward compatible, whereas the upgraded domain to which you deploy your upgraded application is (because it has been upgraded)..x,.x a conversational web service must have both a START and FINISH operation in order to compile. For more information, see Ensuring START and FINISH Methods for Conversations. Impacts: Web services Version 10.x Due to a difference in the way versions 8.1 and 10.x generate the default targetNamespace value for web services, you may enounter a deployment error if you have two or more web services with the same class name in an application. For web services in version 10.x, WebLogic Server uses the fully-qualified port name — which includes the web service's targetNamespace value — to bind resources it uses internally. As a result, the port name must be unique within an application. Workshop upgrade tools do not upgrade reliable messaging support (such as the @jws:reliable annotation) from version 8.1 to version 10.x. As noted in the version 8.1 documentation, that version's reliable messaging (RM) support was very limited and was not based on a specification that would be supported in future versions. You can manually upgrade reliable messaging support. See Upgrading Reliable Messaging Support — Basic Instructions for high-level upgrade steps. Version 9.x doesn't support using the form-get and form-post message formats to receive messages sent from an HTML form. When upgrading web services that use these formats, you'll need to use another method for receiving data sent from a form in a web browser. In other words, version 10.x web services do not support message formats that do not include SOAP headers. In version 8.1, the @jws:protocol annotation supported the following attributes and values: @jws:protocol form-get="true"— Indicated that the operation or web service supported receiving HTTP GET requests. @jws:protocol form-post="true"— Indicated that the operation or web service supported receiving HTTP POST requests. These attributes have no counterparts in version 10.x and there are no suggested workarounds. If you upgrade to version 10.x, upgrade tools will simply ignore a protocol setting that isn't supported. Impacts: Web services If you created a version 8.1 web service by generating it from a WSDL that specified xs:anyType instead of xs:any, the web service will expect and send incorrect XML payloads after upgrade to version 10.x. You can ensure correct handling of xs:anyType by applying the .x does not support abstract WSDLs. As a result, Workshop.x.x, controls must be annotated with the @TransactionAttribute annotation for transaction support. During upgrade, upgrade tools will add this annotation. For information on adding transaction support, see Enabling Automatic Transaction Support in Controls. Impacts: Control use The standard JdbcControl (org.apache.beehive.controls.JdbcControl) in version 10.x, Oracle..x,.x control model to version 10.x.x, the default is not to create the transaction. For information on ensuring the old default behavior, see Enabling Automatic Transaction Support in Entity Beans. Impacts: Potentially any code that uses annotations Unlike version 8.1, in version 10.x NetUI tags, some changes might be made to JSP tags if you choose to upgrade them. For more information, see Changes When Upgrading from Version 8.1 NetUI JSP Tags. Impacts: JSP files A backward-compatible version is set as the default for the upgraded application because it is more permissive than the version 10.x, version 10.x JSP tags. This includes migrating from the NetUI expression language syntax to the syntax of the version 10.x JSP expression language. However, note that expressions in the version 10.x JSP expression language are unable to bind to public fields, as was the case with NetUI expressions. For a full upgrade to version 10.x JSP tags, in other words, you must manually add get* and set* accessors where public fields were used. For more information on these differences, see Changing Code to Support the Expression Language.x project through which the schemas continue to be compiled into XMLBeans types. Impacts: XML schemas Version 10.x. Impacts: Code that uses XMLBeans and XQuery, including web services, controls, page flows, Enterprise JavaBeans The older XQuery implementation is deprecated, but supported in this version for backward compatibility. Queries based on the older implementation will be kept, but a special XmlOptions parameter will be added to specify that the old implementation should be used..x. You can migrate your modifications after upgrading your application by re-exporting your Ant build file, then merging in your modifications. Note: Unlike version 8.1, version 10.x does not support building your application with Ant from within the IDE. You must export the build file and execute targets from the command line. For details see Creating Ant Build and Maven POM Files for an Application. In version 10.x.x of Workshop.x: upgrade tools don't upgrade wildcard import statements, some of these statements will generate errors on version 10.x because their libraries are not present there. In some cases, you can fix these by replacing them with their version 10.x counterparts. After upgrade, you might find that some import statements with wildcards have been changed to reflect their nearest parallel in version 10.x. These features are no longer supported. The following describes functionality that version 8.1 supported, but which must be rewritten in order for upgraded code to work in version 10.x. Impacts: Web services, Service controls Version 10.x. Note: The lack of support for XQuery maps does not mean that XQuery itself is not supported. You can still execute XQuery expressions using the XMLBeans API. For more information on upgrade changes impacting this API, see Updating XQuery Use to Support Upgraded XQuery Implementation. For more information, see General Steps for Replacing XQuery Maps. Impacts: Web services Version 8.1 supported returning instances of java.util.Map from web service operations. The runtime provided a WebLogic Workshop-specific serialization of the Map to and from XML. The schema for that serialization was included in the WSDL for the Web Service. In version 10.x, java.util.Map instances can no longer be returned from web service operations. For a suggested workaround, see Replacing the Use of java.util.Map as a Web Service Operation Return Type. Impacts: Web services Unlike version 8.1, version 10.x.x.x).x so that it either uses the SOAP protocol or some alternative. For information on changing the message format in web services, see Details: Non-SOAP XML Message Format Over HTTP or JMS is Not Supported. Impacts: Web services In version 8.1 the @jc:handler and @jws:handler annotations included a callback attribute that specified handlers to process SOAP messages associated with callbacks; version 10.x does not include callback-specific handler support. For the counterparts of these annotations in version 10.x, see Upgrading Annotations. Impacts: Web services Version 10.x.x If a version 8.1 web service includes one or more operations that use the RPC SOAP binding and one or more operations that use the document SOAP binding, then after upgrade types generated for those operations will be placed into different namespaces. This will be different from the version 8.1 web service itself, in which the types were in the same namespace. A WSDL generated from the upgraded web service will differ from the version 8.1-generated WSDL. For more information, including a workaround, see Resolving Namespace Differences from Mixed Operations with Document and RPC SOAP Bindings. Supported but deprecated. Upgraded web service and Service control code will include the @UseWLW81BindingTypes and/or @WLWRollbackOnCheckedException annotations applied at the class level. Even though these annotations are deprecated, they are required in order to support clients that used the version 8.1 code. Impacts: Web services In version 8.1 it was possible to have a web service (JWS) or Service control whose WSDL defined multiple services. The web service or control would represent only one of these services. When upgrading such code to version 10.x upgrade will fail. To ensure that upgrade succeeds for this code, you should edit the WSDL so that it defines only the service that is represented by the JWS or Service control. Even though it was supported for WSDLs associated with version 8.1 service controls, in version 10.x multiple occurrences of the <wsdl:import> element is not supported in the same WSDL. For example, you might have used one import to get WSDL portions of the WSDL and another import to get XSD portions for needed types. More information and a workaround is available at Upgrading WSDLs with Multiple <wsdl:import> Statements. Version 8.1 supported using types in the 1999 schema namespace for service controls and web services generated from WSDLs that used the types. Because version 10.x does not support types in this namespace, you will need to manually migrate the WSDL to the 2001 namespace..x,.x JDBC control does not support this value; instead, specify a numerical value. This change is not automatically made by upgrade tools. For more information, see Replacing "All" Requests for Database Control Results. Impacts: Control use Version 10.x.x,.x.x timer control simplifies the API by disallowing multiple calls to the start method. Calls to the start method when the timer control is still running will have no effect. Impacts: Control use, page flows In versions.x. In upgraded code, you can substitute for this functionality by replacing your custom control with a web service. For more information on the version 8.1 feature and suggested version 10.x workarounds, see Providing Support for Callbacks from a Page Flow. Impacts: Web services, control creation These are events and APIs that were either considered low usage or were not general enough in nature to be included in the controls runtime. In situations such as creation Version 8.1 custom control annotation definitions are not upgraded to version 10.x..x. During upgrade your project hierarchy will be changed so that the Controller file is no longer co-located with JSP files. For more information, see Upgrade Changes for Co-Location in Page Flows. The version 8.1 IDE included a set of features through which the IDE kept related files in sync. For example, after generating a service control from a WSDL, changes to the WSDL would cause the IDE to automatically re-generate the service control to match. This functionality is not supported in the version 10.x IDE. For suggested workarounds, see Keeping Files in Sync in the Absence of IDE Support. If you created custom JSP tags in version 8.1 by extending NetUI tags, your tags will not be upgraded by Workshop tools. Extending NetUI tags was not supported. Note that if you elected not to migrate NetUI tags to current tags, your tags may build within the application, but may not work as expected. Likewise, extending JSP tags in version 10.x
http://docs.oracle.com/cd/E15051_01/wlw/docs103/guide/upgrading/conChangesDuringUpgrade.html
CC-MAIN-2014-35
refinedweb
2,168
51.65
On Thu, Apr 05, 2007 at 02:47:21PM +0100, Joel Reymont wrote: > Here's the output from -ddump-splices (thanks Saizan for the tip). > > It's returning a1 instead of a0. > > ghci -fth -e '$( _derive_print_instance makeFunParser '"''"'Foo )' > baz.hs -ddump-splices > baz.hs:1:0: > baz.hs:1:0: Splicing declarations > derive makeFunParser 'Foo > ======> > baz.hs:30:3-28 > instance {FunParser Main.Foo} where > [] > { parse = choice > [(>>) > (reserved ['F', 'o', 'o']) > ((>>) > (char '(') ((>>=) parse (\ a0 -> (>>) > (char ')') (return (Main.Foo a1)))))] } > > baz.hs:30:3: Not in scope: `a1' Sorry for the late multiple reply, I just spent seven hours sleeping... I am not the maintainer of Data.Derive, nor did I write the majority of the nice code; Neil Mitchell did it, you can ask him "why replace DrIFT". However, using abstract syntax trees WAS my idea. First, _derive_print_instance will never give you a TH splice error, since it always evaluates to an empty list of declarations. It uses the TH 'runIO' facility such that type-checking a file using _derive_print_instance will emit the instances to standard output as a side effect. So the error is coming from the $(derive) in baz.hs, if you have more errors try commenting it out. (you'll get bogus code on stdout, but at least it will be completly haskell!) _derive_print_instance was not intended to be a debugging aid, although it certainly works well in that capacity. The intent is that it will be used when the standalone driver is rewritten to use TH, which I intend to do not long after I can (Neil is out of communication for a week with intent to continue hacking Derive; I'm taking this as a repository lock). Yes, we do use type classes to implement recursion across types. This seems to be a very standard idiom in Haskell, used by Show, Read, Eq, Ord, NFData, Arbitrary, and doubtless many more.
http://www.haskell.org/pipermail/haskell-cafe/2007-April/024158.html
CC-MAIN-2014-23
refinedweb
315
74.49
#include <stdlib.h> #include <iostream> #include <cstring> #include <string> /*) { sql::Driver *driver; sql::Connection *con; sql::Statement *stmt; sql::ResultSet *res; /* Create a connection */ driver = get_driver_instance(); con = driver->connect("tcp://127.0.0.1:3306", "root", "dddd"); con->setSchema("Filter"); stmt = con->createStatement(); string No = "111"; res = stmt->executeQuery("SELECT * FROM Tasks WHERE Notes = '"No.c_str()"' "); while (res->next()) { cout << "Notes " << res->getInt(1) << " " ; cout << "ID " << res->getString("TaskID") << endl; } delete res; delete stmt; delete con; return EXIT_SUCCESS; } Hey guys , the problem which i am facing is that i would like to use a variable in order to base my SELECT statement upon. I am not sure how to go about getting it to be accepted by the SQL syntax. Hope for some help here. I tried using c_str() , but it does not seem to be working. How can i fix this in order for my SELECT statement to search based on my variable.
https://www.daniweb.com/programming/software-development/threads/356765/c-sql-select-statement
CC-MAIN-2019-04
refinedweb
154
60.95
A few weeks back I created an incredibly practical and not silly at all application that went through your device's contact list and "fixed" those contacts that didn't have a proper picture. The "fix" was to simply give them a random cat picture. That seems totally sensible, right? I was thinking about this during the weekend and it occured to me that there is an even cooler way we could fix our friends - by turning them all into superheros with the Marvel API. I've built a few apps with this API in the past (I'll link to them at the end) and I knew it had an API for returning characters. I thought - why not simply select a random character from the API and assign it to each of my contacts without a picture? The Character endpoint of the Marvel API does not allow for random selections so I hacked up my own solution. First, I did a generic GET call on the API to get the first page of results. In that test, I was able to see the total number of characters: Given that I assume, but certainly can't verify, that they have IDs from 1 to 1485, I decided to simply select a random number between them. (I ended up going a bit below 1485 just to feel a bit safer.) I figured this would be an excellent use of OpenWhisk, so I wrote up a quick, and simple, action: var request = require('request'); var crypto = require('crypto'); const publicKey = 'my api brings all the boys to the yard'; const privateKey = 'damn right its better than yours'; /* This number came from searching for characters and doing no filter. The API said there was 1485 total results. I dropped it down to 1400 to allow for possible deletes in the future. */ const total = 1400; function getRandomInt (min, max) { return Math.floor(Math.random() * (max - min + 1)) + min; } function main() { return new Promise(function(resolve, reject) { let url = '' +publicKey+'&offset='; let selected = getRandomInt(0, total); url += selected; // add hash let ts = new Date().getTime(); let hash = crypto.createHash('md5').update(ts + privateKey + publicKey).digest('hex'); url += '&hash='+encodeURIComponent(hash)+'&ts='+ts; request.get(url, function(error, response, body) { if(error) return reject(error); let result = JSON.parse(body).data.results[0]; let character = { name:result.name, description:result.description, picture:result.thumbnail.path + '.' + result.thumbnail.extension, url:'' }; if(result.urls && result.urls.length) { result.urls.forEach(function(e) { if(e.type === 'detail') character.url = e.url; }); } resolve(character); }); }); } exports.main = main; I deployed this to OpenWhisk as a zipped action since crypo wasn't supported out of the box. (As an aside, that's wrong, but it's a long story, so don't worry about it now.) I then used one more wsk code to create the GET API, and I was done. And literally, that's it. 55 lines of code or so and the only real complex aspect is the hash. I do remove quite a bit of the Character record just because I didn't think it was necessary. I'm returning just the name, description, picture, and possibly a URL. You can see this in action here: So yeah, I'm building something totally stupid and impractical here, but I freaking love how easy it was to deploy the API to Bluemix. As I said, I've got 50ish lines of code and I'm done, and as a developer, I think that royally kicks ass. Ok, so what about the app? I'm not going to go through all the code since I shared it in the earlier post. The basics were - get all contacts, loop over each, and if they don't have a picture, "fix it", so let's focus on that code block. Contacts.find(["name"]).then((res) => { res.forEach( (contact:Contact) => { if(!contact.photos) { console.log('FIXING '+contact.name.formatted); //console.log(contact); proms.push(new Promise( (resolve, reject) => { this.superHero.getSuperHero().subscribe((res)=>{ console.log('super hero is '+JSON.stringify(res)); this.toDataUrl(res.picture, function(s) { var f = new ContactField('base64',s,true); contact.photos = []; contact.photos.push(f); contact.nickname = res.name; if(!contact.urls) contact.urls = []; contact.urls.push({type:"other",value:res.url});(); }); }); Previously the logic to handle finding a random cat was synchronous, but now we've got an asynch call out to my service so I had to properly handle that in my loop. Everything is still wrapped in a Promise though because I'm still converting the image to base64 for storage on the phone. (And that's probably a violation of the API, but I'm not releasing this to the market, so, yeah.) Outside of that, the code is the same. I call the API in this simple provider: import { Injectable } from '@angular/core'; import { Http } from '@angular/http'; import 'rxjs/add/operator/map'; import 'rxjs/add/operator/toPromise'; @Injectable() export class SuperHero { apiUrl:string = ''; constructor(public http: Http) { } getSuperHero() { return this.http.get(this.apiUrl + '?safaricanbiteme='+Math.random()).map(res => res.json()); } } So what's with the random code at the end? See this post about a stupid Safari caching bug that impacts it. If you want to see the rest of the Ionic code, you can find it here: And the result? Pure awesomeness. (Ok, maybe just to me.) If your curious about my other uses of the Marvel API, here are a few links:
https://www.raymondcamden.com/2017/01/18/all-my-friends-are-superheroes/
CC-MAIN-2018-17
refinedweb
913
55.54
Free "1000 Java Tips" eBook is here! It is huge collection of big and small Java programming articles and tips. Please take your copy here. Take your copy of free "Java Technology Screensaver"!. JavaFAQ Home » Java Notes by Fred Swartz A standard approach to shuffling the elements of an array is to write some code as described below. As of Java 2 the java.util.Collections class contains a static shuffle(list) method, which is described at the bottom. java.util.Collections shuffle(list)<52; i++) { cards[i] = i; } //--- Shuffle by exchanging each element randomly for (int i=0; i<52; i++) { int randomPosition = rgen.nextInt(52); int temp = cards[i]; cards[i] = card[randomPosition]; cards[randomPosition] = temp; } Java has extensive classes for handling data structures, the so-called "Collections" classes. Most of these do not work on simple arrays of primitive values. If your data is already in one of the collections classes, eg, ArrayList, then you can use the java.util.Collections.shuffle() method easily. Arrays of objects can easily be converted to ArrayLists by using the java.util.Arrays.asList(. . .) method. For example, the above randomization could be accomplished with the following. java.util.Collections.shuffle() java.util.Arrays.asList(. . .) import java.util.*; . . . Collections.shuffle(Arrays.asList(cards)); RSS feed Java FAQ News
http://www.javafaq.nu/java-article719.html
CC-MAIN-2015-14
refinedweb
216
50.73
You're using JSX every days without a clue how React does his magic ? Ever wondered why we do have to wrap our elements in a parent ? (JSX expressions must have one parent element. 🤯🤯🤯) Well, this article is for you. I'll do my very best to explain it to you, as I understood it. Keep in mind that's nobody is perfect, and if there's any mistake made, feel free to discuss it on Twitter, we're all learning everyday :D. How does our JSX work ? First things first, we have to make sure that you actually know how to insert new elements into your HTML with JavaScript. If you already know that, feel free to skip, if you don't, well ... Keep reading. In a regular HTML/JS website, here's how you would do : <body> <div id="root"></div> </body> <script> // You first get the targetted element const parent = document.getElementById("root"); // Then you create the new one const newChildren = document.createElement("div"); // And you append it to the parent it belong to parent.appendChild(newChildren); </script> Pretty straightforward, right ? But you noticed that it does create an empty element, you probably want to add at least some text, and even some attributes such as an id. <body> <div id="root"></div> </body> <script> const parent = document.getElementById("root"); const newChildren = document.createElement("div"); newChildren.setAttribute("id", "children"); newChildren.innerHTML = "Hello World !"; parent.appendChild(newChildren); </script> Your HTML page would now render a div, with an id of 'children', containing the 'Hello World' text, and so on for any other elements you want to create (you could write functions to help you out, but that's not the point or this article). It can become a mess really quickly, right ? You would have to handle every attributes you want to add, every listeners, etc. You get the idea. Now, how does React work ? React does expose to use 2 libraries for the web developement : React and ReactDOM. Let's say you do have initialized your React project from create-react-app and it's running properly. Ultimately, once you have removed everything that is not necessery, you have a code looking like this : import React from "react"; import ReactDOM from "react-dom"; import App from "./App"; ReactDOM.render( <React.StrictMode> <App /> </React.StrictMode>, document.getElementById("root") ); Let's get rid of the abstraction for now, and remove the JSX syntax, we'll come back on it later. import React from 'react'; import ReactDOM from 'react-dom'; import App from './App'; ReactDOM.render( /* Insert your elements here */ , document.getElementById('root') ); This function is the entry point of your React App. You are telling React to get a div with the id of 'root', and to render inside of it everything you'll pass as the first argument. Now, how do we actually create elements ? This render function won't accept anything that is not a valid React element. Let's get into it with the raw React API we imported. const element = React.createElement("div", null, "Hello World"); ReactDOM.render(element, document.getElementById("root")); The create element function take 3 arguments : - The type of the HTML element you want to create (div, span, input...) - Some props that I'll explain juste after. For now the value is null as we don't want any - And the children which is basically anything that will be inserted inside this new element created. Now, what if we want to give an id to this element ? const element = React.createElement("div", { id: "children" }, "Hello World"); This is where the second argument is used. It does accept an object of properties that will be applies to your element, here we added an id, but you could do it for a class, or some specific attributes for your tag. Even on onclick event ! const element = React.createElement( "div", { id: "children", onClick: () => console.log("Hello"), }, "Hello World" ); Way better than the regular JS declaration. (As a side note, keep in mind that the last parameter is not mandatary, and you could give in the props with the children key) React.createElement("div", { children: "Hello World" }); What if we have more than one child to render inside your div ? const element = React.createElement("div", { id: "children", onClick: () => console.log("Hello"), children: [ React.createElement("span", { children: "Hello World, ", style: { color: "red" }, }), React.createElement("span", { children: "this is ", style: { color: "green" }, }), React.createElement("span", { children: "Foo Bar !" }), ], }); The children property accept an array of elements, and obviously you could do that as long as you want, and this is actually how your JSX code look like in reality. If you have been using React for a bit before reading this, you should now have a better insight of why you're doing certain things (such as style={{color: 'red'}}), but we'll come on it later. Well, I ain't writing that anyway, how does this is useful ? Indeed, this is pretty annoying to write, and nobody using React will use it with the raw API. That's where React introduced JSX. JSX is basically a sugar synthax for writing the code above, thanks to Babel. (If you don't know what Babel is, it bascially take your code, and convert it into a browser compatibile version, more infos here). So if you write that : const component = () => <div id="children">Hello World !</div>; It'll actually be compiled by Babel as : const component = React.createElement("div", { id: "children" }, "Hello world"); Now, what if we rewrite the previous example with the list of elements in JSX ? It would look like this : const component = () => ( <div id="children"> <span style={{ color: "red" }}>Hello World, </span> <span style={{ color: "green" }}>this is </span> <span>Foo Bar !</span> </div> ); Amazing, isnt it ? It's way cleaner than the raw React API. Let's recap what we learned so far, but starting from JSX : You write your JSX code, which is compiled by Babel to make actually readable. The result is a call to the React.createElement() function with the correct parameters. And now what ? Well, React is doing one more trick for us : He's making one more abstraction, and doing the document.createElement() function for us. As an exemple, I've been digging and I found a pseudo code wrote by Dan Abramov. var node = document.createElement(type); Object.keys(props).forEach((propName) => { if (propName !== "children") { node.setAttribute(propName, props[propName]); } }); children.filter(Boolean).forEach((childElement) => { var childNode = mount(childElement); node.appendChild(childNode); }); We see that React is doing exactly what we did at the beginning, create a new node, setting attributes if needed and appending it into the DOM with the help of the virtual DOM (I'll probably talk about it in another blog post). You can also find the full pseudo code here. Miscellaneous Why I am passing an object for style inside the JSX ? Whenever you'll want to apply inline style to your JSX element, you'll have to wrap the styles inside an object. Why ? Because doing the following won't make any sense : const element = React.createElement( 'div', { id: 'children', onClick: () => console.log('Hello'), // Your browser would obviously complain style : color : red }, 'Hello World'); Right ? And this is exactly what you're telling Babel to do by writing this : <div style={color: 'red'} >Hello World</div> And that's also why you can't embed any kind of statements inside your JSX, such as if...else. How Babel does understand the difference between an html tag, and a custom component ? By capitalizing your component. That's all. If you create a component without capitalizing it, Babel will understand it as a potention html tag, and so will create it. <component>My custom component</component> Not what we wan't. Why do we need to wrap our elements into one parent ? It's because on how the React API work, let's say you write this : const component = ( <div>Hello </div> <div>World !</div> ); React will complain about having a parent, because Babel will compile it this way : const element = React.createElement('div', { id: 'children', children: 'Hello World !', 'ae' } ); Weird again, right ? You could wrap everything into an array, and return this way : const component = [ <div>Hello </div> <div>World !</div> ]; But this is not really how it is suppose to be used, hence why you need a parent. Ending We'll wrap it up for now, hope your enjoyend and learnt something. Again, feel free to send my feedback nor mistakes, I'll appreciate it ! Have a nice day. Discussion
https://dev.to/spartakyste/how-does-react-work-under-the-hood-jsx-wise-44pd
CC-MAIN-2020-45
refinedweb
1,426
67.04
One thing that has always pinched me is the long URLs that I used to see in several of my projects. Due to better manageability, I used to have several folders, i.e. folder hierarchy for maintainability of my application. But the thing that I didn't ever like was the full physical path of pages in URLs. There are other ways which allow us to get rid of it like URL rewriting, but I didn't like that. We could also have our own custom URL route handler or some third party solution, but they have never been strong enough for an application. Frankly speaking, one of my clients asked me several times, "Why does this .aspx extension displays in the URL" and also told me that he does not like these long URLs. I used to tell him, this is the normal behavior of ASP.NET application. But we can change it with our own custom solution. However, that will require a little bit of time and also require a lot of testing. But when Microsoft released ASP.NET MVC 2 with .NET Framework 3.5 SP1, and provided this Routing feature with it, I was very happy and started exploring it. In the meantime, I thought that it should also be provided with ASP.NET, because every time you may not require to follow MVC architecture and also found that MVC has nothing special to do with URL Routing. So I was expecting and waiting for ASP.NET Webform Routing. Now Microsoft introduced URL Routing with ASP.NET 4.0. URL routing is fully integrated, very powerful and straightforward. We access our webapplication using some URL that is normally the physical path of the pages. So URL Routing is a way to provide our own URL in lieu of the physical path of the page. One other way, Routing allows us a way to configure our application to accept a requested URL which actually doesn't map to physical files. From the security perspective of the application, it's important because one can easily know the solution structure of the application. In 1999, on usability Jakob Nielson wrote on his blog "URL as UI" 6 essential points about URL. These are: As from all these points, we can see the URLs as a part of user interface and it should be simple and easy. URLs should also made us visualize the structure of our application which might be a security concern for us, etc. ASP.NET 4.0 provides us the features, taking care of all the points mentioned above. Also, these URLs helps in SEO (Search engine optimization) and improve the page hits but put in the appropriate keywords. Earlier, URL routing was not so easy. For that, we require to have our own custom handler in a way that whenever a URL is requested, our custom route handler class should be invoked, and forward the request to the appropriate requested page or we could have some third party solution. So let's say if we are going to develop our own custom handler, then what do we need to do. HTTPhandler So you can imagine that it was not a straightforward task. It also requires some considerable amount of effort to develop it. As I already discussed in my introduction section, that ASP.NET 4.0 provides us a simplified and robust way to handle the entire URL routing mechanism. To provide URL routing, ASP.NET is now equipped with myriad classes and a number of methods to provide this feature, which allows to easily decouple the URL with physical files. We just need to use them. ASP.NET 4.0 router enables to define any kind of custom routes and we can easily map it to our webform page. ASP.NET 4.0 also made our life simpler by providing us the feature "Bi- Directional routing" with the help of several components like Route Table, Page Routehandler and ExpressionBuilders. With the help of the Route table, we can not only decode the Routed URL with the Route table and with the help of other methods provided by the ASP.NET 4.0, we also generate the URL with the ASP.NET routing mechanism, which gives us the opportunity, not to hard code the URL at several places, rather it will be dynamically generating the URLs with the help of Routing Definition. So we just require to change the Route Table on any changes in the URL, and don't need to change it in several other places throughout the solution. There are two main components of ASP.NET 4.0 URL Routing. This is basically a normal HTTPHandler, which is responsible for looking into all the incoming URL requests, and looking for any Routing definition available for the URL, if yes, then pass the request and data to the corresponding resource. HTTPHandler Expressions are provided with ASP.NET 4.0 to facilitate Bi-Directional Routing and more. Basically there are two types of ExpressionBuilders. ExpressionBuilders RouteURLExpressionBuilder RouteName Parameter RouteValueExpressionBuilder RoutedURL There are also a few new propertiesHttpRequest.RequestContext and Page.RouteData which facilitate the availability of the parameters to all resources. HttpRequest.RequestContext Page.RouteData In this example, I will be displaying the image and name of the book on my web page. Here, I will be going to an ASP.NET application and will take you step by step to create the application. Note: VS2010 Beta 2 System.Web.Routing Define the Route in Application_Start of Global.asax. Also include namespace System.Web.Routing. Application_Start namespace System.Web.Routing. void Application_Start(object sender, EventArgs e) { // Code that runs on application startup RouteTable.Routes.MapPageRoute("StoreRoute", "BookStore/{Name}", "~/Webpages/BookStore/ViewBookDemo.aspx"); } RoutURLExpressionBuilder Label Text Here, I fetch the parameter from the Routing Table and set the Image1 URL dynamically.(Include namespace System.Web.Routing): Image1 //fetching the parameter that is Route table string name = Page.RouteData.Values["Name"] as string; if (name != null) { if (name == "CSS") { Image1.ImageUrl = "~/images/css.jpg"; } else if (name == "Django") { Image1.ImageUrl = "~/images/django.jpg"; } else if (name == "IPhone") { Image1.ImageUrl = "~/images/iphone.jpg"; } else if (name == "Linq") { Image1.ImageUrl = "~/images/Linq.jpg"; } } } To view the entire code, please download the attachment. Now my solution hierarchy in the demo application is: And according to it, my URL should be as: Now when a user requests this application, the request is processed as: request Feedback is the key for me. I would request all of you to share your feedback and give me some suggestions, which would encourage and help me in writing.
http://www.codeproject.com/Articles/77199/URL-Routing-with-ASP-NET-4-0?fid=1570365&df=90&mpp=10&sort=Position&spc=None&select=4342171&tid=4423344
CC-MAIN-2015-35
refinedweb
1,104
57.47
hi, i have make a complete toolchain works using cygwin on win2000, targeting to powerpc, thanks for your kindly help.:-) >Liu Wensheng wrote: >> >> i use a cygwin1.3.5 on win2000 to build a toolchain for powerpc target. >> on host i use : > > My opinion has always been that RedHat has a much better and much more >Unix-like build environment for producing Cygwin-binaries on a PC : their >"Redhat Linux", provided with a Linux-x-Cygwin cross-toolset. So helping >people "to shoot into their own legs" by couraging them to use Cygwin as >the build system is not a bright idea... Some tasks become very hard in >a system which for instance cannot see a difference between 'Gnu.java' >and 'gnu.java'. The GNU tools are thought to be built under a real Unix- >system... > > The speed differences between Linux and NT/Cygwin on the same PC are >about 4 - 8 fold, so a one-hour build under Linux may become a build >lasting the whole workday. If trying to get any productivity (4 or more >toolsets per day), not using Cygwin is a must... this is really a problem. Surely, Redhat Linux is preferable in nearly every aspects compared to cygwin. But my initial intention is for the eas of host side embedded developer. the target is aimed at not too many platform. for example PPC. so i can group the tools together to run on windows host. just as users use Visual C. do you think this is applicable? and give me some suggestions. My next job is to port eCos to my 8240 boards. As my wish to bring convenience to developer mentioned above, i should do all this work on win2000 .(if you want to let others to go this way, why not you cut the way out first? :-) ....). but i have met problem when compiling eCos1.3.1. It alerts me that i will meet many similar questions, i am stubbed into these questions, i will gradually lost the target of "Porting Ecos" task. so i want to change to Redhat platform. > > And when understanding to use for instance Linux as the 'Unix-like' >build-system, not using Cygwin even as the run-time system comes easily >into one's mind. The Mingw-alternative which uses the native CRTDLL.DLL >or MSVCRT.DLL run-time C-libraries may then be the better and more >Windows-like environment. Even Win3.1x/Win32s is then possible as the >run-time host for GUI-only apps like Insight-GDB... Windows is the best >Windows, a real Unix the best Unix... cygwin , Mingw, you think which is more suitbable for my needs? > > So I advice you to ask from the Windows-nerds about the peculiarities of >the 'Cygwin-Unix' on the Cygwin-maillist, if needing any help with Cygwin >as a build system... > > Your problems here however are quite generic, so... > >> the source file is : >> binutils-2.11.2 binutils-2.10.1 >> gcc-2.95.1 >> newlib-1.9.0 >> gdb-5.0 >> >> after setting up the target and prefix environment variables, > > Why any extra permanent environment settings? Only a > > .../configure --prefix=my_prefix --host=my_host --target=my_target > >should be enough... > >> i do as the instructions. > > What instructions? > >> after i successfully make and install the binutils, the binutils's version is must up to date, i can cause problem. my successful version is changed to binutils2.10.1 . gcc2.95.3-- newlib1.9.0 --gdb5.1 > > The target headers, where are they now? > > The official 'instructions' in the Stallman's "Using and Porting the GNU >Compiler Collection (GCC)", ie. the 'GCC Manual', clearly say that the >target headers and libs (if available) should be preinstalled into the >'/usr/local/$target/include' and '.../lib', when using the default $prefix, >'/usr/local' (ie. not defining any kind of own $prefix). A simple logic >then says that the '$prefix/$target/include' is the preinstall place for >the target headers... ("Installation / Cross-Compiler") > >The newlib-1.9.0 sources definitely have the target headers in the: > > newlib/libc/include > >and it is a 'piece of cake' to copy them with a single command into their >final place. No prebuilt libs of cources, but after doing the > > make all-gcc ; make install-gcc this is the stage of making "bootstrap gcc" as explained in the link. > >in order to build and install GCC, building and installing newlib is >possible. Then the extra 'libiberty' and 'libstdc++' libs in the GCC >sources can be tried... > > Although you have a GCC, perhaps you have thought that the GCC-manual is >unnecessary. But it isn't... A SAQ ("Seldom Asked Question") is the "Where >in the hell is the GCC Manual?"... I have always wondered why the manual is >not interesting... I have a WinHelp-one, produced with 'makertf' and 'HC31.EXE' >from the source 'gcc.texi', partly under Linux, partly under Windows, and >used under Windows, but the net is full of PDF, PS, HTML etc. versions... >The GNUPro-docs (PDF) from RedHat are quite popular for the GNU tools. > > > >> i go on to second step to compile gcc. but when compiling, the following >> error messages occurs: >> >> _muldi3 >> /../gcc-2.95.1/gcc/libgcc2.c:41 stdlib.h: No such file or directory >> /../gcc-2.95.1/gcc/libgcc2.c:42 unistd.h: No such file or directory >> >> when i examine the file libgcc2.c: >> #ifndef inhibit_libc >> /* fixproto guarantees these system headers exitst. */ >> #include <stdlib.h> >> #include <unistd.h> >> #endif > > The target headers are quite necessary for compiling 'libgcc.a' right, so the >'inhibit_libc' doesn't help... Anyway there aren't problems in finding these >headers in the newlib sources. > > Another issue then are the new 'relational search paths' in gcc-2.95.[123] and >only the latest, gcc-2.95.3, handles them right, ie. precreating the install >directory: > > $prefix/lib/gcc-lib/$target/2.95.3 > >The 2.95.1 and 2.95.2 don't precreate their equivalents, so this error comes >whether the target headers are or are not in the 'search directory': > > $prefix/lib/gcc-lib/$target/2.95.x/../../../../$target/include > >ie. in the > > $prefix/$target/include > >While a human being sees these two being the same and can find the headers, >although there are unexisting parts in the path, the C-function in 'cpp' >cannot do this... > > If you look with the '.../gcc/xgcc -print-search-dirs', you will see the search paths >for the target GCC... > >> probably this is the libc or glibc problem, what shall do. when compiling the source >> , i think the header file needed is to the host(cygnus) not to the target(powerpc). >> and the two file resides in /usr/include. > > Producing a library for the target of course needs the headers for the target, not >for the host. The C-headers aren't 'generic', although newlib is the base library in >Cygwin too... > >Cheers, Kai Regards diego. wenshengliu@263.net ------ Want more information? See the CrossGCC FAQ, Want to unsubscribe? Send a note to crossgcc-unsubscribe@sourceware.cygnus.com
https://sourceware.org/pipermail/crossgcc/2001-December/007578.html
CC-MAIN-2021-21
refinedweb
1,176
67.86
Hi all, this is my first post on the forum and I hope I can get some help with my problem. I need to make a program to extract the last word from a sentence. I managed to extract the last character, but I have no idea how to get a word. Any help is appreciated!Any help is appreciated!Code:#include <iostream> #include <string> using namespace std; int main() { string strSentence = "Today will rain"; int intLength; intLength = strSentence.length(); cout << "\nThe last character of " << strSentence << " is a " << "'" << strSentence.substr(intLength - 1,1) << "'" ; cout << endl; system("PAUSE"); return 0; }//end main
http://cboard.cprogramming.com/cplusplus-programming/111395-extract-word.html
CC-MAIN-2014-52
refinedweb
101
76.62
Continuing yesterday's theme of marshaling directives, today I'll talk about UnmanagedType.Struct, another poorly-named (and therefore confusing) enumeration value. If you pretend that it's called "UnmanagedType.Variant," it should make more sense. This marshaling directive is only valid for parameters/fields/return types defined as System.Object (not types derived from System.Object), and it instructs the marshaler to marshal the instance to unmanaged code as a VARIANT. For both COM Interop and PInvoke scenarios, Object parameters and return types are already marshaled as VARIANTs by default. Therefore, using UnmanagedType.Struct is only necessary for structures with System.Object fields that you need to marshal as VARIANT fields. Without this directive, Object fields are marshaled as IUnknown pointers. Since structures with VARIANT fields are pretty rare (at least in Microsoft APIs), this marshaling directive is rarely needed. How does Object/VARIANT marshaling work? Although this marshaling only applies when the parameter/field/return value is defined as System.Object in metadata (ignoring late-bound scenarios), the instance of the Object passed can, of course, be any derived type. When marshaling from managed code to unmanaged code, the Interop marshaler figures out what type of VARIANT to create based on the instance's type. If you're passing an instance of a System.String, it creates a VT_BSTR VARIANT. If you're passing an instance of a System.Boolean, it creates a VT_BOOL VARIANT. When marshaling from unmanaged code to managed code, the marshaler does the reverse. Some of the conversions are not as straightforward, but I'll leave that discussion for another time. Here's a short C# PInvoke example that demonstrates Object/VARIANT marshaling in both directions by calling the VariantChangeType API: HRESULT VariantChangeType( VARIANTARG* pvargDest, VARIANTARG* pvarSrc, unsigned short wFlags, VARTYPE vt ); (VARIANTARG is just a typedef for VARIANT.) In the code below, the source Object (whose type is System.Boolean) gets marshaled to VariantChangeType as a VT_BOOL VARIANT. The API sets the pvargDest VARIANT pointer to a VT_BSTR VARIANT, which gets marshaled back to managed code as the destination Object whose type is System.String. I decided not to mark the System.Object parameters with [MarshalAs(UnmanagedType.Struct)] simply because I prefer to only use MarshalAsAttribute when I'm overriding default marshaling behavior. using System; using System.Runtime.InteropServices; public class VariantMarshaling { [DllImport("oleaut32.dll", PreserveSig=false)] static extern void VariantChangeType( ref object pvargDest, [In] ref object pvarSrc, ushort wFlags, VarEnum vt ); // From OLEAUTO.H const ushort VARIANT_ALPHABOOL = 2; public static void Main () { object destination = null; object source = true; VariantChangeType(ref destination, ref source, VARIANT_ALPHABOOL, VarEnum.VT_BSTR); // Prints "True" Console.WriteLine(destination); // Prints "System.String" Console.WriteLine(destination.GetType()); } } Recall that defining variables, parameters, etc. as object in C# is equivalent to defining them as System.Object. I won't get into the ongoing debate about which form is better, but I tend to use the language-specific identifiers in my code examples.
http://blogs.msdn.com/b/adam_nathan/archive/2003/04/24/56642.aspx?Redirected=true&t=UnmanagedType.Struct%20and%20VARIANT%20marshaling
CC-MAIN-2015-14
refinedweb
490
51.55
Nativescript's Open Source Functional Testing Framework Nativescript's Open Source Functional Testing Framework The 2.5 release of NativeScript brought a functional testing framework to the project. Now over 2 years old, it's been rewritten due to changes in iOS/Android. Join the DZone community and get the full member experience.Join For Free The. UI Testing Framework Why Do We Need One?: - Manage emulators, simulators, and real devices. - Verify images in a flexible way. - Test results, logs, and screenshots. - Locate elements via images when accessibility is not available. What is the NativeScript Testing Framework? the background). However, the limitation here is that tests run on iOS can be executed only from a MacOS host machine. templates. Easy to Get Started the Android 6.0 (API 23) emulator: mvn clean test -P nativeapp.emu.default.api23 Or execute the native app demo tests on the iPhone 7 iOS 10.0 simulator: mvn clean test -P nativeapp.sim.iphone7.ios10 After execution, the test results can be reviewed through a detailed report in the time of start/end and duration. In the case of/. Demo Details Structure The demo represents a Maven project and its pom.xml file contains information about its dependencies, how to build it, and available profiles with which to execute it. It could be imported into an IDE such as IntelliJ IDEA Community Edition. The testapp folder is the place to put the application under test. In the demo there are: - A native android application, nativeapp.apk, - A hybrid android application, hybridapp.apk, - A native iOS application, built for a simulator and archived as .tgz nativeapp.tgz, - A hybrid iOS application, built for a simulator and archived as .tgz hybridapp.tgz. The resources folder contains a few subfolders: - config - here are the .properties files grouped per application and containing basic information for every configuration of the application and the device on which we want to execute our tests. - images - this folder aims to gather all the expected images to perform an image verification against, and images to locate an area of the screen, again grouped per application and device name. - report - here is a .xsl file used to generate the HTML report mentioned earlier. - suites - here are the TestNG suite files describing which tests to execute. These suite files along with the config files are used in describing the Maven profiles to execute a command line in the pom.xml file. that is being tested (in this particular case a single page app). The most important thing for this class is that it inherits the BasePage class from the functional testing framework. public class HomePage extends BasePage { public HomePage() { super(); } } This gives stopping them, respectively. In addition, it provides various methods to compare the current screen of the app. Simply use the following: this.assertScreen(); The hybrid app sample follows the same logic. It aims to demonstrate how to locate and interact with elements via images located in the resources folder. The next few lines shows how to declare and initialize a field of type Sikuli, locate an area of the screen, and perform a the Android 6.0 (API 23) emulator: mvn clean test -P hybridapp.emu.default.api23 Or execute the hybrid app demo tests on the iPhone 7 iOS 10.0 simulator: mvn clean test -P hybridapp.sim.iphone7.ios10 How Does the Framework Do All of This? The java client of Appium, TestNG framework, and Sikuli library are all inn the heart of the testing framework, which allows us to implement the image locating functionality. The testing framework itself builds through Gradle, however, our demo and tests currently use Maven. It is worth mentioning a few key points on the framework implementation, such as the test Context. This class provides a direct access to fields like app, client, device, gestures, locators, log, settings, and others. The Device abstraction allows a single point of managing operations on. Framework in Action: Published at DZone with permission of Vasil Chimev , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/nativescripts-open-source-functional-testing-frame?fromrel=true
CC-MAIN-2019-51
refinedweb
693
58.89
--- Begin Message --- - To: Alan Cox <alan@cymru.net> - Subject: Re: Future of Linux - From: BadlandZ <rob@current.nu> - Date: Wed, 11 Nov 1998 20:42:46 -0600 - Message-id: <364A4B26.58EF8EF5@current.nu> - References: <199811112002.UAA19629@snowcrash.cymru.net>Alan Cox wrote: > > > What else will the lsb cover? Or has there been a decision about that > > yet? > > The only other stuff covered at the meeting was X11. The good work XFree > does is a big help there as their binary interfaces and the X specification > API's are both stable. Motif has been raised as a question, as has > OpenGL/MESA. I think it is unwize at this point to make LSB conserned with X11R6 standards. Of course it should/could comply with what X11R6, but I think there is significan merit to the comments by Jim Gettys about replacing X with an X2 situation where all applications/widgets are universally reading themes/styles from diffrent libs to allow more seemless application integeration to a user defined "look" as discribed in Therefore I think LSB should focus on more basic issues like making FHS compliant, sysV vs BSD init standards, and libs. Compilers are also an issue I feel strongly about. I think gcc and egcs are awsome, but no match (yet) for commercial compilers. And, if I may, hit on the issue of piracy. Although a strongly unpopular idea I know, I have been thinking more and more deeply about this issue. Let me paste and old draft of something I am working on here: Shit, I can't find it. Yes, I have drank a lot tonight. But basically, it ammounts to this: Every ISV complains about piracy. Some major software vendors that are vital (proof provide in draft I can't find) only port to things like SGI/IRIX where they can use key/serialnumber to allow one system only installs. therefore if: Linux/hardware allowed Serial number on hardware, accessable in OS then: ISV's would LOVE to port to linux because there would be no piracy, and they would have a better/secure sales expectation. It is interesting, InSight is a case in point, desperately needed and continously purchaced for $2000+ a year by almost every Chemistry department in the world, but only runs on SGI/IRIX stuff. Anyhow, that would also give GNU a shot in the arm, if you can't pirate it, you WANT a free version, so, instant motivation! > ESR also raised the question of standardising things like > Python. The suggestion for that was that the python people ought to > define any such standard and then the lsb issue is purely one of namespace > ie "lsb-python-..." shall be Python meeting he following criteria, with > the following options etc. Not a bad idea, the "lsb-..." for more than python. The idea being, if a end user downloads it, they can install it and use it on _ANY_ linux box (reguardless of OS and Hardware). > This is so that every app doesnt install their own version of python > "just in case". That could be extended to all interpreters and some > libraries probably and a farmed out approach would IMHO be good. Ooo. to deep for me tonight. I'll get back to that. -- "Robert W. Current" <rob@current.nu> - email - my server (looking for a good site to host) "Hey mister, turn it on, turn it up, and turn me loose." - Dwight Yoakam --- End Message ---
https://lists.debian.org/lsb-discuss/1998/11/msg00006.html
CC-MAIN-2016-44
refinedweb
573
72.26
Zones A zone is a contiguous portion of the DNS namespace. It contains a series of records stored on a DNS server. Each zone is anchored at a specific domain node. However, zones are not domains. A DNS domain is a branch of the namespace, whereas a zone is a portion of the DNS namespace generally stored in a file, and can contain multiple domains. A domain can be subdivided into several partitions, and each partition, or zone, can be controlled by a separate DNS server. Using the zone, the DNS server answers queries about hosts in its zone, and is authoritative for that zone. Zones can be primary or secondary. A primary zone is the copy of the zone to which the updates are made, whereas a secondary zone is a copy of the zone that is replicated from a master server. Zones can be stored in different ways. For example, they can be stored as zone files. On Windows 2000 servers, they can also be stored in the Active Directory ™ directory service . Some secondary servers store them in memory and perform a zone transfer whenever they are restarted. Figure 5.3 shows an example of a DNS domain that contains two primary zones. In this example, the domain reskit.com contains two subdomains: noam.reskit.com. and eu.reskit.com. Authority for the noam.reskit.com. subdomain has been delegated to the server noamdc1.noam.reskit.com. Thus, as Figure 5.3 shows, one server, noamdc1.noam.reskit.com, hosts the noam.reskit.com zone, and a second server, reskitdc1.reskit.com, hosts the reskit.com zone that includes the eu.reskit.com subdomain. Figure 5.3 Domains and Zones Rather than delegating the noam.reskit.com zone to noamdc1.noam.reskit.com, the administrator can also configure reskitdc1 to host the zone for noam.reskit.com. Also, you cannot configure two different servers to manage the same primary zones; only one server can manage the primary zone for each DNS domain. There is one exception: multiple computers can manage Windows 2000 Active Directory–integrated zones. For more information, see "Windows 2000 DNS" in this book. You can configure a single DNS server to manage one zone or multiple zones, depending on your needs. You can create multiple zones to distribute administrative tasks to different groups and to provide efficient data distribution. You can also store the same zone on multiple servers to provide load balancing and fault tolerance. For information about what zones contain, see "Resource Records and Zones" later in this chapter.
https://technet.microsoft.com/en-us/library/cc958980.aspx
CC-MAIN-2015-27
refinedweb
426
59.7
In the last article we looked at HTTP messages and saw examples of the text commands and codes that flow from the client to the server and back in an HTTP transaction. But, how does the information in these messages move through the network? When are the network connections opened? When are the connections closed? These are some of the questions this article will answer as we look at HTTP from a low-level perspective. But first, we'll need to understand some of the abstractions below HTTP. A Whirlwind Tour of Networking To understand HTTP connections we have to know just a bit about what happens in the layers underneath HTTP. Network communication, like many applications, consists of layers. Each layer in a communication stack is responsible for a specific and limited number of responsibilities. For example, HTTP is what we call an application layer protocol because it allows two applications to communicate over a network. Quite often one of the applications is a web browser, and the other application is a web server like IIS or Apache. We saw how HTTP messages allow the browser to request resources from the server. But, the HTTP specifications don't say anything about how the messages actually cross the network and reach the server-that's the job of lower-layer protocols. A message from a web browser has to travel down a series of layers, and when it arrives at the web server it travels up through a series of layers to reach the web service process. The layer underneath HTTP is a transport layer protocol. Almost all HTTP traffic travels over TCP (short for Transmission Control Protocol), although this isn't required by HTTP. When a user types a URL into the browser, the browser first extracts the host name from the URL (and port number, if any), and opens a TCP socket by specifying the server address (derived from the host name) and port (which defaults to 80). Once an application has an open socket it can begin writing data into the socket. The only thing the browser needs to worry about is writing a properly formatted HTTP request message into the socket. The TCP layer accepts the data and ensures the message gets delivered to the server without getting lost or duplicated. TCP will automatically resend any information that might get lost in transit, and this is why TCP is known as a reliable protocol. In addition to error detection, TCP also provides flow control. The flow control algorithms in TCP will ensure the sender does not send data too fast for the receiver to process the data. Flow control is important in this world of varied networks and devices. In short, TCP provides services vital to the successful delivery of HTTP messages, but it does so in a transparent way so that most applications don't need to worry about TCP. As the previous figure shows, TCP is just the first layer beneath HTTP. After TCP at the transport layer comes IP as a network layer protocol. IP is short for Internet Protocol. While TCP is responsible for error detection, flow control, and overall reliability, IP is responsible for taking pieces of information and moving them through the various switches, routers, gateways, repeaters, and other devices that move information from one network to the next and all around the world. IP tries hard to deliver the data at the destination (but it doesn't guarantee delivery-that's TCP's job). IP requires computers to have an address (the famous IP address, an example being 208.192.32.40). IP is also responsible for breaking data into packets (often called datagrams), and sometimes fragmenting and reassembling these packets so they are optimized for a particular network segment. Everything we've talked about so far happens inside a computer, but eventually these IP packets have to travel over a piece of wire, a fiber optic cable, a wireless network, or a satellite link. This is the responsibility of the data link layer. A common choice of technology at this point is Ethernet. At this level, data packets become frames, and low-level protocols like Ethernet are focused on 1s, 0s, and electrical signals. Eventually the signal reaches the server and comes in through a network card where the process is reversed. The data link layer delivers packets to the IP layer, which hands over data to TCP, which can reassemble the data into the original HTTP message sent by the client and push it into the web server process. It's a beautifully engineered piece of work all made possible by standards. Quick HTTP Request With Sockets and C# If you are wondering what it looks like to write an application that will make HTTP requests, then the following C# code is a simple example of what the code might look like. This code does not have any error handling, and tries to write any server response to the console window (so you'll need to request a textual resource), but it works for simple requests. A copy of the following code sample is available from. The sample name is sockets-sample. using System; using System.Net; using System.Net.Sockets; using System.Text; public class GetSocket { public static void Main(string[] args) { var host = ""; var resource = "/"; Console.WriteLine("Connecting to {0}", host); if(args.GetLength(0) >= 2) { host = args[0]; resource = args[1]; } var result = GetResource(host, resource); Console.WriteLine(result); } private static string GetResource(string host, string resource) { var hostEntry = Dns.GetHostEntry(host); var socket = CreateSocket(hostEntry); SendRequest(socket, host, resource); return GetResponse(socket); } private static Socket CreateSocket(IPHostEntry hostEntry) { const int httpPort = 80; foreach (var address in hostEntry.AddressList) { var endPoint = new IPEndPoint(address, httpPort); var socket = new Socket( endPoint.AddressFamily, SocketType.Stream, ProtocolType.Tcp); socket.Connect(endPoint); if (socket.Connected) { return socket; } } return null; } private static void SendRequest(Socket socket, string host, string resource) { var requestMessage = String.Format( "GET {0} HTTP/1.1\r\n" + "Host: {1}\r\n" + "\r\n", resource, host ); var requestBytes = Encoding.ASCII.GetBytes(requestMessage); socket.Send(requestBytes); } private static string GetResponse(Socket socket) { int bytes = 0; byte[] buffer = new byte[256]; var result = new StringBuilder(); do { bytes = socket.Receive(buffer); result.Append(Encoding.ASCII.GetString(buffer, 0, bytes)); } while (bytes > 0); return result.ToString(); } } Notice how the program needs to look up the server address (using Dns.GetHostEntry), and formulate a proper HTTP message with a GET operator and Host header. The actual networking part is fairly easy, because the socket implementation and TCP take care of most of the work. TCP understands, for example, how to manage multiple connections to the same server (they'll all receive different port numbers locally). Because of this, two outstanding requests to the same server won't get confused and receive data intended for the other. Networking and Wireshark If you want some visibility into TCP and IP you can install a free program such as Wireshark (available for OSX and Windows from wireshark.org). Wireshark is a network analyzer that can show you every bit of information flowing through your network interfaces. Using Wireshark you can observe TCP handshakes, which are the TCP messages required to establish a connection between the client and server before the actual HTTP messages start flowing. You can also see the TCP and IP headers (20 bytes each) on every message. The following figure shows the last two steps of the handshake, followed by a GET request and a 304 redirect. With Wireshark you can see when HTTP connections are established and closed. The important part to take away from all of this is not how handshakes and TCP work at the lowest level, but that HTTP relies almost entirely on TCP to take care of all the hard work and TCP involves some overhead, like handshakes. Thus, the performance characteristics of HTTP also rely on the performance characteristics of TCP, and this is the topic for the next section. HTTP, TCP, and the Evolution of the Web In the very old days of the web, most resources were textual. You could request a document from a web server, go off and read for five minutes, then request another document. The world was simple. For today's web, most webpages require more than a single resource to fully render. Every page in a web application has one or more images, one or more JavaScript files, and one or more CSS files. It's not uncommon for the initial request for a home page to spawn off 30 or 50 additional requests to retrieve all the other resources associated with a page. In the old days it was also simple for a browser to establish a connection with a server, send a request, receive the response, and close the connection. If today's web browsers opened connections one at a time, and waited for each resource to fully download before starting the next download, the web would feel very slow. The Internet is full of latency. Signals have to travel long distances and wind their way through different pieces of hardware. There is also some overhead in establishing a TCP connection. As we saw in the Wireshark screenshot, there is a three-step handshake to complete before an HTTP transaction can begin. The evolution from simple documents to complex pages has required some ingenuity in the practical use of HTTP. Parallel Connections Most user agents (aka web browsers) will not make requests in a serial one-by-one fashion. Instead, they open multiple parallel connections to a server. For example, when downloading the HTML for a page the browser might see two <img> tags in the page, so the browser will open two parallel connections to download the two images simultaneously. The number of parallel connections depends on the user agent and the agent's configuration. For a long time we considered two as the maximum number of parallel connections a browser would create. We considered two as the maximum because the most popular browser for many years-Internet Explorer (IE) 6-would only allow two simultaneous connections to a single host. IE was only obeying the rules spelled out in the HTTP 1.1 specification, which states: A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy. To increase the number of parallel downloads, many websites use some tricks. For example, the two-connection limit is per host, meaning a browser like IE 6 would happily make two parallel connections to, and two parallel connections to images.odetocode.com. By hosting images on a different server, websites could increase the number of parallel downloads and make their pages load faster (even if the DNS records were set up to point all four requests to the same server, because the two-connection limit is per host name, not IP address). Things are different today. Most user agents will use a different set of heuristics when deciding how many parallel connections to establish. For example, Internet Explorer 8 will now open up to six concurrent connections. The real question to ask is: how many connections are too many? Parallel connections will obey the law of diminishing returns. Too many connections can saturate and congest the network, particularly when mobile devices or unreliable networks are involved. Thus, having too many connections can hurt performance. Also, a server can only accept a finite number of connections, so if 100,000 browsers simultaneously create 100 connections to a single web server, bad things will happen. Still, using more than one connection per agent is better than downloading everything in a serial fashion. Fortunately, parallel connections are not the only performance optimization. Persistent Connections In the early days of the web, a user agent would open and close a connection for each individual request it sent to a server. This implementation was in accordance with HTTP's idea of being a completely stateless protocol. As the number of requests per page grew, so did the overhead generated by TCP handshakes and the in-memory data structures required to establish each TCP socket. To reduce this overhead and improve performance, the HTTP 1.1 specification suggests that clients and servers should implement persistent connections, and make persistent connections the default type of connection. A persistent connection stays open after the completion of one request-response transaction. This behavior leaves a user agent with an already open socket it can use to continue making requests to the server without the overhead of opening a new socket. Persistent connections also avoid the slow start strategy that is part of TCP congestion control, making persistent connections perform better over time. In short, persistent connections reduce memory usage, reduce CPU usage, reduce network congestion, reduce latency, and generally improve the response time of a page. But, like everything in life there is a downside. As mentioned earlier, a server can only support a finite number of incoming connections. The exact number depends on the amount of memory available, the configuration of the server software, the performance of the application, and many other variables. It's difficult to give an exact number, but generally speaking, if you talk about supporting thousands of concurrent connections, you'll have to start testing to see if a server will support the load. In fact, many servers are configured to limit the number of concurrent connections far below the point where the server will fall over. The configuration is a security measure to help prevent denial of service attacks. It's relatively easy for someone to create a program that will open thousands of persistent connections to a server and keep the server from responding to real clients. Persistent connections are a performance optimization but also a vulnerability. Thinking along the lines of a vulnerability, we also have to wonder how long to keep a persistent connection open. In a world of infinite scalability, the connections could stay open for as long as the user-agent program was running. But, because a server supports a finite number of connections, most servers are configured to close a persistent connection if it is idle for some period of time (five seconds in Apache, for example). User agents can also close connections after a period of idle time. The only visibility into connections closed is through a network analyzer like Wireshark. In addition to aggressively closing persistent connections, most web server software can be configured to disable persistent connections. This is common with shared servers. Shared servers sacrifice performance to allow as many connections as possible. Because persistent connections are the default connection style with HTTP 1.1, a server that does not allow persistent connections has to include a Connection header in every HTTP response. The following code is an example. HTTP/1.1 200 OK Content-Type: text/html; charset=utf-8 Server: Microsoft-IIS/7.0 X-AspNet-Version: 2.0.50727 X-Powered-By: ASP.NET Connection: close Content-Length: 17149 The Connection: close header is a signal to the user agent that the connection will not be persistent and should be closed as soon as possible. The agent isn't allowed to make a second request on the same connection. Pipelined Connections Parallel connections and persistent connections are both widely used and supported by clients and servers. The HTTP specification also allows for pipelined connections, which are not as widely supported by either servers or clients. In a pipelined connection, a user agent can send multiple HTTP requests on a connection before waiting for the first response. Pipelining allows for a more efficient packing of requests into packets and can reduce latency, but it's not as widely supported as parallel and persistent connections. Where Are We? In this chapter we've looked at HTTP connections and talked about some of the performance optimizations made possible by the HTTP specifications. Now that we have delved deep into HTTP messages and even examined the connections and TCP support underneath the protocol, we'll take a step back and look at the Internet from a wider perspective.
http://code.tutsplus.com/tutorials/http-succinctly-http-connections--net-33707
CC-MAIN-2014-10
refinedweb
2,692
53.51
Timeline.7.xstable/1.8.x by - Fixed #21582 -- Corrected URL namespace example. Thanks oubiga for … - 17:51 Changeset [929cead]stable/1.7.xstable/1.8.x by - Switched setup.py to setuptools. - 16:01 Ticket #21685 (admin_doc/model_index.html should use application names) created by - Currently, the template uses app_label|capfirst to display the … - 15:58 Changeset [6e3ca65]stable/1.7.xstable/1377]stable/1.7.xstable/1.7.xstable/1.7.xstable/1.8.x by - Fixed #21609 -- Amended CONTRIBUTING.rst pull request guidelines. … - 12:29 Changeset [cfbdd587]stable/1.7.xstable/1.8.x by - Added file forgotten in previous commit, plus one more test. - 12:25 Changeset [e179291]stable/1.7.xstable/1.8.x by - Added tests for invalid values of INSTALLED_APPS. - 12:25 Changeset [b355e98]stable/1.7.xstable/1.8.x by - Normalized exceptions raised by AppConfig.create. It raises … - 12:25 Changeset [ce1bc2c]stable/1.7.xstable/1.8.x by - Made the AppConfig API marginally more consistent. Eliminated the … - 12:21 Changeset [fec5330c]stable/1.7.xstable/1.8.x by - Made unset_installed_apps reset the app registry state. Previously … - 12:21 Changeset [8925aaf6]stable/1.7.xstable/1.7.xstable/1.8.x by - Fixed #21206 -- No longer run discovery if the test label doesn't … - 08:04 Changeset [52325b0]stable/1.7.xstable/1.8.x by - Turned apps.ready into a property. Added tests. - 07:12 Changeset [92243017]stable/1.7.xstable/1.8.x by - Beefed up the comments for AppConfig.all_models. - 07:03 Changeset [8f04f53]stable/1.7.xstable/1.8.x by - Removed a few gratuitous lambdas. - 06:52 Changeset [4e7aa573]stable/1.7.xstable/1.7.xstable/1.8.x by - Fixed #21627 -- Added unicode_literals to changepassword command. … - 04:14 Changeset [b536ad0]stable/1.6.x by - [1.6.x] Fixed #21662 -- Kept parent reference in prepared geometry … - 04:10 Changeset [318cdc07]stable/1.7.xstable/1/1.7.xstable/1]stable/1.7.xstable/1.8.x by - Fixed unittest typo - 08:56 Changeset [b798d2b]stable/1.7.xstable/1]stable/1.7.xstable/1]stable/1.7.xstable/1 Note: See TracTimeline for information about the timeline view.
https://code.djangoproject.com/timeline?from=2013-12-26T13%3A07%3A09-08%3A00&precision=second
CC-MAIN-2015-11
refinedweb
365
56.93
Discover Functional JavaScript was named one of the best new Functional Programming books by BookAuthority! Pure functions produce the same output value, given the same input. They have no side-effects, and are easier to read, understand and test. Given all this, I would like to create a store that hides the state but at the same time uses pure functions. Immutability Pure functions don’t modify their input. They treat the input values as immutable. An immutable value is a value that, once created, cannot be changed. Immutable.js provides immutable data structures like List. An immutable data structure will create a new data structure at each action. Consider the next code: import { List } from "immutable"; const list = List(); const newList = list.push(1); push() creates a new list that has the new element. It doesn’t modify the existing list. delete() returns a new List where the element at the specified index was removed. The List data structure offers a nice interface for working with lists in an immutable way, so I will use it as the state value. The Store The store manages state. State is data that can change. The store hides that state data and offers a public set of methods for working with it. I would like to create a book store with the add(), remove() and getBy() methods. I want all these functions to be pure functions. There will be two kind of pure functions used by the store: - functions that will read and filter the state. I will call them getters. - functions that will modify the state. I will call them setters. Both these kinds of functions will take the state as their first parameter. Discover Functional JavaScript was named one of the best new Functional Programming books by BookAuthority! For more on applying functional programming techniques in React take a look at Functional React. Learn functional React, in a project-based way, with Functional Architecture with React and Redux.
https://www.freecodecamp.org/news/how-to-create-a-store-using-pure-functions-2c19b678552f/
CC-MAIN-2021-31
refinedweb
328
67.86
The 1.2 schema managed to sneak into the 2.0.x releases - but nothing is actually referencing it. Until you get to 2.1.x (trunk) all of the programs still refer to 1.1. So, we should probably figure out whether or not we need to jump up to 1.3 or if we can stay on 1.2. And, this would make a good opportunity to find out if anyone has any additional changes that they would like to make to the schema. Regardless of whether or not we need to change the schema version immediately - we should be releasing 2.1 soon, and it would be better to get as many changes as possible into the schema now rather than burning though lots of version numbers. So, does anyone else have a take on new elements that should be added to the schema and whether or not we need to freeze 1.2? Jay Jarek Gawor wrote: > This is a change to an existing and published schema. I think that > needs its own new file and namespace. > > Jarek > >> Modified: geronimo/server/trunk/framework/modules/geronimo-system/src/main/xsd/attributes-1.2.xsd >> URL: >> ============================================================================== >> --- geronimo/server/trunk/framework/modules/geronimo-system/src/main/xsd/attributes-1.2.xsd (original) >> +++ geronimo/server/trunk/framework/modules/geronimo-system/src/main/xsd/attributes-1.2.xsd Mon Nov 19 21:25:27 2007 >> @@ -183,6 +183,19 @@ >> </xsd:documentation> >> </xsd:annotation> >> </xsd:attribute> >> + <xsd:attribute >> + <xsd:annotation> >> + <xsd:documentation> >> +. >> + </xsd:documentation> >> + </xsd:annotation> >> + </xsd:attribute> >> <!--<xsd:attribute--> >> <!--<xsd:annotation>--> >> <!--<xsd:documentation>--> >> > >
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200711.mbox/%3C47431BDF.6080306@jnwd.net%3E
CC-MAIN-2015-18
refinedweb
262
51.24
package require html namespace eval scgi { proc listen {port} { socket -server [namespace code connect] $port } proc connect {sock ip port} { fconfigure $sock -blocking 0 -translation {binary crlf} fileevent $sock readable [namespace code [list read_length $sock {}]] } proc read_length {sock data} { append data [read $sock] if {[eof $sock]} { close $sock return } set colonIdx [string first : $data] if {$colonIdx == -1} { # we don't have the headers length yet fileevent $sock readable [namespace code [list read_length $sock $data]] return } else { set length [string range $data 0 $colonIdx-1] set data [string range $data $colonIdx+1 end] read_headers $sock $length $data } } proc read_headers {sock length data} { append data [read $sock] if {[string length $data] < $length+1} { # we don't have the complete headers yet, wait for more fileevent $sock readable [namespace code [list read_headers $sock $length $data]] return } else { set headers [string range $data 0 $length-1] set headers [lrange [split $headers \0] 0 end-1] set body [string range $data $length+1 end] set content_length [dict get $headers CONTENT_LENGTH] read_body $sock $headers $content_length $body } } proc read_body {sock headers content_length body} { append body [read $sock] if {[string length $body] < $content_length} { # we don't have the complete body yet, wait for more fileevent $sock readable [namespace code [list read_body $sock $headers $content_length $body]] return } else { handle_request $sock $headers $body } } } proc handle_request {sock headers body} { array set Headers $headers parray Headers puts $sock "Status: 200 OK" puts $sock "Content-Type: text/html" puts $sock "" puts $sock "<HTML>" puts $sock "<BODY>" puts $sock [::html::tableFromArray Headers] puts $sock "</BODY>" puts $sock "<H3>Body</H3>" puts $sock "<PRE>$body</PRE>" if {$Headers(REQUEST_METHOD) eq "GET"} { puts $sock {<FORM METHOD="post" ACTION="/scgi">} foreach pair [split $Headers(QUERY_STRING) &] { lassign [split $pair =] key val puts $sock "$key: [::html::textInput $key $val]<BR>" } puts $sock "<BR>" puts $sock {<INPUT TYPE="submit" VALUE="Try POST">} } else { puts $sock {<FORM METHOD="get" ACTION="/scgi">} foreach pair [split $body &] { lassign [split $pair =] key val puts $sock "$key: [::html::textInput $key $val]<BR>" } puts $sock "<BR>" puts $sock {<INPUT TYPE="submit" VALUE="Try GET">} } puts $sock "</FORM>" puts $sock "</HTML>" close $sock } scgi::listen 9999 vwait forever Woof! uses a descendant of the above code for its SCGI support. Note the above code does not protect against malformed (and malicious) protocol input. Will update here once I fix Woof.MJ - Usually a webserver forms the SCGI requests and I think it's a fair assumption that those requests are valid. But because it has been a while since I looked at this, what would malicious protocol input be?APN I overlooked that the requests come from your own webserver so you are right. I missed that. By malicious, I meant input that would cause DoS attacks, e.g. sending a header length greater than the actual data would cause the above code to spike to 100% CPU, I think easily fixed by a EOF check. Thanks for this code BTW, as it is likely to be Woof!'s preferred web server interface mechanism as it supports Apache (with mod_scgi), nginx (mod_scgi), lighttpd (built-in) and IIS (with isapi_scgi).MJ - The request length is also determined by the server, so if that forms the requests correctly, that's not really a problem either. Of course an [eof] check never hurts. Also you are very welcome, I am glad this is useful to someone. For me it was just a nice small project.MS - I do wonder what webservers do with a POST request having a form variable CONTENT_LENGTH=987654321. IIUC, the SCGI protocol [2] forbids form variables named CONTENT_LENGTH and SCGI, as it forbids duplicate header names and those two are obligatory. Also REQUEST_METHOD and REQUEST_URI are likely to get you in trouble; any others? APN Form variables are not sent as HTTP headers. They are part of the content. The SCGI restriction refers to HTTP headers only. APN In case any one's interested, I've implemented a SCGI ISAPI extension for IIS 6/7/8. It is available at [3]. Documentation is at [4]. APN Another pure-Tcl SCGI server, this one multithreaded is at
http://wiki.tcl.tk/19670
CC-MAIN-2016-18
refinedweb
680
63.63
Hi.. Here is a program about determining the total number of green-necked vulture eggs counted by all conservationists in the conservation district. The program in fact works correctly but there is a line which i am not being able to understand its use in the program and it is the line "cin >> next". Hope to get some explanation about this code..thank you in advance. //DISPLAY 4.9 Nicely Nested Loops //Determines the total number of green-necked vulture eggs //counted by all conservationists in the conservation district. #include <iostream> using namespace std; int get_one_total(); //Precondition: User will enter a list of egg counts //followed by a negative number. //Postcondition:returns a number equal to the sum of all the egg counts. int main( ) { cout << "This program tallies conservationist reports\n" << "on the green-necked vulture.\n" << "Each conservationist's report consists of\n" << "a list of numbers. Each number is the count of\n" << "the eggs observed in one" << " green-necked vulture nest.\n" << "This program then tallies" << " the total number of eggs.\n"; int number_of_reports; cout << "How many conservationist reports are there? "; cin >> number_of_reports; int grand_total = 0, subtotal, count; for (count = 1; count <= number_of_reports; count++) { cout << endl << "Enter the report of " << "conservationist number " << count << endl; subtotal = get_one_total (); cout << "Total egg count for conservationist " << " number " << count << " is " << subtotal << endl; grand_total = grand_total + subtotal; } cout << endl << "Total egg count for all reports = " << grand_total << endl; return 0; } //Uses iostream: int get_one_total() { int total; cout << "Enter the number of eggs in each nest.\n" << "Place a negative integer" << " at the end of your list.\n"; total = 0; int next; cin >> next; while (next >= 0) { total = total + next; cin >> next; } return total; }
https://www.daniweb.com/programming/software-development/threads/316510/what-is-the-meaning-of-the-function-cin-next
CC-MAIN-2017-30
refinedweb
279
53.71
import simplejson as json # Returns empty file -> no results (wrong): get_open = os.popen( binary_path + ' -q "' + __resource__ + '/top250.php"', "r" ) # Returns content that was requested (right): get_open = os.popen( binary_path + ' -q ' + __resource__ + '/top250.php', "r" ) krish_2k4 Wrote:this seems to have stopped updating in Eden Beta2 im using script.ratingupdate-1.1.8 version. heres the xbmc.log don't know which part you need so copy/pasted it all! m4x1m Wrote:Enable in xbmc: Settings -> System -> Debugging -> Enable debug logging restart the script (few movies is enough) and repost debug log. I've tried with latest nightly build and the script work perfectly! krish_2k4 Wrote:here is the debug log Quote:- In every os.popen command, the php filepath was put into doublequotes, I removed the doublequotes. For example: etc etc. m4x1m Wrote:It's an issue in xbmc related python in Windows platform. Read the post above of MCipher. The problem are the doublequote like in Dharma. If the problem isn't resolved with the final stable version of XBMC I will take measures to solve it! voidy Wrote:i feel like a retard i cant seem to find the php-cgi binary and i cant seem to find what to do in the readme im on a mac running snow leopard with xbmc eden beta2 on the experiance 1080 skin. m4x1m Wrote:I'm sorry but I haven't a Mac to guide you step by step for the installation of php-cgi. Search on google how to install it in Mac then in the settings addon you can browse the folder to set the right path!
http://forum.xbmc.org/printthread.php?tid=107331&page=11
CC-MAIN-2013-20
refinedweb
269
75.4
RodneyShag + 22 comments Java solution - more efficient than Editorial solution import java.util.Scanner; public class Solution { public static void main(String [] args) { Scanner scan = new Scanner(System.in); int L = scan.nextInt(); int R = scan.nextInt(); scan.close(); int xored = L ^ R; int significantBit = 31 - Integer.numberOfLeadingZeros(xored); int result = (1 << (significantBit + 1)) - 1; System.out.println(result); } } To maximize A xor B, we want A and B to differ as much as possible at every bit index. We first find the most significant bit that we can force to differ by looking at L and R. For all of the lesser significant bits in A and B, we can always ensure that they differ and still have L <= A <= B <= R. Our final answer will be the number represented by all 1s starting from the most significant bit that differs between A and B Let's try an example L = 10111 R = 11100 _X___ <-- that's most significant differing bit 01111 <-- here's our final answer Notice how we don't directly calculate the values of A and B. From my HackerRank solutions. snaran + 0 comments My way had the same concept as your picture: to calculate the highest order bit that is different and subtract one from it, though my way has a loop that you avoided by doing an ^. static int maxXor(int l, int r) { int lBit = Integer.highestOneBit(l); int rBit = Integer.highestOneBit(r); if (lBit == rBit) { if (lBit == 0) { return 0; } else { int newL = l & (~lBit); int newR = r & (~lBit); return maxXor(newL, newR); } } else { int max = Math.max(lBit, rBit); return (max << 1) -1; } } seshu_techie + 4 comments It is a nice solution. My solution is similar with no additional method calls. int x = l ^ r; int max = 0; while(x > 0) { max <<= 1; max |= 1; x >>= 1; } return max; H__CL + 2 comments Guess we can prove it like this. Let , it is easy to see that (since , have at most n bits, so the bitwise xor can not be larger than ). Now assume that and . This is equivalent to that the binary expansion of and differs by the most significant bit, i.e., and . Then and satisfies and which achieves the upper bound of the xor of two numbers less than . Thus the maximum of for is , where n is the bit length of . sanchit2812 + 1 comment[deleted] H__CL + 0 comments The proof goes like 1. estimate a upper bound of and 2. find and that satisfies the constraint and achieves the upper bound. Let's say (binary) and , then we can construct: (put 1 at the most significant bit followed by 0s) and (put 0 at the most significant bit followed by 1s) so that . And the constraints are also satisfied. Hope this example helps. h1485237610 + 0 comments public static void main(String[] args) { try (Scanner sc = new Scanner(System.in);) { int l = sc.nextInt(); int r = sc.nextInt(); System.out.println(allOne(l ^ r)); } } public static int allOne(int i) { i |= (i >> 1); i |= (i >> 2); i |= (i >> 4); i |= (i >> 8); i |= (i >> 16); return i; } theBeacon + 1 comment Whatever number is calculated in this way may not be in actual domain of possible values. How can correctness of this solution can be proved and verified. RodneyShag + 1 comment Hi. I cannot provide a formal mathematical proof, mostly because it would probably take 4 paragraphs of explaining in detail. It's best to just see it always works by going through a few examples on your own. A good thing to notice is that the number calculated will always be in the actual domain of possible values because my solution says "We first find the most significant bit that we can force to differ by looking at L and R." This is the main step that gets A and B to be in the correct domain. alphazygma + 1 comment I have a question, why did you have int significantBit = 31 - Integer.numberOfLeadingZeros(xored); Instead of 32, which represents the 32bits used for an int. Using the base example of L = 10, R = 15 L ^ R = 1010 ^ 1111 = 0101 = 5 The result of Integer.numberOfLeadingZeros(xored) is 29, which matches 32bits - 3bits (for 5) And then your result would be int result = (1 << significantBit) - 1; Just trying to learn on your reasoning to use 31. RodneyShag + 0 comments Your version of the code works also. I used 31 instead of 32 since I count the 32 bits from 0 to 31 instead of 1 to 32. That is, I count the leftmost bit as the 31st bit, and the rightmost bit as the 0th bit. mehmetmustafa + 1 comment Nice solution. It can be written in one line: System.out.println(Integer.highestOneBit(L ^ R) * 2 - 1); nabilpatel1107 + 1 comment @rshaghoulian Instead of substracting from 31 can we substract from 32 and do not add 1 later? or there is something I am missing because of it, it is required to substract from 31? RodneyShag + 1 comment Yes, you can do that. I subtract from 31 since I like to number the bits 0 to 31 (from right to left). If you subtract from 32, it's kind of like numbering the bits 1 to 32. navi25 + 0 comments We can also do - static int maximizingXor(int l, int r) { int xored = l ^ r; int highestBit = Integer.highestOneBit(xored); return 2*highestBit -1; } The logic is same. One additional charactersitics that's been used is:- Sum of all the trailing zeros converted to ones is HighestBit - 1. For eg - L = 10111 R = 11100 _X___ <-- that's most significant differing bit 01000 <-- This is what HighestBit gave us. We need to calculate 01111 in int, this can be obtained by sum of 01000 and 00111 and that's equal to 2*highestBit - 1. gursoyserhan + 0 comments This is absolutely great! Here, this is same operation in PHP, function maximizingXor($l, $r) { $xored = $l ^ $r; $most = strlen(decbin($xored)); $bin = ''; for ( $i =0; $i < $most; $i++) $bin = $bin . "1"; return bindec($bin); } eloyekunle + 1 comment In the spirit of this solution, here's one in Python: (1 << (len(format(l^r, 'b')))) - 1 pro_noobmaster + 0 comments Python Oneliner Solution def maximizingXor(l, r): return pow(2,len(bin(l^r))-2)-1 R101154 + 7 comments I'm getting a wrong answer on test case 1. I looked at the test case and when I copy and paste the test case in myself -- it works just fine. All other test cases run just fine without any errors. I believe there is something wrong with the test. eagle31337 + 1 comment Check range =) abhiranjan + 0 comments @ pr0grammer1, it seems you are able to solve this problem :) Let us know if you have any other query. harinani04 + 0 comments guys just check your code once again...it's working for me...check if the "2nd for" loop is considering the last element of array... Goodwine + 3 comments It is possible to make a solution O(b)where bis the position of the most significant bit of R Hint: L=7,R=8, and mostSignificantBit. benpfisher + 1 comment Is this considered constant time? I think so. dead_pool + 1 comment sir can you explain how the below solution is solving this problem. def maxXOR(L,R): P = L^R ret = 1 while(P): # this one takes (m+1) = O(logR) steps ret <<= 1 P >>= 1 return (ret - 1) i dont understand that how we are getting right answer by doing this method. conkosidowski + 1 comment Consider this: - Strip your number down to the most significant bit followed by some amount of 0s or 1s thereafter. This can be done using l^r. The first time one of the numbers has a 1 and the other doesn't, this will be your most significant bit. - Set all the following 0s to 1s. If 1000 is your most significant, the maximum will be 1111. - Print the decimal interpretation of the resulting binary number. dead_pool + 1 comment Thanks buddy for response.But can you explain why we are replacing all zeros to ones after most significant digit. conkosidowski + 0 comments The best way I can explain it is with examples. 8 is 1000 and 7 is 0111. Anything between 8 and 0 is going to be 15 because your highest XOR number is 1111. As you can see, you can find this by finding the most significant bit and flipping all the following to 1. 8 is 1000 and 9 is 1001. Here you'll find that the answer is 0001. This is because 8 and 9 share the same most significant bit. So after XOR the most significant bit is 0001. 27 is 11011 and 8 is 01000. Here your MSB is 10000 and you can choose 10111 and 01000. I wish I could explain it better but it's easiest to notice the pattern and go from there. Every answer will always be a power of 2 minus 1, so every answer is always a series of 1s. benpfisher + 1 comment It can be better than log(R). If R and L are 4 byte integers, it can be done in 31 steps worst case. jameseamaya + 1 comment lg(n) means log2(n), log(n) means log10(n). usually Alphablackwolf + 2 comments I have it in constant O(1) time here: antitau + 1 comment c++, same complexity, but probably faster on newer processors, although not directly portable: #include <climits> int maxXor(int a, int b) { if(a^=b) return ~INT_MIN>>__builtin_clz(a)-1; return 0; } akshay_megharaj + 1 comment How does a compiler compute 'number of leading zeros' in O(1)? I think this solution is O(b) where b is the index of MSB in a^b akshay_megharaj + 2 comments Although your code is O(1), any O(b) solution will be of the same complexity or perhaps even faster for cases where b < 4. Here b is the index of MSB in a^b. A very simple O(b) solution that will work for any values of L & R and will work faster than above code for cases where b < 4 int maxXor(int a, int b) { int value = a ^ b, result = 1; while (value) { value = value >> 1; result = result << 1; } return result - 1; } sushrut619 + 1 comment Nice solution. How does one deduce a solution like this ? Before peeking into the discussion forum, I was thinking that if l is 0 then the answer would be r ^ 0 = r. Hence, when l is not 0 the answer should have something to do with r ^ l. But I could not take my thought process any further. akshay_megharaj + 0 comments For me what works is taking a few examples and trying to find a pattern. Once I find a pattern, I convince myself that it will work for all cases or try to find a test case that will fail. Once I find a test case that fails I'll try to modify the pattern or come up with a new pattern. Like in this case the pattern was the MSB that differs between the two numbers. Hope this helps. gagandeep2598 + 1 comment Can you please explain how you came up with this soultion. Thanks :) ankitrhode + 0 comments Time complexity is O(n); Ex:1 2 3 4 5 l=1 and r=5 property:Ex-or operations commutative. 1@2 2@4 3@4 4@5 1@3 1@4 1@5 first time,it take n-1 @(ex-or) operation and after that only 1 for each input jeremy_roy + 0 comments Solution in O(g) time, where g is the MSF of (l xor r). For a 32 bit unsigned integer, worst-case g would be 32 operations. int maxXor(int l, int r) { int a = l ^ r; int max = 0; while (a != 0) { max |= a; a = a >> 1; } return max; } sanne09 + 1 comment I'm new to programming so sorry if it's a dumb question but I have to convert the decimal integers into binary right? jberlage + 4 comments Nope, you won't have to do any explicit conversion. Decimal integers are represented as binary under the hood. You'll want to look into the bitwise operators, they can be used on integers and are a useful way to manipulate the underlying bits. user983783 + 0 comments I was actually struggling with this problem because I thought that I had to convert from decimal to binary. I didnt know I could simply use var03=var01^var02; he he... teamdebo + 2 comments There is no such thing as decimal integers. Decimal values are NOT integers, they are called floating point values and are stored in memory through approximation unlike integers which are exact values. This is very important to understand and remember because this is exactly the reason why you never ever ever use decimal/floating point variables to store or calculate values of high precision such as currency because the approximations which are floating point values can result in arithmetic error when many calculations are performed. The reason I'm leaving this comment is not to be arrogant but to make a point that there is a fundamental difference in the way integers are stored in memory and if you want to save yourself hours of wasted time looking for devious bugs, read up about these differences. Omnikron13 + 0 comments Actually, the 'integers' passed in from stdin aren't binary integers or decimal integers. They're a string of ASCII characters, which -do- need to be converted. Depending on language, you very well might need to do explicit conversion to get actual integers. Remember, the dude said he was new to programming, you need to be careful how you answer such questions. =P Sort 278 Discussions, By: Please Login in order to post a comment
https://www.hackerrank.com/challenges/maximizing-xor/forum
CC-MAIN-2019-43
refinedweb
2,307
71.34
Links to original questionsTotal time limit: 1000ms memory limit: 65536kB description You buy a box of n apples, and it's unfortunate to buy a bug in the box. A bug can eat an apple every x hour, assuming that the bug won't eat before you eat an apple, then you've a number of full apples. Sample outputSample output 10 4 9 Note: it's required to complete the number of apples.Note: it's required to complete the number of apples. 7 #include <iostream> #include <iomanip> using namespace std; int main() { int n, x, y; bool b; cin>> n>> x>> y; b = y%x; cout <<n-y/x-b <<endl; return 0; }
https://www.dowemo.com/article/47363/15:-apple-and-bugs
CC-MAIN-2018-30
refinedweb
115
67.08
HTML and CSS Reference In-Depth Information Getting SVG on Your Page SVG is an XML-based markup language that provides support for a number of vector primitives, including text, rectangles, circles, ellipses, and arbitrary paths. These primitives can use different strokes and fills, including pattern and gradient fills. SVG also supports advanced features such as clipping, masking, compositing, and animation. Browsers provide an overabundance of ways to place SVG on the page, including as the source in an <img> tag, linked from an <embed> tag, linked from an <object> tag, and embedded using an <iframe> , or dropped directly onto the page in an <svg> tag. Having all those options is a little confusing, but you don't actually need to know all of them. There are three separate use cases, each with a preferred embedding mech- anism: • First, if you just want a simple way to get an external SVG document on the page, use an <img> tag. You can't script the tag, and users of Firefox pre 4.0 will be out of luck (this is currently approximately 4% of Firefox users and getting smaller all the time), but it's the easiest way to put an SVG document onto the page that you just need to work as an image. <img src='mydocument.svg' /> • Second, if you have an external SVG document that you want to script and interact with, use the <embed> tag, which enables you to reach into the document and add event handlers and the like: <embed src="mydocument.svg" type="image/svg+xml" /> • Finally, the last and most common usage from a game development perspective is to embed your SVG document directly into the page: <svg id="mysvg" xmlns="" version="1.1"> <rect x="20" y="20" width="50" height="50" fill="black" /> </svg> Often you want to start with an empty <svg> tag (much like the <canvas> tag) and add all your objects dynamically. This is how the Quintus engine will be extended in this chapter to support SVG. One thing you'll notice is the inclusion of the version attribute and the xmlns (short for XML namespace) attribute. The namespace is important because you're embedding a different type of document into your HTML and need to tell the browser how to handle it. In this case the browser is clever enough to render the document without the namespace, but trying to create elements via JavaScript without the namespace won't work. Getting to Know the Basic SVG Elements As you saw briefly, SVG documents are simple XML documents that can be embedded directly into the page. You can also load an SVG document directly into the browser by loading it from a URL or your local machine. Listing 14-1 shows a simple, hand-written SVG file embedded in an HTML5 document. You'll need the penguin.png file in your images/ directory to make it work yourself. The output of the file is shown in Search WWH :: Custom Search
http://what-when-how.com/Tutorial/topic-497t64c3d5/HTML5-Mobile-Game-Development-267.html
CC-MAIN-2017-47
refinedweb
501
57.1
Interprets communication between the client and the IRC server. More... #include <ircnetworkadapter.h> Interprets communication between the client and the IRC server. Definition at line 27 of file ircnetworkadapter.h. Gets adapter type for this adapter instance. Implements IRCAdapterBase. Definition at line 37 of file ircnetworkadapter.h. Checks if client is an operator on a specified channel. Definition at line 42 of file ircnetworkadapter.h. Bans specified user from a channel. The data that is required to deliver a ban is contained inside the string returned by /whois query. This will create a IRCDelayedOperation object and first send /whois <nickname> query. When the /whois returns the delayed operations are searched for pending bans. This is when bans are delivered. For end-user this effect should be almost completely invisible. Definition at line 155 of file ircnetworkadapter.cpp. Detaches the specified IRCChatAdapter instance from this network without deleting it. Definition at line 178 of file ircnetworkadapter.cpp. Implemented to support direct communication between client and server. All messages that do not begin with '/' character will be ignored here. In fact if the '/' character is missing an error() signal will be emitted to notify the user of this fact. Programmers must remember that although IRC protocol itself doesn't require clients to prepend the messages with a slash this class always does - the slash character is stripped, then the remainder of the message is sent 'as is'. Implements IRCAdapterBase. Definition at line 189 of file ircnetworkadapter.cpp. Checks if pAdapter equals this or is one of chat windows of this network. Definition at line 387 of file ircnetworkadapter.cpp. Checks if user is an operator on a given channel. This will work only for known channels (ie. the ones the client is registered on). Definition at line 412 of file ircnetworkadapter.cpp. The idea of the adapter system is that each adapter is either a network or is a child of a network. This method is supposed to return a pointer to a network to which this adapter belongs. Implements IRCAdapterBase. Definition at line 130 of file ircnetworkadapter.h. Signal emitted when a new chat (priv or channel) is opened from this network. Opens a new chat adapter for specified recipient. If specified recipient is a channel a /join command will be sent to that channel. If recipient is a user a chat window will simply be opened. If adapter is not connected to a network or empty name is specified this becomes a no-op. Also if such recipient is already present the window will emit focus signal request. Definition at line 543 of file ircnetworkadapter.cpp. Definition at line 569 of file ircnetworkadapter.cpp. Sets channel flags. Flags are set using following command: Definition at line 658 of file ircnetworkadapter.cpp. Definition at line 155 of file ircnetworkadapter.h. Gets title for this adapter. Implements IRCAdapterBase. Definition at line 676 of file ircnetworkadapter.cpp. All allowed modes with their nickname prefixes for this network. Definition at line 770 of file ircnetworkadapter.cpp.
http://doomseeker.drdteam.org/docs/doomseeker_1.0/classIRCNetworkAdapter.php
CC-MAIN-2019-43
refinedweb
502
60.61
Does anyone know if anything been added to the API that allows a plugin to grab the current path triggered from a sidebar folder right click context menu entry? I'd like to be able to right click a folder and "Open Command Window Here" type thing. Windows only Make a new folder in your plugin directory, create a new file called opencommand.py and put this code in it: import sublime, sublime_plugin import subprocess class OpenPromptCommand(sublime_plugin.TextCommand): def run(self, edit): dire = os.path.dirname(self.view.file_name()) retcode = subprocess.Popen("cmd", "/K", "cd", dire]) def is_enabled(self): return self.view.file_name() and len(self.view.file_name()) > 0[/code] Now create another file called "Content.sublime-menu" and put the following in it: [code] { "caption": "-", "id": "file" }, { "command": "open_prompt", "caption": "Open Command Window Here…" } ] Now when you right click and click "Open Command Window Here.." you'll get a new command prompt at the location of the file you were editing. Oh, sorry, I see you wanted this from the sidebar... Just change "Context" to "Sidebar" in the filename. Saved me some time with this. Thanks! Hmm, I can't seem to get the Popen command to work on my setup (XP64, dev2122). The item shows up in the context menu enabled, but doesn't seem to act in any way. I've dickered with the Popen command, but I admit little facility with Python libraries. Any ideas what the call might be fragile on? Yup. Try adding the import os. import sublime, sublime_plugin, subprocess, os The problem with loading this from the sidebar is that the working directory is set using self.view.file_name().What I wanted was a way to use the folder I right clicked on as the current working directory for the command prompt. I see... Well the sidebar can pass arguments through I believe, so I'll have a play Ah ha, yes, there it is. Thanks, works like a charm now! Yeah that was what was hanging me up. I see { "caption": "Delete Folder", "command": "delete_folder", "args": {"dirs": ]} }, in the Default\Side Bar.sublime-menu but when I add "args" to my Side Bar entry I get this in the console: Traceback (most recent call last): File ".\sublime_plugin.py", line 276, in run_ TypeError: run() got an unexpected keyword argument 'dirs' Sublime Terminal does what you are looking for, and works cross-platform too: wbond.net/sublime_packages/terminal You can right click on any folder, and also use ctrl+shift+t to open a terminal/command prompt at the folder of the current file, or ctrl+shift+alt+t to open at the project folder the current file is in. this works for me, thanks everyone. open_prompt.py import sublime, sublime_plugin, subprocess, os class OpenPromptCommand(sublime_plugin.TextCommand): def run(self, edit, paths=]): retcode = subprocess.Popen("cmd", "/K", "cd", paths[0]]) Side Bar.sublime-menu { "caption": "-", "id": "end" }, { "caption": "Open Command Window Here", "command": "open_prompt", "args": {"paths": ]} } ]
https://forum.sublimetext.com/t/open-command-window-here/2643/7
CC-MAIN-2016-44
refinedweb
494
66.33
font_awesome_flutter 10.1.0 font_awesome_flutter: ^10.1.0 copied to clipboard font_awesome_flutter # The Font Awesome Icon pack available as set of Flutter Icons. Based on Font Awesome 6.1.0. Includes all free icons: - Regular - Solid - Brands Installation # In the dependencies: section of your pubspec.yaml, add the following line: dependencies: font_awesome_flutter: <latest_version> Usage # import 'package:font_awesome_flutter/font_awesome_flutter.dart'; class MyWidget extends StatelessWidget { Widget build(BuildContext context) { return IconButton( // Use the FaIcon Widget + FontAwesomeIcons class for the IconData icon: FaIcon(FontAwesomeIcons.gamepad), onPressed: () { print("Pressed"); } ); } } Icon names # Icon names equal those on the official website, but are written in lower camel case. If more than one icon style is available for an icon, the style name is used as prefix, except for "regular". Due to restrictions in dart, icons starting with numbers have those numbers written out. Examples: Example App # View the Flutter app in the example directory to see all the available FontAwesomeIcons. Customizing font awesome flutter # We supply a configurator tool to assist you with common customizations to this package. All options are interoperable. By default, if run without arguments and no icons.json in lib/fonts exists, it updates all icons to the newest free version of font awesome. Setup # To use your custom version, you must first clone this repository to a location of your choice and run flutter pub get inside. This installs all dependencies. The configurator is located in the util folder and can be started by running configurator.bat on Windows, or ./configurator.sh on linux and mac. All following examples use the .sh version, but work same for .bat. (If on windows, omit the ./ or replace it with .\.) An overview of available options can be viewed with ./configurator.sh --help. To use your customized version in an app, go to the app's pubspec.yaml and add a dependency for font_awesome_flutter: '>= 4.7.0'. Then override the dependency's location: dependencies: font_awesome_flutter: '>= 4.7.0' ... dependency_overrides: font_awesome_flutter: path: /path/to/your/font_awesome_flutter ... Enable pro icons # ❗ By importing pro icons you acknowledge that it is your obligation to keep these files private. This includes not uploading your package to a public github repository or other public file sharing services. - Go to the location of your custom font_awesome_flutter version (see setup) - Download the web version of font awesome pro and open it - Move all .ttffiles from the webfontsdirectory and icons.jsonfrom metadatato /path/to/your/font_awesome_flutter/lib/fonts. Replace existing files. - Run the configurator. It should say "Custom icons.json found" It may be required to run flutter clean in apps who use this version for changes to appear. Excluding styles # One or more styles can be excluded from all generation processes by passing them with the --exclude option: $ ./configurator.sh --exclude solid $ ./configurator.sh --exclude solid,brands See the optimizations and dynamic icon retrieval by name sections for more information as to why it makes sense for your app. Retrieve icons dynamically by their name or css class # Probably the most requested feature after support for pro icons is the ability to retrieve an icon by their name. This was previously not possible, because a mapping from name to icon would break all discussed optimizations. Please bear in mind that this is still the case. As all icons could theoretically be requested, none can be removed by flutter. It is strongly advised to only use this option in conjunction with a limited set of styles and with as few of them as possible. You may need to build your app with the --no-tree-shake-icons flag for it to succeed. Using the new configurator tool, this is now an optional feature. Run the tool with the --dynamic flag to generate... $ ./configurator.sh --dynamic ...and the following import to use the map. For normal icons, use faIconMapping with a key of this format: 'style icon-name'. import 'package:font_awesome_flutter/name_icon_mapping.dart'; ... FaIcon( icon: faIconMapping['solid abacus'], ); ... To exclude unused styles combine the configurator options: $ ./configurator.sh --dynamic --exclude solid A common use case also includes fetching css classes from a server. The utility function getIconFromCss() takes a string of classes and returns the icon which would be shown by a browser: getIconFromCss('far custom-class fa-abacus'); // returns the abacus icon in regular style. custom-class is ignored Duotone icons # Duotone support has been discontinued after font awesome changed the way they lay out the icon glyphs inside the font's file. The new way using ligatures is not supported by flutter at the moment. For more information on why duotone icon support was discontinued, see this comment. Why aren't the icons aligned properly or why are the icons being cut off? # Please use the FaIcon widget provided by the library instead of the Icon widget provided by Flutter. The Icon widget assumes all icons are square, but many Font Awesome Icons are not. What about file size and ram usage # This package has been written in a way so that it only uses the minimum amount of resources required. All links (eg. FontAwesomeIcons.abacus) to unused icons will be removed automatically, which means only required icon definitions are loaded into ram. Flutter 1.22 added icon tree shaking. This means unused icon "images" will be removed as well. However, this only applies to styles of which at least one icon has been used. Assuming only icons of style "regular" are being used, "regular" will be minified to only include the used icons and "solid" and "brands" will stay in their raw, complete form. This issue is being tracked over in the flutter repository. However, using the configurator, you can easily exclude styles from the package. For more information, see customizing font awesome flutter Why aren't the icons showing up on Mobile devices? # If you're not seeing any icons at all, sometimes it means that Flutter has a cached version of the app on device and hasn't pushed the new fonts. I've run into that as well a few times... Please try: - Stopping the app - Running flutter cleanin your app directory - Deleting the app from your simulator / emulator / device - Rebuild & Deploy the app. Why aren't the icons showing up on Web? # Most likely, the fonts were not correctly added to the FontManifest.json. Note: older versions of Flutter did not properly package non-Material fonts in the FontManifest.json during the build step, but that issue has been resolved and this shouldn't be much of a problem these days. Please ensure you are using Flutter 1.14.6 beta or newer!
https://pub.flutter-io.cn/packages/font_awesome_flutter
CC-MAIN-2022-33
refinedweb
1,095
58.28
KMG Textures Contents About KMG files There are many image file formats out there. First of all the image data may be encoded in various ways. For example the pixel format could be RGB565 or ARGB4444. Furthermore the image data may be paletted, twiddled or compressed. The image may also come with mipmaps (smaller versions of itself). The mipmaps could be stored from small to big or big to small, with padding, etc. Lastly, meta information such as the dimensions, used pixel format, how many mipmaps are available, whether compression was applied, etc. must be stored somehow. Therefore, instead of just storing the raw image data, so called container formats are used. KOS provides an image container format called KMG. You are in no way forced to use this format and may want to create your own for advanced use cases, but it gets the basic job done and comes with convenient functions and a conversion program in KOS. This tutorial will explain how to convert images to Dreamcast-friendly formats and pack them into a KMG container file. It will then explain how to load the data into memory, ready to be used in drawing code. Conversion Directory setup In your project's folder, you should create a folder named assets and another one named romdisk (names can be changed). assets will contain your raw image files. assets/foo.png will be converted to romdisk/foo.kmg. The raw asset files are put into a different directory, because only the converted files will be used in the game and the raw asset files would only take up space. Makefile rules Makefiles describe a dependency graph to only rebuild files that were changed. This is useful because converting all our image files all the time, although they haven't changed, would take a lot of build time. In our case the dependency graph for our game might look like the following: image1.png --> image1.kmg -\ image2.png --> image2.kmg --+--> romdisk.img --> romdisk.o --+--> game.elf image3.png --> image3.kmg -/ main.c --> main.o ----/ So when image2.png is changed by an artist, only image2.kmg is regenerated using the vqenc conversion tool, which leads to regenerating romdisk.img which leads to regenerating romdisk.o. Let's look at the Makefile rules to describe this process starting from the back. Makefile rule syntax The general syntax is target_file: dependency_file1 dependency_file2 <TAB>shell commands For a full introduction to Make please refer to [1] romdisk.o rule This rule will convert romdisk.img to romdisk.o which can be passed to gcc during compilation (kos-cc main.o romdisk.o -o game.elf) romdisk.o: romdisk.img $(KOS_BASE)/utils/bin2o/bin2o romdisk.img romdisk romdisk.o romdisk.img rule This rule gathers a list of filenames of the form assets/*.png. It will then convert this list into the form romdisk/*.kmg. The generated kmg filename list is then used as the list of required files for romdisk.img. genromfs is told to put the directory romdisk into romdisk.img romdisk.img: $(patsubst assets/%.png,romdisk/%.kmg,$(wildcard assets/*.png)) $(KOS_GENROMFS) -f romdisk.img -d romdisk -v The converted image files will be available as /rd/foo.kmg during runtime. ELF building rule This rule gathers up all of the built object files of the program and links them with any necessary libraries to produce a .elf file that can be run. First, define the target for your build and all the object files that are required for the binary. Put this little bit of setup work up at the top of your Makefile: $(TARGET) = test.elf $(OBJS) = main.o romdisk.o all: rm-elf $(TARGET) include $(KOS_BASE)/Makefile.rules rm-elf: -rm -f $(TARGET) romdisk.* Then, add the build target to create your actual binary from the built object files, like so: $(TARGET): $(OBJS) kos-cc -o $(TARGET) $(OBJS) -lkmg -lkosutils You will need to build libkmg in kos-ports if you have not already done so in order to build your program. PNG to KMG conversion rule Next we need to call a program to actually convert foo.png to foo.kmg. KOS provides an image data conversion tool that can optionally put the resulting data in the KMG container format. You can find it in the folder kos/utils/vqenc. Usage: vqenc [options] image1 [image2..] Options: -t, --twiddle create twiddled textures -m, --mipmap create mipmapped textures (EXPERIMENTAL) -v, --verbose verbose -d, --debug show debug information -q, --highq higher quality (much slower) -k, --kmg write a KMG for output -a, --alpha use alpha channel (and output ARGB4444) -b, --amask use 1-bit alpha mask (and output ARGB1555) The following Makefile rule should be added to use it to convert images: romdisk/%.kmg: assets/%.png $(KOS_BASE)/utils/vqenc/vqenc -v -t -q -k $< mv assets/$*.kmg romdisk/ -v will turn on verbose conversion information and give you some additional insight. -t will reorder the pixels into the order the PVR graphics chip can render the fastest. -q will tell the conversion program that it can take its time to perform a better VQ compression. -k will save the converted and compressed image data in a .kmg file. Otherwise it will create a .vq file without meta information which you could use in your own image container format. Since vqenc does not accept a destination directory name, the result is moved to the romdisk directory manually. In Make, $* represents the result of the % placeholder in the rule line. After typing make you should see that foo.kmg is put into the romdisk, but foo.png is not. To gain access to the romdisk and read files in the romdisk directory as /rd/foo.kmg, you need to add the following: extern uint8 romdisk[]; KOS_INIT_ROMDISK(romdisk); The loading function will load the texture and put its content into video memory. #include <kos/img.h> #include <kmg/kmg.h> #include <dc/pvr.h> #include <stdio.h> pvr_ptr_t load_kmg(char const* filename, uint32* w, uint32* h, uint32* format) { kos_img_t img; pvr_ptr_t rv; if(kmg_to_img(filename, &img)) { printf("Failed to load image file: %s\n", filename); return NULL; } if(!(rv = pvr_mem_malloc(img.byte_count))) { printf("Couldn't allocate memory for texture!\n"); kos_img_free(&img, 0); return NULL; } pvr_txr_load_kimg(&img, rv, 0); kos_img_free(&img, 0); *w = img.w; *h = img.h; *format = img.fmt; return rv; } Usage in main(): int main() { /* ... */ uint32 w, h, format; pvr_ptr_t txr = load_kmg("/rd/image.kmg", &w, &h, &format); if(txr) printf("Loaded /rd/image.kmg with dimensions %ux%u, format %u\n", w, h, format); } You can now use the PVR API to draw this texture.
https://dcemulation.org/index.php?title=KMG_Textures&amp%3Bdiff=prev&amp%3Boldid=3673
CC-MAIN-2019-26
refinedweb
1,104
67.35
- Author: - gsf0 - Posted: - August 14, 2009 - Language: - Python - Version: - 1.1 - regex match nav - Score: - -1 (after 1 ratings) A filter that re.matches a regex against a value. Useful for nav bars as follows: {% if location.path|match:"/$" %} class="current"{% endif %} For location.path see my location context_processor. More like this - location context_processor by gsf0 7 years ago - Template tag "ifregex" and "ifnotregex" by arthurfurlan 7 years, 3 months ago - Regular Expression Dictionary by skitch 9 years, 1 month ago - SSL Redirect Middleware by zbyte64 8 years, 1 month ago - Regular Expression Replace Template Filter by joshua 9 years, 5 months ago Please login first before commenting.
https://djangosnippets.org/snippets/1686/
CC-MAIN-2016-36
refinedweb
108
56.66
SOLVED - Browser does not work on android - Alexorleon last edited by Alexorleon Hello! Please help. The following examples do not work on android: Android armeabi-v7a (GCC 4.9, Qt 5.4.1) on linux OpenSUSE, produces errors: Project ERROR: Unknown module(s) in QT: webengine Project ERROR: Unknown module(s) in QT: webenginewidgets or example import QtWebEngine 1.0 QML module could not be found. I'm trying to use QWebView for android, but it does not work. Incorrect properties. import QtQuick 2.3 or 1.0 import QtQuick.Controls 1.3 import QtWebKit 3.0 For desctop everything works fine. How to run on android Qt Internet browser? - p3c0 Moderators last edited by @Alexorleon AFAIK Qt WebView works on android. Try the example at the bottom provided on that page. - Alexorleon last edited by @p3c0 Yes, thanks. I found this example. And he is works. And I found - Webengine is not supported on android and not with MinGW either.
https://forum.qt.io/topic/54600/solved-browser-does-not-work-on-android
CC-MAIN-2019-43
refinedweb
161
71.61
Hi y’all, I’m hoping I placed this post in the correct place. For my first arduino project I’m driving a NEMA17 stepper motor with the TMC2100 “silent stepper” driver. I’m using a 12v 2250 milliampere adapter to power the driver and put a 47 microfarad capacitor between the + and - alas I just noticed it is 10V. The Arduino board is powered using a “powerbar” with two usb ports. Sorry for my failed attempt to create a clear electronic scheme. I’m not the most fervent programmer, but I’ve been trying to get the board to bootup when powered and to tell the stepperdriver to accelerate smoothly from stillness, rotate very slowly and come to a full stop with a chance of slowly accelerating in the opposite direction. Repeat. I’m using an inverted and offset cosine to fade in the speed when it gets to it’s minimum and maximum. There’s probably a much more efficient and consistent way of getting this program to work, but here’s my attempt: #include "AccelStepper.h" // Define a stepper and the pins it will use AccelStepper stepper(4,5); int StepSpeed; // scaled version of stepper speed in steps per second(?)), uses a cosine to vary in speed int Counter; // counts the amount of updates before adding to the index of the cosine; The time it takes to progress the cosine int Index; // drives the cosine that varies the stepperspeed (Scale) int WavDir; // direction of rotation int Mod; // introduces an extra modulation in speed for extra unpredictability int ModCounter; // counts amount of updates before adding to the index of the extra mod int ModIndex; // index that in turn drives another cosine to extra modulate the speed, maybe this should be a sine wave instead, bool WavDirEva; void setup() { int speed = 1000; stepper.setMaxSpeed(speed); stepper.setAcceleration(100); stepper.setSpeed(1000); Serial.begin(9600); Counter = 0; StepSpeed = 0; Index = 0; Mod = 0; ModIndex = random(0,628); WavDir = random(-1,1); if (WavDir== 0) {WavDir = 1;} WavDirEva = false; } void loop() { Counter = Counter + 1; if (Counter >= Mod) // after this many updates changes speed by adding 1 index { StepSpeed = ((( cos ( Index / 100.0 ) * -0.5 ) + 0.5 ) * 1000 ) * WavDir; // cosine because it starts at the most outward amplitude, making it more smooth around both extremes // the * -1 + 1 makes the cos start at 0 and climb up to, perhaps better to multiply the StepSpeed with the mod rather than add it. So that the mod can have a random beginning point without losing the "fade in" Index = Index + 1; Counter = 0; if (Index == 628){ //6,28 = (2*pi) almost a full cos cycle. Index = 0; } // Serial.println(StepSpeed); } ModCounter = ModCounter + 1; //introduces an extra modulation for stepperspeed. if (ModCounter == 10000) { ModIndex = ModIndex + 1; Mod = (((cos(ModIndex/100.0)*-0.5)+0.5)*1000)+1500; // Maximum should be 1000 steps p/s. So compensate in StepSpeed ModCounter = 0; if(ModIndex == 628) { ModIndex = 0; } // Serial.println(Mod); if (StepSpeed != 0 && WavDirEva == true) { WavDirEva = false; } if (StepSpeed == 0 && WavDirEva == false) // somewhere here something is still wrong, causing too often a negative stepspeed { WavDir = random(-1,1); if (WavDir== 0) { WavDir = 1; } // Serial.println(WavDir); WavDirEva = true; } Serial.println(StepSpeed); Serial.println(Mod); Serial.println(WavDirEva); } stepper.move(StepSpeed); stepper.run(); } Here’s what I’m trying to achieve: it worked for a while, but after connecting 3 times or so the board started only flashing the blue power light causing the motor to only pulse a bit synchronous to the power light. This happened with two different boards. Did I fry them? They don’t show up in the serial ports anymore. Help is very much appreciated! Jorick
https://forum.arduino.cc/t/flashing-micro-board-power-light/622845/12
CC-MAIN-2021-43
refinedweb
613
58.42
ncl_reset man page RESET — The "super" version of RESET zeroes the internal integer array used to detect crowded lines; other versions do nothing. Synopsis CALL RESET C-Binding Synopsis #include <ncarg/ncargC.h> void c_reset() Usage The "super" version of the Dashline utility attempts to cull crowded lines from the output picture. It does this using an internal integer array, ISCREN, in which each bit used represents a single pixel in a square array of 1024x1024 pixels of the picture. Initially, all the bits must be set to zero by calling the routine RESET. When a line segment is about to be drawn, an appropriate set of bits in ISCREN (representing the pixels through which the line segment passes) is examined; if any bit in that set is a 1, the line segment is not drawn. Then, the bits representing the pixels in the immediate vicinity of the line segment are set to 1, so as to prevent any subsequent line segment from being drawn through the area. As each label is drawn, the bits in ISCREN representing the area occupied by the label are set to 1, as well, which prevents subsequent line segments from being drawn through the label. This culling process has the following implications for the user of the "super" version of Dashline: - The routine RESET should be called at the beginning of each new picture. If this is not done, each of a consecutive series of pictures will have progressively more gaps where line segments have been culled. - More important lines should be drawn first, followed by less important lines. For example, the "super" version of the routine CONREC draws labeled contour lines first, followed by the contour lines halfway in between the labeled lines, followed by the rest of the lines. - Because each line segment is either drawn in its entirety or not at all, long straight line segments are broken into shorter pieces. The internal parameter MLLINE determines how small the resulting pieces will be; you may wish to reduce its value so that smaller pieces will be used. Examples Use the ncargex command to see the following relevant examples: tdashp, fdlsmth. Access To use RESET or c_reset, load the NCAR Graphics libraries ncarg, ncarg_gks, and ncarg_c, preferably in that order. See Also Online: dashline, dashline_params, curved, dashdb, dashdc, frstd, lastd, lined, vectd, ncarg_cbind Hardcopy: NCAR Graphics Contouring and Mapping Tutorial; NCAR Graphics Fundamentals, UNIX Version; User's Guide for NCAR GKS-0A Graphics University Corporation for Atmospheric Research The use of this Software is governed by a License Agreement.
https://www.mankier.com/3/ncl_reset
CC-MAIN-2017-39
refinedweb
427
54.76
Pip install from a notebook¶ Links: notebook, html, python, slides, GitHub How to install a module from a notebook. from jyquickhelper import add_notebook_menu add_notebook_menu() Update or install a module from the notebook¶ Running pip install from the command line requires to be in the right folder. And sometimes, several python installations interfere between each others. Why doing it from the notebook itself: try: # pip >= 19.3 from pip._internal.main import main as pip_main except Exception: try: # pip >= 10.0 from pip._internal import main as pip_main except Exception: # pip < 10.0 from pip import main as pip_main pip_main("install -q qgrid".split()) 0 Interesting options¶ Avoid installing dependencies¶ try: pip_main("install -q qgrid --no-deps".split()) except Exception as e: print(e) Upgrade¶ try: pip_main("install -q qgrid --upgrade --no-deps".split()) except Exception as e: print(e) For the hackathon…¶ pip_main("install pyquickhelper pyensae ensae_projects --upgrade --no-deps --no-cache-dir".split())
http://www.xavierdupre.fr/app/ensae_projects/helpsphinx/notebooks/chsh_pip_install.html
CC-MAIN-2021-10
refinedweb
154
51.75
Today’s websites are evolving into web apps: As a result there is a lot of code on the client side! A big code base needs to be organized. Module systems offer the option to split your code base into modules. There are multiple standards for how to define dependencies and export values: <script>-tag style (without a module system) <script>-tag style This is how you would handle a modularized code base if you didn’t use a module system. <script src="module1.js"></script> <script src="module2.js"></script> <script src="libraryA.js"></script> <script src="module3.js"></script> Modules export an interface to the global object, i. e. the window object. Modules can access the interface of dependencies over the global object. require This style uses a synchronous require method to load a dependency and return an exported interface. A module can specify exports by adding properties to the exports object or setting the value of module.exports. require("module"); require("../file.js"); exports.doStuff = function() {}; module.exports = someValue; It’s used server-side by node.js. Asynchronous Module Definition Other module systems (for the browser) had problems with the synchronous require (CommonJS) and introduced an asynchronous version (and a way to define modules and exporting values): require(["module", "../file"], function(module, file) { /* ... */ }); define("mymodule", ["dep1", "dep2"], function(d1, d2) { return someExportedValue; }); Read more about CommonJS and AMD. ECMAScript 2015 (6th Edition), adds some language constructs to JavaScript, which form another module system. import "jquery"; export function doStuff() {} module "localModule" {} Let the developer choose their module style. Allow existing codebases and packages to work. Make it easy to add custom module styles. Modules should be executed on the client, so they must be transferred from the server to the browser. There are two extremes when transferring modules: Both are used in the wild, but both are suboptimal: A more flexible transferring would be better. A compromise between the extremes is better in most cases. → While compiling all modules: Split the set of modules into multiple smaller batches (chunks). This allows for multiple smaller, faster requests. The chunks with modules that are not required initially can be loaded on demand. This speeds up the initial load but still lets you grab more code when it will actually be used. The “split points” are up to the developer. → A big code base is possible! Note: The idea is from Google’s GWT. Read more about Code Splitting. Why should a module system only help the developer with JavaScript? There are many other resources that need to be handled: Or translated/processed: This should be as easy as: require("./style.css"); require("./style.less"); require("./template.jade"); require("./image.png"); Read more about Using loaders and Loaders. When compiling all these modules, a static analysis tries to find its dependencies. Traditionally this could only find simple stuff without expression, but require("./template/" + templateName + ".jade") is a common construct. Many libraries are written in different styles. Some of them are very weird… A clever parser would allow most existing code to run. If the developer does something weird, it would try to find the most compatible solution. © 2012–2015 Tobias Koppers Licensed under the MIT License.
http://docs.w3cub.com/webpack~1/motivation/
CC-MAIN-2018-43
refinedweb
535
59.5
The Structure of OCaml Programs Now we're going to take some time out to take a high-level look at some real OCaml programs. I want to teach you about local and global definitions, when to use ;; vs. ;, modules, nested functions, and references. For this we're going to look at a lot of OCaml concepts which won't yet make sense because we haven't seen them before. Don't worry about the details for the moment. Concentrate instead on the overall shape of the programs and the features which I'll point out. Local "variables" (really local expressions) Let's take the average function and add a local variable in C. (Compare it to the first definition we had above). double average (double a, double b) { double sum = a + b; return sum / 2; } Now let's do the same to our OCaml version: # let average a b = let sum = a +. b in sum /. 2.0;; val average : float -> float -> float = <fun> The standard phrase let name = expression in is used to define a named local expression, and name can then be used later on in the function instead of expression, till a ;; which ends the block of code. Notice that we don't indent after the in. Just think of let ... in as if it were a statement. Now comparing C local variables and these named local expressions is a sleight of hand. In fact they are somewhat different. The C variable sum has a slot allocated for it on the stack. You can assign to sum later in the function if you want, or even take the address of sum. This is NOT true for the OCaml version. In the OCaml version, sum is just a shorthand name for the expression a +. b. There is no way to assign to sum or change its value in any way. (We'll see how you can do variables whose value changes in a minute). Here's another example to make this clearer. The following two code snippets should return the same value (namely (a+b) + (a+b)²): # let f a b = (a +. b) +. (a +. b) ** 2.;; val f : float -> float -> float = <fun> # let f a b = let x = a +. b in x +. x ** 2.;; val f : float -> float -> float = <fun> The second version might be faster (but most compilers ought to be able to perform this step of "common subexpression elimination" for you), and it is certainly easier to read. x in the second example is just shorthand for a +. b. Global "variables" (really global expressions) You can also define global names for things at the top level, and as with our local "variables" above, these aren't really variable at all, just shorthand names for things. Here's a real (but cut-down) example: let html = let content = read_whole_file file in GHtml.html_from_string content ;; let menu_bold () = match bold_button#active with | true -> html#set_font_style ~enable:[`BOLD] () | false -> html#set_font_style ~disable:[`BOLD] () ;; let main () = (* code omitted *) factory#add_item "Cut" ~key:_X ~callback: html#cut ;; In this real piece of code, html is an HTML editing widget (an object from the lablgtk library) which is created once at the beginning of the program by the first let html = statement. It is then referred to in several later functions. Note that the html name in the code snippet above shouldn't really be compared to a real global variable as in C or other imperative languages. There is no space allocated to "store" the " html pointer". Nor is it possible to assign anything to html, for example to reassign it to point to a different widget. In the next section we'll talk about references, which are real variables. Let-bindings Any use of let ..., whether at the top level (globally) or within a function, is often called a let-binding. References: real variables What happens if you want a real variable that you can assign to and change throughout your program? You need to use a reference. References are very similar to pointers in C/C++. In Java, all variables which store objects are really references (pointers) to the objects. In Perl, references are references - the same thing as in OCaml. Here's how we create a reference to an int in OCaml: # ref 0;; - : int ref = {contents = 0} Actually that statement wasn't really very useful. We created the reference and then, because we didn't name it, the garbage collector came along and collected it immediately afterwards! (actually, it was probably thrown away at compile-time.) Let's name the reference: # let my_ref = ref 0;; val my_ref : int ref = {contents = 0} This reference is currently storing a zero integer. Let's put something else into it (assignment): # my_ref := 100;; - : unit = () And let's find out what the reference contains now: # !my_ref;; - : int = 100 So the := operator is used to assign to references, and the ! operator dereferences to get out the contents. Here's a rough-and-ready comparison with C/C++: OCaml # let my_ref = ref 0;; val my_ref : int ref = {contents = 0} # my_ref := 100;; - : unit = () # !my_ref;; - : int = 100 C/C++ int a = 0; int *my_ptr = &a; *my_ptr = 100; *my_ptr; References have their place, but you may find that you don't use references very often. Much more often you'll be using let name = expression in to name local expressions in your function definitions. Nested functions C doesn't really have a concept of nested functions. GCC supports nested functions for C programs but I don't know of any program which actually uses this extension. Anyway, here's what the gcc info page has to say about nested functions:) /* ... */ } You get the idea. Nested functions are, however, very useful and very heavily used in OCaml. Here is an example of a nested function from some real code: # let read_whole_channel chan = let buf = Buffer.create 4096 in let rec loop () = let newline = input_line chan in Buffer.add_string buf newline; Buffer.add_char buf '\n'; loop () in try loop () with End_of_file -> Buffer.contents buf;; val read_whole_channel : in_channel -> string = <fun> Don't worry about what this code does - it contains many concepts which haven't been discussed in this tutorial yet. Concentrate instead on the central nested function called loop which takes just a unit argument. You can call loop () from within the function read_whole_channel, but it's not defined outside this function. The nested function can access variables defined in the main function (here loop accesses the local names buf and chan). The form for nested functions is the same as for local named expressions: let name arguments = function-definition in. You normally indent the function definition on a new line as in the example above, and remember to use let rec instead of let if your function is recursive (as it is in that example). Modules and open OCaml comes with lots of fun and interesting modules (libraries of useful code). For example there are standard libraries for drawing graphics, interfacing with GUI widget sets, handling large numbers, data structures, and making POSIX system calls. These libraries are located in /usr/lib/ocaml/ (on Unix anyway). For these examples we're going to concentrate on one quite simple module called Graphics. The Graphics module is installed into 7 files (on my system): /usr/lib/ocaml/graphics.a /usr/lib/ocaml/graphics.cma /usr/lib/ocaml/graphics.cmi /usr/lib/ocaml/graphics.cmx /usr/lib/ocaml/graphics.cmxa /usr/lib/ocaml/graphics.cmxs /usr/lib/ocaml/graphics.mli For the moment let's just concentrate on the file graphics.mli. This is a text file, so you can read it now. Notice first of all that the name is graphics.mli and not Graphics.mli. OCaml always capitalizes the first letter of the file name to get the module name. This can be very confusing until you know about it! If we want to use the functions in Graphics there are two ways we can do it. Either at the start of our program we have the open Graphics;; declaration. Or we prefix all calls to the functions like this: Graphics.open_graph. open is a little bit like Java's import statement, and much more like Perl's use statement. To use Graphics in the interactive toplevel, you must first load the library with #load "graphics.cma";; Windows users: For this example to work interactively on Windows, you will need to create a custom toplevel. Issue the command ocamlmktop -o ocaml-graphics graphics.cma from the command line. A couple of examples should make this clear. (The two examples draw different things - try them out). Note the first example calls open_graph and the second one Graphics.open_graph. (* To compile this example: ocamlc graphics.cma grtest1.ml -o grtest1 *) open Graphics;; open_graph " 640x480";; for i = 12 downto 1 do let radius = i * 20 in set_color (if i mod 2 = 0 then red else yellow); fill_circle 320 240 radius done;; read_line ();; (* To compile this example: ocamlc graphics.cma grtest2.ml -o grtest2 *) Random.self_init ();; Graphics ();; Both of these examples make use of some features we haven't talked about yet: imperative-style for-loops, if-then-else and recursion. We'll talk about those later. Nevertheless you should look at these programs and try and find out (1) how they work, and (2) how type inference is helping you to eliminate bugs. The Pervasives module There's one module that you never need to " open". That is the Pervasives module (go and read /usr/lib/ocaml/pervasives.mli now). All of the symbols from the Pervasives module are automatically imported into every OCaml program. Renaming modules What happens if you want to use symbols in the Graphics module, but you don't want to import all of them and you can't be bothered to type Graphics each time? Just rename it using this trick: module Gr = Graphics;; Gr.open_graph " 640x480";; Gr.fill_circle 320 240 240;; read_line ();; Actually this is really useful when you want to import a nested module (modules can be nested inside one another), but you don't want to type out the full path to the nested module name each time. Using and omitting ;; and ; Now we're going to look at a very important issue. When should you use ;;, when should you use ;, and when should you use none of these at all? This is a tricky issue until you "get it", and it taxed the author for a long time while he was learning OCaml too. Rule #1 is that you should use ;; to separate statements at the top-level of your code, and never within function definitions or any other kind of statement. Have a look at a section from the second graphics example above: Random.self_init ();; Graphics.open_graph " 640x480";; let rec iterate r x_init i = if i = 1 then x_init else let x = iterate r x_init (i-1) in r *. x *. (1.0 -. x);; We have two top-level statements and a function definition (of a function called iterate). Each one is followed by ;;. Rule #2 is that sometimes you can elide the ;;. As a beginner you shouldn't worry about this — you should always put in the ;; as directed by Rule #1. But since you'll also be reading a lot of other peoples' code you'll need to know that sometimes we can elide ;;. The particular places where this is allowed are: - Before the keyword let. - Before the keyword open. - Before the keyword type. - At the very end of the file. - A few other (very rare) places where OCaml can "guess" that the next thing is the start of a new statement and not the continuation of the current statement. Here is some working code with ;; elided wherever possible: open Random (* ;; *) open Graphics;; self_init ();; () (* ;; *) Rules #3 and #4 refer to the single ;. This is completely different from ;;. The single semicolon ; is what is known as a sequence point, which is to say it has exactly the same purpose as the single semicolon in C, C++, Java and Perl. It means "do the stuff before this point first, then do the stuff after this point when the first stuff has completed". Bet you didn't know that. Rule #3 is: Consider let ... in as a statement, and never put a single ; after it. Rule #4 is: For all other statements within a block of code, follow them with a single ;, except for the very last one. The inner for-loop in our example above is a good demonstration. Notice that we never use any single ; in this code: for i = 0 to 39 do let x_init = Random.float 1.0 in let x_final = iterate r x_init 500 in let y = int_of_float (x_final *. 480.) in Graphics.plot x y done The only place in the above code where might think about putting in a ; is after the Graphics.plot x y, but because this is the last statement in the block, Rule #4 tells us not to put one there. Note about ";" Brian Hurt writes to correct me on ";" The ;is an operator, just like +is. Well, not quite just like +is, but conceptually the same. +has type int -> int -> int- it takes two ints and returns an int (the sum). ;has type unit -> 'b -> 'b- it takes two values and simply returns the second one. Rather like C's ,(comma) operator. You can write a ; b ; c ; djust as easily as you can write a + b + c + d. This is one of those "mental leaps" which is never spelled out very well - in OCaml, nearly everything is an expression. if/then/elseis an expression. a ; bis an expression. match foo with ...is an expression. The following code is perfectly legal (and all do the same thing): # let f x b y = if b then x+y else x+0 let f x b y = x + (if b then y else 0) let f x b y = x + (match b with true -> y | false -> 0) let f x b y = x + (let g z = function true -> z | false -> 0 in g y b) let f x b y = x + (let _ = y + 3 in (); if b then y else 0);; val f : int -> bool -> int -> int = <fun> Note especially the last one - I'm using ;as an operator to "join" two statements. All functions in OCaml can be expressed as: let name [parameters] = expression OCaml's definition of what is an expression is just a little wider than C's. In fact, C has the concept of "statements"- but all of C's statements are just expressions in OCaml (combined with the ;operator). The one place that ;is different from +is that I can refer to +just like a function. For instance, I can define a sum_listfunction, to sum a list of ints, like: # let sum_list = List.fold_left ( + ) 0;; val sum_list : int list -> int = <fun> Putting it all together: some real code In this section we're going to show some real code fragments from the lablgtk 1.2 library. (Lablgtk is the OCaml interface to the native Unix Gtk widget library). A word of warning: these fragments contain a lot of ideas which we haven't discussed yet. Don't look at the details, look instead at the overall shape of the code, where the authors used ;;, where they used ; and where they used open, how they indented the code, how they used local and global named expressions. ... However, I'll give you some clues so you don't get totally lost! ?fooand ~foois OCaml's way of doing optional and named arguments to functions. There is no real parallel to this in C-derived languages, but Perl, Python and Smalltalk all have this concept that you can name the arguments in a function call, omit some of them, and supply the others in any order you like. foo#baris a method invocation (calling a method called baron an object called foo). It's similar to foo->baror foo.baror $foo->barin C++, Java or Perl respectively. First snippet: The programmer opens a couple of standard libraries (eliding the ;; because the next keyword is open and let respectively). He then creates a function called file_dialog. Inside this function he defines a named expression called sel using a two-line let sel = ... in statement. Then he calls several methods on sel. (* First snippet *) open StdLabels open GMain let file_dialog ~title ~callback ?filename () = let sel = GWindow.file_selection ~title ~modal:true ?filename () in sel#cancel_button#connect#clicked ~callback:sel#destroy; sel#ok_button#connect#clicked ~callback:do_ok; sel#show () Second snippet: Just a long list of global names at the top level. Notice that the author elided every single one of the ;; because of Rule #2. (* Second snippet *) let window = GWindow.window ~width:500 ~height:300 ~title:"editor" () let vbox = GPack.vbox ~packing:window#add () let menubar = GMenu.menu_bar ~packing:vbox#pack () let factory = new GMenu.factory menubar let accel_group = factory#accel_group let file_menu = factory#add_submenu "File" let edit_menu = factory#add_submenu "Edit" let hbox = GPack.hbox ~packing:vbox#add () let editor = new editor ~packing:hbox#add () let scrollbar = GRange.scrollbar `VERTICAL ~packing:hbox#pack () Third snippet: The author imports all the symbols from the GdkKeysyms module. Now we have an unusual let-binding. let _ = expression means "calculate the value of the expression (with all the side-effects that may entail), but throw away the result". In this case, "calculate the value of the expression" means to run Main.main () which is Gtk's main loop, which has the side-effect of popping the window onto the screen and running the whole application. The "result" of Main.main () is insignificant - probably a unit return value, but I haven't checked - and it doesn't get returned until the application finally exits. Notice in this snippet how we have a long series of essentially procedural commands. This is really a classic imperative program. (* Third snippet *) open GdkKeysyms let () = window#connect#destroy ~callback:Main.quit; let factory = new GMenu.factory file_menu ~accel_group in factory#add_item "Open..." ~key:_O ~callback:editor#open_file; factory#add_item "Save" ~key:_S ~callback:editor#save_file; factory#add_item "Save as..." ~callback:editor#save_dialog; factory#add_separator (); factory#add_item "Quit" ~key:_Q ~callback:window#destroy; let factory = new GMenu.factory edit_menu ~accel_group in factory#add_item "Copy" ~key:_C ~callback:editor#text#copy_clipboard; factory#add_item "Cut" ~key:_X ~callback:editor#text#cut_clipboard; factory#add_item "Paste" ~key:_V ~callback:editor#text#paste_clipboard; factory#add_separator (); factory#add_check_item "Word wrap" ~active:false ~callback:editor#text#set_word_wrap; factory#add_check_item "Read only" ~active:false ~callback:(fun b -> editor#text#set_editable (not b)); window#add_accel_group accel_group; editor#text#event#connect#button_press ~callback:(fun ev -> let button = GdkEvent.Button.button ev in if button = 3 then begin file_menu#popup ~button ~time:(GdkEvent.Button.time ev); true end else false); editor#text#set_vadjustment scrollbar#adjustment; window#show (); Main.main ()
https://ocaml.org/learn/tutorials/structure_of_ocaml_programs.html
CC-MAIN-2017-30
refinedweb
3,143
64.61
This is an excerpt from the Scala Cookbook (partially modified for the internet). This is Recipe 4.6, “How to override default accessors and mutators in Scala classes.” Problem You want to override the getter or setter methods that Scala generates for you. Solution This is a bit of a trick problem, because you can’t override the getter and setter methods Scala generates for you, at least not if you want to stick with the Scala naming conventions. For instance, if you have a class named Person with a constructor parameter named name, and attempt to create getter and setter methods according to the Scala conventions, your code won’t compile: // error: this won't work class Person(private var name: String) { // this line essentially creates a circular reference def name = name def name_=(aName: String) { name = aName } } Attempting to compile this code generates three errors: Person.scala:3: error: overloaded method name needs result type def name = name ^ Person.scala:4: error: ambiguous reference to overloaded definition, both method name_= in class Person of type (aName: String)Unit and method name_= in class Person of type (x$1: String)Unit match argument types (String) def name_=(aName: String) { name = aName } ^ Person.scala:4: error: method name_= is defined twice def name_=(aName: String) { name = aName } ^ three errors found I’ll examine these problems more in the Discussion, but the short answer is that both the constructor parameter and the getter method are named name, and Scala won’t allow that. To solve this problem, change the name of the field you use in the class constructor so it won’t collide with the name of the getter method you want to use. A common approach is to add a leading underscore to the parameter name, so if you want to manually create a getter method called name, use the parameter name _name in the constructor, then declare your getter and setter methods according to the Scala conventions: class Person(private var _name: String) { def name = _name // accessor def name_=(aName: String) { _name = aName } // mutator } Notice the constructor parameter is declared private and var. The private keyword keeps Scala from exposing that field to other classes, and the var lets the value of the field be changed. Creating a getter method named name and a setter method named name_= conforms to the Scala convention and lets a consumer of your class write code like this: val p = new Person("Jonathan") p.name = "Jony" // setter println(p.name) // getter If you don’t want to follow this Scala naming convention for getters and setters, you can use any other approach you want. For instance, you can name your methods getName and setName, following the JavaBean style. (However, if JavaBeans are what you really want, you may be better off using the @BeanProperty annotation, as described in Recipe 17.6, “When Java Code Requires JavaBeans”.) Discussion When you define a constructor parameter to be a var field, Scala makes the field private to the class and automatically generates getter and setter methods that other classes can use to access the field. For instance, given a simple class like this: class Stock (var symbol: String) after the class is compiled with scalac, you’ll see this signature when you disassemble it with javap: $ javap Stock public class Stock extends java.lang.Object{ public java.lang.String symbol(); public void symbol_$eq(java.lang.String); public Stock(java.lang.String); } You can see that the Scala compiler generated two methods: a getter named symbol and a setter named symbol_$eq. This second method is the same as a method you’d name symbol_=, but Scala needs to translate the = symbol to $eq to work with the JVM. That second method name is a little unusual, but it follows a Scala convention, and when it’s mixed with some syntactic sugar, it lets you set the symbol field on a Stock instance like this: stock.symbol = "GOOG" The way this works is that behind the scenes, Scala converts that line of code into this line of code: stock.symbol_$eq("GOOG") You generally never have to think about this, unless you want to override the mutator method. Summary As shown in the Solution, the recipe for overriding default getter and setter methods is: - Create a private varconstructor parameter with a name you want to reference from within your class. In the example in the Solution, the field is named _name. - Define getter and setter names that you want other classes to use. In the Solution the getter name is name, and the setter name is name_=(which, combined with Scala’s syntactic sugar, lets users write p.name = "Jony"). - Modify the body of the getter and setter methods as desired. It’s important to remember the private setting on your field. If you forget to control the access with private (or private[this]), you’ll end up with getter/setter methods for the field you meant to hide. For example, in the following code, I intentionally left the private modifier off of the _symbol constructor parameter: // intentionally left the 'private' modifier off _symbol class Stock (var _symbol: String) { // getter def symbol = _symbol // setter def symbol_= (s: String) { this.symbol = s println(s"symbol was updated, new value is $symbol") } } Compiling and disassembling this code shows the following class signature, including two methods I “accidentally” made visible: public class Stock extends java.lang.Object{ public java.lang.String _symbol(); // error public void _symbol_$eq(java.lang.String); // error public java.lang.String symbol(); public void symbol_$eq(java.lang.String); public Stock(java.lang.String); } Correctly adding private to the _symbol field results in the correct signature in the disassembled code: public class Stock extends java.lang.Object{ public java.lang.String symbol(); // println(stock.symbol) public void symbol_$eq(java.lang.String); // stock.symbol = "AAPL" public Stock(java.lang.String); } Note that while these examples used fields in a class constructor, the same principles hold true for fields defined inside a class. The Scala Cookbook This tutorial is sponsored by the Scala Cookbook, which I wrote for O’Reilly: You can find the Scala Cookbook at these locations: Add new comment
https://alvinalexander.com/scala/how-to-override-default-accessors-mutators-scala-classes
CC-MAIN-2017-39
refinedweb
1,037
60.35
mode, it is configure in the configuration file. Whats the problem? We came to the question that suppose we have provided a WP7 application to one of our client. And that application consumes some WCF service. Suppose the service endpoint address is changed, now how can the client configure the endpoint address on his mobile/device. Every mobile has their persistence storage media. And each application gets the quota where you can store some configuration settings or data to utilize where applicable. In WP7 there is a class called IsolatedStorage through which developer can get an access to the Application store. Below i have shown a way to use the Isolated Storage and consume WCF service from WP7 mobile application. Requirements: WP7 SDK, Visual Studio 2010. Steps: 1. Open Visual Studio and create a new project of “Windows Phone Application” 2. Add another WCF Service Application Project. 3. Open the WP7 project and in the MainPage.xaml form add two textboxes for defining Server Name and Port, add a button to get the data from WCF service and the textbox to display the Service result. 4. Open the Service Application project and define operation contract named GetData as shown below. namespace TestService { // NOTE: You can use the "Rename" command on the "Refactor" menu to change the interface name "IService1" in both code and config file together. [ServiceContract] public interface ITestService { [OperationContract] string GetData(); } } namespace TestService { // NOTE: You can use the "Rename" command on the "Refactor" menu to change the class name "Service1" in code, svc and config file together. public class WCFTestService : ITestService { public string GetData() { return "Hello world!"; } } } 5. Add reference to this service in WP7 project 6. Now in the Button click event we need to call the service, following snippet shows the way to store the Server name and port in the isolated storage of mobile device and consume the service. private void btn_Click(object sender, RoutedEventArgs e) { String strUrl = "http://" + txtServerName.Text + ":" + txtPort.Text + "/WCFTestService.svc"; IsolatedStorageFile file = IsolatedStorageFile.GetUserStoreForApplication(); if (file.FileExists("Sources.res")) { file.DeleteFile("Sources.res"); } IsolatedStorageFileStream fs = file.CreateFile("Sources.res"); System.IO.StreamWriter writer = new System.IO.StreamWriter(fs); writer.Write(strUrl); writer.Close(); fs.Close(); fs.Dispose(); System.ServiceModel.EndpointAddress epa = new System.ServiceModel.EndpointAddress(strUrl); Services.TestService.TestServiceClient client = new Services.TestService.TestServiceClient("BasicHttpBinding_ITestService", epa); client.GetDataCompleted += new EventHandler(client_GetDataCompleted); client.GetDataAsync(); } void client_GetDataCompleted(object sender, Services.TestService.GetDataCompletedEventArgs e) { textBox3.Text = e.Result; } 7. In the MainPage constructor add following snippet to read the Server Name and Port from the isolated saved earlier. // Constructor public MainPage() { InitializeComponent(); IsolatedStorageFile file = IsolatedStorageFile.GetUserStoreForApplication(); string[] strArr=file.GetFileNames(); if (file.FileExists("Sources.res")) { IsolatedStorageFileStream fs = file.OpenFile("Sources.res", System.IO.FileMode.Open); System.IO.StreamReader reader = new System.IO.StreamReader(fs); String str = reader.ReadLine(); textBox1.Text = str.Substring(str.IndexOf("/") + 2, str.LastIndexOf(":") - 1 - str.IndexOf("/") - 1); textBox2.Text = str.Substring(str.LastIndexOf(":") + 1, 4); reader.Close(); fs.Close(); fs.Dispose(); } } Now run the application, specify the Server Name and Port and then click on the Retrieve Service Data button. in step 2 creating the WCF Service application, don’t you have to define the base address someplace? Like. Actually i didn’t edit the default config file and it was the same containing the complete endpoint address which adds up on referencing the service. We cannot remove the endpoint address otherwise it wont behave properly so just in sake we keep the address as it is and then in the code behind generate a new address and connect to it. Nice post. Thanks. You may want to correct your code in below 2 location 1. client.GetDataAsync(); I think this need an interger as a parameter 2. client.GetDataCompleted += new EventHandler(client_GetDataCompleted); I think this will be client.GetDataCompleted += new EventHandler(client_GetDataCompleted); Sorry my bad. 2nd point will be client.GetDataCompleted += new EventHandler(client_GetDataCompleted); I just want to mention I am just newbie to weblog and honestly loved you’re web-site. Very likely I’m going to bookmark your blog post . You amazingly come with terrific posts. Thank you Respect to author , some good entropy. “Consider that this day ne’er dawns again.” by Alighieri Dante. I liked as much as you’ll receive performed right here. The sketch is attractive, your authored subject matter stylish. nonetheless, you command get got an edginess over that you want be turning in the following. in poor health indisputably come further formerly again as exactly the similar just about very often inside of case you defend this hike. Hi there, simply become alert to your blog via Google, and located that it’s really informative. I’m going to watch out for brussels. I will appreciate if you proceed this in future. Lots of other folks might be benefited out of your writing. Cheers! Real fantastic information can be found on web blog . I appreciate, cause I found just what I was looking for. You have ended my 4 day long hunt! God Bless you man. Have a nice day. Bye Outstanding post, you have pointed out some good points , I too think this s a very excellent website. Ive mentioned how important it really is to comprehend the two sides. of course like your web-site however you have to check the spelling on quite a few of your posts. A number of them are rife with spelling problems and I to find it very bothersome to tell the reality then again I¡¦ll surely come back again. Keep up the wonderful piece of work, I read few articles on this website and I conceive that your website is really interesting and contains sets of excellent information. A unique discussion might be priced at comment. I believe that you should produce more on this specific topic, may well be a taboo topic but frequently people are too few to speak for such topics. To the next. Best wishes You could certainly see your enthusiasm in the paintings you write. The arena hopes for even more passionate writers such as you who aren’t afraid to say how they believe. Always go after your heart. his is the appropriate weblog for anybody who desires to locate out about this topic. You realize so considerably its virtually laborious to argue with you (not that I truly would need…HaHa). You definitely put a brand new spin on a subject thats been written about for years. Excellent stuff, just wonderful! Hi there, just was aware of your blog via Google, and located that it is really informative. I’m gonna be careful for brussels. I will appreciate when you proceed this in future. Numerous other folks will probably be benefited from your writing. Cheers! Thank you pertaining to sharing that exceptional written content on your web site. I ran into it on google. I’m going to check back once more once you publish a lot far more aricles. Definitely, what a great blog and informative posts, I will bookmark your blog.Best Regards! Woh I like your articles , saved to bookmarks ! . rapidly. Your write up is a great example of it. You made some decent points there. I looked on the internet for the issue and found most individuals will go along with with your website. Rattling nice layout and excellent written content, nothing else we want :D. Needed to draft you a bit of word just to give many thanks once again for the pleasing ideas you’ve shown on this page. This is really unbelievably open-handed with people like you giving unhampered precisely what a number of us would’ve advertised for an e book to earn some bucks for their own end, mostly seeing that you might have tried it in the event you decided. The thoughts as well acted like the easy way to know that other individuals have a similar zeal just like my personal own to understand more and more on the subject of this condition. I’m certain there are a lot more enjoyable moments ahead for those who examine your site. Hello Guru, what entice you to publish an article. This article was extremely helpful, especially since I was searching for thoughts on this subject last Thursday. Thanks so much for providing individuals with a very terrific opportunity to discover important secrets from this blog. It can be very terrific and as well , jam-packed with a lot of fun for me and my office peers to search your web site the equivalent of three times in one week to find out the fresh things you have got. Of course, I’m usually happy for the surprising tactics you give. Selected 1 areas in this posting are without a doubt the most effective we’ve ever had. This web site is definitely a walk-through for all of the information you wanted about this and didn’t know who to ask. Glimpse here, and you’ll definitely discover it. I just want to tell you that I am very new to blogging and seriously enjoyed your web page. Likely I’m going to bookmark your site . You amazingly come with impressive stories. Regards for sharing your web site. I simply want to say I am just beginner to blogs and seriously enjoyed you’re website. Probably I’m likely to bookmark your site . You amazingly come with really good writings. Thank you for sharing with us your webpage. Hello there, simply become aware of your blog via Google, and found that it is truly informative. I’m gonna be careful for brussels. I’ll appreciate when you proceed this in future. A lot of other people shall. Enjoyed reading through this, very good stuff, thanks . “Golf isn’t a game, it’s a choice that one makes with one’s life.” by Charles Rosin. I do trust all the concepts you’ve introduced in your post. They are very convincing and can certainly work. Still, the posts are very short for beginners. May you please prolong them a bit from subsequent time? Thank you for the post. This really is the correct weblog for anyone who wants to uncover out about this topic. You realize so considerably its almost hard to argue with you (not that I in fact would want…HaHa). You surely put a new spin on a subject thats been written about for years. Fantastic stuff, just great! I keep listening to the news update speak about receiving boundless online grant applications so I have been looking around for the finest site to get one. Could you advise me please, where could i acquire some? I regard something genuinely special in this website. Pretty great post. I just stumbled upon your blog and wished to say that I have truly enjoyed surfing around your weblog posts. In any case I’ll be subscribing on your feed and I am hoping you write once more very soon! I really like your writing style, superb info, regards for posting :D. “Kennedy cooked the soup that Johnson had to eat.” by Konrad Adenauer. Just wanna comment on few general things, The website style is perfect, the content is rattling great : D. I conceive this web site has got some very superb info for everyone : D. I admire the valuable information you be offering on your articles. I can bookmark your blog and feature my kids test up right here generally. I am slightly certain they’re going to learn lots of new stuff right here than anybody else! Admiring the dedication. I truly prize your piece of work, Wonderful post. Hiya, I am really glad I have found this info. Today bloggers publish only about gossips and web and this is really frustrating. A good blog with exciting content, that is what I need. Thank you for keeping this web site, I’ll be visiting it. Do you do newsletters? Can’t find it. I would like to thnkx for the efforts you’ve put in writing this blog. conceive this internet site contains some really excellent information for everyone. “Few friendships would survive if each one knew what his friend says of him behind his back.” by Blaise Pascal. I got what you mean , thankyou for putting up.Woh I am thankful to find this website through google. “Finley is going over to get a new piece of bat.” by Jerry Coleman. I’m also commenting to make you be aware of what a great encounter my wife’s child obtained reading through yuor web blog. She learned many pieces, which included how it is like to have an ideal helping mindset to get men and women easily master specific multifaceted matters. You undoubtedly did more than our desires. Thanks for providing the insightful, dependable, educational and in addition fun thoughts on this topic to Emily. My husband and i were quite comfortable Jordan could finish up his investigations with the ideas he made out of the web page. It is now and again perplexing just to find yourself giving freely techniques which usually other folks could have been making money from. And we all discover we have got the writer to give thanks to because of that. All of the explanations you have made, the easy site navigation, the friendships your site give support to foster – it’s got everything astonishing, and it’s facilitating our son in addition to us understand the topic is exciting, and that’s extraordinarily vital. Thank you for the whole lot! I want to express thanks to the writer just for bailing me out of such a predicament. Because of searching through the world-wide-web and finding principles which were not beneficial, I assumed my entire life was over. Living without the presence of answers to the issues you’ve resolved by means of your good report is a critical case, and ones that could have negatively affected my entire career if I hadn’t noticed your web page. The understanding and kindness in maneuvering every item was vital. I am not sure what I would’ve done if I had not come across such a subject like this. I am able to at this point look ahead to my future. Thank you so much for the skilled and effective guide. I won’t hesitate to propose your web page to anyone who ought to have counselling on this area. I have been perusing the posts, and I basically agree with what Mary said. This is a good common sense article. Extremely helpful to one who is just locating the resouces about this part. It will certainly aid educate me. Many thanks for taking time to discuss this, I feel strongly about it and like learning additional on this theme. If doable, as you gain expertise, would you mind updating your weblog with excess facts? It can be very beneficial for me. I just want to say I am just newbie to blogging and actually liked your web site. Probably I’m likely to bookmark your site . You surely have impressive well written articles. With thanks for sharing your website. I simply want to mention I’m new to weblog and absolutely loved this web-site. Most likely I’m likely to bookmark your blog post . You surely come with remarkable articles. Kudos for sharing with us your blog site. Very good written story. It will be supportive to anyone who usess it, including yours truly :). Keep doing what you are doing – i will definitely read more posts. You have brought up a very wonderful details , appreciate it for the post. “For visions come not to polluted eyes.” by Mary Howitt.! You can give a link to my website posts but please do not copy paste the content from here. Thanks for appreciating. I was studying some of your blog posts on this website and I conceive this site is real informative! Keep posting. great issues altogether, you just received a new reader. What would you recommend in regards to your post that you simply made some days in the past? Any sure? Hi there! This is my first visit to your blog! We are a team of volunteers and starting a new initiative in a community in the same niche. Your blog provided us useful information to work on. You have done a outstanding job! There is definately a lot to learn about this issue. I like all of the points you’ve made. You developed some decent points there. I looked more than the internet for any difficulty and discovered most individuals goes as effectively as together with your web web site. There exists apparently a bundle to realize about this. I suppose you designed some wonderful factors in features also. Good write-up, I am regular visitor of one¡¦s blog, maintain up the excellent operate, and It’s going to be a regular visitor for a long time. I really like blogging when it’s for or about something I believe in. I also read news blogs often and find that it makes me feel much much more intelligent every time I read them. I also feel like I’m a pretty excellent person who tries to treat other people with respect, no matter what their view is. You will find some real haters out there. Thanks once again. I learned several things about blogging. I will certainly put your web site on my speed dial. Some genuinely great content on this internet site , appreciate it for contribution. I lately ran into your blog and get been studying along. I assumed I’d leave my primary remark. I don???ê?èt understand what to say but I’ve relished reading. Pleasant blog. I will maintain visiting this site extremely typically. I keep listening to the reports lecture about receiving boundless online grant applications so I have been looking around for the top site to get one. Could you advise me please, where could i acquire some? As soon as I discovered this website I went on reddit to share some of the love with them. Thank you for another informative blog. The place else may I get that kind of info written in such a perfect method? I’ve a mission that I am simply now running on, and I have been on the look out for such info. I’ve been browsing online more than 3 hours today, yet I never found any interesting article like yours. It is pretty worth enough for me. Personally, if all site owners and bloggers made good content as you did, the internet will be much more useful than ever before. Its such as you read my mind! You appear to know so much about this, such as you wrote the e-book in it or something. I feel that you simply could do with a few percent to power the message home a little bit, however other than that, this is excellent blog. An excellent read. I will definitely be back. I gotta bookmark this site it seems invaluable extremely helpful Just wanna remark on few general things, The website style is perfect, the subject matter is real wonderful : D. It’s actually a great and helpful piece of info. I am happy that you just shared this helpful information with us. Please stay us informed like this. Thanks for sharing. Rattling nice design and great content material , practically nothing else we want : D. This web site is really a walk-through for all of the info you wanted about this and didn’t know who to ask. Glimpse here, and you’ll definitely discover it. You have observed very interesting details ! ps decent site. “Mediocrity knows nothing higher than itself, but talent instantly recognizes genius.” by Conan Do. In many cases, good work.i ‘ Please specify the return value LL. Posts. Awsome I love your resources. Thank you for the auspicious writeup. It in fact was a amusement account it. Look advanced to far added agreeable from you! By the way, how can we communicate? I truly appreciate this post. I have been looking everywhere for this! Thank goodness I found it on Bing. You have made my day! Thx again! Wonderful site. Lots of helpful information here. I’m sending it to some friends ans additionally sharing in delicious. And of course, thanks on your sweat! Wow, amazing blog format! How long have you been blogging for? you made blogging look easy. The full glance of your site is fantastic, let alone the content material! I think that is one of the most important information for me. And i am happy reading your article. However should commentary on some basic issues, The site taste is great, the articles is in point of fact nice :D. Excellent process, cheers. Very superb information can be found on web site. Excellent website. Plenty of useful info here. I am sending it to some buddies ans additionally sharing in delicious. And naturally, thanks on your effort! I see something really interesting about your blog so I saved to fav. Rattling nice style and excellent written content , nothing else we require : D. I genuinely enjoy examining on this website, it has excellent content. “The living is a species of the dead and not a very attractive one.” by Friedrich Wilhelm Nietzsche. I got what you mean , thanks for putting up.Woh I am thankful to find this website through google. “No one can earn a million dollars honestly.” by William Jennings Bryan. I am no longer sure where you are getting your information, however good topic. I must spend a while finding out more or figuring out more. Thank you for great info I used to be in search of this information for my mission. Admiring the commitment just want to tell you that I’m all new to blogging and site-building and seriously savored this web page. Likely I’m likely to bookmark your blog post . You really have wonderful article content. Bless you for sharing your web-site. Fantastic goods from you, man. I have understand your stuff previous to and you are just too wonderful. website. I just want to mention I am just new to blogs and certainly savored your blog site. Likely I’m want to bookmark your blog post . You amazingly have amazing articles and reviews. Thanks a lot for revealing your web page. I simply want to mention I’m very new to blogging and site-building and seriously loved you’re page. Almost certainly I’m planning to bookmark your website . You actually come with incredible well written articles. Cheers for revealing your web page. I simply want to mention I’m newbie to blogs and actually loved your page. More than likely I’m likely to bookmark your blog post . You definitely come with remarkable stories. Appreciate it for revealing your web. Normally I do not learn post on blogs, but I would like to say that this write-up very compelled me to check out and do it! Your writing taste has been amazed me. Thanks, quite nice article. I just want to say I’m beginner to blogging and site-building and truly liked your blog site. Probably I’m going to bookmark your website . You actually come with wonderful articles. With thanks for sharing your website. I truly enjoy examining on this site, it has got wonderful posts. “Never fight an inanimate object.” by P. J. O’Rourke. I wanted to compose a small message to say thanks to you for all the stunning solutions you are posting at this site. My time consuming internet lookup has at the end been paid with reasonable information to talk about with my partners. I ‘d state that that we readers actually are unquestionably endowed to be in a wonderful site with very many awesome individuals with helpful plans. I feel extremely lucky to have used your entire weblog and look forward to some more cool times reading here. Thanks a lot once more for everything. I have recently started a site, the information you offer on this website has helped me greatly. Thank you for all of your time & work. As soon as I detected this internet site I went on reddit to share some of the love with them. Excellent site. A lot of useful information here. I’m sending it to some pals ans additionally sharing in delicious. And of course, thank you on your effort! I like this weblog very much so much great information. Your web page is taking forever to load. This is the second page I tried loading and it took pretty much 30 seconds to load the page. And there is hardly something on it! I know it’s not my net connection speed for the reason that anything else that I attempt works fine. Maybe your server is beneath attack or something? Or do you just have THAT lots of guests at your web site ideal now?? Just believed I’d give you a heads up! Thanks for all your efforts that you have put in this. Very interesting info. “As a rule, what is out of sight disturbs men’s minds more seriously than what they see.” by Richard Bach. Hello my family member! I wish to say that this post is amazing, great written and include approximately all vital infos. I would like to peer more posts like this. I conceive this website contains some rattling excellent info for everyone :D. “I like work it fascinates me. I can sit and look at it for hours.” by Jerome K. Jerome. Great post, I believe website owners should acquire a lot from this web site its real user pleasant. So much fantastic info on here :D. I was reading through some of your blog posts on this website and I conceive this web site is rattling instructive! Continue posting. You completed certain good points there. I did a search on the theme and found most persons will have the same opinion with your blog. Thanks for this post, I am a big fan of this site would like to go along updated. I love the efforts you have put in this, thanks for all the great articles. Nice read, I just passed this onto a colleague who was doing a little research on that. And he just bought me lunch as I found it for him smile Therefore let me rephrase that: Thank you for lunch! “England and America are two countries separated by the same language.” by George Bernard Shaw. Generally I don’t learn article on blogs, however I would like to say that this write-up very forced me to try and do so! Your writing style has been surprised me. Thanks, very nice post. I was reading some of your blog posts on this internet site and I conceive this web site is rattling informative! Continue posting. There is noticeably a bundle to know about this. I assume you made certain nice points in features also. Hello, you used to write great, but the last several posts have been kinda boring… I miss your tremendous writings. Past few posts are just a little out of track! come on! I simply needed to say thanks once again. I do not know what I would’ve used in the absence of the entire aspects contributed by you over such a theme. It actually was the daunting dilemma in my circumstances, however , being able to view your professional technique you handled it forced me to weep over fulfillment. I will be happy for this information and have high hopes you comprehend what a great job you happen to be putting in instructing many people through your blog. Most likely you have never encountered all of us. I think this site contains very superb composed subject matter blog posts. Great write-up, I’m normal visitor of one’s website, maintain up the nice operate, and It is going to be a regular visitor for a lengthy time. Hi there to every body, it’s my first pay a visit of this website; this webpage includes remarkable and really excellent material designed for visitors. Hi my friend! I wish to say that this article is amazing, nice written and come with almost all significant infos. I’d like to look more posts like this. Awesome post once again! Thank you. I believe you have mentioned some very interesting details , thankyou for the post. Magnificent beat ! I would like to apprentice even as you amend your site, how can i subscribe for a weblog site? The account helped me a acceptable deal. I had been a little bit familiar of this your broadcast offered shiny clear idea. You made some respectable points there. I appeared on the internet for the problem and found most people will associate with together with your website. Merely wanna say that this is very beneficial , Thanks for taking your time to write this. “Some things have to be believed to be seen.” by Ralph Hodgson. Many thanks for being my personal teacher on this subject. My partner and i enjoyed your article a lot and most of all liked how you really handled the aspect I regarded as being controversial. You happen to be always really kind towards readers like me and aid me in my everyday living. Thank you. I’ve recently started a site, the info you provide on this site has helped me tremendously. Thank you for all of your time & work. Excellent information it is really. We have been waiting for this tips. I was looking through some of your blog posts on this internet site and I conceive this site is really informative! Keep putting up. I have been examinating out many of your posts and it’s clever stuff. I will definitely bookmark your blog. I always was interested in this subject and stock still am, appreciate it for posting. Thanks a bunch for sharing this with all people you really realize what you are speaking about! Bookmarked. Please also discuss with my site =). We can have a link exchange contract among us! Way cool! Some extremely valid points! I appreciate you writing this post and the rest of the website is extremely good. I like the efforts you have put in this, thank you for all the great content. Hiya, I’m really glad I’ve found this info. Nowadays bloggers publish only about gossips and web and this is really frustrating. A good blog with exciting content, that is what I need. Thanks for keeping this web-site, I will be visiting it. Do you do newsletters? Cant find it. I am only writing to let you be aware of of the fine experience my friend’s daughter went through browsing your web site. She learned so many things, most notably what it is like to have a great teaching heart to get many people clearly gain knowledge of various tortuous subject matter. You actually surpassed our expected results. Many thanks for producing these important, trusted, informative and even easy guidance on your topic to Julie. I must express my gratitude for your generosity for individuals who should have guidance on this one topic. Your real commitment to passing the solution across appeared to be astonishingly valuable and have always made others much like me to attain their pursuits. Your new warm and friendly tips and hints means much to me and especially to my mates. Many thanks; from each one of us. Great content Really I liked the content. I am thinking how I might be informed whenever a fresh post has been made. I have subscribed to your rss feed thanks! I wish to show my thanks to this writer just for rescuing me from this type of predicament. After looking through the world wide web and obtaining methods which were not helpful, I thought my entire life was over. Existing without the presence of strategies to the difficulties you have solved by means of your entire short post is a critical case, as well as the ones which may have adversely affected my career if I hadn’t encountered your blog post. Your actual knowledge and kindness in handling all areas was excellent. I am not sure what I would’ve done if I had not come across such a thing like this. I can now relish my future. Thanks for your time so much for this skilled and results-oriented help. I won’t be reluctant to suggest your web site to any individual who wants and needs recommendations about this area. I wanted to send you that tiny word to give thanks over again just for the gorgeous things you’ve discussed above. It is so shockingly open-handed with you to make freely just what some people could possibly have advertised as an e-book in making some profit for themselves, specifically since you might well have tried it in case you decided. These advice likewise served like the fantastic way to understand that other individuals have the same zeal the same as my own to realize somewhat more in regard to this matter. I am sure there are lots of more pleasurable situations in the future for many who look into your site. You have mentioned very interesting points! ps nice internet site. I truly enjoy reading on this site, it has wonderful posts . “When a man’s willing and eager, the gods join in.” by Aeschyl! Really superb information can be found on web blog . Thank you, I’ve recently been looking for info approximately this subject for a long time and yours is the greatest I’ve discovered till now. However, what about the bottom line? Are you certain about the supply? Last time I was right here you had a various writing style, are you utilizing content writers or do you write this all oneself? I was asking yourself simply because Ive been flipping by means of some of your other posts and it appears that its not all written by the very same person, judging by the writing style. Perhaps I’m wrong, but I am just curious to know. Hehe no am the only person author on this blog. Anyhow writing style changes time to time depending on the type of topic and sometimes the mood as well… Very interesting topic, thank you for putting up. Wow! This could be one particular of the most helpful blogs We have ever arrive across on this subject. Basically Excellent. I’m also an expert in this topic therefore I can understand your effort.. I am glad that I have observed this blog. Ultimately anything not a crap, which we understand quite usually. The web site is lovingly maintained and up to date. So it really should be, thank you for this welcome transform. I would like to show my appreciation to this writer just for rescuing me from this particular challenge. Because of checking throughout the world-wide-web and coming across tips which are not helpful, I figured my entire life was done. Living devoid of the strategies to the issues you’ve resolved by way of your short post is a crucial case, and the ones that could have adversely affected my career if I hadn’t encountered the website. That knowledge and kindness in taking care of all areas was invaluable. I’m not sure what I would’ve done if I hadn’t encountered such a step like this. I am able to now look ahead to my future. Thank you so much for your professional and effective guide. I won’t be reluctant to propose your web blog to any person who needs and wants assistance about this problem.. Just desire to say your article is as astounding. The clarity in your submit is just spectacular and that i can suppose you are knowledgeable in this subject. Well together with your permission allow me to snatch your RSS feed to keep updated with coming near near post. Thanks one million and please continue the enjoyable work. I just want to say I am all new to blogging and certainly loved you’re web blog. Likely I’m likely to bookmark your website . You actually come with outstanding posts. Thanks a bunch for sharing with us your webpage. Hi my loved one! I want to say that this article is amazing, nice written and include almost all significant infos. I’d like to look more posts like this . hello!,I like your writing so much! proportion we be in contact more about your post on AOL? I need a specialist on this space to resolve my problem. Maybe that is you! Taking a look ahead to look you. The moment I visited this blog I deepened my knowledge I just added this feed to my bookmarks. I have to say, I very much enjoy reading your blogs. Keep it up! You have brought up a very great details , regards for the post. Hi there, You’ve performed an incredible job. I will certainly digg it and personally recommend to my friends. I’m sure they will be benefited from this website. I needs to take the time learning a lot more or understanding more. Thank you for great material I had been looking for this information for my job. Real great information can be found on site . I conceive you have mentioned some very interesting points , appreciate it for the post. as soon as i saw this post i shared it on my facebook and a few other social media sites. thank you! Simply wanna comment that you have a very decent website , I the style it really stands out. Appreciate it for helping out, wonderful info . I feel that is among the most significant information for me. And i am satisfied reading your article. But wanna commentary on some general things, The site taste is wonderful, the articles is in reality nice :D. Good activity, cheers. I conceive you have mentioned some very interesting details , thankyou for the post. Ahaa, its good conversation on the topic of this article at this place at this web site, I have read all that, so at this time me also commenting at this place. I was looking through some of your content on this internet site and I believe this site is rattling informative ! Keep putting up. I just want to say I am all new to blogging and certainly loved you’re web blog. Likely I’m likely to bookmark your website . You actually come with outstanding posts. Thanks a bunch for sharing with us your webpage.! Merely to follow up on the update of this theme on your web page and would wish to let you know just how much I liked the time you took to create this valuable post. Inside the post, you really spoke of how to actually handle this problem with all ease. It would be my own pleasure to get some more ideas from your website and come up to offer other individuals what I learned from you. I appreciate your usual terrific effort. Hello Keep up the fantastic piece of work, I read few articles on this internet site and I think that your blog is real interesting and contains lots of great information. Very interesting details you have remarked, thankyou for posting . “Brass bands are all very well in their place – outdoors and several miles away.” by Sir Thomas Beecham. you’re in point of fact a good webmaster. The site loading velocity is incredible. It seems that you’re doing any unique trick. Moreover, The contents are masterwork. you’ve performed a wonderful activity in this matter! I’m having a small issue. I’m unable to subscribe to your rss feed for some reason. I’m using google reader by the way. great example of it. I dugg some of you post as I thought they were very useful extremely helpful Together with every thing that seems to be developing inside this particular subject material, many of your opinions happen to be very stimulating. Nevertheless, I appologize, but I do not give credence to your whole suggestion, all be it exciting none the less. It appears to me that your remarks are actually not totally justified and in actuality you are yourself not really fully convinced of the argument. In any case I did appreciate examining it. I consider something really interesting about your website so I saved to my bookmarks . Very interesting topic, regards for posting. I think this internet site holds very excellent written content blog posts. I think this web site has got very great pent subject matter content. I just want to say I am all new to blogging and certainly loved you’re web blog. Likely I’m likely to bookmark your website . You actually come with outstanding posts. Thanks a bunch for sharing with us your webpage. Spot on with this write-up, I truly think this website needs much more consideration. I’ll probably be again to read much more, thanks for that info. I just added this feed to my bookmarks. I have to say, I very much enjoy reading your blogs. Keep it up! Nice weblog right here! Additionally your website a lot up very fast! What web host are you the usage of? Can I am getting your affiliate hyperlink to your host? I wish my site loaded up as fast as yours lol Spot on with this write-up, I actually suppose this website needs far more consideration. I’ll most likely be once more to learn rather more, thanks for that info. It’s laborious to seek out educated individuals on this matter, however you sound like you realize what you’re talking about! Thanks WONDERFUL Post.thanks for share..extra wait .. … I am not real excellent with English but I line up this really easy to interpret . This is the correct weblog for anyone who wants to find out about this topic. You understand a lot its nearly onerous to argue with you (not that I really would want…HaHa). You undoubtedly put a brand new spin on a subject thats been written about for years. Great stuff, just nice! I agree with most of your points, but a few need to be discussed further, I will hold a small talk with my partners and maybe I will look for you some suggestion soon. Thanks to the information, I rarely like it. keep up the fantastic work , I read few posts on this internet site and I conceive that your web site is very interesting and holds bands of good info ...
https://ovaismehboob.com/2011/06/07/consuming-wcf-service-in-wp7/?replytocom=3112
CC-MAIN-2020-05
refinedweb
7,155
67.35
Adding SVGs to PDFs With Python and ReportLab Adding SVGs to PDFs With Python and ReportLab Adding graphics into PDFs programmatically has never been easier! Read on to check out how to do it using Python in this post! Join the DZone community and get the full member experience.Join For Free. You can use svglib to read your existing SVG files and convert them into ReportLab Drawing objects. The svglib package also has a command-line tool, svg2pdf, that can convert SVG files to PDFs. Dependencies The svglib package depends on ReportLab and lxml. You can install both of these packages using pip: pip install reportlab lxml Installation The svglib package can be installed using one of three methods. Install the Latest Release If you’d like to install the latest release from the Python Packaging Index, then you can just use pip the normal way: pip install svglib Install From Latest Version From Source Control On the off chance that you want to use the latest version of the code (i.e. the bleeding edge/alpha builds), then you can install directly from GitHub using pip like this: pip install git+ Manual Installation Most of the time, using pip is the way to go. But you can also download the tarball from the Python Packaging Index and do all the steps that pip does for you automatically if you want to. Just run the following three commands in your terminal in order: tar xfz svglib-0.8.1.tar.gz cd svglib-0.8.1 python setup.py install Now that we have svglib installed, let’s learn how to use it! Usage Using svglib with ReportLab is actually quite easy. All you need to do is import svg2rlg from svglib.svglib and give it the path to your SVG file. Let’s take a look: # svg_demo.py from reportlab.graphics import renderPDF, renderPM from svglib.svglib import svg2rlg def svg_demo(image_path, output_path): drawing = svg2rlg(image_path) renderPDF.drawToFile(drawing, output_path) renderPM.drawToFile(drawing, 'svg_demo.png', 'PNG') if __name__ == '__main__': svg_demo('snakehead.svg', 'svg_demo.pdf') After giving svg2rlg your path to the SVG file, it will return a drawing object. Then you can use this object to write it out as a PDF or a PNG. You could go on to use this script to create your own personal SVG to PNG converting utility! Drawing on the Canvas Personally, I don’t like to create one-off PDFs with just an image in them like in the previous example. Instead, I want to be able to insert the image and write out text and other things. Fortunately, you can do this very easily by painting your canvas with the drawing object. Here’s an example: # svg_on_canvas.py from reportlab.graphics import renderPDF from reportlab.pdfgen import canvas from svglib.svglib import svg2rlg def add_image(image_path): my_canvas = canvas.Canvas('svg_on_canvas.pdf') drawing = svg2rlg(image_path) renderPDF.draw(drawing, my_canvas, 0, 40) my_canvas.drawString(50, 30, 'My SVG Image') my_canvas.save() if __name__ == '__main__': image_path = 'snakehead.svg' add_image(image_path) Here we create a canvas.Canvas object and then create our SVG drawing object. Now you can use renderPDF.draw to draw your drawing on your canvas at a specific x/y coordinate. We go ahead and draw out some small text underneath our image and then save it off. The result should look something like this: Adding an SVG to a Flowable Drawings in ReportLab can usually be added as a list of Flowables and built with a document template. The svglib’s website says that its drawing objects are compatible with ReportLab’s Flowable system. Let’s use a different SVG for this example. We will be using the Flag of Cuba from Wikipedia. The svglib tests download a bunch of flag SVGs in their tests, so we will try one of the images that they use. You can get it here: Once you have the image saved off, we can take a look at the code: # svg_demo2.py import os from reportlab.graphics import renderPDF, renderPM from reportlab.platypus import SimpleDocTemplate from svglib.svglib import svg2rlg def svg_demo(image_path, output_path): drawing = svg2rlg(image_path) doc = SimpleDocTemplate(output_path) story = [] story.append(drawing) doc.build(story) if __name__ == '__main__': svg_demo('Flag_of_Cuba.svg', 'svg_demo2.pdf') This worked pretty well, although the flag is cut off on the right side. Here’s the output: I actually had some trouble with this example. ReportLab or svglib seems to be really picky about the way the SVG is formatted or its size. Depending on the SVG I used, I would end up with an AttributeError or a blank document or I would be successful. So your mileage will probably vary. I will say that I spoke with some of the core developers and they mentioned that **SimpleDocTemplate** doesn’t give you enough control over the frame that the drawing goes into, so you may need to create your own Frame or PageTemplate to make the SVG show up correctly. A workaround to get the snakehead.svg to work was to set the left and right margins to zero: # svg_demo3.py from reportlab.platypus import SimpleDocTemplate from svglib.svglib import svg2rlg def svg_demo(image_path, output_path): drawing = svg2rlg(image_path) doc = SimpleDocTemplate(output_path, rightMargin=0, leftMargin=0) story = [] story.append(drawing) doc.build(story) if __name__ == '__main__': svg_demo('snakehead.svg', 'svg_demo3.pdf') Scaling SVGs in ReportLab The SVG drawings you create with svglib are not scaled by default. So you will need to write a function to do that for you. Let’s take a look: # svg_scaled_on_canvas.py from reportlab.graphics import renderPDF from reportlab.pdfgen import canvas from svglib.svglib import svg2rlg def scale(drawing, scaling_factor): """ Scale a reportlab.graphics.shapes.Drawing() object while maintaining the aspect ratio """ scaling_x = scaling_factor scaling_y = scaling_factor drawing.width = drawing.minWidth() * scaling_x drawing.height = drawing.height * scaling_y drawing.scale(scaling_x, scaling_y) return drawing def add_image(image_path, scaling_factor): my_canvas = canvas.Canvas('svg_scaled_on_canvas.pdf') drawing = svg2rlg(image_path) scaled_drawing = scale(drawing, scaling_factor=scaling_factor) renderPDF.draw(scaled_drawing, my_canvas, 0, 40) my_canvas.drawString(50, 30, 'My SVG Image') my_canvas.save() if __name__ == '__main__': image_path = 'snakehead.svg' add_image(image_path, scaling_factor=0.5) Here we have two functions. The first function will scale our image using a scaling factor. In this case, we use 0.5 as our scaling factor. Then we do some math against our drawing object and tell it to scale itself. Finally, we draw it back out in much the same way as we did in the previous example. Here is the result: Using SVG Plots From matplotlib in ReportLab In a previous article, we learned how to create graphs using just the ReportLab toolkit. One of the most popular 2D graphing packages for Python is matplotlib though. You can read all about matplotlib here:. The reason I am mentioning matplotlib in this article is that it supports SVG as one of its output formats. So we will look at how to take a plot created with matplotlib and insert it into ReportLab. To install matplotlib, the most popular method is to use pip: pip install matplotlib Now that we have matplotlib installed, we can create a simple plot and export it as SVG. Let’s see how this works: import matplotlib.pyplot as pyplot def create_matplotlib_svg(plot_path): pyplot.plot(list(range(5))) pyplot.title = 'matplotlib SVG + ReportLab' pyplot.ylabel = 'Increasing numbers' pyplot.savefig(plot_path, format='svg') if __name__ == '__main__': from svg_demo import svg_demo svg_path = 'matplot.svg' create_matplotlib_svg(svg_path) svg_demo(svg_path, 'matplot.pdf') In this code, we import the pyplot sub-library from matplotlib. Next, we create a simple function that takes the path to where we want to save our plot. For this simple plot, we create a simple range of five numbers for one of the axes. Then we add a title and a y-label. Finally, we save the plot to disk as an SVG. The last step is in the if statement at the bottom of the code. Here we import our svg_demo code from earlier in this article. We create our SVG image and then we run it through our demo code to turn it into a PDF. The result looks like this: Using svg2pdf When you install svglib, you also get a command line tool called svg2pdf. As the name implies, you can use this tool to convert SVG files to PDF files. Let’s look at a couple of examples: svg2pdf /path/to/plot.svg This command just takes the path to the SVG file that you want to turn into a PDF. It will automatically rename the output to the same name as the input file, but with the PDF extension. You can specify the output name though: svg2pdf -o /path/to/output.pdf /path/to/plot.svg The -o flag tells svg2pdf requires that you pass in the output PDF path followed by the input SVG path. The documentation also mentions that you can convert all the SVG files to PDFs using a command like the following: svg2pdf -o "%(base)s.pdf" path/to/file*.svg This will rename the output PDF to the same name as the input SVG file for each SVG file in the specified folder. Wrapping Up The svglib is the primary method to add SVGs to ReportLab at the time of writing this post. While it isn’t full featured, it works pretty well and the API is quite nice. We also learned how to insert a plot SVG created via the popular matplotlib package. Finally, we looked at how to turn SVGs to PDFs using the svg2pdf command line tool. Related Reading - A Simple Step-by-Step Reportlab Tutorial - ReportLab 101: The textobject - ReportLab – How to add Charts and Graphs - Extracting PDF Metadata and Text with Python Published at DZone with permission of Mike Driscoll , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/adding-svgs-to-pdfs-with-python-and-reportlab
CC-MAIN-2019-22
refinedweb
1,660
67.76
Matthew Dillon wrote: > commit 48e7b118aed5eb70d42bdbf2ca5a938ef1f371b6 > Author: Matthew Dillon <dillon@apollo.backplane.com> > Date: Sat Dec 5 11:45:34 2009 -0800 > > network - Move socket from netmsg ext to netmsg header, add port to socket + + /* + * Do implied connect if not yet connected. Any data sent + * with the connect is handled by tcp_connect() and friends. + * + * NOTE! PROTOCOL THREAD MAY BE CHANGED BY THE CONNECT! + */ + if (nam && tp->t_state < TCPS_SYN_SENT) { +#ifdef INET6 + if (isipv6) + error = tcp6_connect(tp, flags, m, nam, td); + else +#endif /* signals to tcp_mss() that it should use the MSS from the cached rmxp_tao entry (TAO == TCP Accelerated Open), if available. So removing that code breaks T/TCP, but a) T/TCP is off by default b) T/TCP has major security issues that our implementation does not even try to address. Given that pretty much nobody uses T/TCP (Linux doesn't even implement it) I'd suggest we remove it altogether. Otherwise, we should revert at least the change above (of course, I would not be surprised if T/TCP has been broken for years and nobody noticed ;) Aggelos
https://www.dragonflybsd.org/mailarchive/commits/2009-12/msg00058.html
CC-MAIN-2018-13
refinedweb
180
58.82
05 March 2010 23:59 [Source: ICIS news] LONDON (ICIS news)--The first-quarter European polymethyl methacrylate (PMMA) contract price has rolled over from the fourth quarter of 2009, as suppliers favoured volume over price, market sources said on Friday. A major producer said that compared with fourth-quarter prices, it managed to get an increase of €0.05-0.08/kg ($0.07-0.10/kg) for some accounts in the first-quarter contract. The producer added that it had been under “huge pressure” since its competitors failed to move their prices up.According to data from global chemical market intelligence service Another large producer also said the PMMA price was unchanged for first-quarter contracts. The producer said this was the result of no aggressive competitive action from sellers, adding that it seemed “some sellers did not have a clear strategy”. In the distribution market, a reseller said it was now faced with higher prices from its suppliers in ?xml:namespace> “Our suppliers are being very tough and are talking about increases of €0.12/tonne for the second quarter,” the reseller said. “I heard that in Despite widespread confirmation that the value of PMMA failed to move much in the first three months of 2010, European producers of methyl methacrylate (MMA), the feedstock for PMMA, were seeking increases of €100-150/tonne from 1 March, or as contracts would allow. “We will go for a €150/tonne increase to restore workable margins as soon as contracts will allow,” said one European MMA producer. On 19 February, Evonik announced that it was targeting a hike of €120/tonne on MMA from 1 March, or as contracts would allow. Another global major said it needed at least €100/tonne for the second quarter, based on successive price increases in upstream acetone and propylene. Propylene has been “very painful”, the global major said, and MMA demand was not strong at the beginning of the year. If things pick up, it added, Evonik's targeted price increase could be on the low side. Nearly all MMA is polymerised to make homopolymers and copolymers, with the largest application being the casting, moulding or extrusion of PMMA or modified polymers. Acetone is the main feedstock for MMA production. ($1 = €0.74) For more on methyl methacrylate
http://www.icis.com/Articles/2010/03/05/9340183/europe-first-quarter-pmma-rolls-over-at-2.60-2.75kg.html
CC-MAIN-2014-52
refinedweb
383
59.74
AccessModifier 13 May 2003. C++ Choice Probably the most influential set of access modifiers started with C++, which has three. - public: Any class can access the features - protected: Any subclass can access the feature - private: No other class can access the feature A class can also give access to another class or method with the friend keyword - hence the comment that in C++ friends can touch each others' private parts. Java Java based itself on C++. It added the notion of package to the language, and this influenced behavior. - public: Any class - (package): (The default, and doesn't use a keyword in the code) Any class in the same package - protected: Any subclass or class in the same package - private: No other class Notice the subtle distinction between Java's protected and C++'s protected (just to keep things confusing.) C# C# also is based on the C++ model - public: Any class - internal: Any class in the same assembly (default for methods and classes, but may be specified) - protected: Any subclass - protected internal: Any subclass or class in the same assembly - private: No other class (default for fields) In C# an assembly is a physical unit of composition - equivalent to a dll, jar, or binary. C# also has logical units (namespaces) that are similar to java packages, but they don't play in access modifiers. Smalltalk Smalltalk is often considered to be the purest OO language, and predates C++, Java, and C#. It didn't use keywords to control access, but used a basic policy. Smalltalkers would say that fields were private and methods were public. However the private fields don't really mean the same as what they mean in C++ based languages. In C++ et al access is thought of as textual scope. Consider an example with a class Programmer which is a subclass of class Person with two instances: Martin and Kent. In C++ since both instances are of the same class then Martin has access to the private features of Kent. In Smalltalk's world view access is based on objects, so since Martin and Kent are different objects Martin has no business getting at Kent's fields. But again, since everything is object based Martin can get at all his fields even if they were declared in the Person class. So data in Smalltalk is closer to protected than private, although the object scope makes things different in any case. Access control does not control access If you have a field that's private it means no other class can get at it. Wrong! If you really want to you can subvert the access control mechanisms in almost any language. Usually the way through is via reflection. The rationale is that debuggers and other system tools often need to see private data, so usually the reflection interfaces allow you to do this. C++ doesn't have this kind of reflection, but there you can just use direct memory manipulation since C++ is fundamentally open memory. The point of access control is not to prevent access, but more to signal that the class prefers to keep some things to itself. Using access modifiers, like so many things in programming, is primarily about communication. Published Methods I've argued that there is really room for another access type: PublishedInterface. I think there is a fundamental difference between features you expose to other classes within your project team and those you expose to other teams (such as in an API). These published features are a subset of public features and have to be treated differently, so much so that I believe that the distinction between published and public is more important than that between public and private..
https://martinfowler.com/bliki/AccessModifier.html
CC-MAIN-2021-04
refinedweb
619
57.91
/* History.h -- the names of functions that you can call in history. */ /* Copyright (C) 1989, 1992 Free Software Foundation, Inc. This file contains the GNU History Library (the Library), a set of routines for managing the text of previously typed lines.ndef _HISTORY_H_ #define _HISTORY_H_ #ifdef __cplusplus extern "C" { #endif #if defined READLINE_LIBRARY # include "rlstdc.h" # include "rltypedefs.h" #else # include <readline/rlstdc.h> # include <readline/rltypedefs.h> #endif #ifdef __STDC__ typedef void *histdata_t; #else typedef char *histdata_t; #endif /* The structure used to store a history entry. */ typedef struct _hist_entry { char *line; histdata_t data; } HIST_ENTRY; /* A structure used to pass the current state of the history stuff around. */ typedef struct _hist_state { HIST_ENTRY **entries; /* Pointer to the entries themselves. */ int offset; /* The location pointer within this array. */ int length; /* Number of elements within this array. */ int size; /* Number of slots allocated to this array. */ int flags; } HISTORY_STATE; /* Flag values for the `flags' member of HISTORY_STATE. */ #define HS_STIFLED 0x01 /* Initialization and state management. */ /* Begin a session in which the history functions might be used. This just initializes the interactive variables. */ extern void using_history PARAMS((void)); /* Return the current HISTORY_STATE of the history. */ extern HISTORY_STATE *history_get_history_state PARAMS((void)); /* Set the state of the current history array to STATE. */ extern void history_set_history_state PARAMS((HISTORY_STATE *)); /* Manage the history list. */ /* Place STRING at the end of the history list. The associated data field (if any) is set to NULL. */ extern void add_history PARAMS((const char *)); /* A reasonably useless function, only here for completeness. WHICH is the magic number that tells us which element to delete. The elements are numbered from 0. */ extern HIST_ENTRY *remove_history PARAMS((int)); /* Make the history entry at WHICH have LINE and DATA. This returns the old entry so you can dispose of the data. In the case of an invalid WHICH, a NULL pointer is returned. */ extern HIST_ENTRY *replace_history_entry PARAMS((int, const char *, histdata_t)); /* Clear the history list and start over. */ extern void clear_history PARAMS((void)); /* Stifle the history list, remembering only MAX number of entries. */ extern void stifle_history PARAMS((int)); /* Stop stifling the history. This returns the previous amount the history was stifled by. The value is positive if the history was stifled, negative if it wasn't. */ extern int unstifle_history PARAMS((void)); /* Return 1 if the history is stifled, 0 if it is not. */ extern int history_is_stifled PARAMS((void)); /* Information about the history list. */ /* Return a NULL terminated array of HIST_ENTRY which is the current input history. Element 0 of this list is the beginning of time. If there is no history, return NULL. */ extern HIST_ENTRY **history_list PARAMS((void)); /* Returns the number which says what history element we are now looking at. */ extern int where_history PARAMS((void)); /* Return the history entry at the current position, as determined by history_offset. If there is no entry there, return a NULL pointer. */ extern HIST_ENTRY *current_history PARAMS((void)); /* Return the history entry which is logically at OFFSET in the history array. OFFSET is relative to history_base. */ extern HIST_ENTRY *history_get PARAMS((int)); /* Return the number of bytes that the primary history entries are using. This just adds up the lengths of the_history->lines. */ extern int history_total_bytes PARAMS((void)); /* Moving around the history list. */ /* Set the position in the history list to POS. */ extern int history_set_pos PARAMS((int)); /* Back up history_offset to the previous history entry, and return a pointer to that entry. If there is no previous entry, return a NULL pointer. */ extern HIST_ENTRY *previous_history PARAMS((void)); /* Move history_offset forward to the next item in the input_history, and return the a pointer to that entry. If there is no next entry, return a NULL pointer. */ extern HIST_ENTRY *next_history PARAMS((void)); /* Searching the history list. */ /* Search the history for STRING, starting at history_offset. If DIRECTION < 0, then the search is through previous entries, else through subsequent. If the string is found, then current_history () is the history entry, and the value of this function is the offset in the line of that history entry that the string was found in. Otherwise, nothing is changed, and a -1 is returned. */ extern int history_search PARAMS((const char *, int)); /* Search the history for STRING, starting at history_offset. The search is anchored: matching lines must begin with string. DIRECTION is as in history_search(). */ extern int history_search_prefix PARAMS((const char *, int)); /* Search for STRING in the history list, starting at POS, an absolute index into the list. DIR, if negative, says to search backwards from POS, else forwards. Returns the absolute index of the history element where STRING was found, or -1 otherwise. */ extern int history_search_pos PARAMS((const char *, int, int)); /* Managing the history file. */ /* Add the contents of FILENAME to the history list, a line at a time. If FILENAME is NULL, then read from ~/.history. Returns 0 if successful, or errno if not. */ extern int read_history PARAMS((const char *)); /* Read a range of lines from FILENAME, adding them to the history list. Start reading at the FROM'th line and end at the TO'th. If FROM is zero, start at the beginning. If TO is less than FROM, read until the end of the file. If FILENAME is NULL, then read from ~/.history. Returns 0 if successful, or errno if not. */ extern int read_history_range PARAMS((const char *, int, int)); /* Write the current history to FILENAME. If FILENAME is NULL, then write the history list to ~/.history. Values returned are as in read_history (). */ extern int write_history PARAMS((const char *)); /* Append NELEMENT entries to FILENAME. The entries appended are from the end of the list minus NELEMENTs up to the end of the list. */ extern int append_history PARAMS((int, const char *)); /* Truncate the history file, leaving only the last NLINES lines. */ extern int history_truncate_file PARAMS((const char *, int)); /* History expansion. */ /* Expand the string STRING, placing the result into OUTPUT, a pointer to a string. Returns: 0) If no expansions took place (or, if the only change in the text was the de-slashifying of the history expansion character) 1) If expansions did take place -1) If there was an error in expansion. 2) If the returned line should just be printed. If an error ocurred in expansion, then OUTPUT contains a descriptive error message. */ extern int history_expand PARAMS((char *, char **)); /* Extract a string segment consisting of the FIRST through LAST arguments present in STRING. Arguments are broken up as in the shell. */ extern char *history_arg_extract PARAMS((int, int, const char *)); /* Return the text of the history event beginning at the current offset into STRING. Pass STRING with *INDEX equal to the history_expansion_char that begins this specification. DELIMITING_QUOTE is a character that is allowed to end the string specification for what to search for in addition to the normal characters `:', ` ', `\t', `\n', and sometimes `?'. */ extern char *get_history_event PARAMS((const char *, int *, int)); /* Return an array of tokens, much as the shell might. The tokens are parsed out of STRING. */ extern char **history_tokenize PARAMS((const char *)); /* Exported history variables. */ extern int history_base; extern int history_length; extern int history_max_entries; extern char history_expansion_char; extern char history_subst_char; extern char *history_word_delimiters; extern char history_comment_char; extern char *history_no_expand_chars; extern char *history_search_delimiter_chars; extern int history_quotes_inhibit_expansion; /* Backwards compatibility */ extern int max_input_history; /* If set, this function is called to decide whether or not a particular history expansion should be treated as a special case for the calling application and not expanded. */ extern rl_linebuf_func_t *history_inhibit_expansion_function; #ifdef __cplusplus } #endif #endif /* !_HISTORY_H_ */
http://opensource.apple.com/source/gdb/gdb-1344/src/readline/history.h
CC-MAIN-2014-52
refinedweb
1,218
50.23
I’ve been considering a redesign for my art league’s website for a long time and finally got started. The goals include 1) making it more colorful, 2) adding some visually interesting hover effects to the items in the Quick Links sidebar, 3) finding a better way to spotlight our “artist of the Month”, and 4) laying the groundwork to make it into a responsive layout. If/when I roll out the new design, it’ll also include a badly needed cleanup of the markup and CSS, which have both gotten very disorganized as I’ve made additions/changes over the years. I’ll use HTML5 and various CSS3 goodies, not worrying too much that the less capable browsers won’t be able to display some of these. For now, I’ve concentrated on the first three of those goals. The two test pages (HOME and DEMONSTRATOR) are at hbartleague.com/hidden/redesign_2012-summer/002-A. Haven’t started on other pages yet. For comparison, the current design is at hbartleague.com. Thanks Couple of comments for the design: The drop shadow on the top is just so overbearing. I’d reduce the opacity quite a bit and make it more subtle. On the sidebar items, it seems like you are using CSS3 for the transitions. Maybe use an animation for the return (it’s pretty abrupt to transition and then snap back instantly). The links on the top seem disproportionate from the rest of the text and I kind of find it hard to read with their current sizes. I would make those shift sizes using media queries so they would be that size when the browser collapses all the way down. The footer is a shade of what I might call “painful” blue. It just hurts to look at while trying to read the text. I’d say either increase the font size or make it a either much darker with lighter text or much lighter with darker text. I feel like the header just needs to artistic love. It just feels kind of blank up there. Even a texture of some kind would be helpful. Good start! The second is much better, but some things to consider that i would consider high priority: 1. Improve the Nav a little, it is kind of small and not very easy to identify… 2. footer font-size is nearly illegible, consider making that a little larger, 12px should be the minimum size in my opinion. Sorry for the belated reply – been away from the computer for a couple of days. I appreciate the suggestions – very helpful and exactly the sort of feedback I was hoping for. I’ll see if subtle a texture or gradient adds some visual interest to the header. I am bound to use that logo and font, so can’t change much there. Will address the drop shadow. Regarding the top nav, I think reducing the number of links is a good idea. Maybe move “Join” and “History” to the footer, and possibly “Shows” to the sidebar. That’ll give me room enough to increase the text size of the top nav. And with a reduced number of top links, I could use the same colors seen when hovering in the sidebar. That’ll be a more consistent look. And yes, I’m using CSS3 transitions for the sidebar hover effect. I’m not sure I want to go with animation; first I’ll experiment with different times for the start and end of the transitions to smooth things a little. One odd thing I’ve noticed is that the transitions move the distances I want them to in Firefox and Opera, but in Safari and Chrome the second and fourth ones move different distances. Yeah, the footer text is too small. I’ll work on a different arrangement of the footer items – maybe move the actual email links below their descriptions, and indent them a little. Could also replace the text “email” with an icon. I’ll reserve smaller text for small screen devices if I have to. I feel somewhat bound to use that blue you see in the footer, because it’s one of the 2 blues used in the “official” logo. Thanks again. You must be logged in to reply to this topic.
http://css-tricks.com/forums/topic/possible-redesign-critique-wanted/
CC-MAIN-2013-48
refinedweb
722
80.21
Frequently Asked Questions General Can CXF run with JDK 1.8/Java 8? Yes. CXF supports Java 8. The latest 3.x version is built using JDK 1.8. Can CXF run with JDK 1.7/Java 7? Yes. CXF supports Java 7. Since Java 7 contains the 2.2.x versions of both JAXB and JAX-WS API jars, using CXF with Java 7 is much easier than with Java 6. CXF 3.2 no longer supports Java 7 and requires Java 8 or newer. Users are strongly encouraged to start moving to Java 8. Can CXF run with JDK 1.6? JDK 1.6 incorporates the JAXB reference implementation. However, it incorporates an old version of the RI. CXF does not support this version. As of 1.6_04, this is easy to deal with: you must put the versions of JAXB RI (the 'impl' and 'xjc' jars) that we include with CXF in your classpath. As of this writing, these are version 2.2.10. CXF 3.1 no longer supports Java 6 and requires Java 7 or newer. Can CXF run with JDK 1.5? Yes for CXF 2.6.x and older. Keep in mind though that Java 2 SE 5.0 with JDK 1.5 has reached end of life (EOL). CXF 2.7.x no longer supports Java 5. In order to upgrade to 2.7.x, you must be using Java 6 (or newer). There is one more planned release for the 2.6.x series of CXF. After that, there are no more planned releases of CXF that will support Java 5. Users are strongly encouraged to start moving to Java 7 and to start migrating to newer versions of CXF. Can CXF run without the Sun reference SAAJ implementation? In many cases, CXF can run without an SAAJ implementation. However, some features such as JAX-WS handlers and WS-Security do require an SAAJ implementation. By default, CXF ships with the Sun SAAJ implementation, but CXF also supports axis2-saaj version 1.4.1 as an alternative. When using a Java6 JRE, CXF can also use the SAAJ implementation built into Java. Are there commercial offerings of CXF that provide services, support, and additional features? Several companies provide services, training, documentation, support, etc... on top of CXF. Some of those companies also produce products that are either based on Apache CXF or include Apache CXF. See the Commercial CXF Offerings page for a list of companies and the services they provide. Is there an Apache CXF certification program? No, but Oracle's SCDJWS certification covers the web services stack and related areas. Note, that the popular SCJP certification is a prerequisite to the SCDJWS. Also, check out the SCDJWS Forum at the Java Ranch for healthy discussions in regards to the certification. Study notes can be found at SCDJWS 5.0 Study Guide, WikiBooks and Ivan A. Krizsan Study Notes. Java Ranch also provides and information page in regards to the certification. JAX-WS Related The parts in my generated wsdl have names of the form "arg0", "arg1", ... Why don't the parts (and Java generated from them) use the nice parameter names I typed into the interface definition? Official answer: The JAX-WS spec (specifically section 3.6.1) mandates that it be generated this way. To customize the name, you have to use an @WebParam(name = "blah") annotation to specify better names. (You can use @WebResult for the return value, but you'll only see the results if you look at the XML.) Reason: One of the mysteries of java is that abstract methods (and thus interface methods) do NOT get their parameter names compiled into them even with debug info. Thus, when the service model is built from an interface, there is no way to determine the names that were using in the original code. If the service is built from a concrete class (instead of an interface) AND the class was compiled with debug info, we can get the parameter names. The simple frontend does this. However, this could cause potential problems. For example, when you go from developement to production, you may turn off debug information (remove -g from javac flags) and suddenly the application may break since the generated wsdl (and thus expect soap messages) would change. Thus, the JAX-WS spec writers went the safe route and mandate that you have to use the @WebParam annotations to specify the more descriptive names. How can I add soap headers to the request/response? There are several ways to do this depending on how your project is written (code first or wsdl first) and requirements such as portability. - The "JAX-WS" standard way to do this is to write a SOAP Handler that will add the headers to the SOAP message and register the handler on the client/server. This is completely portable from jax-ws vendor to vendor, but is also more difficult and can have performance implications. You have to handle the conversion of the JAXB objects to XML yourself. It involves having the entire soap message in a DOM which breaks streaming. Requires more memory. etc... However, it doesn't require any changes to wsdl or SEI interfaces. - JAX-WS standard "java first" way: if doing java first development, you can just add an extra parameter to the method and annotate it with @WebParam(header = true). If it's a response header, make it a Holder and add the mode = Mode.OUT to @WebParam. - wsdl first way: you can add elements to the message in the wsdl and then mark them as soap:headers in the soap:binding section of the wsdl. The wsdl2java tool will generate the @WebParam(header = true) annotations as above. With CXF, you can also put the headers in their own message (not the same message as the request/response) and mark them as headers in the soap:binding, but you will need to pass the -exsh true flag to wsdl2java to get the paramters generated. This is not portable to other jax-ws providers. Processing headers from other messages it optional in the jaxws spec. - CXF proprietary way: In the context (BindingProvider.getRequestContext() on client, WebServiceContext on server), you can add a List<org.apache.cxf.headers.Header> with the key Header.HEADER_LIST. The headers in the list are streamed at the appropriate time to the wire according to the databinding object found in the Header object. Like option 1, this doesn't require changes to wsdl or method signatures. However, it's much faster as it doesn't break streaming and the memory overhead is less. How can I turn on schema validation for jaxws endpoint? For the client side You may also do this programmatically: For the server side Starting with CXF 2.3 you have the additional option of using the org.apache.cxf.annotations.SchemaValidation annotation. Are JAX-WS client proxies thread safe? Official JAX-WS answer: No. According to the JAX-WS spec, the client proxies are NOT thread safe. To write portable code, you should treat them as non-thread safe and synchronize access or use a pool of instances or similar. CXF answer: CXF proxies are thread safe for MANY use cases. The exceptions are: Use of ((BindingProvider)proxy).getRequestContext() - per JAX-WS spec, the request context is PER INSTANCE. Thus, anything set there will affect requests on other threads. With CXF, you can do: and future calls to getRequestContext() will use a thread local request context. That allows the request context to be threadsafe. (Note: the response context is always thread local in CXF) - Settings on the conduit - if you use code or configuration to directly manipulate the conduit (like to set TLS settings or similar), those are not thread safe. The conduit is per-instance and thus those settings would be shared. Also, if you use the FailoverFeature and LoadBalanceFeatures, the conduit is replaced on the fly. Thus, settings set on the conduit could get lost before being used on the setting thread. - Session support - if you turn on sessions support (see jaxws spec), the session cookie is stored in the conduit. Thus, it would fall into the above rules on conduit settings and thus be shared across threads. - WS-Security tokens - If use WS-SecureConversation or WS-Trust, the retrieved token is cached in the Endpoint/Proxy to avoid the extra (and expensive) calls to the STS to obtain tokens. Thus, multiple threads will share the token. If each thread has different security credentials or requirements, you need to use separate proxy instances. For the conduit issues, you COULD install a new ConduitSelector that uses a thread local or similar. That's a bit complex though. For most "simple" use cases, you can use CXF proxies on multiple threads. The above outlines the workarounds for the others. The generated wsdl (GET request on the ?wsdl address) doesn't contain the messages, types, portType, etc... What did I do wrong? Usually this means the wsdl at that address contains the service and binding, but uses a <wsdl:import> element to import another wsdl (usually at ?wsdl=MyService1.wsdl type address) that defines the types, messages, and portType. The cause of this is different targetNamespaces for the Service Interface (mapped to the port type) and the service implementation (mapped to the Service/Binding). By default, the targetNamespace is derived from the package of each of those, so if they are in different packages, you will see this issue. Also, if you define a targetNamespace attribute on the @WebService annotation on one of them, but not the other, you will likely see this as well. The easiest fix is to update the @WebService annotation on BOTH to have the exact same targetNamespace defined. Spring Related When using Spring AOP to enable things like transactions and security, the generated WSDL is very messed up with wrong namespaces, part names, etc... Reason: When using Spring AOP, spring injects a proxy to the bean into CXF instead of the actual bean. The Proxy does not have the annotations on it (like the @WebService annotation) so we cannot query the information directly from the object like we can in the non-AOP case. The "fix" is to also specify the actual serviceClass of the object in the spring config: or:
https://cwiki.apache.org/confluence/display/CXF/FAQ
CC-MAIN-2017-34
refinedweb
1,725
66.54
Webpack is the best module bundler I've ever used. Just this week I used it to reduce the JS footprint of an app from 906KB to 87KB for mobile visitors. An 800KB difference! Webpack's core premise is that you can require('./foo') your JavaScripts. That sea of <script> tags or concatenated code and variables injected into the global namespace is gone. In theory we've had that for a while. RequireJS and its AMD have been out for a while, but it's always felt clunky. Don't think I've used it more than once in the wild. Too clunky. Browserify was better. Using CommonJS it brought the same syntax we've had in node.js into the browser. Less clunky and easier to reason about. But to my knowledge, not all that popular in the wild. The problem with Browserify is that, at least in my experience, it takes a lot of setup to work well. And it's kind of slow. Compile times run into the many seconds range. Webpack is fast. Like super fast. Compiling a huge codebase rarely takes more than a second. Every time I change something it's already compiled by the time I refresh the browser. Magic. Lazy loading your code Another great feature of Webpack is that it can perform a sort of minimal bundle coverage for your dependency graph. Instead of making one huge bundle that's got all your code, it makes a few smaller discrete ones. Users no longer have to download all the code for /admin_panel when they're looking at /user_profile. Not only that, Webpack automagically downloads the required javascripts when needed. This is called lazy loading and it is magic. It's easy in theory, whenever you require a dependency like this: require.ensure([], function () {var more_code = require("./more_code");}); Webpack will say "Oop, that's an optional dependency!". It will package everything that might get required by that require call into a separate bundle. In this case just ./more_code.js. If your path includes a variable, Webpack understands that too and puts everything that could match in a bundle. So with require('./'+module_name+'View.js'), for instance, it will bundle all JavaScript files that end with View into one file. Webpack splits your bundle on every require.ensure call. Then when your code needs something, Webpack makes a jsonp call to get it from your server. This is great in development. Not great in production. In my tests on a Heroku Rails app, doing a HEAD call to our app took some 200ms, while the same call to amazon's CloudFlare took only 10ms. Our app is an order of magnitude slower than a proper static file server. Oops. Lazy Loading with CDN support First of all, you have to get Webpack working with Rails. The best guide I've found comes from Justin Gordon, here. He takes you through the whole process in detail. The nutshell version goes like this: - Use npm to install Webpack and its things - Configure it in webpack.rails.config.js - Add a call to webpack -w --config=webpack.rails.config.jsto your startup sequence (I like using grunt startof some sort) - Remove all //= requirein application.jsand replace with //= require generated/rails-bundle.js - Make an assets.rake task that looks like this gist You can find the details in Gordon's long blogpost. It's really good. Although personally I didn't go as far as moving all my JavaScripts out of app/assets and I didn't set up hot code loading for local dev. Now, to make Webpack understand that static assets should be loaded from a CloudFlare CDN, you have to change its output settings. config.output = {filename: in_dev() ? "rails-bundle.js" : "[id].[chunkhash].rails-bundle.js",path: "app/assets/javascripts/generated",publicPath: getCDN() + "/assets/generated/",}; There's two tricks here: - We tell Webpack to fingerprint files with [chunkhash]unless we're in local development. This gives us long-term caching because of unique filenames - We prefix the publicPathwith a CDN URL. This "tricks" Webpack into loading them from there The in_dev and getCDN make the config easier to understand. They look like this: function getCDN() {var CDNs = {staging: ".cloudfront.net",production: ".cloudfront.net",preproduction: ".cloudfront.net",};if (!in_dev()) {return "//" + CDNs[process.env.RAILS_ENV.toLowerCase()];}return "";}function in_dev() {return !process.env.RAILS_ENV || process.env.RAILS_ENV == "development";} They both look at the RAILS_ENV environment variable to decide which situation Webpack is running under. It works great, but does mean that we have two places to set the config for CDN. Here and the regular Ruby config. Now, because our files are fingerprinted, Rails's asset pipeline no longer knows how to insert them. The filename 0.some_weird_hash.rails-bundle.js changes every time Webpack compiles our code. The simplest fix I've found is to copy the file in that assets.rake task. Like this: FileUtils.copy_file(Pathname.glob(path+'0.*.rails-bundle.js').first,File.join(path, 'rails-bundle.js')) Now the main bundle file will always have the same name in production and application.js will be able to include it so you don't have to hack the asset pipeline. Perfect. Benefits There are many reasons you'd want to use Webpack in your Rails application. My main motivation was that our users were just loading way too much JavaScript. They don't need the desktop homepage app when they're on mobile. And they definitely don't need the whole user dashboard when they're looking at the homepage. For mobile users the change has been significant. From downloading all of 906KB before, to just 87KB. Sure, there's still some overhead on top of that for libraries, but we can load those from 3rd party CDNs now so they could already be cached. The other huge benefit is that our code is easier to deal with. Instead of shoving everything into a huge carefully namespaced global object, we can just require what we need. Life is️
https://swizec.com/blog/webpack-lazy-loading-on-rails-with-cdn-support/
CC-MAIN-2022-27
refinedweb
1,006
68.16
Pokémon (ポケモン Pokemon , English pronunciation: /ˈpoʊkeɪmɒn/[1]; often simplified as "Pokemon"[2]) is a media franchise published.[3] Pokémon properties have since been merchandised into anime, manga, trading cards, toys, books, and other media. The franchise celebrated its tenth anniversary on February 27, 2005, and as of 23 April 2008[update], cumulative sales of the video games (including home console versions, such as the "Pikachu" Nintendo 64) have reached more than 186 million copies.[4] The name Pokémon is the romanized contraction of the Japanese brand Pocket Monsters (ポケットモンスター Poketto Monsutā ), each individual species name; in short, it is grammatically correct to say both "one Pokémon" and "many Pokémon". In November 2005, 4Kids Entertainment, which had managed the non-game related licensing of Pokémon, announced that it had agreed not to renew the Pokémon representation agreement. Pokémon USA Inc. (now Concept The concept of the Pokémon universe, in both the video games and the general fictional world of Pokémon, stems from the hobby of insect collecting, a popular pastime which Pokémon executive director Satoshi Tajiri-Oniwa enjoyed as a child.[7]. Video games Generations franchise started off in its first generation with its initial release of Pocket Monsters Aka and Midori ("Red" and "Green", respectively) for the Game Boy in Japan. When these games proved extremely popular, an enhanced Ao ("Blue") version was released sometime after, and the Ao version was reprogrammed as Pokémon Red and Blue for international release. The games launched in the United States on September 30, 1998. The original Aka and Midori versions were never released outside of Japan.[9] Afterwards, a further enhanced version titled Pokémon Yellow: Special Pikachu Edition was released to partially take advantage of the color palette of the Game Boy Color, as well as to feature more elements from the popular Pokémon anime. This first generation of games introduced the original 151 species of Pokémon (in National Pokédex order, encompassing all Pokémon from Bulbasaur to Mew), 2003 release of Pokémon Ruby and Sapphire for Game Boy Advance and continued with the Game Boy Advance remakes of Pokémon Red and Blue, Pokémon FireRed and LeafGreen, and an enhanced version of Pokémon Ruby and Sapphire titled Pokémon Emerald. The third generation introduced 135 new Pokémon (starting with Treecko and ending with Deoxys) for a)..[10] The Nintendo DS "touch screen" allows new features to the game such as cooking poffins with the stylus and using the "Pokétch". New gameplay concepts include a restructured move-classification system, online multiplayer trading and battling via Nintendo Wi-Fi Connection, the return (and expansion) of the second generation's day-and-night system, the expansion of the third generation's Pokémon Contests into "Super Contests", and the new region of Sinnoh, which has an underground component for multiplayer gameplay in addition to the main overworld. Pokémon Platinum, the enhanced version.[11].[12].[13].. Pokédex.[15] In other media Anime series[16] (known as Satoshi in Japan) a Pokémon Master in training, as he and a small group of friends[16].[17] The series follows the storyline of the original games, Pokémon Red and Blue, in the region of Kanto. Accompanying Ash on his journeys are Brock, the Pewter City Gym Leader, and Misty, the youngest of the Gym Leader sisters from Cerulean City. Pokémon: Adventures in the Orange Islands follows Ash's adventures in the Orange Islands, a place unique to the anime, and replaces Brock with Tracey Sketchit, an artist and "Pokémon watcher". The next series, based on the second generation of games, include Pokémon: Johto Journeys, Pokémon: Johto League Champions, and Pokémon: Master Quest, following the original trio of Ash, Brock, and Misty in the western Johto region. The saga continues in Pokémon: Advanced Battle, based on the third generation games. Ash and company travel to Hoenn, a southern region in the Pokémon World. Ash takes on the role of a teacher and mentor for a novice Pokémon trainer named May. Her brother Max accompanies them, and though he isn't a trainer, he knows large amounts of handy information. Brock (from the original series) soon catches up with Ash, but Misty has returned to Cerulean City to tend to her duties as a gym leader (Misty, along with other recurring characters, appears in the spin-off series Pokémon Chronicles). The Advanced Battle series concludes with the Battle Frontier saga, based on the Emerald version and including aspects of FireRed and LeafGreen. The ) - To the Overcoming of Space-Time (2009) - Note: Given release dates are for the original Japanese releases Soundtracks There have been several Pokémon CDs that. Pokémon Trading Card Game The Pokémon Trading Card Game is a collectible card game with a goal similar to a Pokémon battle in the video game series. Players use Pokémon cards, with individual strengths and weaknesses, in an attempt to defeat their opponent by "knocking out" his or her Pokémon cards.[20] The game was first published in North America by Wizards of the Coast in 1999.[21] However, with the release of Pokémon Ruby and Sapphire Game Boy Advance video games, Manga There are various Pokémon manga series, four of which were released in English by Viz Media, and seven of them released in English by Chuang Yi. The manga differs greatly from the video games and cartoons in that the trainers, though frowned upon, were able to kill the opponent's Pokémon. - Manga released in English - The Electric Tale of Pikachu (a.k.a Dengeki Pikachu), a shōnen manga created by Toshihiro Ono. It was divided into four tankōbon, each given a separate title in the North American and English Singapore versions: The Electric Tale of Pikachu, Pikachu Shocks Back, Electric Pikachu Boogaloo, and Surf’s Up, Pikachu. The series is based loosely on the anime. - Pokémon Adventures (Pocket Monsters SPECIAL in Japan), a shōnen manga based on the video games. - Magical Pokémon Journey (a.k.a. Pocket Monsters PiPiPi ★ Adventures), a shōjo Monsters (not released by Viz) - Pokémon: Jirachi Wish Maker (not released by Viz) - Pokémon: Destiny Deoxys (not released by Viz) - Pokémon: Lucario and the Mystery of Mew (the third movie-to-comic adaptation) - Pokémon Diamond and Pearl Adventure! -. - Pokémon Get aa ze! by Asada Miho - Pocket Monsters Chamo-Chamo ★ Pretty ♪ by Yumi Tsukirino, who also made Magical Pokémon Journey. - Pokémon Card Master - Pocket Monsters Emerald Chōsen!! Battle Frontier by Ihara Shigekatsu - Pocket Monsters Zensho by Satomi Nakamura Criticism and controversy Morality.[23] The Vatican, however, has countered that the Pokémon trading card game and video games are "full of inventive imagination" and have no "harmful moral side effects".. [25].[26] In 2001, Saudi Arabia banned Pokémon games and cards, alleging that the franchise promoted Zionism in violation of Muslim doctrine.[27][28] Pokémon has also been accused of promoting cockfighting[29] and materialism.[30] In 1999, two nine-year-old boys sued Nintendo because they claimed that the Pokémon Trading Card Game caused their problematic gambling.[31] Health On December 16, 1997, more than 635 Japanese children were admitted to hospitals with epileptic seizures. It was determined that.[32] It was determined in subsequent research that these strobing light effects cause some individuals to have epileptic seizures, even if the person had no previous history of epilepsy.[33] This incident is the most common focus of Pokémon-related parodies in other media, and was lampooned by The Simpsons episode "Thirty Minutes over Tokyo"[34] and the South Park episode "Chinpokomon",[35].[36] Cultural influence Pokémon, being a popular franchise, has undoubtedly left its mark on pop culture. The Pokémon characters themselves have become pop culture icons; examples include the U.S. magazine Time in 1999. The Comedy Central show Drawn Together has a character named Ling-Ling which is a direct parody of Pikachu.[37] Several other shows such as ReBoot, The Simpsons, South Park, The Grim Adventures of Billy & Mandy, Robot Chicken and All Grown Up!. In November 2001, Nintendo opened a store called the Pokémon Center in New York, in New York's Rockefeller Center,[38] modeled after the two other Pokémon Center stores in Tokyo and Osaka and named after a staple of the videogame series; Pokémon Centers are fictional buildings where Trainers take their injured Pokémon to be healed after combat.[39] The store sold Pokémon merchandise on a total of two floors, with items ranging from collectible shirts to stuffed Pokémon plushies.[40].[41] See also References - Books - Tobin, Joseph, ed. Pikachu's Global Adventure: The Rise and Fall of Pokémon. Duke University Press., February, 2004. ISBN 0-8223-3287-6. - Notes - ^ Sora Ltd.. Super Smash Bros. Brawl. (Nintendo). Wii. (in English). (2008-03-09) "(Announcer's dialog after the character Pokémon Trainer is selected (voice acted))" - ^.. ", TimeAsia (Waybacked). - ^ MacDonald, Mark; Brokaw, Brian; Arnold; J. Douglas; Elies, Mark. Pokémon Trainer's Guide. Sandwich Islands Publishing, 1999. ISBN 0-439-15404-9. (pg73) - ^ "Pokémon Green Info on GameFAQs" gamefaqs.com. Retrieved February 23, 2007 - ^ Lucas M. Thomas (April 4, 2007). "The Countdown to Diamond and Pearl, Part 4". IGN.. Retrieved on 2008-06-29. - ^ "Cubed3 Pokémon Battle Revolution Confirmed for Wii" and soon Pokémon Mistery Dungeon 2: Darkness Exploration Team, and Time Exploration Team Cubed3.com. URL Accessed June 7, 2006. - ^ "「ポケットモンスター」シリーズ最新作 2009年秋 ニンテンドーDSで発売決定!" (in Japanese). Nintendo.. Retrieved on 2009-05-08. - ^ Pokémon Ruby review (page 1) Gamespy.com. Retrieved May 30, 2006. - ^ Pokémon Yellow Critical Review Ign.com. Retrieved March 27, 2006. - ^ Official Pokémon Scenario Guide Diamond and Pearl version p. 30-31 - ^ a b Pokémon anime overview Psypokes.com. Retrieved May 25, 2006. - ^ Pokémon 10th Anniversary, Vol. 1 - Pikachu, Viz Video., June 6, 2006. This entry is from Wikipedia, the leading user-contributed encyclopedia. It may not have been reviewed by professional editors (see full disclaimer)
http://www.answers.com/topic/pok-mon
crawl-002
refinedweb
1,621
54.32
Microsoft Starts Legal Fight Over Lindows Name 670 actappan writes: "Whether or not Lindows is real, this article on CNET News.com indicates that Microsoft intends to sue them into oblivion. Looks like supression remains the best way to promote innovation." cyberlawyer adds: "Some of you may remember that MS originally had great difficulty obtaining a trademark for the generic term 'Windows' but was eventually able to pay off those who had filed letters of protest to the granting of the mark including Sun, Oracle, and Borland. As a trademark lawyer I (unhappily) have to admit that Lindows probably has a weak case. Of course it's never too late to bring a cancellation action based on genericide ;-)" CodeWheeney contributes a link to coverage at Yahoo, too. Innovation (Score:5, Funny) <tongue-in-cheek> There's nothing quite as innovative as an operating system with the sole goal of reimplementing APIs from other operating systems until it can run their binaries. </tongue-in-cheek> Nice of Microsoft... (Score:3, Insightful) Microsoft is making sure this company gets a lot of publicity. -- The U.S. government causes problems, then pretends to solve them by creating more: What should be the Response to Violence? [hevanet.com]'. new name (Score:2) "I'm running eLindows.tv on my computer; it's great! I can run Mikrosoft Office XB, Itneret Exploder, and, uhmm, all those Linux programs I run too!" Re:Innovation (Score:3, Insightful) But with VMWare you have to buy/own a Windows license, which kind of nullifies the price advantage. Re:Innovation (Score:3, Interesting) AFAIK, Lindows runs Windows on top of Linux (just like Win4Lin?), instead of emulating a whole i386 machine. Obviously, this will perform much better than VMware on a low-end machine. I haven't tried Lindows or Win4Lin, but I have tried both VMware and Wine, and for obvious reasons, Wine is a ton faster; after all, Wine is just another implementation of the APIs. Now tell me, if you're stuck on a 550 as I've been (until tomorrow, 1600+, w00t!), wouldn't you be happy to hear that there's a native Win-on-Linux solution supposedly in the works so you can stop faking a whole other machine? To) Oh, I thought not.. Re:To me it's fair (Score:2) If you said it, would it make the common computer user think of Windows? Re:To me it's fair (Score:3, Funny) Oh well.. i thought it was funny. Re:To me it's fair (Score:5, Funny) No really, maybe not that, but I think that this lawsuit is the best kind of publicity they could possibly get. They should capitalize on it by changing the name to something subtle that jabs at microsoft but still makes clear what it does. If only Sun was behind it- they could call it "Sunroof". Howzabout "windshield"? "Glass Joe" (Include MAME with punchout standard Wait, I've got it. "MirrorGlass" Have a picture of a mirror on the box, with the reflection of a window tinted with microsoft colors in the background, and a penguin waving in. How cute is that? Mr. Robertson, this idea is mine but you may take it and run with it. Hell, I'll sign papers and even let you take it proprietary. I'd love to see that image on a shelf and have some clerk at Compusa have to explain what it means to a customer. "Well, that's tux the penguin, and he's looking through some windows at you, but they're not *microsoft* windows, because microsoft sued the company, so they're just regular old windows. Well, not really since Windows(TM) is a registered trademark of Microsoft. But anyway, it runs programs built for windows, even though it's not Windows(TM)" alternate name? (Score:4, Funny) Nah, too subtle. --LP Re:To me it's fair (Score:5, Funny) Aintdows - no this is clearly not m$ Bindows - all your Dindows - gungas client Eindows - scary physics edition Findows - Sharks gotta have an OS too Gindows - Shaken, stirred... Hindows - new religious sect Jindows - straight up moonshine version 1.0 Kindows - the deep south hillbilly OS Lindows - err.. oops thats taken Mindows - Ho Chi's apple based abicus Nindows - Trent Reznor enhanced edition Oindows - complete with matzah balls Pindows - an OS so simple your PHB could use Qindows - soo bloated you need five beowulf clusters just to boot Rimdows - the ass lickers edition Sindows - the ultimate pr0n OS(aka Pr0ndows) Tindows - the only os without a shrinkrap Unindows - what you really want to rm -rf Vindows - aint this one taken too? W.. ahh f them Xindows Yindows - Fengshue cosmic edition Zindows - a narcoleptic edition (formerly known as Win95) yuk yuk...:To me it's fair (Score:2) I'd laugh Re:To me it's fair (Score:2) My view is the same... I don't see a problem with Winux or Lindows but if they lost to Microsoft then they could face the same thing with Torvalds. However I only think torvalds would exercise this right if he thought it was going to affect the name of Linux. Lawsuits. (Score:3, Funny) --saint Torvalds sueing next? (Score:3, Insightful) If not Lindows.. (Score:4, Funny) Oh, wait...damn... Re:If not Lindows.. (Score:2, Funny) Or even "Winux". As in "Shhh... Be vewy vewy quiet. I'm hunting Winux." You think Elmer Fudd would sue? Oh my God, they are such a threat to Microsoft ! (Score:2, Funny) Microsoft is in grave danger of trademark dilution here. I mean, if they were a monopoly and the vast majority of OS users ran Windows, I wouldn't say, but as a fragile company with such a small name recognition, it's important that potential new customers don't get confused by a sneaky new startup who just wants to make a free buck out of a competitors honestly earned and well deserved success ... Microsoft's Claim is Legit (IAAL) (Score:4, Troll) Anyways, Microsoft's claim is entirely legitimate. 'Lindows' is in the same "industry" as 'Windows', and is intentionally abusing the popularity of Windows for its own benefit. This is the same as coming up with an electronics company called Panasoanic -- there is the potential for legitimate consumer confusion. I know it's unpopular to side with Microsoft on something, but for once they're in the right here. As a copyright/trademark lawyer, I'm hoping the courts make the right decision and force Lindows into a name change. - Dave Brennins:Microsoft's Claim is Legit (IAAL) (Score:5, Funny) Well... Apple is a pretty common word too, but i don't see that one being invalidated either. No, they should rename it to "I Can't Believe It's Not Windows!" Re:Microsoft's Claim is Legit (IAAL) (Score:3, Funny) Re:Microsoft's Claim is Legit (IAAL) (Score:2) The question is, are they really abusing it, or just using it. Is it inherently wrong for me to frame my product in terms of its benefits over another product? I really don't see how. Is their a name a reference to Windows? Of course! Will it hurt Microsoft? Probably - if they have a good enough product. Should Microsoft be able to quash that type of thing in court? Probably not. Re:Microsoft's Claim is Legit (IAAL) (Score:2) Probably not. But if you replace "frame" with "name" and "inherently" with "legally" then I would probably say yes.. Re:Microsoft's Claim is Legit (IAAL) (Score:3, Funny) Now anyone can use it... - Robin How about Sony vs Sanyo? (Score:2) Re:Microsoft's Claim is Legit (IAAL) (Score:2) Yeah, but not in the way you think. They allow windows programs to run on linux. I don't think I would call that abusing, though.. Re:Panasoanic isn't Lindows. (Score:3, Insightful) Besides, it doesn't matter. It's close enough that, by the "reasonable man" standard, it's nearly exact. Hrm. (Score:2) Anyway, these windows people should change their name because its stupid if nothing else. Re:Microsoft's Claim is Legit (IAAL) (Score:2) Obviously you've never worked in PC support PC Mag hit it on the head (Score:5, Funny) It's software that combines Linux and Windows without violating any trademark or copyright--although I bet Microsoft will sue at some point. Guess they were right! Great publicity (Score:2) Oh my God! (Score:2) How you can fight back. (Score:5, Funny) Hopefully this will cause Microsoft to lose the trademark name 'Windows' because it will become generic from over usage.. Yes (Score:2) Coming to America (Score:2, Funny) They're Mac Donalds; I'm Mic Dowell's. They've got the Golden Arches; We've got the Golden Arcs. They got the Big Mac; We got the Big Mic. We both have 2 all beef pattties, special sauce, but they have a sesame seed bun. Our buns have no seeds. What about MS??? (Score:2, Funny) Think about it, Michael Mann could make The Insider, Part Two. Instead of taking on tobacco companies it would be about monopolizing software companies. Anthony Michael Hall can play Bill Gates again, and we can bring Russel Crowe back and have the two go at it in some kind of virtual reality gladiator thing. Re:What about MS??? (Score:2) heh. "Any press is good press" (Score:5, Interesting) They dont even let the product launch! (Score:2) Already attacking the competition before the competition even exits the gate. Must be Microsofts new strategy, kill all companies who have threatening names! The Solution ! ? ! (Score:2, Funny) ;) Just look at Killustrator (Score:2) how paranoid can you be? (Score:2, Interesting) Not saying i agree with it though. And as logical as it seems, if they win, they're proving that they stifle competition through any means available. The suit asks the court to order the start-up to stop using the Lindows name and also seeks unspecified monetary damages How can they sue for money? has Lindows actually damaged them in someway? If they want them to change the name, fine, let them try, but how much can they really ask for? Maybe i'll sue all the Jason's in the world for using MY name. No, i'll sue anyone who's name ends with -son. get them to change their names AND give me money for my effort. Pronunciation (Score:2) Lawsuits (Score:2, Interesting) Speaking on the topic, M$ seems to just sue their "potential competitors", in fact small companies with (for most) great ideas. Theses companies can't afford the costs for the lawsuit and are forced to close. After that, M$ stole a good idea (mabye from that company), put the Microsoft name on it and sell it. M$ is going to be everywhere (this is their dream). From PCs to game consoles, telephones, etc. I expect TVs soon (Heh, they should try vacuum cleaner... a good way to suck...). Can we call that "monopoly" ? Will they sue the dictionnaries because there is the words "Windows" and "office" in it ? Do people will wake up when they will live on planet "Windows Earth" ? how bout Linopoly (Score:2) In unrelated news ... (Score:2, Funny) Similarly, Microsoft Corp. has decided to sue the cattle industry for allowing their cows to graze in meadows[tm], and the sun for casting shadows[tm]. omg! fake comments! (Score:2, Interesting) michael robertson has a comment on december 10th and then all comments are from a user called "reply".. they're mostly posted on the same day.. they pretend to be from lost of various people praising the upcoming system.. if you check reply's profile the email is "comments@lindows.com".. i have not seen as ridiculous in a long time!. Re:Even if Windows is a TM; It is Generic (Score:2) Note that Bayer also held another trademark at that time: Heroin. But I digress. Bayer still holds the trademark aspirin in countries that were not signatories to the Treaty of Versailles. On one hand, Bayer's aspirin is not a good example of trademark abandonment because of the Treaty. On the other hand, at the turn of the last century, Bayer was the pharma equivalent of Microsoft (600 kilo gorilla). What I'd like to know is why there's no namespace conflict between Windows and X-Windows. Was there a deal done at some point? k. List of Generic Marks and Depends on Consumer (Score:5, Interesting) So, the test would not be whether Microsoft or a particular judge considers that a mark is a generic term, but if the mark becomes a generic term in the minds of consumers. Perhaps a party could present evidence such as surveys or the online and published usage of a term in a generic sense as a means to describe the thing?. This is a GOOD thing (Score:2) Until now, I'd guess that about 1% of the computer using population has heard of Lindows. What better publicity is there than getting into a lawsuit with Microsoft? Making CNET and ZDNET headlines is a dream come true for them. This lawsuit also legitimizes the development work they've done as far as the public is concerned. After all, Microsoft wouldn't sue them if they weren't a real threat. And finally, they're going to be forced to ditch the knock-off name. That surely wasn't helping them any. Re:This is a GOOD thing (Score:2) I wonder.. (Score:2) "Lindows" WOULD confuse most people (Score:4, Insightful) a) Microsoft's version of Linux b) Linux for Microsoft Windows c) Microsoft Windows for Linux or some other permutation thereof that implies an official connection with/endorsement by Microsoft. Cheers, IT What about the Kompany? (Score:2) Last Resort (Score:2, Funny)." Sobbing, he continued: "Oh why do they Force us to sue them! It hurts me right here, (Murchinson placed his hand over his heart and looks to the heavens) whenever we have to sue them." Murchinson then, with the tears still streaming down his cheeks and shaking his head, pushed a lonely red button on his desk. Moments later, cruel hordes of fur clad lawyers on enormous horses, gravely swinging rusted and blood stained battle axes, thrusting their hardened leather shields toward the brooding skies, with packs angry mastifs biting and growling at their hideously spurred heals, rode ravenous toward yet another glorious conquest. Murchinson listened as the horrible clamor of the viciously armed force recedeed in to the wind. Finally he concluded the interview, "if only they hadn't forced me to do this, if only we could have worked something out..." Thinking of that poor man, Murchinson, nearly brings tears to my eyes as well. It's just tragic how he so truly didn't want to sue them, but had to... sigh... It just breaks my heart.] Next up: Jesus to sue MS for rights to "XP" (Score:5, Funny) "The monogram of My Name, formed of the two first letters when written in Greek, "X" and "P" [Chi and Rho], has been in use for well over a thousand years in numerous countries. I am therefore insisting that Microsoft cease using "XP" on its products, as that is tantamount to Taking My Name In Vain." Added Christ, "I mean it. Don't make me come down there..." Sue them into oblivion? (Score:4, Informative) this article on CNET News.com indicates that Microsoft intends to sue them into oblivion. Umm, no... Fucking slashdot editors... I'm through. I contribute to slashdot no more. This is my last post. Re:Sue them into oblivion? (Score:2) Italics is the submitter. Would you rather the editors alter your words? Re:Sue them into oblivion? (Score:2) Italics is the submitter. Repeat after me: Slashdot is the publisher. Send Slashdot anything, any infantile ranting will do as long as it slams Microsoft. I'd say fuck it, but slashdot continues to amuse me and occupy my attention occasionally. This is however the full extent of the esteem in which I hold this place though. Ahem...(coff...clueless...coff) (Score:2, Interesting) Fucking slashdot editors... I'm through. I contribute to slashdot no more. This is my last post. reply to from bankey: Repeat after me: Italics is the submitter. Would you rather the editors alter your words? Thus spake the Moose: I only have to say, the one article out of 30 that was accepted was of the title: Microsoft article on Salon.com those were my words. and the "this article on salon.com" were mine as well. Everything after that I was *grilled*, *filleted* and *slow roasted* for words that were not mine. (I said "interesting idea"..editors says "extensions of MS further monopoly"...granted it was alluded to in the article, but WTF. For the most part, editors don't get grilled on Clearly "bankey" has no clue what an "editor" does...edits, mangles, destroys, clarifies, distills and after all that puts all the above adjectives in a blender and then diseminates it to a readership. All I can say to ya'll submitting to If your article is not thought provoking, inflammitory, the cause of a flame war, MS bashing, Linux bashing or in any way counter to any type of groupthink, RI/MP aa hating...well, I seriously doubt you will utter the words "What was I thinking/smoking" when I submitted *that*. . Re:Sue them into oblivion? (Score:2) I have an idea... (Score:2, Funny) Another idea (Score:2) Re:Another idea (Score:2) Oh, wrong industry... On another note, it's the "X Window System", not "X Windows". That's why your favorite client/server bitmapped display manager's name doesn't infringe on a Microsoft trademark. Re:Another idea (Score:2) X is older than MS Windows. That suit wouldn't have a leg to stand on. Re:Another idea (Score:2) Licrosoft Licrosoftion Lictosuction... OH MY GOD. They stole all the fat and put it into their products. That explains the bloat. Counterattack... (Score:2) Lindows got what they wanted. (Score:2, Insightful) They probably would have used something like Winux if they hadn't wanted to be a target. How could they not be advised by someone that this would be trademark infringement when naming their company? In other news, microsoft sues death. (Score:2) Death sued for sounding too close to Microsoft's flagship product's name. Steeve Ballmer (CEO): A lot of people on the internet do jokes about blue screen of DEATH, when people die, we hear about Widows, people KILL their systems after installing non-certified drivers, DEADLY VIRUS are crippling our systems, all this will change. You know how our stupid our userbase is, If people are stupid enough to buy an OS for 300$ instead of going for an OEM version, these same people could be associating death with our flagship product, Windows, we fought really hard to get the trademarks for that name, heck, I even had to look like a complete monkey to get public awareness on our side, Death will either have to cease to exist or change some of it's naming convention. Microsoft will fight death.... to the death if we have to god damn it!. Concerning Partial Use of w-i-n-d-o-w-s letters (Score:2) I've often read that in order to maintain a copyright, one must actively defend it. Of such things are MacDonald's "cease and desist" cases against restaurants in Scotland run by someone with the last name of MacDonald. Such non-prosecution of a known commercial company using just such a partial name link-in can only damage their case in prosecuting someone else who only proposes to also use some letters to do the same thing. Thoughts? Bob- Windows has become a generic term. (Score:2) Although the name is really XFree86 the common name is X-Windows. How many OSs now run X-Windows? How many platforms? Can you say damn near all of 'em boys and girls? I knew that you could. MS has failed to vigorously defend the name Windows. I think the case could be made they've lost rights to that trademark.. Call it LNW ( Lindows is Not Windows) (Score:3, Interesting) Hell, call it Windows! (Score:3, Interesting) Well, the Great Grey Lady from the Big Apple objected strenuously to this, so the Infocommies started a contest for a new newsletter name. One contributor suggested, "Call it the New YORK Times. Let's really piss 'em off!" Millions for defense, but not one penny for tribute, I say! This is just Wine with a price tag, right? (Score:4, Interesting) >>But with VMWare you have to buy/own a Windows >>license, which kind of nullifies the price >>advantage. >Use Wine [winehq.com] then. I'm betting that's exactly what Lindows is. A friend and I were discussing Wine's license recently, specifically wrt the percieved lack of contributions from Transgaming's WineX (a DirectX centered fork from Wine --) back into the original codebase. It appeared to us that Wine has a pretty open license much like X11's (). The only real stipulation is the following: 15 The above copyright notice and this permission notice shall be included in 16 all copies or substantial portions of the Software. So how tough would it be to wrap up Wine in a box with a $99 price tag (price from Lindows' FAQ page:) So to sum, take open sourced but not "RMS Free" (aka, GPL'd) code, name the result something Microsoft will have a problem with for the free press (as has been mentioned about a million times already), and *poof*, you've got the makings of a 90's style IPO. Re:I bet (Score:2). Re:Anyone remember X-Windows? (Score:5, Interesting) Bill Gates is the God of Innovation (Score:2) Why? Well look at his website and see why. Gates is GOD [microsoft.com] Hes the true founder of open source. The creator of the GUI. The creator of the first web browser. The creator of the instant messager. The creator of the word proessor. The creator of C++, Visual Basic, J++. The man who literally fuels the success of the world wide web. The man who helped steve jobs create MacOS. The Creator of DOS. And Finally the man who created (x)indows. So before you go starting any corperation which steals from this mans innovation, think twice, or you might just get sued. Open Source people, watch out, because you are next. Just wait for the launch of Microsoft Linux and the original founder of open source Bill Gates will take Linux as hes done with everything else into the mainstream. I know (Score.
http://slashdot.org/story/01/12/20/237217/microsoft-starts-legal-fight-over-lindows-name?sbsrc=thisday
CC-MAIN-2013-48
refinedweb
3,846
74.19
What is Java? Java is a programming language and a computing platform developed by Sun Microsystems (present-day Oracle) in 1995 which was spearheaded by James Gosling. Java is a ‘high-level class-based object-oriented programming language’ that is used in a distributed environment. Java is easily readable and understandable and offers a higher level of abstraction. There is a funny story as to how the name Java came into place. The programming language was named Oak initially because there was a big oak tree outside James Gosling’s office. It was later changed to Java because all the developers who were involved in the creation of this programming language drank coffee made using the Java bean variety from Indonesia. The biggest jewel in the crown of Java is that it is ‘cross-platform which allows you to write the code in one platform and can run the code written in different setups’. Java allows you to write complete applications that can run on standalone computers and or in a distributed environment. You will need Java installed on your computer if you want several functions to run on the internet. Java can be used to build web applications, standalone applications, mobile applications, and enterprise applications. Sun Microsystem released Java as open-source in 2006. In 2007 Sun Microsystems finished the process of making the core code of Java open source and free. Above 3 billion devices/applications run on Java. Java has also continuously evolved over the years. With each update, new features were added to enhance the performance and the efficiency of the language. Why Java? The Java programming language is class-based, concurrent, and object-oriented. The basic principle of Java which ‘Write Once Read Anywhere (WORA)’ makes it more appealing to freshers as well as experienced workers. Over the years there has been only an increase in the number of programmers using Java to build applications in different domains. The following pointers will throw light on the advantages of the Java programming language - User friendly - Java is an easily readable language and offers different levels of abstraction for different users. Beginners can start with Core Java and after mastering it they can move towards Advanced Java. - Platform Independence - Unlike the languages like C C++, Java code is once written can be run on machines with different Operating Systems like Linux, Windows, or Mac OS. This is because Java comes with a separate runtime environment JRE. - Versatility - Java can be used to build web applications, android applications, and even in data science and machine learning applications. They offer high scalability and also provide concurrent solutions. - Documentation - One of the major strengths of Java is its documentation. Since Java is an open-source language, you have access to a well-documented guide which you can refer to when you have to clarify doubts. - Community support - Java has strong and robust community support and one can rely on the Java community to find help if they are in a programming rut. - Powerful IDEs - Java has excellent IDEs like Eclipse, Netbeans, and IntelliJ which makes coding in java easy by a great deal. In addition to this, there are a vast number of automation tools and debuggers available. - High performance - Just-in-time compiler in Java programming language enhances the execution of the code. This is possible because the compiler will compile a method block only when the method is called. This makes java a fast and time-efficient language. Java - Overview To gain complete knowledge about how a Java code is compiled and executed, the following terms should be familiarised as a prerequisite JVM Java Virtual Machine or JVM which acts as an engine that builds the runtime environment for running the Java code. JVM is the main reason that java can function as a cross-platform language. It helps in converting high-level code to bytecode. Each OS needs a different kind of JVM but the output of the JVM produced by different software will all be the same. JRE JRE or the Java Runtime Environment is software that runs on the operating system which makes the execution of Java applications possible. It accumulates the implementation of other core libraries and that of JVM. The JRE will consist of the components like deployment technologies, user interface toolkits, integration libraries, java database connectivity (JDBC), java native interface (JNI), Collections, Java archives (JAR), regular expressions, etc. JDK Java development kit or JDK is the software development environment that provides a set of tools and libraries to develop java applications and applets. If you are just interested in running a java program on your computer just installing JRE would suffice. But if you want to write java code and do actual programming, you have to install JDK. Java - Environment setup The environment setup for the Java programming language follows different steps for different operating systems. To download and set up the local environment for Java programming language in your local Windows machine, the following steps are to be executed. - Java SE is available for download at. You have to create an account and download the SE that is suitable for your machine. - Once you have downloaded the SE, you have to run the installer which installs both JDK and JRE. - By default, the JDK is installed in the path "C:\Program Files\Java\jdk version where JDK version refers to the current update of the version of JDK installed. - The directory in which the JDK is installed is called the JAVA_HOME. - The path along with the bin of the JDK is called the Java classpath or simply PATH. - Both these must be set as environment variables to enable the installation of directories. - To set the environment variables, launch Control Panel -> System and security -> Systems -> Advanced System Settings on the left pane. - Click on the environment variables button - Under system variables, find the variable path and click on edit, and include the new Java path. - Create a new system variable called JAVA_HOME and set the path up to the directory in which JDK is installed. - This will enable your local windows machine to write and compile Java code. You can verify it by running the javac -version command on the terminal. To download and set up Java on a Linux machine, the following steps are to be followed - Create a directory in which JDK will be installed. The terminal commands are $ cd /usr/opt $ sudo mkdir java - Extract the downloaded package in the java directory using the following commands $ cd /usr/opt/java $ sudo tar xzvf ~/Downloads/jdk-version-Linux-x64_bin.tar.gz - Set the Ubuntu Machine for using the newly installed Java, by following the below commands. $ ls -ld /usr/bin/java* $ ls -ld /etc/alternatives/java* Verify the installation using the following command $ javac -version In order to download and set up java in MacOS, the following steps are to be followed - Navigate to and login then accept the license agreement. - Then look for the last stable update of JDK for the macOS and click on the Download button. - Click on the disk image file and follow the instructions to install the JDK. - Remove the disk image file. - Now verify the installation with the javac -version command on the terminal Java - Basic syntax A Java program is a collection of objects which can be invoked by calling methods that have the definition for these objects. A Java program generally consists of the following parts. In this Java Programming Tutorial, let us go through the basic syntaxes that are used, - Object - The object is also an instance of the class. It possesses both state and behavior which are defined by data members and methods respectively. - Class - A class is a basic structure or blueprint which has a collection of objects with common behavior. - Methods - A method is a Java Function that is used to alter the state of the data member and perform logical operations on it. A Class can have many methods that can be invoked by the respective call statements. - Instance variables - The instance variables are also known as data members which are created by assigning a specific value to them. All the objects have a distinct set of instance variables. The Rules to follow when writing the Java Program are given below Case sensitive language - Java is a case sensitive language and it indicates that there is a difference when the same word is written in uppercase and lowercase. Proper attention should be paid to make sure the correct case is used or the compiler will throw a lot of errors. Example: JAVA is different from java Naming conventions - Java language follows a set of rules while naming the following - Class name - The class name should always begin with an uppercase letter. If there are multiple words in a class name then the first letter of each word should be in uppercase. Example: Animals, MyFirstJavaCode are all valid class names. - Method name - The method name should always begin with a lowercase letter. If there are multiple words in the method name then a camelcase naming convention should be followed. Example: findArea(), display() are both valid method names - Variables - The variables can start with letters a-z or A-Z or with an underscore (_). It is preferable to avoid using single variables. Example: _area, area, Area are all valid variable names. - Java file name - Java file name must match exactly with the class name of the program. When saving, the file name must be saved as class name.java. If there are multiple classes in the program the filename will be the same as the class name of the public class present in the program. Example:SampleJavaProgram.java will be the file name when the class name is SampleJavaProgram Objects and Classes Java is an object-oriented programming language. Everything in java is denoted in terms of classes and objects with their respective attributes and behaviors. Objects in Java The object depicts the real-life entity. It is also an instance of the class. Typically an object consists of the following - Identity - This gives an object a unique name and allows the object to interact with other objects. - Behavior - This is the response of an object to the set of instructions (methods) invoked by the other objects. - State - This reflects the properties of an object which are determined by the attributes and methods. Declaring and initializing an object When an object is created, an instance of the class is said to be created. All the objects share the same attributes and behavior of the class. But the state ( the value of these attributes and behaviors) is unique to each object. A class can have multiple objects. An object is declared with the class name and the object name. An object with the given name will be created and assigned a null value. An object is said to be initialized only when memory is allocated for the object with the new keyword. The new keyword also calls the constructor and the memory allocation happens for the object. Classes in Java A class is a user-defined prototype based on which objects are created. It defines the set of attributes and behavior that is common for all objects. Commonly a class will consist of the following components : - Modifiers - A class should be in public or default access. - Class keyword - The class keyword is used to create a class. - Class name - The class name should begin with an uppercase letter. - Superclass(optional) - The name of the class’s parent class. A class shall extend only one parent. - Interfaces(optional) - A list of interfaces implemented by the class which is separated by a comma. A class can implement different interfaces at the same time. - Body - The body of the class surrounded by curly braces { }. Data fields define the attributes of the class and are usually in private access. Methods define the behavior of the class and generally have a public mode of access so that they can be called from the main method. Constructors are used for initializing the objects in the class and have public access in order to be invoked from the main method using the new keyword. Constructors Constructors in Java look similar to a method. But the constructors don't have a return type. Constructors are used to initializing an object and have a public mode of access generally. The constructors have the name as the name of the class. Typically constructors are called when the object is initialized using the new() keyword. This makes sure that memory is allocated for the respective object and the attribute values are initialized. The constructor is invoked when the object is initialized or instantiated for the first time. Rules for writing Constructors - Constructors have the same name as the class name in which it resides. - The constructors should not have a return type. - Access modifiers can be specified to control access to the constructors. Types of Constructors - Default constructor or No argument constructor - As the name suggests the no-argument constructor has no parameters defined. If there is no constructor specified explicitly, then the compiler itself creates a default constructor that assigns values 0 and null, etc depending on the data type to the object which is being instantiated. - Parameterised Constructor - A constructor that has parameters defined is called the parameterized constructor. If you want to define the values of the objects explicitly then a parameterized constructor should be used. If there are 2 constructors in the code, then the default constructor is invoked when parameters are not passed. - Copy Constructor - A copy constructor is the one that initializes values of current the object using another object in the same class. A copy constructor is invoked when the object is passed as the argument to the function or when the object is copied to return it from the function. Java Training in Chennai at FITA Academy provides you extensive training of the Java Programming concepts with real-time examples and in-depth practice on code scripting under the mentorship of Expert Java Professionals. Basic Data Types The variables in java are used to hold values of different types. Based on the data type the operating system reserves memory space and decides what should be stored in the data type. Here in this Java Tutorial for Beginners let us see some of the Basics Data types. The Data type is further classified into two groups and they are - Primitive Data Types - int, short, byte, float, char, boolean, and double - Non Primitive Data Types - Arrays, Classes, and Strings The eight various primitive data types in Java are, Numbers The Primitive Number types are further divided into two different groups and they are - Integer types - This stores positive, negative, or whole numbers (such as 123 or -456), without any decimals. The Valid categories are short, long, int, and byte. - Floating Point Types - It interprets the numbers that are within the fractional part that consists of more than one decimals. The two varieties are float and double. Types of Integer Byte A Byte Data type stacks the whole numbers which range from -128 to 127. This can be used in the position of the other integer types and int to save the memory when you are confident that the values are within the limit -128 and 127 Int An int data type is capable of storing the complete whole numbers from the - -2147483648 to 2147483647. Usually, an int data type is the commonly chosen data type if the created variables are within the numeric value. Short A Short Data type hoards the Whole Numbers which ranges from 32768 to 32767: Long Long Data type shall hoard the whole numbers that range from -9223372036854775808 to 9223372036854775807. It is utilized when an int is not larger for storing the value. Important Note: You must end a value with "L": Floating Point types You can use the Floating-point type when you require the number that is with a decimal, for instance, 9.99 or 3.14515. Float A Float Data type stacks the fractional number that ranges from 3.4e−038 to 3.4e+038. Important Note: You must end the value with "f": Scientific Numbers There are possibilities that the floating-point number shall also be the scientific number along with an "e" denotes the power of 10: Booleans The Boolean Data types are termed with a Boolean keyword and it takes values of true or false: A boolean data type is declared with the boolean keyword and can only take the values true or false: Java Notes for beginners The Boolean values are generally used in conditional testing Characters A char data type is primarily used for accumulating the characters that are single. These characters should mostly be encircled by single quotes such as 'A' or 'D' Strings A String Data type is utilized for storing the sequence of the characters (text). The String values should always be surrounded by double-quotes Non-Primitive Data Types These Data types are also called the reference types since they refer to the objects. The distinction between primitive & non-primitive data types, - A Primitive type is the predefined one in the Java language. A Non- primitive type is developed by the programmer and it is not illustrated by Java. - Usually, a primitive type consists of values, whereas the non-primitive kind begins with the uppercase letters. - The Non-primitive types are applied in the call methods for performing a few operations, however, the primitive types cannot. - The Primitive types begin with the lowercase letter, whereas the non-primitive kinds begin with an uppercase letter. - The Capacity of the primitive type relies on data type, whereas the non-primitive type consists of a similar size Java Variables The variable aids us with named storage which the programs can manipulate. All the variables in Java comprise a specific type and it resolves the layouts and size of the variable memory & the array of values that are stored in a memory along with the set of operations that is applied to the variable. It is mandatory to declare the variables prior to using it. Mentioned below are the basic form of the variable declaration - The Data type is Java's Data Types and it is the variable in the name of a variable. For declaring the above one plus variable of the mentioned type, you can make use of the comma-separated list. Below are the valid example of the variables that are used for declaring and initializing Java- For Example In this Java Basic Tutorial, we will explain to you different variables that are available in Java and the three different types of variable are - Local variables - Instance variables - Class/Static variables Local Variables - A Local variable is usually declared in block, methods, or constructors. - The Local Variables are developed when the constructor, methods, or the block is entered and the variable is eliminated once when it steps out from the block, constructor, and method. - The Access Modifiers cannot be applied for the local variables. - A Local Variable is deployed at the stack level internally. - The Local variable is visible only if it is within a declared block, method, or constructor. - You will not have the default values for the local variables, and the local variables should be declared and the initial values should be allowed prior to using it. Example The Age is the local variable. It is defined inside the cubAge() method and the scope of it is limited only within this method. To use age without initializing, so that it would be given as Error during the time of compilation. Instance Variable - The Instance Variable is declared in the class, however outside the block, method, or constructor. - If space is allowed for the objects in the heaps, the slot for every instance variable value is built. - An Instance variable is developed when the object is developed with the application of the keyword 'new' and it is destroyed if the object is destroyed. - The Access Modifiers are provided for the instance variables. - The Instance Variables are declared when the class level after or before use. - An Instance variable is visible for all the methods, blocks, and constructors in a class. Usually, this is recommended to make these variables private for the access level. Yet, the visibility for the Subclasses is provided for the variables with the utilization of the access modifiers. - The Instance variables do have default values. For Boolean, it is either false, for Number, it is 0, and for the object references, it is null. The values shall be assigned at the time of declaration or within a constructor. - An Instance variable is accessed straightly by calling the variable name inside a class. And within the static method, it must be termed using a fully qualified name. ObjectReference.VariableName. Static/ Class Variables - Over here you will have exactly one copy of all the class variables for a specific class, irrespective of how many objects are developed from it. - The Static Variables are used rarely apart from declaring them as the constants. The Variables and Constants are termed as private/public, static, and final. The Constant variables do not change right from the beginning of a value. - A Class Variable is also called the Static Variables and this is declared along with a Static keyword in the class, however, outside the block, method, or constructor. - The Static Variables are stacked in the Static Memory. A Static Variable is used for building the program right from the beginning and it is terminated as soon as the program stops. - The Visibility is more familiar for the instance variables. Yet, the most static variables are termed public as they are found for the users of a class. - The Default values are similar to that of the instance variables. For Booleans - it is false, for object references - this is null, and for numbers - it is 0. - Mostly the values are assigned while the declaration or within a constructor. Further, the values are assigned to the special static initializer blocks. - The Static Variables shall be pervaded while calling a class name ClassName.VariableName. - While declaring the class variable as the Public Static final, then the variable names i.e the constants everything is used with the upper case. - When the Static Variables don't seem to be final and public, then the naming syntax is similar to that of the local and instance variables. Modifier Types A Modifier is a keyword that you can add to the definition of changing the meanings. The Java language consists of a broad range of modifiers, which includes the following, - Java Access Modifiers - Non Access Modifiers You can use the modifier that includes the keywords, definition of the method, variable, or class. Access Control Modifiers Java enables you with more access modifiers for setting the access levels to the variables, methods, constructors, and classes. The four different access levels include - Noticeable only to the Class (private) - Noticeable to the package and the default also no modifiers are required. - Noticeable to all the subclasses and packages. - Noticeable to World (public) Non- Access Modifiers - Java supports you with a number of non-access modifiers for achieving other functionalities. - A Final Modifier is used in finalizing the performance of methods, classes, and variables. - A Static Modifier is utilized in developing the class variables and methods. - The Volatile and Synchronized Modifiers are used in threads. - An Abstract Modifier is used for building abstract methods & classes. Java Operators Java provides a wide range of operators for manipulating the variables. In the given below example, you can make use of the + operator for adding together two values, For Instance, Though an operator is commonly used for adding together the two values as per the example given above, you can use it for adding together the variable and the value, or the variables and the other variable. In this Tutorials on Java, let us see the different types of operators that are used, - Arithmetic operators - Comparison operators - Assignment operators - Bitwise operators - Logical operators Arithmetic operators The Arithmetic operators are predominantly used for performing general mathematical operations. Comparison Operators The Comparison Operators are used in comparing values Assignment operators The Assignment operators are used for assigning the values to the variables. From the below example, you can use the Assignment Operator (=) for assigning the value 75 to variable x: For Example The groups of Assignment operators include Bitwise operators The Bitwise Operators are used in executing the Binary logic with bits of the long integer or the integer. Logical operators The Logic Operators are used for identifying the logic between values or variables. Loop Control There may come a few circumstances where you should execute the block of code numerous times. Generally, the statements will be executed in consecutive order. Like, the first statement will be executed first, then the next by second, and so on. The Programming languages offer different control structures that permit you for the most complex execution paths. In general, the Loop Statements permits us to execute a particular statement or a group of statements at different times, and given below is the common form of loop statement that is used in most programming languages. There are three types of Loops in Java and, they are - for loop - while loop - do-while loop for loop The for loop in Java is used for iterating a specific factor of the program manifold times. Supposing the total number of iterations is fixed, then it is advisable to use for loop statements. Syntax The Syntax of a for loop is- Given below is the flow of Control in the for loop: - Usually, the initialization step is executed first, and it is performed only once. This step permits you to initialize and declare the loop control variables and usually, these steps complete with a semicolon (;) - Later, the Boolean Expression is calculated regularly. After that, if it is true, then the body of a loop needs to be executed. In case if it turns out to be False, then the body of the loop shall not be executed along with the control jumps for the next statement past a for loop statement. - The body of a for loop is executed, then the control bouncebacks in updating the statement. A for loop statement permits you to update the loop control variables. Also, it is possible that this statement shall be left blank with a semicolon in the end. - A Boolean expression is estimated again. In case, if it is true, then the loop executes and repeats the process of the body of a loop; later this step is updated and also the Boolean Expression. Once the Boolean expression is concluded to be False, then the for loop statement terminates itself. While Loop The while loop statement frequently executes the target statement in the Java Programming language until the conditions that are provided are true. Syntax The Syntax of a While Loop is- Over here, statements(s) shall be either a single or block of statements.A condition can be any expression and the true will be a non-zero value. While executing, supposedly the boolean-expression outcome is true, then the inside actions of the loop can be executed. And this will prolong until the expression returns to true. Meanwhile, the conditions turn out to be false, then the program control shall pass it to the line automatically succeeding a loop. The important point of a While loop is that a loop may not run always. If the expression is tested and if you get the result as false, then the loop of the body shall be skipped and the very first statement after the while loop shall be executed. do...while loop The do...while loop is identical to a while loop, except if a do...while loop is assured for executing at least once. Syntax Following is The Syntax of a do... While Loop- Check whether the Boolean expression occurs at the end of a loop, then the statement in a loop executed before a Boolean is tested. When the Boolean expression is true, then the control bounces back to the do statement and the statement in a loop is executed once again. This process is repeated further until the Boolean expression results in false. Loop Control Statements A Loop control statement modifies the execution from the normal sequence. In case the execution flees a scope, then the automatic objects which were created in that particular scope are destroyed. A Loop control statement modifies the execution from the normal sequence. In case the execution flees a scope, then the automatic objects which were created in that particular scope are destroyed. In this Java Tutorials for Beginners we have discussed the different Control statements supported by Java are as follows - continue statement - break statement - continue statement - In the for loop, the continue keyword induce control automatically and moves to the update statement. - In do/while loop or while loop, it controls automatically and dives to a Boolean expression. A Continue keyword is used in any loop control structures. This causes a loop to jump fastly to the other iteration that is in the line of the loop. Syntax Break Statement A break statement in the Java Programming language is used for two different purpose - In case, the break statement is confronted inside a loop, then the loop is instantly terminated, and now the program control proceeds with the next statement that follows the loop. - It can be used for terminating in case of switching the statement Syntax Java Online Course at FITA Academy provides in-depth training of the Java programming concepts with complete hands-on training and real-time practices under the guidance of Expert Java professionals. Decision Making A Decision-Making structure includes one or more than one condition for evaluating or testing the program with the statement or the statements that shall be executed when the condition is identified as true, and in addition to that, the other statements can be executed when the condition is identified as false. Below is the general method of typical decision-making structure which is found in predominant programming languages. The Java Programming language supports us with the following types of Decision-making statements. - if...statement - nested if statement - if...else statements - switch statement if...statement The if statement encompasses the Boolean expression that is followed by more than one statement. Syntax When the Boolean Expression is assessed to be true, then a block of code which is inside the if statement is executed. In case if it is not, then the first group of code after the end of an if statement shall be executed (on closing the curly braces) nested if statement It is permissible to nest if-else statements i.e you can make use of else if or one if statement inside the other else if or if statement. Syntax if....else statement The 'if statement' is superseded by the optional else statement, that executes while a Boolean expression is false. Syntax Whenever the boolean expression is assessed to true, then supposing the block of code is executed, contrarily the else block of code shall also be executed. An if...else if...statement The if statement is succeeded by the optional else if...else statement, that is useful in testing different conditions using the single if.. else if statement. - While using the if, else if statement there are few things that must be kept in the mind. - An if shall have one or zero else's and it should come after the any else if's - The if can have zero for many else if's and it should come before an else. - If an else is succeeded, then none of the rest else's or the else if's would be tested. Syntax Switch Statement The Switch Statement permits the variables to be tested for equal opportunity against the set of values. All Values are called the case, and the variable that is going to be switched on is verified for each case. Rules that are applied to the Switch Statement - A Variable is used in the Switch statement that can be strings, integers, enums, and Convertible integers like (byte, char, short) - The value for the case should be the same as the data type just like a variable in the switch and it should be literal or a constant. - It is possible that one can have more case statements within the switch. All the case is followed by a value that can be compared to and to the colon. - In case the break statement is reached, then it's switch terminates the control flow hops to next line that supersedes a switch statement - In case when a variable is switched to the equal case and the statements which follow the case would execute continuously till the break statement is being reached. - The Switch statement shall consist of an optional default case, that should appear at the end of a switch. A default case could be used in performing the task when no single case is true. The break is not required in a default case. - It is not mandatory that all the cases are required to have a break. If at all there are no break cases found, then the flow of control falls through the subsequent cases till the break is reached. For Example Output Java Numbers Usually, when you work with Numbers, you can use the primitive data types namely long, double, byte, and int. Anyhow, in the development, you can come across circumstances where you must use the objects rather than primitive data types. To reach this, Java provides the Wrapper Classes. The Wrapper Classes ( Long, Byte, Double, Float, Integer, and Short) are the different subclasses of an abstract class number. An Object is the Wrapper Class that consists of Wraps and its corresponding primitive data types. For converting the primitive data types to objects it is called boxing and it is to be looked after by a compiler. Hence, while using the Wrapper class you should pass the value of a primitive data type for the constructor of a Wrapper Class. Now, the Wrapper Object shall be converted to a primitive data type and this practice is known as unboxing. The Number Class is a unit of java.lang package. The different types of Numbers in Java are When b is designated as the integer value, and now the compiler boxes the integer as b is the integer object. Further, b is now unboxed so it can be added as an integer. Number Methods In this Java Tutorial for Beginners let us go through the instance methods and all the subclasses which a Number class implements Java Characters Usually, whenever you work with the characters, you can use the data types char. For Example In the development, you can come across the situation where you should use the objects rather than the primitive data types. To reach this, Java gives the wrapper class character of a primitive data type char. This character class provides a more useful class i.e (static methods) to manipulate the characters. Then, you can build the Character object within a Character constructor. Character ch = new Character('a'); A Java Compiler can create the Characters of the Object for you in certain circumstances. For instance, in case if you pass the primitive char into the method that seeks an object, then the compiler immediately converts the char to the Character for you. It is called autoboxing or unboxing in case the conversion turns the other way. Escape Sequences The character that is anticipated by the backslash (\) is the escape sequence and it consists of unique meaning to a compiler. The below table clearly depicts the Java Escape sequences: The different Character Methods are: Java Training in Bangalore at FITA Academy provides comprehensive training in the Java programming language from the basics to the advanced topics under the guidance of Expert Java Professionals from the programming industry. Java Strings The Strings are used for stacking shorter text. The String Variable consists of the compilation of characters that are bounded by the double quotes: Example We can build the variable type of string and assign them to the value Create the variable of type String and assign it a value: String Length The String Length in Java is the object that consists of the methods which can exhibit operations on the strings. For instance, the length of the string is found with a length() method: More String Methods More string methods are found and they are toLowerCase() and toUpperCase(). To identify a Character in the String An indexOf() method send back the index (the exact position) of the first occurrence of the described text in the string (inclusive of the whitespace): Java Notes for Beginners: Java usually counts a position from zero So, - 0 is the 1st position - 1 is the 2nd position - 2 is the 3rd position and so on... String Concatenation The Operators could be used between the strings for combining them and it is called concatenation. Java Notes for Beginners: We have inserted an empty text (" ") for developing the space between the First & Last Name on the print. Further, you can also use a contact() method for concentrating the two strings Special Characters As it is important that the strings should be written within a quote since Java would misinterpret this as the string and provoke the error: The best solution for avoiding a problem is by using a backslash escape character. A backslash (\) escape character help in modifying the special characters into the string characters: A sequence \" adds the double quote to the string: The sequence \' aids in adding the single quote in the string The sequence \\ helps in inserting the single backslash to a string: This \\ sequence adds a single backslash to a string: This \\ sequence adds a single backslash to a string: Java Notes for Beginners: It utilizes the + operator for concatenation & addition. The Numbers are added and the strings are concatenated. When you add to numbers, the outcome will be a number When you add the 2 strings, the outcome will be the string concatenation: Java Arrays Generally, Java enables a Data Structure, an array, that stacks in the fixed-size of the consecutive elements of similar kinds. The Arrays are used for storing the acquisition of data and it is useful to consider that an array as the compilation of variables of the same types. Rather than declaring the individual variables like number0, number99.... and so on, you can declare 1 array variable like numbers and utilize number[0], a number[99]... and so to interpret the individual variables. In this Java Programming Tutorial, we will make you understand how to create the arrays, declare the array variables, and the process of arrays that are used in indexing the variables. For creating an Array It is possible to create the array using a new operator by following the syntax- Syntax The above statement helps in two things Creates the array utilizing a new dataType[arraySize] It designates the reference of a newly created array and the variable of the arrayRefVar. You can create, declare, and assign the reference of an array to a variable and can be merged into one statement as given below Processing Arrays At the time of Processing the array elements, you can either use the foreach or a for a loop as all the elements in the array are of a similar type and the size of an array is known. The Foreach Loop JDK 1.5 popularized the new for loop that is known as enhanced for loop or foreach loop that allows you to pass through the entire array orderly without using the index variable. Passing the Arrays to the Methods Just like how you pass on the primitive type of values to the methods, also you could pass the arrays to the methods. You can call the above by passing the array. For instance, the following statement calls the print array method for displaying 5,7,2,8,9 and 2- Java Date and Time Java supports the Date class that is available in java. util package, this class encloses the current time and date. A Date Class hold two constructors as given below Below are the methods of Data Class Get the Current Date & Time It is the easiest method for accessing the current Date & Time in Java. You can make use of the simple Date object along with the toString() method for printing the current date & time as given below: Date Comparison Mentioned below are the three different types for comparing two dates - You can use the following methods before (), equals(), and after(). As the 12th month comes prior to the 18th for instance the new date ( 95, 3,12). Before (new date 95,3,18) returns true. - You can also use the getTime() for knowing the total number of milliseconds that have passed by since January 1, 1970 midnight for both the objects and you can compare these values. - You can also use the compareTo() form, which is characterized by the implemented Date and Comparable interface. the printf Time and Date formatting is performed easily using a printf method. You can make use of the two-letter format and begin with the t and it ends in one of the letters of a table that are shown in the code below. Java Notes for Beginners It is not advisable to supply the date numerous times to format all the parts. So, you can use the format string that indicates an index of an argument that should be formatted. On doing that the index should automatically follow the % and it should be eliminated by a $. Time and Date Conversion Characters Java Training in Hyderabad at FITA Academy provides you holistic training of the Java programming language from its fundamentals to the complex topics under the guidance of real-time Java professionals. Java Regular Expression The Regular Expressions are a unique sequence of characters that helps you to find or match the set of strings, or other strings, upon using a specialized syntax that is held in the pattern. It can be used for editing, searching, or for manipulating the text or data. In this Java Tutorial for beginnerslet us see in detail the Regular Expressions that are used in Java. Usually, the java. util. regex pack comprises of three classes and they are - Pattern Class - PatternSyntax Exception - Matcher Class - Pattern Class - The Pattern Objects is an organized interpretation of regular expressions. A Pattern Class usually gives no Public Constructors. For creating the pattern, initially, you should call one of the public static compile() methods that send back the Pattern object. These methods usually accept a regular expression as a first argument. - PatternSyntax Exception - These are the objects that are the unchecked exception that notifies the syntax error on the regular on regular expression pattern. - Matcher Class - The Matcher Objects are the engine that represents the patterns and it executes the match operations counter to the input string. Likewise a Pattern Class, the Matcher illustrates the no public constructors. You can gain the Matcher Object by calling the matcher() method on the Pattern object. Capturing Groups This is one of the ways of treating different characters as one unit. It is created by placing characters that should be organized inside the set of parentheses. For instance, a regular expression (Apple) creates one group that contains the letters "A", "P", "P", "L", and "E". Capturing the groups is counted by the opening parentheses from left to the right side. In these expressions ( (A) (B(C))), for instance, - ((A)(B(C))) - (A) - (B(C)) - (C) Identifying the groups that are available in these expressions are called a group count method on the matcher object. A group count method sends back the int that displays the total number of captures that the group presents in a matcher's pattern. Also, there is a special group, group 0, that always interprets the complete expression. And this group will not be accommodated in the total report of the groupCount. Syntax of Regular Expression Different Methods of a Match Classer The Index methods give us resourceful index values that showcase the exact match where it is found in an input string. - public int start) - It helps in sending back the start index of a previous match. - public int end() - It refutes the offset only after when the last characters are matched. - public int start(int group) - This sends back the beginning of an index that is laterally captured to the given group at the preceding match operations. - public int end(int group) - It sends back the offset of the last character of concatenation and it is clocked by the set that is given at the older match operation. Study Methods A Study method reviews an input string and it sends back the Boolean that points out whether the pattern is found or not. Some of the common study methods are listed below - - public boolean find() - It helps in identifying the progression of an input sequence that adheres to the pattern. - public boolean lookingAt() - It tries to match the complete region against a pattern. - public boolean matches() - It tries to match the complete region against a pattern. - public boolean find (int start) - It helps in resetting the matcher and then it tries to identify the concatenation of input sequence that coheres to the pattern, that begins at the beginning of a specific index. PatternSyntax Exception Class Methods This is an unchecked exception that denotes the syntax error on the regular expression pattern. A PatternSyntaxException class gives the below methods for identifying where it went wrong - public int getIndex () - It restores the error index - public String getMessage () - This sends back the multi-line string that consists of a description of a syntax error along with its index. Also, it sends back the erroneous regular expression pattern and the visual indication of an error-inside within a pattern. - public String getDescription () - It helps in reclaiming the description of the error. - public String getPattern () - It restores the erroneous usual expression pattern. Java Methods It is the compilation of statements that are organized together for performing the operations. You can call the System.out.println() method, for instance, the system can execute different statements for displaying the message on the console. In this Java Fundamental Tutorial, you will comprehend how to create your methods with or without returning the values for invoking a method and to apply the method of abstraction to the program design. For creating the Method Consider the below example Syntax Over here, - int - return type - public static - modifier - methodName - It is the name of a method - int a, int b - It is the list of parameters - a, b - It is the formal parameters A Method definition consists of a method body and method header. Syntax The above syntax also includes: - returnType - This method shall return the value - modifier - It explains the access kind of method and this is optional for use. Parameter List It is the list of parameters, that is the number, order, and type of parameters of the method. These are the optional methods that consist of zero parameters. method body An method body describes what a method performs with the statements. nameOfMethod It is the method name. A method signature includes the parameter list and the method name. Method Calling Generally, to use a method, it must be called. There are two types through which a method can be called, to be precise, a method that returns the value and a method that doesn't return anything. The steps of method calling are simple. In case assuming that the program calls a method, then the program controls are also relocated in the called method. Then, a called method sends the control to a caller on two conditions and they are: It sends back the statement that is executed. It reaches to a method that uses the ending closing brace. A method that returns the void is termed as the call to statement. For Example This method sends back the value that shall be understood using the below example int result = sum (6,9) A Void Keyword A Void Keyword permits us to create methods that do not return the value. Here, in the example given below, let us take into consideration the void method for the methodRankPoints. It is a void method that can't refute the value. To call, the void method, it should be a statement, i.e it must be methodRankPoints (255.7); This is the Java Statement that ends with the semicolon as shown below Passing the Parameters by Value When you work under a calling process, the arguments should be passed. It should be in the structure that is similar to the corresponding parameters in the method of the specification. A parameter shall be passed utilizing the reference or value. While passing the Parameters, the value indicates calling the method with the parameter. This is how the argument value is passed to the parameter. Method Overloading In case if a class has more than two methods with the same name, but with different parameters, this is known as method overloading. This is a contrast to an overriding method. When using the overriding method, you should have the same method type, name, and the number of parameters. By using the Command-Line Arguments At times, you may need to pass a few information to the program where you run them and it is accomplished by passing the command-line arguments to main(). The Command-Line argument is the piece of information that directly pursues a program's name on a command line when it is executed. Also, it is easy to access the command-line arguments from the Java Program. It stacks the Strings in a String array that is passed to the main () The Finalize () Method It describes the method that shall be invoked just before the object's final elimination by using a garbage collector. A method is called for finalize () and this can be utilized for checking that the object is eliminated neatly. For instance, You may use the finalize() for making sure that the open file is gained by an object that is closed. In case if you want to add the finalizer to a class, you must describe the finalize() method. The Java runtime calls the method while it is about to recover the object of that class. Inside a finalize() method, you can mention the actions that should be executed before the object that is going to be eliminated. Over here the keyword is shielded with the specifier that halts the access to the finalize() by code that is described outside the class. It denotes that you can determine when or how they finalize would be executed. For instance, by any chance, if your program ends prior to the process of garbage collection, then at any cost the finalize () can not be executed. Java I/O The Java I/O package comprises almost all the classes you may require to execute the input and out in Java. And all these streams interpret the input source and the output destination. The Java stream in the Java.io package backs the data like an object, localized characters, and primitives. Stream The Stream is the sequence of the Data. In Java, you can stream the composed of bytes. In Java, there are 3 streams that are developed automatically and all these streams are hooked with a console. - System.in - Standard Input Stream - System.out - Standard Output Stream - System.err - Standard error stream The Code that is used for getting the Input from the Console is int i=System.in.read();//returns ASCII code of 1st character System.out.println((char)i);//will print the character The Code for printing the error message and output to the consoles are System.out.println("simple message"); System.err.println("error message"); The difference between the Input Stream and Output Stream Input Stream The Java Application basically uses the input stream for reading the data from the source, it can be anything like a socket, peripheral, an array, or a file. Output Stream The Java Application uses the Output Stream for writing the data for a specified destination, it can be anything like a socket, peripheral, an array, or a file. The Diagram that is given below helps you to comprehend the Java Input and Output Stream easily Input Stream Class An InputStream Class is the abstract class and it is a superclass of all the classes that depicts the input stream of the bytes. The Useful Approaches of the Input Stream Output Stream Class It is an abstract class. This is the Superclass of the gross classes that interprets the output of the stream of bytes. The output stream acquires output bytes and returns back a few to them for syncing. The Useful Approaches of the Output Stream Java Exceptions Usually, the Exceptions are the problems that crop up during the time of an execution program. Suppose if an Exception takes place the normal flow of a program will be disrupted and that application terminates unnaturally, that is not advisable, hence these exceptions should be handled carefully. The Exception can take place because of numerous reasons. Below is the situation where the exception takes place, - In case if the user has entered into unauthorized data (data i.e invalid) - If a Network Connection has been lost in between the communication or if the JVM is out of memory. - A file that is needed to be opened but if you are unable to identify it. The Exceptions here are developed because of the program error, user error, or other physical resources that failed in some manner. Based on this situation, in this Java Programming Tutorial, we have categorized the Exception into three and you should comprehend them to know how these exception handling work in Java, - Checked Exceptions - Unchecked Exceptions - Errors Checked Exceptions The Checked Exceptions is the exception that is verified by the compiler during the time of compilation and these are termed as the compile-time exceptions. One can not avoid these exceptions just like that, the programmer must take care of these exceptions. Unchecked Exceptions The Unchecked Exceptions are the exceptions that take place at the time of execution. It is termed as the Runtime Exceptions. It includes the programming bug-like errors and other irregular uses of the API's The Runtime Exceptions are neglected during the compilation. Errors These are not the kind of exceptions, but these are the problems that appear beyond the limit of a programmer or a user. The Errors are generally ignored in your code as you can hardly perform anything about the error. For instance, if the Stack Overflow takes place, the error would set in and also it shall be ignored during the time of compilation. Exception Hierarchy Every Exception class is the Subtypes of java. lang. Exception class. An Exception class is the hierarchy of a throwable class. Apart from an exception class, there are additionally subclasses that are termed as Error and it is borrowed from a Throwable class. The Errors are the abnormal conditions that take place at times of severe failures and these can not be managed by Java programs. The Errors are navigated to denote the errors that are generated by the runtime environment. For instance, An, JVM is usually out of Memory. Usually, the programs may not be able to recover from the errors. An Exception class consists of two main subclasses and they are the RuntimeException Class and IOException class. Now let us see a brief description of the Exception Methods - public Throwable getCause() - It helps in sending back the cause of exceptions that are depicted by the Throwable Object. - public void printStackTrace() - It helps in printing the result of the toString() with a stack trace of the error output stream or System.err. - public String toString() - It refutes the name of a class that is concatenated to the result of the getMessage(). - public String getMessage() - This refutes the detailed text of the exception that has taken place. It is the text which is booted on a Throwable constructor. - public Throwable fillInStackTrace() - It fills in the Stack of the Trace of this Throwable object with the present stack trace and it adds to the before the information on the stack trace. - public StackTraceElement [] getStackTrace() - It refutes the arrays that comprise the element from a stack trace. An element on the index 0 epitomizes the top of a call stack. The ending element in an array epitomizes the type of the bottom call stack. Catching Exception This approach captures the Exception using the blend of catch and tries keywords. The Codes that are not beyond the try/catch block are termed as the protected code and the syntax for using a try/catch seems more or less similar. A Code that is prone to an exception that is placed on the try block. Whenever the exception takes place, it is managed by the catch block that is related to it. All the Try blocks must be followed automatically using a catch or the final block. The Catch Statement engages in asserting the kinds of exceptions that you are trying to capture. If the exception takes place on the protected code, then the catch block that supersedes a try is verified. If a type of exception has taken place in a catching book is listed, then the exceptions are passed for catching the block that is more like the argument than it is on to the passed method of a parameter. Multiple Catch Blocks The Try Block shall be superseded with different catch blocks. Throw/ Throws keyword If a method can not tackle the checked exception, then the method should term itself using the throw keyword. A throw keyword crop ups at the conclusion of the method signature. Common Exceptions In Java, you can describe two categories of errors and exceptions. - JVM Exceptions - It is the Exceptions/ errors that are logically and exclusively thrown by a JVM. For instance: NullPointerException, Class Cast Exception, an ArrayIndexOutOfBoundsException. - Programmatic Exception - These are the exceptions that are thrown directly by an application or by the API Programmers. Example: IllegalStateException, IllegalArgumentException. Java Training in Coimbatore at FITA Academy provides extensive training of the coding and scripting practices of the Java code under the mentorship of expert Java professionals with certification. Java Inner Classes In this Java Tutorial, like the Variables, Methods of the class can also have the other class as an owned member. Writing the class in something else is permitted in Java. A Class that is written in is known as a nested class. The Class that occupies an inner class is termed as an outer class. Syntax Below is the Syntax for the inner demo and outer demo in the nested class The Nested Classes are further classified into two types and they are - Static Nested Classes - Non-Static Nested Classes - Static Nested Classes - It is the Static Member of a Class - Non-Static Nested Classes - It is the Non-static Member of the Class Static Nested Classes The Static Inner Class is the Nested class that a Static Member of the Outer Class. This shall be accessed past without calling the outer class and by using the other static members. Unlike the Static Members, the static nested may not need to have access to the instance methods and variables of an outer class. Syntax Inner Classes ( Non-Static Nested Classes) The Inner Classes are the Security systems in Java. It is a known fact that the class shall not be combined with an access modifier in private, however, when you accept a class in the process of another class, later the inner class shall be made as private. It is also used for ingressing the private members of the class. It is of three kinds based on the kinds and they are - Inner Class - Method-local Inner Class - Anonymous Inner Class Inner Class Building an inner class is more simple. All you need is to write the class in a class. In contrast to a class, the inner class shall be private and when you declare the inner class is private, you can not access it from the object that is outside the class. Accessing Private Members As specified earlier, the inner classes are used for pervading the private members of the class. In case, the class consist of a private member for pervading them, then write the inner class inside it and then return them to the private members from the method in the inner class, for example, getValue() and certainly from the other class you can call a getValuw() method to the inner class. Method-Local Inner Class One of the prime features of Java is that you can write the Class in a method and it would mostly be of local kind. The Local variables and the scope of an inner class are curbed within a method. The Method-Local inner class shall be called only with a method where an inner class is described. Anonymous Inner Class The Inner class that is stated without the class name is called Anonymous Inner Class. In the event of Anonymous Inner Class, it should be stated and called at the same time. Usually, it is used while you are required to nullify a method of the interface or class. Java Inheritance The Java Inheritance is the System on which the object obtains every behavior and properties of the parent object. This is the most important part of the OOPs. The theory behind the inheritance in Java is you can build new classes that are built on the existing classes. If you derive from the existing class, then you can rephrase the fields and methods of a parent class. Further, you can also enumerate fields and methods in the current class as well. Usually, the inheritance depicts the IS-A relationship that is comprehended as the parent-child relationship. Reason to use the Inheritance in Java - It is used for Method Overriding - Code Reusability Different Terms used in Inheritance - Child Class/Sub Class - It is a class that derives from the other classes. This is also known as the derived class, child class, or the extended class. - Class - The Class is the set of objects that have general properties. It is the blueprint or template from where the objects are built. - Parent Class/ Super Class - It is the Class from where the subclass derives the features. This is also termed as the Parent Class or the Base class. - Reusability - The very name itself indicates the structure of reusability and it allows you to reuse the methods and fields of an existing class where you can build the new class. Also, you can use similar methods that are already described in the earlier class. The Extended keywords denote that you are building the new class that is inherited from the existing class. The term "extends" denotes an increase in functionality. The term Java is the class that is derived and it is called the superclass or parent, and this new class is termed as the subclass or child. Java Overriding In this Java Programming Tutorial let us see about the Overriding. In case if the class is derived from the method of its superclass, then you have the possibility to override a method that is given and that which is not marked as the final. The advantage of the overriding is the capacity to describe the behavior that is specific to a subclass type and it means the subclass shall deploy the parent class method on the basis of the needs. Based on the object-oriented terms, overriding here indicates overriding the performance of the existing method. Rules should be followed for implementing the overriding method - The Methods that are stated as final can not be overridden. - An Argument list must be completely similar to that of an overridden method. - An Instance method shall be overridden only when it is derived from the subclass. - A return kind must be similar or the subtype of a return type should be stated on the original overridden method on the superclass. - The Methods that are stated as the Static shall not be overridden however, it can be redeclared again. - When the method is unable to be derived then it shall not be overridden. - An access level shall not be restrictive and it can not be overridden for the method's access level. For instance: When the Superclass method is not stated publicly, then overriding the method on the subclass shall not be either protected or private. - The Subclass is a completely different package that can override only a non-final method and it is stated as protected or public. - The possibility of overriding a constructor is impossible. Java Polymorphism Polymorphism means the capacity of the object to hold any form. The general use of polymorphism takes place in OOPs where the parent class reference is used for indicating the child class object. All the Java objects shall pass above one IS-A test that is termed to be polymorphic. Almost, every object in Java is Polymorphic since the object shall pass the IS-A test on its own kind and for a class object. The most important factor is that it is possible only to access the object via a reference variable. And the reference variable is of only one type. If it is stated once, then the kind of reference variable shall not be modified. A reference variable shall be assigned again to the other objects that are given is not stated as the final. The kind of reference variable shall find the methods through which it can invoke the object. The reference variable shall indicate any object of the stated type or the subtype of the stated type. The reference variable shall be stated as the interface or class type. For Example public interface Vegetarian{} public class Animal{} public class Elephant extends Animal implements Vegetarian{} Over here, the Elephant is termed to be polymorphic as it consists of different inheritance. The below is true for the above-given examples: - A - Elephant IS - A Animal - A - Elephant IS - A Vegetarian - A - Elephant IS -A Elephant - A - Elephant IS - A Object If you apply the reference variable details to the Elephant Object reference, then the following declarations are considered to be legal For Example Elephant e= new Elephant(); Animal a = e; Vegetarian v = e; Object o = e; All reference variables e, a, v, o, indicate the same Elephant object in the stack. Java Abstraction Abstract Class The Class that is stated within the abstract keyword is called the Abstract class in Java. You have abstract and non-abstract methods. In this Java Tutorial, let us first get a clear understanding of what abstraction is all about. Abstraction It is the method of hiding the operation details and displaying only the functionality to a user. It displays only the vital things to a user and shields the internal facts, like sending a message where you enter a text and send it. The users wouldn't be aware of the internal processing of a message delivery process. Abstraction permits you to concentrate on the performance of the object rather than the process that is involved in a performance. Methods to reach Abstraction There are two different methods for achieving abstraction in Java and they are - Abstract Class (0 to 100%) - Interface (100%) The Abstract Class in Java The Class that is stated as the abstract is called the abstract class. You can have abstract & non-abstract types. It should be extended and this method should be implemented. And it can't be called. Java Notes for beginners - The Abstract Class should be stated with the abstract keyword. - It consists of Abstract & Non-abstract types. - This shall not be called. - An Abstract class could have static and constructor types as well. - It has final methods that shall force a subclass for not changing the body of a method. Rules of the Java Abstract Class - The Abstract Class should be stated within the abstract keyword - The Abstract Methods have final methods. - The Abstract Class shall have the constructor and static methods - The Abstract Class can have the Abstract and Non-Abstract types - This can be called. For Example of Abstract Class abstract class A {} Abstract Practice in Java The Method that is stated as abstract and that which does not have the operation is called the abstract method. For Example abstract void printStatus();//no method body and abstract Example Over here, the Cycle is the abstract class that consists of only one abstract process run. The Implementation is given by the Lady Bird Class. abstract class Cycle{ abstract void run(); } class Lady Bird 4 extends Bike{ void run(){System.out.println("running safely");} public static void main(String args[]){ Cycle obj = Lady Bird4(); obj.run(); } } Java Encapsulation Encapsulation is one of the basics of the four OOP concepts. The rest are abstraction, polymorphism, and inheritance. The Encapsulation in Java is the structure of wrapping data and code that are performing on the data together as a single unit. Over here the variables of the class are shielded from the other classes and it is possible to pervade via the methods of its present class. Hence, this is called Data hiding. For achieving the Encapsulation in Java - It should state the variables of the class as the private. - It gives the Public Getter and Setter methods for viewing and modifying the values of variables. Advantages of Encapsulation - It Declares the variables of the class as the private - It enables public getter and setter manners to view and modify the values of the variables. Java Interfaces The Interface is the reference kind in Java. It is identical to the class and the compilation of the abstract methods. The class shall instrument interface, and derive the abstract methods to an interface. Further with the Abstract methods, the interface consists of the static method, nested type, constants, and default methods. The Method bodies only prevail for the default and static methods. Writing the interface that is identical to Class. The class defines the behavior and characters of an object. The interface comprises the attributes of the class instruments. Except for the Class which instruments an interface is abstract, of all methods of an interface that has to be described in a class. The Interface is similar to that of a class in the below methods: - An Interface is usually written in the file that is within the .java extension of a similar interface that adheres to the name of a file. - The Interface shall consist of more number of methods. - Usually, the byte code of the interface surfaces in the .class file. - Interfaces that surface in the packages and to its interrelated bytecode file should be found in the directory structure that adheres to the package name. Yet the interface is so different from the class in multiple ways and it includes - It is not possible to call the interface - All the methods in the interface are abstract. - An Interface will not comprise any instance fields. A field that appears on the interface must be stated both in Final and Static. - The interface does not consist of any constructors. - The interface shall expand the different interfaces. - An interface is not only expanded by class, but it is instrumented by the class. - The Interface can extend different interfaces. Declaring an Interface The interface keyword is used for declaring the interface Implementing Interfaces When the class instruments the interface, you can use the class as the signing contract and adhering to perform a particular behavior of an interface. In case, when the class fails to execute all functions of an interface, then the class should mention itself as abstract. The Class utilizes the implement keyword for instrumenting the interface. An implement keyword surfaces in a class declaration and it expands the portion of a declaration. In case the overriding methods are described in the interface, then numerous rules have to be followed - An implementation Class itself must be abstract and so the interface type may not be required to implement. - A signature of an interface method and similar return type or a subtype must be maintained while overriding methods. - A checked exception may not be declared on the implementation methods apart from the ones which are declared by a subclass or interface method of the ones that are stated by an interface method. In case if the implementation interfaces, then there are numerous rules - A class can instrument above one interface at a time. - A class shall extend can broaden only one class and it implements more than one interface. - The Interfaces shall extend the other interface in the same way as that of the class would extend the other class. Extending Interfaces The Interface shall extend other interfaces that are similar to the way that the class shall extend to the other class. An extended keyword is utilized for expanding the interface and a child interface that derives the method of a parent interface. Extending Different Interfaces The Java Class shall broaden only one of the parent classes. The different inheritances are not permitted. The Interfaces are no class, and the interface shall extend above one parent interface. An Extend keyword is utilized only once and a parent interface is stated in the comma-separated list. Tagging Interfaces The general use of an extending interface takes place when a parent interface does not consist of any methods. Java Training in Gurgaon at FITA Academy upskills your knowledge of Java Coding and its structure extensively under the training of real-time Java Professionals with certification. Packages The Packages are utilized in Java to avoid the conflicts in access and control for making the location/ searching and the use of the classes, annotation, enumerations, and interfaces. The Package shall also be defined as the set of related types like (enumerations, classes, interfaces, and annotations ) that give access to the namespace and protection management. Other existing packages that are comprised in Java are - java.io - The Classes for the Output and Input functions are wrapped in the packages - java-lang - These packages are the fundamental classes The Programmers are defined on the package of bundle groups such as interfaces/ classes, etc. This is a good method of practice that is associated with classes that deploy a programmer and you can identify the interfaces, annotation, enumerations, and classes. As the package develops the new namespace there will not be any conflicts in names and also the names that are in other packages. Utilizing these packages is simpler to provide entry control and it is easy to find the other associated classes. Creating Packages When you create a package, you must select the name of a package and consists of the package statement with a name on the top of all the source file that consists of the enumerations, interfaces, classes, and annotation types you are planning to add into a package. A package statement should be the starting line of the source file. There can also be only a single package statement on every source file and this is applicable for all kinds of files. Supposedly if the package statement is not utilized then the interfaces, annotation, class, and enumeration types would not be placed on a present default package. For compiling the Java programs along with the package statements you can use the -d option as given below javac -d Destination_folder file_name.java Then the folder with the provided package the name is developed on the definite destination and this is gathered on the class files that and would be placed on the folder. The Import keyword If the class is required you must utilize the other class on a similar package, the package may not be used. The Classes on a similar package can identify the other without any exclusive syntax. Structure of the Directory Package The two important results that take place in the class of the package - The name of a package that turns out to be part of the name of a class and should be mentioned in the earlier section. - The name of a package should adhere to the directory structure and it should cohere with the bytecodes that reside along with it. Below is the simple way to handle your files in the Java Place the Source code to an interface, enumeration, or the annotation kind on the text file that the name is simpler than the name of a kind and on the extension is.java. For Instance, // File Name : Apple.java package fruit; public class Apple { // Class implementation. } Output ....\fruit\Apple.java The qualified class and the pathname shall be as follow: - Class name -> fruit. Apple - Pathname -> fruit\Apple.Java ( in the windows system) Usually, a company uses the reverse Internet domain name for the package names. For Example If a company has the Internet domain name - windows.com, then all the package names shall begin with the com. windows. All the components of a package name must adhere to the subdirectory. Advanced Java Tutorial Data Structures The Data Structures that are given by the Java utility package are robust and can execute a range of functions. The Data structures comprise of the below classes and interface, - Vector - Stack - Enumeration - BitSet - Hashtable - Properties - Dictionary Vector A Vector class has more resemblance to the traditional Java array, apart from that it can also manipulate as needed, and it can consist of new elements. Just like an array, the elements of the vector object are controlled through the index in a vector. The best part of using a vector class is you need not care about setting it to the particular size on creation, it can adjust (shrink/enlarge) itself when it is required. Stack A Stack Class deploys the LIFO method i.e Last-in-first-out, a stack of elements. Over here the Stack is the vertical stack object, where you can pass the new elements and get them stacked on the above of others. In case if you pull the element off a stack, then this moves to the top. To make it easier, the closing element that you add to a stack is the one that appears like the back off. Enumeration It is not a data structure by itself, but enumeration is more important within the conditions of the other data structures. An Enumeration is an interface that describes the means for recovering the successive elements from the data structure. For Example, Enumeration describes the method that is termed as the nextElement and it is used for obtaining the immediate (next) element in the data structure queue that consists of different elements. BitSet A Bitset Class deploys the set of flags or the bits that could be fixed and cleared individually. It is more useful in case you need to keep up to the set of Boolean values. You can easily designate the bit all the value and it can be fixed or cleared as appropriate. Hashtable A Hashtable Class gives the space for organizing the data on the basis of the user-defined key structure. For instance, on your address list hash table, you can sort, save the data based on the key like ZIP code instead of the particular person's name. The exact meaning of the keys in respect of the hash table is completely on the application of a hash table and the data it consists of. Properties The Properties are the subclass of the Hashtable. It is used for handling the list of values on which a key is a string and the string is also the value. Apart from this, the properties class is used by other Java Classes. Dictionary A Dictionary class is the abstract class that describes the data structures to map the keys to the corresponding values. It is more useful in a situation where you are unable to get access to the data through a specific key instead of the integer index. As the Dictionary Class is abstract, it gives a framework that is key-mapped data structure instead of the particular implementation. Java Collections A Collection in Java is the framework that gives the architecture for the store and it modifies the objects in the groups. The Java Collection shall accomplish all the operations that you must perform on the data like insertion, sorting, searching, deletion, and manipulation. The Collection of the framework was curated to match the different goals like A Framework permits numerous types of collection for working in an identical manner and it has a high degree of interoperability. A Framework has to perform extremely well. The Implementation of the basic collections like ( linked lists, trees, dynamic arrays, and hashtables) must be highly efficient. A Framework was created to extend or adapt to the collection in an easier manner. Finally, the complete collection of a framework is curated on a set of common interfaces. There are numerous standard implementations like TreeSet, LinkedList, and HashSet, and these interfaces are given so that you can use them as they are and also deploy them on your collection when you choose. The Collection of Framework is the unified architecture that is used for manipulating and interpreting the collections. All the Collections of the framework comprises of the following, - Algorithms - It is the common method that is used for performing computations like sorting or searching an object that shall instrument the collection interfaces. - Interface - It is the abstract data type that interprets the collections. The Interfaces permit the collections to modify freely irrespective of the facts of the representation. It is a common thing that in the Object-oriented language, an interface would build the hierarchy. - Implementations - It is the detailed implementation of the collected interface and it is the renewable data structures. Further, a framework describes different map classes and interfaces. Although the maps are not collections that are in appropriate use of a term, however, it is completely combined with collections. The Collection Interface In this Java Programming Tutorial let us see the collections framework that describes numerous interfaces. - Map - It aids in mapping the distinct keys to the values. - Collection Interface - It allows you to work with a set of objects and it is on the top list of the collection hierarchy. - Set - It extends the Collection for handling sets that contain a distinct element. - SortedSet - It helps in extending the Set for managing the sorted sets. - Enumeration - It is the legacy interface that describes the method through which you shall enumerate elements from the collection of objects. The bequest interface is antiquated through Iterator. - Sorted Map - It helps in enlarging the Map for managing the keys in ascending order. The Collection Classes Java supports you with you a set of benchmark collection classes that instrument the Collection interfaces. The Classes gives complete implementation that is used as it is and another abstract class, that gives the skeletal implementations which are used as the beginning points to create the definite collections. Some of the Standard Collections are mentioned below - Abstract List - It aids in extending the AbstractCollection and it deploys more List Interfaces. - Abstract Collection - It deploys a more Collection interface. - Abstract Sequential list - This Extends the AbstractList for the use by the collection that applies sequence instead of the random access of elements. - Array List - It instruments the influential array on broadening the AbstractList. - Linked List - It instruments the linked list on expanding the AbstractSequentialList. - HashSet - It Extends the AbstractSet to make use of the hash table. - TreeSet - This deploys the Set that is stored in the tree. It extends the AbstractSet. - Linked HashSet - It lengthens the HashSet and permits the insertion-order iterations. - Abstract Map - It deploys maximum Map Interface. - Tree Map - It Expands the Abstract Map for utilizing the tree. - Hash Map - This broadens the AbstractMap to utilize the Hash table. - Weak Map - It expands the AbstractMap for utilizing the hash table that is in the limit of weak keys. - IdentityHashMap - This broadens the AbstractMap and it makes use of the reference equality while compared to the documents. - LinkedHashMap - This Expands the Hashmap for permitting the insertion-order iterations. Below are the classes that give the skeletal implementation of the core collection interface for reducing the effort that is needed to deploy them, - AbstractSet - AbstractMap - AbstractCollection - AbstractList - AbstractSequentialList Collection Algorithm A Collection Framework explains various algorithms that are practiced on the maps and collections. It is characterized as Static methods in the limits of the collection class. The various methods that shall throw the ClassCast Exception that takes place when the experiment is made for analyzing the incompatible types or the UnsupportedOperationException, that takes place while the attempt is made for changing the unmodifiable collections. Iterator Usually, you may need to wheel to the elements in the collections. For instance, you may be required to demonstrate all elements. The simplest way for performing this is that you can deploy the iterator, which is actually the object that instruments either the Listiterator interface or the Iterator An Iterator permits you to go through the collection or remove elements. The ListIterator shall broaden the Iterator for allowing modification of the elements and the traversal of the list. Comparator Both the TreeMap and the TreeSet stack the elements in a sorted manner. The Comparator demonstrates accurately everything about sorted order. It basically helps you to sort for the provided collection by any number or through multiple ways. Furthermore, this interface shall be applied for sorting any instance of whatever class. Java Networking It is the notion of bridging two or above computing devices in sync so that you can share the resources. The Java Socket Programming gives the competency to share data between various computing devices. The two major benefits of Java Networking are, - You share the resources easily. - You have the Centralized Software Management kit. Java Networking Technology In this Java Tutorial let us see the broadly-used Java Networking Terminologies - Port Number - MAC Address - Protocol - IP Address - Socket - Connection-oriented protocol Port Number The Port number is utilized distinctly for determining various applications. It operates as the endpoints of communication between applications. A port number is related to the IP address to communicate between two applications. MAC Address The MAC address ( Media Access Control) is the distinct identifier of the Network Interface Controller. The Network Node shall have a different NIC however everything with the distinct MAC. Protocol The Protocol is the compilation of rules that are generally used in communication. For instance - FTP, Telnet, TCP, SMTP, and POP. IP Address The IP Address is the distinct number that is designated to the node of the network. For example - 192.168.0.1 and this is compiled of the octets that length between 0 to 255. This is the logical address that could not be changed. Socket The Socket is the endpoint between the two ways of communication. Connection-oriented protocol On the Connection-oriented protocol, the affirmation is sent by a receiver. Even though it is slow, this is reliable. For Example - TCP. Connectionless protocol In the Connectionless protocol, the acknowledgment will not be sent by a receiver and it is not reliable but this is faster. For Example - UDP. Java Socket Programming The Java Socket Programming is utilized for communication between the applications which run on different JRE. The Java Socket Programming can either be one that is connectionless or connection-oriented. The ServerSocket Classes and Socket is used for connecting the oriented socket programming. For connection-less socket programming, you can use the DatagramPacket and DatagramSocket. A Client in Socket Programming should know two pieces of information and they are - Port Number - IP Address of the Server Over here, we have demonstrated the example for one-way server and client communication. And here the Client dispatches the message to a server, and the server reads the text and later prints it. Exactly, here two classes are utilized and they are ServerSocket and Socket. A Socket Class is used for communicating to the Server and Client. With this, you can write and read the message. A ServerSocket Class is utilized at the server-side. An Accept() method of a ServerSocket Class halts the console till the client is connected. Later on the successful connection to the client, it sends the instance of the Socket to the server-side. Socket Class The Socket is the endpoint of communications among machines. A Socket Class is used for building the socket. Some of the important methods Server Socket Class A ServerSocket Class is utilized for building the server socket. It is the object that is used in building the communication along with the clients. Some of the important Methods are Java URL The abbreviation of the URL is Uniform Resource Locator. This directs to the resource that is on the World Wide Web. For instance, An URL Consists of numerous information and they are - Port Number - It is the optional character. If you write the number 70 is the port number. In case there is no port number that is mentioned on the URL then it returns to -1. - Directory Name or File Name - Here the index. jsp is the name of the file - Protocol - The HTTP is protocol. - IP Address or Server name - is the name of the server. Predominantly used Java URL Class Java InetAddress Class The Java InetAddress Class demonstrates the IP address. A java.net.InetAddress also gives the methods for getting the IP of any hostname. An IP address is illustrated by the 32-bit or by the 128-bit which is the unsigned number. The instance of the InetAddress depicts the IP address that correlates with the hostname. There are two kinds of address types and they are Unicast and Multicast. A Unicast is the qualifier of the single interface where a Multicast is the qualifier of the set of interfaces. Further, the InetAddress consists of the cache mechanism that aids in storing the successive and non-successive host-name resolutions. Important Methods of the InetAddress Class Java - Sending Email For sending the e-mail you should use the Java Application that is close enough, to begin with, where you must have installed the Java Activation Framework or the Javamail API to your machines. For downloading the recent version you can use Java's standard Website. On downloading it successfully, you can unzip those files in a newly developed top-level directory where you can find out the total number of the jar files for both applications. All you have to do is that you must add the activation.jar and mail.jar to your CLASSPATH. Java Multithreading Java is considered to be one of the multi-threaded programming languages that means you can build a multi-threaded program using Java. The Multi-threaded program consists of two or more parts that shall run simultaneously and all the parts shall manage the various tasks simultaneously by making optimal use of the resources that are provided and when the computer has numerous CPUs. The word multitasking means where there are numerous processes that share common operations of all resources like CPU. The Multi-threading widens the concept of multitasking into applications where you shall partition few applications in the limit of the single application and into individual threads. All the threads are capable of running collar OS aids in partitioning the operating period in the applications as well as to all the threads that are within an application. Generally, the multi-threading allows you to write in a manner where different activities shall proceed with the same program altogether. The Wheel of the Thread Usually, a thread undergoes different stages. The flow-chart that is given below precisely explains the lifecycle of the thread. Below are the detailed description of the Thread Lifecycle - New - The New Thread take starts its life cycle with the new states. It prevails in this state up to when a program takes off with the thread and it is called a born thread. - Runnable - Once the newly born thread begins its function, then the thread shall become runnable. The thread in this state is termed to be runnable as it is executing the tasks. - Waiting - At times, the thread flux back to the runnable state when the other thread signals to the waiting thread to pursue the execution function. - Time Waiting - The Runnable threads shall barge into the restrained waiting state for a definite period. The thread which is in this state shall flux back to a runnable state in case the allotted time interval is expired or while an event is awaiting its occurrence. - Terminated - The Runnable thread moves into the terminated state where it tries to replete the tasks, if not then it would terminate itself. Thread Priorities All the threads in Java have a unique priority that supports operating the systems to identify the sequence through which the threads are being scheduled. The Java thread priorities are within the range of MIN_PRIORITY and MAX_PRIORITY. Where the MIN_PRIORITY is the constant of 1, and the MAX_ PRIORITY is the constant of 10. By default, all the threads are provided priority i.e NORM_PRIORITY which are constant of 5. The threads that are with higher priority are more important to the program and this should be designated to the processor time before the lower-priority threads. Yet, these threads' priorities shall not be guaranteed in the sequence on which the threads are executed; it depends more on the platforms. Now in this Java Developer Tutorial, let us see how to build a thread on In case, when your class is aimed for execution then the thread shall attain this implementation by the Runnable interface. Below are the fundamental steps in creating the thread, Step-1 The first step you must do is that you should implement the run() method by enabling the Running interface. The method that gives the entry point to a thread and you shall upload the entire business logic to this method. Syntax for run() method public void run( ) Step-2 The second type will instantiate the Thread object by using the below constructor- Thread(Runnable threadObj, String threadName); The thread by is the instance of the class that deploys the thread name and the Runnable interface is the name that is provided to a new thread. Step-3 If the Thread Object is developed then you can begin it by calling the start() method, which shall execute the call to the run() method. Syntax for the start() method void start(); Build the Thread by the Extending Thread Class Java Applet Basics The Applet is the Java Program and it runs on the Web browser. The Applet is the complete Java functional application as it consists of the entire Java API at its scraping. Below are the major difference between the applet and the standalone Java application - The Applet is the Java Class that boosts java. applet.Applet class. - The Applets are devised to embed with the HTML Page. - The main () method does not request the applet and the applet class may not define main (). - If the user views the HTML page it consists of the applet and the code of the applet is installed into the respective user's machine. - The JVM is needed for viewing the applet. The JVM shall either be a separate runtime environment or a plug-in to the Web browser. - The Applets have stringent rules that shall be reinforced by a Web Browser. - The Security of the applets is often attributed to the sandbox security when compared to the applet of the child playing on the sandbox with different rules which should be implemented. - The JVM of the User's Machines develops the instance of an applet class and this requests different methods at the time of the applet's cycle. - The Other Classes which the applet requires are downloaded into the single (JAR) Java Archive file. The Applet Cycle The Four methods in Applet Class provides you a framework in which you can build the serious applet: - Start - It is the method that immediately calls the browser calls to init method. - Init - It is the type that is aimed for initialization that is required for the applet. This is called later the param tags that are inside an applet tag that you have processed. - Stop - It is the type that is immediately called when a user moves away from the page in which an applet sits. This can be called frequently on the same applet. - Paint - It is requested automatically later by the start() method, and at any time the applet requires to refurbish itself in a browser. A paint() method is generally obtained from java.awt. - Destroy - It is the method that is called if the browser closes down usually. When the applets are meant to be on the HTML page, then you can not usually leave the resources breach after the user leaves a page that consists of the applet. The Applet Class All Applet is the extension of java. Applet class. The base Applet class gives methods that are obtained from the Applet Class and it shall be called for obtaining the services and information from a browser context. It consists of the following methods: - Fetching the image - Fetching the audio clip - Plays the audio clip - Resizing of the applet - Getting the applet parameters - Getting a Network location of an HTML file that consists of the applet - Getting the Network location of an applet class directory - It prints out the status text in the browser - Initializes an applet - Destroys the applet - Request the information regarding the version, copyright, and author of an applet - It requests the description of a parameter and the applet identifies it - Ceases the execution of an applet - Begins the execution of an applet Further, the Applet Class shall give the interface on which a viewer or the browser gains the information of the applet and the control operation of the applet's execution. An Applet Class that gives the default application of all the methods. Those applications shall be overridden when required. Java Documentation The Javadoc is the tool that comes with a JDK and this is used for developing Java Code documentation in the HTML format from the Java Source code and it needs the documentation of the predefined format. The Java supports the following three types of comments. - //'text - A compiler avoids everything from// till the end of a line. - /*text*/ - A compiler avoids everything from the /* to*/ Java Useful Resources Java Interview Questions The Java Interview Questions and Answers are curated with a special focus to aid both the students and the working professionals who are preparing themselves for different Certification Exams and Job Interviews. This session is a complete compilation of the Important Java Interview Questions and Answers that are asked to fresher and experienced candidates in the interview. Enlist the different access specifiers in Java? - Public - Private - Default - Protected List the Advantages of Package in Java? - The Packages evades the name clashes. - It is easier to locate the related classes. - The Packages give easier access control. - You can also have the hidden classes which are not found outside the used package. Name the types of constructors that are used in Java? Java has two types of Constructors and they are, Default Constructor? Parameterized Constructor Is it possible to make a Constructor final? It is impossible to make the Constructor final. Are the Constructors inherited? No, the Constructors are not inherited. For more: Click the link given belowJava Interview Questions and Answers. Java - Quick Guide - The Java Tutorial - It is the practical set of guides to the programmers who need to use the Java Programming language for creating the applications. - Free Java Download - Download Java to your desktop from the Computer now. - SunDeveloper Network- This is the official website that is used for listing down the API documentation, books, latest Java technologies, and other resources. - JavaTM 2 SDK - This is the Official Site for the JavaTM 2 SDK, the Standard Edition. - Java Tutorial - It is the practical guide for the programmers who need to use the Java Programming language for creating the applications.!
https://www.fita.in/java-tutorial/
CC-MAIN-2021-21
refinedweb
16,790
51.99
Basics This tutorial shows you how to create, compile, and run a Native Client web application. The Native Client module you will create as part of the web app will be written in C++. A Native Client web app consists of at least three entities: - The HTML web page (*.html). The page can also contain JavaScript code and CSS styles, or the scripting and styling code can go in separate .js and .css files. - A Native Client module, written in C or C++. Native Client modules uses the Pepper API, included in the SDK, as a bridge between the browser and the modules. When compiled, the extension for a Native Client module is .nexe. - A manifest file (*.nmf) that is used by the browser to determine which compiled Native Client module to load for a given end-user instruction set ("x86-64" or "x86-32"). What the app in this tutorial does This example shows how to load a Native Client module from a web page and how to send messages between JavaScript and the Native Client module. In this simple app, the JavaScript in the web page sends a "hello" message to the Native Client module. When the Native Client module receives a message, it checks whether the message is equal to the string "hello." If it is, the Native Client module returns a message saying "hello from NaCl." A JavaScript alert displays the message received from the Native Client module. This tutorial uses a Python script to create a set of template files that you can use as a starting point. The template code shows how to set up a simple message handler on the Native Client side. It also provides boilerplate code in the HTML file for adding an event listener to the web page to receive messages from the Native Client module. Communication between JavaScript and a Native Client module Communication between JavaScript code in the browser and C++ code in a Native Client module is two-way: the browser can send messages to the Native Client module. The Native Client module can respond to the message from JavaScript, or it can initiate its own message to JavaScript. In all cases, the communication is asynchronous. A message is sent, but the system does not wait for a response. This behavior is analogous to client/server web communication, where the client posts a message to the server and returns immediately. Step 1: Download and install the Native Client SDK and Python. Follow the instructions on the Download page to download and install the Native Client SDK. Step 2: Download and install Python if necessary. A number of tools in the SDK (including the SCons build tool) require Python to run. Python is typically included on Mac and Linux systems, but not on Windows systems. To check whether you have Python installed on your system, enter the python command on the command line; you should get the interactive Python prompt ( >>>). If you need to install Python, you can get it here:. Select the latest 2.x version. Be sure to add the Python directory (for example, C:\python27 on Windows) to the PATH variable. Step 3: apps that you build (including the app in this tutorial). To start the web server, go to the examples/ directory in the SDK and run the command below: cd examples python httpd.py 5103 After you start the local server you can access it at. Of course, you don't have to use the server included in the SDK – any web server will do. If you prefer to use another web server already installed on your system, that's fine. Step 4: Set up Chrome and verify that Native Client is working. Set up Chrome as follows: - Make sure you are using Google Chrome version 14 or greater. - if the plugin is not already enabled. (You do not need to relaunch Chrome after you enable the Native Client plugin.) After you've set up Chrome, you can verify that everything is working properly as follows: - First, make sure you can run the examples in the SDK Examples. - Then run the examples in the SDK from your local server. If you started the local server as described in Step 3, go to and you'll see a page with links to each of the pre-built SDK examples. Click on one or two of the links and run the examples. Step 5: Create a set of template files for your project. The SDK includes a Python script, init_project.py, that sets up a simple Native Client project using the name and language (C++ or C) you specify. The script creates the following template files in the directory you specify: your_project_name.html, the web page that loads the Native Client module your_project_name.cc(or .c), source code for the Native Client module scons(or scons.bat), a driver script that runs the SCons tool to build your application build.scons, the build instructions for your application The script has the following command line options: - -n your project name (use lowercase letters with underscores and numbers but no uppercase letters) - -d directory for your project - -c create project using C (C++ is the default). Include the -c option if you want to write a Native Client module in C. To create a C++ project named hello_tutorial in the examples/ directory, go to the project_templates/ directory and run the script with the following options: cd project_templates python init_project.py -d ../examples -n hello_tutorial This step creates the directory examples/hello_tutorial with the following files: build.scons hello_tutorial.html hello_tutorial.cc scons In this tutorial, you're adding your project to the examples/ directory because that's where the local server runs, ready to serve your new project. You can place your project directory anywhere on the filesystem, as long as that location is being served by your server. Step 6: Modify the web page to load and communicate with the Native Client module. Next, you'll modify the hello_tutorial.html and hello_tutorial.cc files. When your web page ( hello_tutorial.html) is loaded into the browser, the compiled Native Client module ( hello_tutorial.nexe) will be loaded and run. In this step, you'll modify the moduleDidLoad() JavaScript function to send a "hello" message to the Native Client module. The template files supplied here contain boilerplate code that is used in most Native Client web apps. You'll find these files a useful starting point for future experimentation with other features of the Native Client SDK. The <embed> element (toward the end of hello_tutorial.html) is where the Native Client module is loaded. The init_project.py script has already filled in the ID attribute of your module as well as the name of the associated manifest file: <embed name="nacl_module" id="hello_tutorial" width=0 height=0 The manifest file is described in Step 8 below. The <embed> element is wrapped inside a <div> element with a load event listener that fires when the Native Client module successfully loads. At the end of the moduleDidLoad() function, add the new code below (indicated by boldface type) to send a message to the Native Client module: function moduleDidLoad() { HelloTutorialModule = document.getElementById('hello_tutorial'); HelloTutorialModule.addEventListener('message', handleMessage, false); updateStatus('SUCCESS'); //Send a message to the NaCl module. HelloTutorialModule.postMessage('hello'); } Note that the boilerplate code in this function also adds an event listener for messages received from the Native Client module. The boilerplate handleMessage() function simply prints an alert displaying the data contained in the message event. Step 7: Implement a message handler in the Native Client module. In this step, you modify the Native Client module ( hello_tutorial.cc) to do the following: - Implement the HandleMessage()function for the module instance ( pp::Instance.HandleMessage, in the Pepper Library) - Use the PostMessage()function ( pp::Instance.PostMessage, in the Pepper Library) to send a message to JavaScript. Add the code to define the variables used by the Native Client methods (the "hello" string you're expecting to receive from JavaScript and the reply string you want to return to JavaScript as a response). Add this code to the .cc file after the #include statements: namespace { // The expected string sent by the browser. const char* const kHelloString = "hello"; // The string sent back to the browser upon receipt of a message // containing "hello". const char* const kReplyString = "hello from NaCl"; } // namespace Now, implement the HandleMessage() method to check for kHelloString and return kReplyString. Look for the following line: // TODO(sdk_user): 1. Make this function handle the incoming message. and add these lines to the HandleMessage() method: virtual void HandleMessage(const pp::Var& var_message) { if (!var_message.is_string()) return; std::string message = var_message.AsString(); pp::Var var_reply; if (message == kHelloString) { var_reply = pp::Var(kReplyString); PostMessage(var_reply); } } Step 8: Compile the Native Client module. To compile the Native Client module (hello_tutorial.cc), be sure you are still in the examples/hello_tutorial/ directory and then run the scons script (or scons.bat on Windows). For example: cd examples/hello_tutorial ./scons The scons script produces executable Native Client modules for both x86_32 and x86_64 architectures: hello_tutorial_x86_32.nexe hello_tutorial_x86_32_dbg.nexe hello_tutorial_x86_64.nexe hello_tutorial_x86_64_dbg.nexe The "dbg" .nexes retain symbol information that you can when debugging your module. In addition to building the .nexe modules, the scons script also generates a manifest file for your application. Manifest files include a set of key-value pairs that tell the browser which .nexe module to load based on the end user's processor architecture. The manifest file for this tutorial, hello_tutorial.nmf, looks as follows: { "program": { "x86-64": {"url": "hello_tutorial_x86_64.nexe"}, "x86-32": {"url": "hello_tutorial_x86_32.nexe"} } } Step 9: Try it out. Load the hello_tutorial.html web page into the Chrome browser by visiting the following URL:. After Chrome loads the Native Client module, an alert appears with the message from the module. When you make changes to your program and want to reload the web page to view your changes, be sure to clear the cache so that the new version is loaded. (From the toolbar menu, click the wrench icon, then select Preferences (or Options) > Under the Hood > Clear browsing data.) Optionally, you can run in Incognito mode and use Shift-Refresh to reload the page. Troubleshooting If your application doesn't run, see Step 4 above to verify that you've set up your environment correctly, including both the browser and the server. Make sure that you're running Chrome 14 or greater, that you've enabled both the Native Client flag and plugin and relaunched Chrome, and that you're accessing your application from a local web server (rather than by dragging the HTML file into your browser). For additional troubleshooting information, check the FAQ. Next steps - See the Application Structure chapter and the C++ Reference for details on how to structure your NaCl modules and use the Pepper APIs. - See the source code for each of the Native Client SDK examples (in the examples/directory) to learn additional techniques for writing web applications and using the Pepper APIs. - See the Compiling page for information on how to modify SCons build files and how to debug and test your NaCl modules. - Check the naclports project to see what libraries have been ported for use with Native Client. If you port an open-source library for your own use, we recommend adding it to naclports (see How to check code into naclports).
https://developers.google.com/native-client/pepper16/devguide/tutorial
CC-MAIN-2014-15
refinedweb
1,899
64.41
In this post, I am going to show why I think the internal keyword, when put on class members, is a code smell and suggest better alternatives. What is the internal keyword? In C# the internal keyword can be used on a class or its members. It is one of the C# access modifiers. Internal types or members are accessible only within files in the same assembly. (C# internal keyword documentation). Why we need the internal keyword? .” (C# internal keyword documentation) These are the use cases I saw for using the internal keyword on a class member: - Call a class’s private function within the same assembly. - In order to test a private function, you can mark it as internal and exposed the dll to the test DLL via InternalsVisibleTo. Both cases can be viewed as a code smell, saying that this private function should be public. Let's see some examples Here is a simple example. a function in one class wants to access a private function of another class. class A{ public void func1(){ func2(); } private void func2(){} } class B{ public void func(A a){ a.func2(); //Compilation error 'A.func2()' is inaccessible due to its protection level } } The solution is simple — just mark A::func2 as public. Let's look at a bit more complex example: public class A{ public void func1(){} private void func2(B b){} } internal class B{ public void func3(A a){ a.func2(this); //Compilation error 'A.func2(B)' is inaccessible due to its protection level } } What’s the problem? just mark func2 as public as we did before. public class A{ public void func1(){ ... } public void func2(B b){ ...} // Compilation error: Inconsistent accessibility: parameter type 'B' is less accessible than method 'A.func2(B)' } internal class B{ public void func3(A a){ a.func2(this); } } But we can’t ?. B is an internal class so it cannot be part of the signature of a public function of a public class. Those are the solutions I found, ordered by easiness: - Mark the function with the internal keyword public class A{ public void func1(){ } internal void func2(B b){} } internal class B{ public void func3(A a){ a.func2(this); } } 2. Create an internal interface internal interface IA2{ void func2(B b); } public class A:IA2{ public void func1(){ var b = new B(); b.func3(this); } void IA2.func2(B b){} //implement IA2 explicitly because func2 can't be public } internal class B{ public void func3(A a){ ((IA2)a).func2(this); //use interface instead of class to access func2 } } 3. Extract A.func2 to another internal class and use it instead of A.func2. internal class C{ public void func2(B b){ //extract A:func2 to here } } public class A{ public void func1(){} private void func2(B b){ new C().func2(b); } } internal class B{ public void func3(){ //a is no longer needed new C().func2(this); //use internal class instead of private function } } 4. Decouple the function from internal classes and make it public. This is much dependant on what the function is doing with its inputs. decouple the internal classes can be very easy, very hard and even impossible (without ruing the design). But we don’t have public classes we use interfaces… Let's look at some more real-world example: public interface IA{ void func1(); } internal class A : IA { public void func1(){} private void func2(B b){} } internal class B{ public void func3(IA a){ a.func2(this); //Compilation error IA' does not contain a definition for 'func2' and no extension method 'func2' accepting a first argument of type 'IA' could be found } } Let's see how the previous solutions are adapted to this example: - Mark function with Internal. this means you will need to cast to class in order to call the function so This will work only if class A is the only one that implements the interface, meaning IA is not mocked in tests and there isn’t another production class that implements IA. public interface IA{ void func1(); } internal class A : IA { public void func1(){} internal void func2(B b){} } internal class B{ public void func3(IA a){ ((A)a).func2(this); //cast to A in order to accses func2 } } 2. Create an internal interface that extends the public interface. internal interface IExtendedA : IA{ void func2(B b); } public interface IA{ void func1(); } internal class A : IExtendedA { public void func1(){} public void func2(B b){} } internal class B{ public void func3(IExtendedA a){ a.func2(this); } } 3. Extract A.func2 to another internal class and use it instead of A.func2. 4. Decouple the function from internal classes and add it to the public interface. We can see that the internal keyword is the easiest solution, but there are other solutions using the traditional building blocks of OOP: classes and interfaces. We can see that the 2nd solution — adding an internal interface is not much harder than marking the function with the internal keyword. Why not use the internal keyword? As I showed in the previous examples, using the internal keyword is the easiest solution. But you are going to have a hard time in the future if you will need to: - Move the public class A to another DLL (since the internal keyword will no longer apply to the same dll) - Create another production class that implements IA - Mock IA in tests You may think “But this is just one line of code, I or anyone else can change it easily if needed”. Now you have one line of code that looks like that: ((MyClass)a).internalFunction but if others will need to call this function too this line will be copy-pasted around inside the DLL. My Conclusion I think marking a class member with the internal keyword is a code smell. In the examples I showed above it is the easiest solution, BUT can cause problems in the future. Creating an internal interface is almost as easy and more explicit. Compare to C++ The C++ “friend” keyword is similar to the C# internal keyword. It allows a class or a function to access private members of a class. The difference is it allows access to specific class or function and not all the classes in the same DLL. In my opinion, this is a better solution than the C# internal keyword. Further Reading Practical uses for the "internal" keyword in C# Why does C# not provide the C++ style 'friend' keyword?
https://www.freecodecamp.org/news/is-the-c-internal-keyword-a-code-smell/
CC-MAIN-2020-45
refinedweb
1,079
63.09
Launched in February 2004, Facebook is the world's largest social network, with more than 900 million users using the site to share content with their connected friends. Facebook Platform, launched in May 2007, enables third parties to write applications that integrate with Facebook. The platform initially supported a wide variety of programming languages — including Java — but now it provides native SDKs only for JavaScript and PHP (as well as support for applications on iOS and Android devices). However, the open source RestFB project has been maintaining the Java API since Facebook discontinued it (see Resources). Google App Engine (GAE) is a Platform as a Service (PaaS) that enables registered developers to run applications that they write in Python, Java, or Go on Google's infrastructure. This article shows you how to register a Facebook application, develop it in Java, and deploy it for free on GAE for use by any logged-in Facebook user. (Note that Google imposes a daily limit on the resources that each application deployed on GAE can use.) The simple application you will create lists all of the user's friends with their IDs and profile pictures — similar to the previous look and feel of the friends page on a Facebook user's profile. To develop the application, you need: - A Facebook account - A Google account - The Eclipse IDE with the GAE plug-in installed (see Resources) - Familiarity with using Eclipse Sample code for the application is also available for download. Registering the application The first step is to register your application on both Facebook and GAE. It's a good practice to create the application on both platforms at the same time to ensure that the information you enter matches up as needed. Register a Facebook application Log into Facebook and launch the developer application at. (If you're launching it for the first time, you must grant the application access to your profile in order to continue.) Click Create New App in the top right of the Apps page to display the Create New App dialog, shown in Figure 1: Figure 1. Facebook Create New App dialog box Enter a valid name and an available namespace for the application. The namespace is a single-word unique identifier to be used in a Facebook app URL. (In Figure 1, I've entered My Old Friends as the app name and myoldfriends as the namespace.) Leave the option for free hosting provided by Heroku unselected and click Continue. Type the CAPTCHA code in the next dialog and click Submit to bring up the basic settings dialog for your new application, shown in Figure 2: Figure 2. Facebook application basic settings dialog Note the App ID and App Secret keys at the top of the screen (hidden in Figure 2). Facebook uses them to identify your application. Keep them private and do not allow other developers to use them, because they could be used maliciously without your knowledge. Enter an application domain in the App Domains field. This must be the GAE domain that you'll register for the application on the GAE developer site, so it must end with .appspot.com. For example, in Figure 2, I entered myoldfacebookfriends.appspot.com. Because this domain is no longer available, you'll have to use a different one. Just make sure it matches the application identifier that you use when you register the GAE app. Under Select how your app integrates with Facebook, select Website with Facebook Login and enter a site URL consisting of the GAE domain you entered in the App Domains field, preceded by http://. (In Figure 2, I've entered.) Select App on Facebook and enter the canvas URL for a servlet in which the application will run. For this application, the canvas URL ends with a question mark to indicate that the parameters for the application will be passed in via the request URL sent to the application. The secure canvas URL is the same as the canvas URL, except that https replaces http. Again, the question mark at the end of the URL is important. (The URL for my app's servlet is, so in Figure 2, I've entered? for the canvas URL and? for the secure canvas URL.) This is all you need to do to set up an application on Facebook, but it's a good idea to configure some of the other settings, such as the app's icon and its category, to modify how the application is presented to users. Register the application with GAE With the application now registered with Facebook, you'll next register it with GAE. Sign in on the applications page on GAE — — and click Create Application. Under Application Identifier, enter the same app domain name that you used in the Facebook application basic settings. (The appspot.com portion is provided for you.) You can use whatever you want for the application title, which is used in searches for registered applications. Leave the remaining options with their defaults. Figure 3. The Create an Application dialog for GAE Click Create Application to finish the GAE registration process. Developing the application In Eclipse, create a new GAE project by clicking File > New > Web Application Project, or the New Web Application Project button under the Google Services and Deployment Tools menu. Enter a project name and a package name. Deselect Use Google Web Toolkit. Download the RestFB JAR file (see Resources) and add it to the project's WEB-INF/lib folder. Add the servlet definition for the application to the project's web.xml file. Listing 1 shows the definition I used: Listing 1. Servlet definition <?xml version="1.0" encoding="utf-8"?> <web-app xmlns: <servlet> <servlet-name>myoldFacebookfriendsServlet</servlet-name> <servlet-class>com.Facebook.friends.MyOldFacebookFriendsServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>myoldFacebookfriendsServlet</servlet-name> <url-pattern>/myoldFacebookfriends</url-pattern> </servlet-mapping> </web-app> Note that the <url-pattern> is the same as in the canvas URL in the Facebook application basic settings dialog, but without the question mark. Since October 2011, Facebook applications have required Secure Sockets Layer (SSL) to be enabled; it is disabled by default on GAE. To enable it, add the following line to appengine-web.xml: <ssl-enabled>true</ssl-enabled> The Facebook signed request Using the HTTP POST method, Facebook sends a signed request to the application servlet to create the application's content. This request contains a 64-bit encoded payload containing, among other metadata, the OAuth token for the application for the current user. This is included in the payload of the request only if the user has granted the application access to his or her profile. You need to convert it to a Java object so that the application can use it. Listing 2 shows the source for the Java object of the signed request. I've omitted all appropriate get and set methods for clarity; they're included in the source-code download (see Download). Listing 2. The signed-request object import org.apache.commons.codec.binary.Base64; import org.codehaus.jackson.map.ObjectMapper; public class FacebookSignedRequest { private String algorithm; private Long expires; private Long issued_at; private String oauth_token; private Long user_id; private FacebookSignedRequestUser user; public static class FacebookSignedRequestUser { private String country; private String locale; private FacebookSignedRequestUserAge age; public static class FacebookSignedRequestUserAge { private int min; private int max; } } } You can decode the payload with Base64 in the Apache Commons Codec library. The decoded payload is in JavaScript Object Notation (JSON) and can be converted to a Java object using the Jackson JSON processor. Download their JAR files and add them to the project (see Resources). Add the static helper method shown in Listing 3 to the FacebookSignedRequest class to create the object: Listing 3. Helper method for encoding and decoding the payload public static FacebookSignedRequest getSignedRequest(String signedRequest) throws Exception { String payload = signedRequest.split("[.]", 2)[1]; payload = payload.replace("-", "+").replace("_", "/").trim(); String jsonString = new String(Base64.decodeBase64(payload.getBytes())); return new ObjectMapper().readValue(jsonString, FacebookSignedRequest.class); } Create the servlet Now you can start coding the application that will run in a servlet. Create a new class with the same signature as the <servlet-class> definition in web.xml. First, you need to extract the OAuth token from the payload, using the SignedRequest class, as shown in Listing 4: Listing 4. Extracting the OAuth token(); If the oauthToken object is null, then the user has not granted access to the application and must be redirected to the authentication URL. The URL, which is standard across all applications, is in the following format: KEY&redirect_uri= Namespace/&scope=Permissions API KEY and Application Namespace in the authentication URL are the ones displayed in the Facebook basic settings dialog for the application. Permissions is the list of permissions needed for your application. All applications have basic permissions by default, and you needn't add any others for this article's purposes. (See Resources for a link to the complete list of available permission.) You can customize the look and feel of this page in the Settings > Auth Dialog page of the Facebook developer application. Ordinarily, you would use the servlet HttpServletRequest.sendRedirect() method, but because the application will be running in an <iframe> on the apps.Facebook.com domain, a short JavaScript snippet, shown in Listing 5, changes the location of the browser window to the application URL: Listing 5. JavaScript code redicting to the application URL PrintWriter writer = response.getWriter(); if(oauthToken == null) { response.setContentType("text/html"); String authURL = "" + API_KEY + "&redirect_uri="; writer.print("<script> top.location.href='" + authURL + "'</script>"); writer.close(); } With a valid OAuth token, you can create a DefaultFacebookClient from the RestFB library and use it to retrieve data from Facebook's Graph API using the fetchConnection() call. Go to the Graph Explorer on Facebook at. Select the application you are developing from the drop-down box in the top right and click Get access token to grant access. Click on the various links under the Connections heading to see the results. To get the user's friends, click the friends link and view the results. Note that the value of the URL in the explorer is user id/friends. The connection parameter in the function call would normally use the same value as in the Graph Explorer. But because the application uses the data of the logged-in user, you can replace the user ID with me, making the value me/friends. The call returns the raw Connection type, and because the class type is User, you need to add it as a parameter. The final call is: Connection<User> myFriends = FacebookClient.fetchConnection("me/friends", User.class); The result of the fetchConnection() method call is held in a List of List objects in the Connection class. The Connection class implements the Iterable interface, so you can obtain each List object in the List by using the enhanced for loop: for (List<User> myFriendsList : myFriends) On each iteration of the loop, the myFriendsList object contains the current list of users for that returned data page. Each User object in this list is used to create a line in the table of users the servlet will create. Although the user ID and name can be retrieved from the User object, the profile picture cannot. However, Facebook provides a URL for retrieving the profile picture of any user: ID/picture. Thus, the URL of the profile picture is created by replacing User ID in the URL with the user ID from the User object. Using the same PrintWriter object as before, write a table with a header row to the canvas: writer.print("<table><tr><th>Photo</th><th>Name</th><th>Id</th></tr>"); Iterate through the list of User objects, as just described, then write a new row for this table, using the instance variables from each User object: for (List<User> myFriendsList : myFriends) { for(User user: myFriendsList) writer.print("<tr><td>" + "<img src=\"" + user.getId() + "/picture\"/>" + "</td><td>" + user.getName() +"</td><td>" + user.getId() + "</td></tr>"); } Finally, close the <table> tag, and close the PrintWriter object to complete the servlet: writer.print("</table>"); writer.close(); Listing 7 shows the final servlet doPost() method: Listing 7. The doPost() method public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {(); PrintWriter writer = response.getWriter(); if(oauthToken == null) { response.setContentType("text/html"); String</td><td>" + user.getName() + "</td><td>" + user.getId() + "</td></tr>"); } writer.print("</table>"); writer.close(); } } Deploying the application With the servlet created, the application is ready to be deployed. Click the Google icon in Eclipse and select Deploy to App Engine. After the application is compiled and uploaded to the server, you (and any other Facebook user) can install the application in Facebook at ID/ and see the results. Conclusion This article has demonstrated how to register, implement, and deploy a Facebook application in Java, hosted on the Google App Engine service. Now that you're familiar with the basics, I suggest you experiment with some variations. Instead of writing the HTML for the content directly to the page, you could implement a more traditional Model-View-Controller (MVC) approach by using the standard RequestDispatcher.forward() call to a JavaServer Pages (JSP) page. You could try creating an application that uses some of the extra permissions to data supplied by the Graph API. You pass the list of permissions an application needs to the authentication URL by adding each one to the scope request parameter. RestFB provides a helper method — StringUtils.join() — to create the list correctly. It takes a single String array as a parameter; each entry in the array is a permission name. Finally, you could try to recreate the sample application using the Facebook-java-api Google Code project — an alternative implementation of the Facebook API — instead of RestFB (see Resources). Download Resources Learn - Facebook developer site: This is the home of the Facebook Platform, Graph API Explorer, and other tools for third-party Facebook developers. - Signed Request: Read about the signed_requestparameter's role in authenticating Facebook apps. - Facebook Canvas App Authentication — Java: Check out this blog post on decoding the Facebook signed request. - Facebook Permissions Reference: You'll find the full list of available Facebook app permissions here. - Google App Engine: Check out the GAE website. - "Java PaaS shootout" (Michael Yuan, developerWorks, April 2011: This article compares three major PaaS offerings for Java, including GAE. - Using the Google Plugin for Eclipse: Read about the GAE plug-in for Eclipse and how to install it. - RestFB: Visit the RestFB project site. - JavaScript Object Notation (JSON): Learn more about the format in which the data is transferred between Facebook and your application. - facebook-java-ai: Check out this alternative to RestFB. - Browse the technology bookstore for books on these and other technical topics. - developerWorks Java technology zone: Find hundreds of articles about every aspect of Java programming. Get products and technologies - RestFB: Download RestFB. - Eclipse: Download the Eclipse IDE. - Apache Commons Codec: Download Commons Codec. - Jackson: Download the Jackson JSON processor. - Download IBM product evaluation versions or explore the online trials in the IBM SOA Sandbox and get your hands on application development tools and middleware products from DB2®, Lotus®, Rational®, Tivoli®, and WebSphere®. Discuss - developerWorks blogs: Get technical tips and support for RestFB. -.
http://www.ibm.com/developerworks/library/j-fb-gae/index.html
CC-MAIN-2015-06
refinedweb
2,546
54.93
Using Arrows for Dependency Handling What is this about Arrows are, like Monads or Monoids, a mathematical concept that can be used in the Haskell programming language. This post analyzes and explains a certain use case for Arrows, namely dependency tracking. But let’s begin with a quite unrelated quote from Jimi Hendrix. Well my arrows are made of desire From far away as Jupiter’s sulphur mines Arrows are less common than Monads or Monoids (in Haskell code, that is), but they are certainly not harder to understand. This blogpost aims to give a quick, informal introduction to Arrows. As concrete subject and example, we use dependency handling in Hakyll. The problem Hakyll is a static site generator I use to run this blog. The principles behind it are pretty simple: you write your pages in markdown or something similar, and you write templates in html. Then, you render the pages with some templates using a configuration DSL. In Hakyll, the rendering algorithm is more or less like this: - Read a page from a file, say contact.markdown. - Parse and render this page to HTML using pandoc. - Do some further manipulations on the result. - Render the result using a HTML template. - Write the result to _site/contact.html. The catch is, say _site/contact.html is “newer” than contact.markdown and the HTML templates. In this case, we do not want to do anything. Haskell lazyness will not help us a lot here, since we’re dealing with a lot of IO code. Suppose we’re currently reading the page from a file. We know the timestamp of the file we’re reading, but since we don’t yet know the timestamp of the other files on which the final result depends, we don’t know if we can skip this read or not. This means dependency handling should happen on a higher level, above these specific functions – so we need to abstract dependency handling. A general notion of computation So let’s create a wrapper for functions dealing with dependencies. data HakyllAction a b = HakyllAction { actionDependencies :: [FilePath] , actionUrl :: Maybe (Hakyll FilePath) , actionFunction :: a -> Hakyll b } Some explanation might be needed here. You can think of a as the input for our action, and then b is the output. The actionUrl contains the final destination of our computations – this can be Nothing, if it is not yet known. And finally, the actionFunction contains the actual action. The Hakyll is a usual monad stack with IO at the bottom. Categories To qualify as an Arrow, a datatype needs to be a Category. So lets create an instance Category HakyllAction first. There are two functions we need to implement: id: The simple identity category. This is comparable to the Prelude.idfunction. .: Category composition – this is comparable to function composition. The id action has no dependencies, no destination, and simply returns itself. instance Category HakyllAction where id = HakyllAction { actionDependencies = [] , actionUrl = Nothing , actionFunction = return } The . action is not complicated either. The new dependencies consist of all the dependencies of the two actions. For our destination, we use an mplus with the latest applied function first, so it gets chosen over the other destination. x . y = HakyllAction { actionDependencies = actionDependencies x ++ actionDependencies y , actionUrl = actionUrl x `mplus` actionUrl y , actionFunction = actionFunction x <=< actionFunction y } The <=< operator is right-to-left monad composition. Arrows To make our action a real Arrow, we need to implement two more functions: arr: This should lift a pure function (thus, with an a -> bsignature) into HakyllAction, so we have the type signature (a -> b) -> HakyllAction a b. first: This is a function that should operate on one value of a tuple. This all happens “inside” HakyllAction– perhaps an illustration will explain this better. You can see how f :: a -> bapplies to an (a, c)tuple, where fis applied on the first value. Illustration of arrow first instance Arrow HakyllAction where arr f = id { actionFunction = return . f} first x = x { actionFunction = \(y, z) -> do y' <- actionFunction x y return (y', z) } Actually using it Now that we’ve put all this trouble into creating an Arrow, we might as well use it. Let’s examine some functions from Hakyll and see how they fit. createPage :: FilePath -> HakyllAction () Context render :: FilePath -> HakyllAction Context Context writePage :: HakyllAction Context () Arrows are usually combined using the >>> operator. This gives us: test = createPage "contact.markdown" >>> render "templates/default.html" >>> writePage This creates a HakyllAction but doesn’t actually do anything. We still need to run it. And now we can see the benefits of this method, since we can write a function that does dependency checking on the combined functions. runHakyllActionIfNeeded test The actual implementation of runHakyllActionIfNeeded goes more or less like this: runHakyllActionIfNeeded :: HakyllAction () () -> Hakyll () runHakyllActionIfNeeded action = do url <- case actionUrl action of (Just u) -> u Nothing -> error "At this time, an URL should be set!" valid <- isFileMoreRecent url $ actionDependencies action unless valid (actionFunction action ()) The isFileMoreRecent function checks if the first file is more recent than all of the other files. Profit! We have now developped a more robust and better dependency checking system, and learned something about Arrows. If you are interested, the complete code that led to this blogpost is available on here on GitHub. There are also a few other interesting things to consider: These questions are left as an exercise to the reader. On a sidenote, kudos to BCoppens for proofreading through this post.
https://jaspervdj.be/posts/2010-03-26-arrows-dependencies.html
CC-MAIN-2016-30
refinedweb
904
56.05
Hello,Following is a simple example that works when the dl library isn't involvedat link time, but fails when it is.Here are relevant versions:Linux pile.franz.com 2.0.34 #1 Fri May 8 16:05:57 EDT 1998 i586 unknown/lib/libc-2.0.7.so/lib/libc.so.6/lib/ld.so.1.9.5Here's how to reproduce the problem:1. Make a file called ctest.c, containing:#include <stdio.h>#include <stdlib.h>#include <schedbits.h>intfoo( void *arg ){ fprintf( stderr, "in foo, pid = %d\n", getpid() ); call_sub(); sleep( 2 );}main(){ void *stk = malloc( 1024 * 1024 ); clone( foo, stk, CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND, 0 ); fprintf( stderr, "in main, pid = %d\n", getpid() ); sleep( 2 );}2. Make a file called csub.c, containing:#include <dlfcn.h>int call_sub(){#ifdef WILL_BREAK_CLONE dlopen( "somefile", RTLD_LAZY );#endif return 0;}3. Make sure that LD_LIBRARY_PATH contains the current directory4. Compile, link, and run to show that it there is no problem with clone()when the dl library is not involved:cc -c csub.cld -shared -o libcsub.so csub.occ -o ctest ctest.c -L. -lcsubctest(You will see: in main, pid = xxx in foo, pid = xxx+1)5. recompile, relink, and rerun to show that clone() fails when the dllibrary is involved:cc -c -DWILL_BREAK_CLONE csub.cld -shared -o libcsub.so csub.o -ldlctest(You will see: in main, pid = xxx) - the cloned thread will have died before running foo...I would appreciate any information that would help me get around this...Thanks,Steve-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.rutgers.eduPlease read the FAQ at
http://lkml.org/lkml/1998/9/24/4
CC-MAIN-2017-13
refinedweb
282
69.48
Almost there - if you save 2 receipts on the same day they will overwrite. In the "first" Rename Finder Items put the date *before* the filename. Add a second Rename Finder and put the "Seconds from 12M" *after* the filename. Now you will have DATE-title-SECONDS so you can save many receipts without overwriting, even if they are on the same day, with the same title. Nice tweak, and nice hint. Quick and easy to implement! I reworked the workflow from Tiger since the Leopard script is Phyhon! The first Finder Rename adds HH-MM-SS The second adds date YY-MM-DD And then saved it to ~/library/pdf services/ (add the folder if missing) or to the folder library/pdf services/ for all users. The automator script looks like this:[link:] Sadly, I just discovered this the other day, when looking at my receipts, expecting to find 50-60 of them, found less than a dozen. Thanks for the tip. Yeah, that's frustrating that Apple just has it set to overwrite. I made the changes here and in the comments and it's much better. It'd be nice to just be able to pick the file name though, in addition to adding date/time info. Not every web page has a sensible title! I just put an alias called "Receipts" in ~/Library/PDF Services that points to my desired destination folder. This creates an item in the PDF Services menu of the same name. As far as I can tell, PDFs sent to to this menu item are automatically renamed to avoid overwriting. THANK YOU. I had wanted to do this and fiddled around on multiple occasions but never quite worked it out. And I certainly did *not* know that this script was silently overwriting my files. (That's really a boneheaded move on Apple's part.) So, thanks a million for pointing that out! I have been making heavy use of that script over the past few years and I am sure many things have gone missing as a result. --- "Knicks suck, Yankees suck, Mets suck..." "...Krypton sucks" You should really report the overwriting as a bug to Apple. Having such a service overwriting older receipts with the same name surely defeats the sole purpose of that service. Adding date/time or a sequence number is not only a nice idea, it's actually required for such things, especially when the file name is taken from the title of a webpage. This is all nice, and the recommendation to send in a bug report is good, but just do what scbarton said and put an alias to any folder in the PDF Services folder. Select that folder in the PDF dropdown and it will append a 2, 3, 4, etc as needed as new files with same name are added. Really, the hint should be don't use Save to Web receipts folder, but instead, add an alias to any folder and save the web receipts there. --- Jim Actually the alias is not a good solution for a multi-user machine. Adding the date and time to the automated PDF workflow is a good idea except it may be written over in a future update/bugfix. Maybe making a copy and changing the name and disabling the real web-receipts button would also be wise. on a multi-user machine you'd put the new aliases in /Users/username/library/PDF Services, not in /Library/Pdf Services. that way the folders would point to folders iset up for each individual user. the automator workflow just insured the Web Receipts folder gets created in the first place. personally i have 3 folders: Banking, Web and Billing Receipts. There's now a "Save PDF to Web Receipts Folder.pdfworkflow" which no app on my computer can open. Inside this package is an application called "tool" which happens to have been written in Python. Apple now just tacks numbers onto the end of the filename if it isn't unique. Sort of ineloquent, but it works. At least it doesn't overwrite data anymore, but the filenames are less useful than the Tiger hint made them. The 10.5 workflow (as of 10.5.1) seems to choke on webpages with the / character in the title, like "Order / Thank You." How do I get this to work on 10.5 When I go into /library/ PDF services I get a workflow that looks like this: Save PDF to Web Receipts Folder.pdfworkflow It won't open in automator, even when I open it arbitrarily. Please HELP! As pointed out, this is now a python script in 10.5. You can directly edit the "tool" file at /Library/PDF Services/Save PDF to Web Receipts Folder.pdfworkflow/Contents to add the behaviour you want. I, for example, changed the default save folder, and added/changed these lines: import datetime # Create a YYYYMMDD string today = datetime.datetime.now().strftime("%Y%m%d_") destFile = today + title + ".pdf" This is the first time I've edited python, but it seems to work. I'd like to be able to review and edit the name of the file before it gets saved -- to include the name of the product I just bought or registered rather than a generic title from Kagi, for example. Seems like this would be trivial but I have no Automator experience at all. Anybody have a suggestion about how to make a "Save to Web Receipts" that also lets you change the name? "Save as PDF..." alone would be OK except that it doesn't default to the "Web Receipts" folder. vi "/Library/PDF Services/Save PDF to Web Receipts Folder.pdfworkflow/Contents/Tool" import sys def safeFilename(filename): filename = filename.lstrip('.') filename = filename.replace(os.path.sep, '-') if len(filename) == 0: filename = "Untitled" elif len(filename) > 245: filename = filename[0:245] return filename import sys import datetime def safeFilename(filename): filename = filename.lstrip('.') filename = filename.replace(os.path.sep, '-') if len(filename) == 0: filename = "Untitled" elif len(filename) > 200: filename = filename[0:200] now = datetime.datetime.now() stamp = now.strftime("_%Y%m%d_%H%M%S") return filename + stamp Visit other IDG sites:
http://hints.macworld.com/article.php?story=2007020700285547
CC-MAIN-2015-48
refinedweb
1,037
73.37
Pattern SynonymsPattern Synonyms Most language entities in Haskell can be named so that they can be abbreviated instead of written out in full. This proposal provides the same power for patterns. See the implementation page for implementation details. Tickets should include PatternSynonyms in their Keywords to appear in these summary lists. Open Tickets: There is a list of closed tickets at the bottom of the page. Motivating exampleMotivatingUni-directional (pattern-only) synonyms The simplest form of pattern synonyms is the one from the examples above. The grammar rule is: patternconid**varid1 ... varidn <-pat patternvarid1consymvarcononymsSimplyconid**varid1 ... varidn =pat patternvarid1consymvaronymsExplicitlyconid**varid1 ... varidn <-pat wherecfunlhs**rhs where cfunlhs is like funlhs, except that the functions symbol is a conid instead of a varid. Example, P :: CProv => CReq => t1 -> t2 -> ... -> tN -> t where t1, ..., tN are the types of the parameters var1, ..., varN, t is the simple type (with no context) of the thing getting matched, and CReq and CProv are type contexts. CReq can be omitted if it is empty. If CProv is empty, but CReq is not, () is used. The following example shows cases: data Showable where MkShowable :: (Show a) => a -> Showable -- Required context is empty pattern Sh :: (Show a) => a -> Showable pattern Sh x <- MkShowable x -- Provided context is empty, but required context is not pattern One :: () => (Num a, Eq a) => a pattern One <- 1 A pattern synonym. As with function and variable types, the pattern type signature can be inferred, or it can be explicitly written out on the program. Here's a more complex example. Let's look at the following definition: {-# LANGUAGE PatternSynonyms, GADTs, ViewPatterns #-} module ShouldCompile where data T a where MkT :: (Eq b) => a -> b -> T a f :: (Show a) => a -> Bool pattern P x <- MkT (f -> True) x Here, the inferred type of P is pattern P :: (Eq b) => (Show a) =>Dynamic This is because we generate the matching function at the definition site. Typed pattern synonymsTyped pattern synonyms So far patterns only had syntactic meaning. In comparison Ωmega has typed pattern synonyms, so they become first class values. For bidirectional pattern synonyms this seems to be the case data Nat = Z | S Nat deriving Show pattern Ess p = S p And it works: *Main> map S [Z, Z, S Z] [S Z,S Z,S (S Z)] *Main> map Ess [Z, Z, S Z] [S Z,S Z,S (S Z)] Branching pattern-only synonymsBranching pattern-only synonyms *N.B. this is a speculative suggestion! * Sometimes you want to match against several summands of an ADT simultaneously. E.g. in a data type of potentially unbounded natural numbers: data Nat = Zero | Succ Nat type UNat = Maybe Nat -- Nothing meaning unbounded Conceptually Nothing means infinite, so it makes sense to interpret it as a successor of something. We wish it to have a predecessor just like Just (Succ Zero)! I suggest branching pattern synonyms for this purpose: pattern S pred <- pred@Nothing | pred@(Just a <- Just (Succ a)) pattern Z = Just Zero Here pred@(Just a <- Just (Succ a)) means that the pattern invocation S pred matches against Just (Succ a) and - if successful - binds Just a to pred. This means we can syntactically address unbound naturals just like bounded ones: greetTimes :: UNat -> String -> IO () greetTimes Z _ = return () greetTimes (S rest) message = putStrLn message >> greetTimes rest message As a nice collateral win this proposal handles pattern Name name <- Person name workplace | Dog name vet too. Record Pattern SynonymsRecord Pattern Synonyms See PatternSynonyms/RecordPatternSynonyms Associating synonyms with typesAssociating synonyms with types See PatternSynonyms/AssociatingSynonyms COMPLETE pragmas See PatternSynonyms/CompleteSigs Closed TicketsClosed Tickets Closed Tickets:
https://gitlab.haskell.org/ghc/ghc/wikis/pattern-synonyms?version=46
CC-MAIN-2019-13
refinedweb
599
51.99
by Peter Daukintis The cinema screen used for the presentations at the Fulham Vue Cinema. This was the client-based tech day: Windows 7 The Windows 7 Code Pack provides a managed wrapper around the COM-based interfaces provided for many of the Windows 7 differentiating client application features. These include jump lists, file dialogs, sensor APIs Direct3D, linguistic services, etc.. Sensor support in Windows 7, such as .NET 4.0 location APIs, light sensors, etc. WPF 4.0 Another great presentation from Ian Griffiths on WPF 4.0 and supporting features in VS2010. VS2010 has improved data-binding, better intellisense and an improved designer. The new features include a User Interface for styles and resources, a user interface for editing data-binding and drag and drop support for the data sources window. It is also possible to get ViewModel data-binding support from within VS2010 using the xaml namespace in the following way: d:DataContext=”{d:DesignInstance vm:myViewModel}” where d is the xamlnamespace and vm is the namespace with your view model in. This will then allow the data-binding to be wired up via the new user interface elements. This is pretty cool and should decrease the chances of making mistakes when setting up the bindings manually. Graphics - Text rendering now provides much more control for the developer and the trade-offs between clarity and fidelity can be carefully selected in the code. - More things are offloaded to the GPU – although no details are available on what?? - Pixel Shader 3.0 support - Bitmap caching support - Animation easing support (from Silverlight) - Layout rounding - Windows 7 features have been added – multi-touch support, Windows 7 dialogs, jump lists via xaml. Data-Binding - Run text items can now be data-bound (didn’t work in 3.5) - Dynamic objects can be data-bound (support for c# dynamic keyword) - Input Binding (keypress, mousedown, etc) now works. - Datagrid, calendar, date picker now all built-in controls. - Now includes Visual State Manager (from Silverlight). WPF strategy of using triggers, although very flexible considered less designer-friendly. WPF XAML parsing has been improved and the internals are now more visible. Also, there is now one parser that is used by the various clients; blend, visual studio, etc – this was not the case previously! Technorati Windows Live
https://peted.azurewebsites.net/microsoft-uk-tech-days-day-4/
CC-MAIN-2018-13
refinedweb
382
56.25
When an application has an unhandled exception or a system routine calls abort (e.g. strcpy_s with a buffer size too small) the system is left to deal with the mess. Usually part of the process is displaying a crash dialog to the user notifying them that the application has unexpectedly terminated, creating a crash dump file, and possibly checking with Microsoft for a solution if it’s a known problem. Sometimes however, you’d prefer that your application not crash in such an “in your face” manner, such as when you spawn child or helper processes as part of a larger system which can manage the process termination on its own. The first thing you should do is focus on making your application NOT crash in the first place! However, there are times when things may be beyond your control (e.g. making calls to a third-party library that sometimes wigs out on you) or you just want it as a fail safe in case. You can use you own unhandled-exception handler for instance, to perform custom logging. There are several manners in which the system and the C-runtime can notify the user of an abnormal application termination. In order to suppress all of them, most of the time, I use the following routines: You’ll note that I said most of the time because this method doesn’t guarantee that all of them will be suppressed. Other libraries you call could overwrite your handlers for one. Secondly, the /GS (Buffer Security Check) compiler flags causes the CRT to directly invoke Dr. Watson in the case of a buffer-overrun for security purposes. To prevent and detect other libraries from overwriting your exception filter, you can use API hooking. When it comes to the CRT directly calling Dr. Watson, this is by design and Microsoft has no plans of changing it. Here’s the important parts of the source. The entire project is available here. #include <stdio.h> #include <stdlib.h> #include <Windows.h> // Function Declarations void suppress_crash_handlers( void ); long WINAPI unhandled_exception_handler( EXCEPTION_POINTERS* p_exceptions ); int main( int argc, char* argv[] ) { // Suppress C4100 Warnings for unused parameters (void*)argc; (void*)argv; suppress_crash_handlers( ); abort( ); return -1; } void suppress_crash_handlers( ) { // Register our own unhandled exception handler // SetUnhandledExceptionFilter( unhandled_exception_handler ); // Minimize what notifications are made when an error occurs // SetErrorMode( SEM_FAILCRITICALERRORS | SEM_NOALIGNMENTFAULTEXCEPT | SEM_NOGPFAULTERRORBOX | SEM_NOOPENFILEERRORBOX ); // When the app crashes, don't print the abort message and don't call Dr. Watson to make a crash dump. // _set_abort_behavior( 0, _WRITE_ABORT_MSG | _CALL_REPORTFAULT ); } long WINAPI unhandled_exception_handler( EXCEPTION_POINTERS* p_exceptions ) { // Suppress C4100 Warnings unused parameters required to match the // function signature of the API call. (void*)p_exceptions; // Throw any and all exceptions to the ground. return EXCEPTION_EXECUTE_HANDLER; }
http://www.zachburlingame.com/2011/04/silently-terminate-on-abortunhandled-exception-in-windows/
CC-MAIN-2017-17
refinedweb
449
53.31
I am a Python newb and I am working on a project to automate a very time consuming project. I am using openpyxl to access a .xlsx to extract information that will eventually be converted into distance and direction bearings to be used with arcpy/arcgis, meaning I am using Python 2.7. I can access the data and make the first round of changes, but I cannot get my write command integrated into my loop. Currently it saves the last row's data to all cell's in the given range in the new .xlsx. Here is my code: #Importing OpenPyXl and loads the workbook and sheet import openpyxl wb = openpyxl.load_workbook('TESTVECT.xlsx') ws = wb.get_sheet_by_name('TEST') #allows to save more than once write_only = False cell_range = ws['C'] #sorts through either the rows/columns and slices the required string") #This part save the very last row to all rows in available columns #need a way to integrate the save functionality so each row is unique for rowNum in range(2, maxRow): ws.cell(row=rowNum, column=3).value = keep for rowNum in range (2, maxRow): ws.cell(row=rowNum, column=1).value = parID for rowNum in range (2, maxRow): ws.cell(row=rowNum, column=2).value = Lline #Only prints the very last keep entry from the .xlsx print keep print "all done" #Saving does not write all of the the 'keep, parID, and Lline' records #There is an issue with the for loop and integrating the write portion of #the code. wb.save('TESTMONKEYVECT.xlsx') Your intuition was correct, you need to combine the loops. The first loop goes through each row, and saves the parID, Lline, and keep over the last value in each of those variables. After the loop, they only have the values from the last row because that was the only row to not have another one come along after it and overwrite the values. You can solve this by combining the actions into a single loop.") ws.cell(row=rowNum, column=3).value = keep ws.cell(row=rowNum, column=1).value = parID ws.cell(row=rowNum, column=2).value = Lline
https://codedump.io/share/oAjKY9ELnCjg/1/using-python-and-openpyxl-to-loop-through-a-xlsx-but-loop-only-save-the-last-row39s-data
CC-MAIN-2017-17
refinedweb
357
70.84
HTML Tidy was initially developed as a tool to clean up HTML, but it is an XML tool, too. This hack shows you how to use HTML Tidy to make your HTML into XHTML. HTML Tidy was initially developed at the W3C by Dave Raggett (). Essentially, it's an open source HTML parser with the stated purpose of cleaning up and pretty-printing HTML, XHTML, and even XML. It is now hosted on Sourceforge (). You can download versions of Tidy for a variety of platforms there. Example 2-10 shows an HTML document, goodold.html, which we will run through HTML Tidy. <HTML> <HEAD>> Assuming that Tidy is properly installed, you can issue the following command to convert goodold.html to the XHTML document goodnew.html using the -asxhtml switch: tidy -indent -o goodnew.html -asxhtml goodold.html The -indent switch indents the output, and the -o switch names the output file. Tidy will issue warnings if necessary, and provide tips that encourage accessibility. The new file goodnew.html looks like Example 2-11. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" ""> <html xmlns=""> <head> <meta name="generator" content= "HTML Tidy for Windows (vers 1st January 2004), see" /> > What was once HTML is now well-formed, indented XHTML 1.0, an XML-ized version of HTML [Hack #61] . A document type declaration was added that references the strict XHTML 1.0 DTD, as well as a namespace declaration (), and all of the tag names were changed to lowercase. Other than a meta tag in the head section, the rest of the document remains similar to the original HTML. There's a lot more possible with Tidy; for example, it gives you considerable control over character encodings with switches like -ascii, -utf8, and -utf16; allows you to replace the deprecated elements FONT, NOBR, and CENTER with CSS equivalents using the -clean or -c switch; lets you accept XML as input with -xml; and even converts XHTML back to HTML (-ashtml).
https://etutorials.org/XML/xml+hacks/Chapter+2.+Creating+XML+Documents/Hack+22+Convert+an+HTML+Document+to+XHTML+with+HTML+Tidy/
CC-MAIN-2021-31
refinedweb
331
74.59
A Simple Example: Google Guice Inversion of Control (IoC) By Jason Tee and Cameron McKenzie TheServerSide.com A Pretty Darned Simple IoC Example with Google Guice Watch this Google Guice Tutorial as a Video CBT Tutorial 1: Getting Started with Google Guice: Environment Setup and Configuration Tutorial 2 (This One): The Simplest IoC Example You've Ever Seen, with Google Guice In TheServerSide tutorial on basic Inversion of Control with Spring, we took a little class named the GameSummary and had the mighty Spring Container spit out instances to us. The GameSummary class just represented the results of a Rock-Paper-Scissors game, so it has simple String properties such as clientChoice, serverChoice, result and date to represent the time the game was played. Throw in the requisite setters and getters, along with a little toString method, and you feel like you’re dealing with a few klocs of code. For the sake of brevity, I’ve taken out the setters and getters for now, and a couple of arrays used in the Spring article have been stripped out, as they’re not really needed right now. Really, all we want is just a simple class, so here is what the GameSummary class we're going to use in this example looks like (updated with a new package statement that makes sure it’s part of the guice folder structure): package com.mcnz.guice; public class GameSummary { private String clientChoice, serverChoice, result; private java.util.Date date = null; public String toString() { return clientChoice + ":" + serverChoice + ":" + result + ":" + date; } } Getting Guice to Do IoC And how do we get Google Guice to inject this cute little POJO into our application? Well, it’s simple. We just ask the Guice injector to get us an instance: package com.mcnz.guice; import com.google.inject.*; public class GumRunner { public static void main(String args[]){ Injector injector = Guice.createInjector(); GameSummary gs = injector.getInstance(GameSummary.class); System.out.println(gs.toString()); } } Running the Code Here's how you can run the code if you're following from the previous tutorial: C:\_mycode>c:\_jdk1.6\bin\javac -classpath "C:\_guice\*" C:\_mycode\com\mcnz\guice\*.java C:\_mycode>c:\_jdk1.6\bin\java -classpath "C:\_guice\*";C:\_mycode com.mcnz.guice.GumRunner And what happens when we compile and run our code now? Well, when we run our application, we get the following output: null:null:null:null Sure...It's a Very Simple Guice Example Okay, sure, that’s not the most glamorous output in the world, but what do you expect? We just asked Google Guice to give as an object that had all of its properties set to null. What, did you expect, fireworks and Champaign? Anyways, the point is, doing inversion of control with Google Guice is pretty darned easy, and if you made it this far, you’ve managed to configure a development environment to run and test Google Guice applications, and you’ve also seen one of the simplest examples of Inversion of Control that you’ll ever see; and that’s a darned good start. But it’s just a start. Stay tuned, because there are more Google Guice tutorials to come. Watch this Google Guice Tutorial as a Video CBT Tutorial 1: 28 May 2010
http://www.theserverside.com/tutorial/A-Simple-Example-Google-Guice-Inversion-of-Control-IoC
CC-MAIN-2016-22
refinedweb
546
60.35
I. Enter round three, wherein the author fumbled around for other windows the monitor blank timeout message could be posted to, and eventually the author found that posting the message to the mysterious window.) If you look more carefully at what the author stumbled across, you'll see that the "solution" is actually another bug. It so happens that the numerical value -1 HWND_BROADCAST: #define HWND_BROADCAST ((HWND)0xffff) It so happens that internally, the window manager supports (HWND). Re HWND_TOPMOST vs. HWND_BROADCAST issue… I searched MSDN for HWND_TOPMOST and found a handful of items where it’s described as a valid (or even required) target to send/post messages to: About Messages and Message Queues "If the window handle is HWND_TOPMOST, DispatchMessage sends the message to the window procedures of all top-level windows in the system." Using Messages and Message Queues "If the handle is HWND_TOPMOST, the system posts the message to the thread message queues of all top-level windows." WM_TIMECHANGE "Windows Me/98/95: An application should send this message to all top-level windows after changing the system time using the SendMessageTimeout function with HWND_TOPMOST. Do not send this message by calling SendMessage with HWND_BROADCAST." I only ever used it with SetWindowPos to make a window topmost… Don’t leave us hanging…what is the correct method of powering down the monitor? A quick search on MSDN reveals ‘SetActivePwrScheme’ as a good start anyways. Another dangerous habit I noticed looking at the examples is misusing special parameters such as HWND_BROADCAST and HWND_TOPMOST. These values are only valid in certain functions, PostMessage/SendMessage and SetWindowPos in this case. You cannot expect any function that takes an HWND to properly handle these values. It seems obvious but its one of those mistakes I made when starting Windows programming. So, what is the correct way of turning off the monitor? Ok now I think I may have badly interpreted your last paragraph, and that sending a message is not the correct solution! (And I am reassured!) Still, why does the posted solution work? Is it because of the defwndproc or is a window intercepting the notification and doing the correct API call? (That would still leave me wondering why) Your article started by saying the user was trying to “trigger the system’s monitor blank timeout”. He said he was trying to power off the monitor. Are those the same things? "Making up intentionally invalid parameters and seeing what happens falls into the category of malicious goofing around, not in the realm of software engineering and design." Some would describe it as hacking, which isn’t always malicious, or goofing around. We owe a good many of the world’s inventions to people who were just "seeing what happens." The correct way is to make your own window and once it’s up and running to SendMessage or PostMessage to it. If your window procedure is correctly written, the messages which are not handled by your code will be handled by the system, and that’s what you want. From the MSDN articles Aaron dug up I see there’s even a little internal confusion between HWND_TOPMOST and HWND_BROADCAST. Perhaps this is partly the reason why the window manager accepts an alternate value of -1 for HWND_BROADCAST. Another thing I’m wondering about, hasn’t HWND always been an unsigned value? If so, isn’t #define HWND_TOPMOST ((HWND)-1) a bad idea? If Windows was open source we wouldn’t have to “fumble around in the dark”. Us poor non-Microsoft developers often have to reverse engineer Windows to get our job done; please don’t demean us because of it. We don’t really want to use Windows — its just forced on us by the marketplace. I’m going to be ruthless and defend the developers actions. Power management in windows is pretty minimally documented except in the driver pages, and support in .NET is particularly sparse. And of course, there is no source code for all of us to look at. Raymond: you have the advantages of (a) the source and (b) the ability to email someone who will know someone who knows the answer. The rest of us have to make do with experiments. Just be glad this isnt shipping software that you have to support in future. Please tell us: 1. How to power off the monitor in the right way and 2. How to prevent the monitor from turning off, if you are writing eg. a presentation program. Are these two things even possible? So, is the correct method to power-off a monitor to: hwnd = CreateWindow(WC_MYCLASS, WS_POPUP..); PostMessage(hwnd, WM_SYSCOMMAND, SC_MONITORPOWER, MONITOR_OFF); [pump messages] Where WC_MYCLASS has a WndProc like: LRESULT MyWndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam) { // TODO: Close after processing? if (message == WM_DESTROY) PostQuitMessage(0); return DefWindowProc(hWnd, message, wParam, lParam); } As an aside, is there a built-in Windows class that has a virtually-empty procedure body, save calling DefWindowProc and handles WM_DESTROY by calling PostQuitMessage? Or, is it easier to create a top-level hidden STATIC window (or something similiar) to perform that processing? Huh, I didn’t think it would work but indeed posting a SC_MONITORPOWER message to an existing window does in fact turn the monitor off. For example, from a MFC application: ::PostMessage(SomeCWnd.m_hWnd, WM_SYSCOMMAND, SC_MONITORPOWER, 2); I’m still wondering how you’d do it if you don’t have a window handle, such as a console application, without creating a temporary window. “With the source code, people would be more likely to break the rules, because they can see which rules they can get away with breaking.” –Raymond If I have neither the source code nor reliable documentation, I can only find (by experiment) what works right now. I can’t even determine if I am really breaking any rules. With either source code access or correct documentation, I can discover the correct thing to do, and then decide whether to take chances. I posit that I am much more likely to track changes in released source code or in reliable documentation than I am to repeat my entire series of experiments whenever I suspect that something may have changed. -Wang-Lo. Exactly! To power off the monitor you just SendMessage(MyhWnd, WM_SYSCOMMAND, SC_MONITORPOWER, MONITOR_OFF) where MyhWnd is the handle to a window you created yourself. The message is then passed to DefWindowProc which does the dirty deed for you. To put Raymond’s comments re open source another way – having the source code will make developers MORE likely to rely on the side effects of internal implementation details than they already do. That’s fine as long as the internal implementation details NEVER change, but that is rarely true of any product that lives beyond version 1.0. Believe me, I feel your pain – I live with it too. But open source is the wrong medicine for that pain. Well, yes and no. They’re rhetorical in that the answer is obvious. They’re not rhetorical in that they still deserve an answer. MSDNWiki covers "Visual Studio 2005 and .NET Framework 2.0." Yes, .NET is great and all, but an awfully large percentage of software isn’t written in it, and an awfully large percentage of software probably never will be. You’re not going to see any drop in questions about Win32 until you have the documentation about it posted there and available for edit and discussion. Sean, your claim was that a Wiki would generate correct and complete documentation. must disagree with your (Raymond’s) argument that open source APIs are more prone to misuse. Your argument boils down to “people can see the guts of the thing, so they’re more likely to rely on implementation choices; when the implementation changes, their app will be broken.” APIs are contracts; the API promises to provide such-and-such functionality if called in such-and-such fashion. If the correct calling procedure is documented in an easy-to-find way, there is no excuse for the developer to break the contract by calling it in a way other-than-specified. So much is true of both open and closed source software. The problem is that documentation tends to quickly fall out of sync. Open source has the advantage that the contract can be specified right in the code. This obviates (to a certain extent) the need to consult and maintain outside documentation. So open source has the advantage when it comes to maintaining and exploring APIs (for any given development team.) So Raymond, who pissed in your cornflakes today? Seriously, either the documentation needs to be improved or you should stop bitching about how we try to call the under documented items. Actually noting how to call the function (or answering any of the 4 or 5 comments asking may have helped as well). –> “We are not able to support editing in Firefox for this release. Check back soon for an updated release that will provide better Firefox support.” Will this be fixed in 6 month? I would’ve thought that by now people would’ve stopped requesting "the right way" to stop the monitor from turning off. The "right way" is to get the user to go into his power settings and choose presentation mode or always on or something to that effect. To understand why he’s responded to a few other posts and not to the ones requesting this information, perhaps you should read this: which is his post entitled "What if two programs did this?" and think about what would happen if one program tried to turn the monitor off while another tried to keep it on. Hmm… Raymond, do you mean that in order to put the monitor into blank timeout, one has to post a message to ITS window? How is that logical? And besides, I hope the message in question is not the one tried in the linked blog, because he is sending (WM_SYSCOMMAND, SC_MONITORPOWER), which is described as a notification () and not as being able to result as an order to the ‘system’! I am not a GUI programmer, so maybe I don’t understand the windowing "model", but I would expect that this WM_SYSCOMMAND notification only have an impact on the defwndproc for the maximize&co notifications, the monitor&screensavers being simple "notifications" on which the window/app shall not have any influence (other than saying "hey i’m using the display, please don’t do that") and certainly I wouldn’t expect *my* defwndproc to handle the screensaver start by itself. I can see why many people thought that the desktop window was "responsible" for the screensaver and monitor shutdown, but granted they should have checked in the doc! :) I have often wanted my 2 monitors (on separate computers) to come *out* of power-save mode (power off mode?) at about 8:00 AM each weekday, just before I arrive at work. Not crucial in the scheme of things, but one can always hope… How man! My previous MSDN link says (in the remarks) that the defwndproc, when passed the WM_SYSCOMMAND, *does* the action corresponding to the notification. Well as I said before, I can understand for the maximize, close & co but not for the screensaver & monitor power… This GUI stuff is confusing for me… Guillaume: so the right way to put the monitor in standby is to pop a message box asking the user to make it ‘always-on’? ;-) Maybe you meant "pop-up a message box to ask the user to shut down his monitor" ;) The "what if two programs did this" doesn’t apply here, what if two programs wanted to shut down the monitor? Hmm I don’t know… maybe… shut down the monitor!! Anyway, I guess the correct solution is the defwndproc one. Still, I don’t understand what is the idea behind it… Guillaume: So you’re seriously saying that the user should toggle the power settings himself every time he wants to watch a DVD? Blech. Nektar: Add a handler for WM_SYSCOMMAND. If wParam is SC_MONITORPOWER and lParam is 1 or 2, return 0. All other wParam and lParam values, if not already handled by your code, should be forwarded to DefWindowProc. A similar approach exists for SC_SCREENSAVE. To turn the power off, I have used this #include “stdafx.h” #include <windows.h> int _tmain(int argc, _TCHAR* argv[]) { SendMessage(GetForegroundWindow(),WM_SYSCOMMAND, SC_MONITORPOWER, 1); return 0; } and it seems to work. Of course, that doesnt account for if there is a foreground window that actually processes SC_MONITORPOWER instead of passing it to DefWindowProc. Monitor Blank vs Power Off At first I was thinking along the lines of the vertical blank on CRT devices – which iirc was the period of time it took for the CRT guns to reset from the end of the final line on the display to the beginning of the first line. Terminology exists for a reason – mixing up terms, even if the context seems to make it obvious what is meant. In this case I can’t even see a reason for having changed the terminology at all. The “screensaver defense” is a little disingenuous. Screensaver = screensaver = screensaver. But blanking the monitor doesn’t necessarily involve turning it off, and the term “blank” has other, very precise meanings for display devices. Also I find it a little odd to criticise a consumer of an API for naively making a call that the API itself seems to be mis-interpreting – for whatever reason – and then lambasting the consumer code for the API’s idiosynchracies. This blog is usually interesting, often informative, but every now and again it seems to miss the point a little. Had the tone of the piece been different it wound’t have mattered, but it seemed to be just a little condescending and critical, rather than a factual critique of an innocent mistake. <shrug> Keep it up though. :) Is it justifiable to break the law because you can’t find any legal alternative to what you’re trying to do? -Raymond Yes, depending on what you want to do. Of course, that’s a philosophy debate. So if open source is the wrong medicine for the pain of underdocumentation, then what is the right medicine — given that the average developer cannot contact people inside Microsoft to ask them either (A) how it’s supposed to work or (B) to fix the documentation? Microsoft, for years, has had a very opaque corporate structure to the outside world; I’ll place bets the majority of readers of this blog (myself included) don’t even know which team *Raymond* is on, much less how to contact members of the right team. That dearth of information has started to change over the last few years, but the net effect is that Microsoft has gone from an opaque wall to an opaque wall with a few pinholes of knowledge poked through it — far from transparent. Here’s a proposal: Convert a large portion of the Windows API documentation over to a Wiki, so that anybody can post, comment, and edit, with a suitable history log and rollbacks in case of abuse. My guess is that less than six months after converting it, the complaints about the confusing parts of the API would vanish almost entirely. Of course, this would mean embracing open communication, and I get the feeling there are a lot of folks inside Microsoft who really don’t want to do that. So you’re seriously saying that the user should toggle the power settings himself every time he wants to watch a DVD? Blech. – Bob Any program that went and automatically changed the power scheme or monitor setting goes straight into the recycle bin on my machine. Keep your filthy mits off my personal user settings! > MSDN Wiki > > CreateWindow [Search] > > There are no search results to display. Um, exactly what is this site supposed to be documenting, anyway? It seems a little lacking on Win32, which is where most of the Windows-related questions seem to be. I’d say the six months starts when the content is actually *posted* the first time around, and it sure doesn’t seem to be yet. *sigh* Wish it didn’t take upwards of 30 seconds to load a single page there, either. Traceroute gets as far as a few servers on msn.net before reaching ungodly ping times. :-( Win32 API documentation is poor. Until the documentation is as good as i want it to be, my life would be easier if i had the source code. Let’s pull out a random example, since AJAX is the new cool thing: IXMLHttpRequest.Send() If i do a synchronous request, will OnReadyStateChange get called? If so, how many times? i already know the answer, i’m not asking for technical support. i’m pointing out one example were the “contract” between the API writers and the developers is not fully spelled out. Yes, there is a contract that we want to adhere to, but you have to tell us what the contract is. > Is it justifiable to break the law because you can’t find any legal alternative to what you’re trying to do? It depends on: * the consequences of breaking the law (why the law is there) * the consequences of not being able to do what you’re trying to do * how hard you tried to find a legal alternative * what else is on your list of things to do Finding a "correct" way of doing things is a halting problem… it’s hard to tell if the answer is just one more search away, or doesn’t exist. Dear god, (some)people! If it’s not documented, it’s logical to assume absolutely nothing about it. Doc writers will never think to write down every way to NOT use something — only the proper ways *to* use it (think about how long the documentation would be for SendMessage!) and perhaps some common mistakes. I’m shocked people think this is some new revelation — this is simply the way documentation has always worked for every toolkit ever made. > First you said that some documentation to the effect of "Implementation detail, subject to change" is needed Yes… *in the source code.* That comment fell under the scope of your "if you got the source code to hundreds of undocumented APIs" scenario. To save copy/paste time, it would suffice to have a rule that "everything in source code is subject to change unless it specifically says otherwise," and then just tag certain chunks of code as /* public */ > now you say it’s implicit… In a /library of external documentation/ it’s implicit. *poor-man’s bold* /poor-man’s italic/ /* just a comment */ Larry says, understand that. The principle behind a Wiki is that you should get the best results with the most eyeballs. Lots of folks are interested in .NET, sure, but not anywhere near as many as are interested in accurate documentation of Win32. So wouldn’t it make sense to populate the Wiki initially with the most desired information, which in this case would be the Win32 API? Raymond says, “My point was ‘I can’t believe people actually develop software this way – randomly doing stuff to see what happens instead of thinking about what they are actually doing.’ But from the feedback it appears that it’s unfair of me to suggest that people try to understand what they’re doing.” Ray, I think you’re right on both counts. It’s pretty insane that people try to develop that way, and I doubt anybody here would disagree with that. To quote MST3K: “Randomly blowing up things is not a good battle strategy in a spaceship.” But, that said, the only reason folks resort to the software equivalent of guerrila warfare is because the known documentation is either misleading, hard to find, or just plain wrong. It’s fair to suggest people know what they’re doing, but unfair to *require* it. You and I may have degrees in computer science or software engineering, but the majority of people coding for Windows don’t, so it behooves you to provide the best documentation for them that you can. I would think that by this point, having heard a thousand people complaining that the Win32 documentation has significant problems and almost nobody claiming it’s good the way it is, and that with thousands of people getting the API calls wrong, that you’d at least consider that maybe, just maybe, there’s a problem with it. This poor guy sounds to me like an exasperated programmer who tried everything he could to do it right and finally just gave up and tried anything he could think of until he found something that worked (and yes, we’ve all been at the “just work already!” stage; don’t claim you haven’t!). Had the docs been more precise and easier to find, he might never have resorted to the crazy solutions. This does suggest that perhaps a Win32 API Wiki might be appropriate. Then all these people suggesting API doc improvements could, well, improve the API docs. Bloody hell. There IS an msdn wiki. Who knew? Not I for certain. I sense a certain .NET focus however. "I’m still wondering how you’d do it if you don’t have a window handle, such as a console application, without creating a temporary window." Lookup GetConsoleWindow() So, how do we avoid fumbling around in the dark in the first place? For instance, I’d like to emulate the effect of pressing Alt+Esc. (I do know you can’t use WM_SYSCOMMAND in this case because it starts a modal event loop looking for the release of the Alt key). It is possible to just do this without creating a temporary window: DefWindowProc(0, WM_SYSCOMMAND, SC_MONITORPOWER, MONITOR_OFF); paul2: Any program that went and automatically changed the power scheme or monitor setting goes straight into the recycle bin on my machine. Keep your filthy mits off my personal user settings! — The approach given DOES keep its mitts off your profile. Do you complain that games change your "personal user settings" when they open a fullscreen window? [How could the SendMessage documentation have been improved to solve this problem? "When sending a message to a window, make sure it’s really a window"? -Raymond] There’s nothing wrong with the SendMessage documentation. the problem is with the WM_SYSCOMMAND documentation. For reference: First of all, the title of the page is "WM_SYSCOMMAND Notification" which implies that the message is a *notification* and not something you’d generate yourself. Then, the first paragraph says: "A window receives this message when the user chooses a command from the Window menu (formerly known as the system or control menu) or when the user chooses the maximize button, minimize button, restore button, or close button." Which is totally wrong in this case. Now further down the page it says this: "An application can carry out any system command at any time by passing a WM_SYSCOMMAND message to DefWindowProc." Which is probably what the original person was wanting. This suggests that the functionality of WM_SYSCOMMAND has probably evolved over time, while the documentation has not evolved as well. I’ll not suggest that what the original person did was right, and thinking about what is written in the documentation should have come up with the "correct" solution. But you can’t deny the documentation for WM_SYSCOMMAND probably needs some work. Should I post a comment on the documentation page asking them to clarify it? Probably. But I’m a busy person, and if I wanted to submit corrections every time I spotted them, I’d be using open source software :p~ <quote>But if you were going to fill in the gaps, would you really consider intentionally passing invalid parameters as part of the missing documentation?</quote> i would have thought so, yes. But when i actually read the guys posts for myself, i see that he wasn’t randomly passing garbage and hoping it would work. He did what we all do, read through differect sections of the SDK documentation, trying to cobble together the idea of how it’s supposed to work. The fact that code iteration #1 and #2 didn’t work obviously means that the documentation wasn’t clear enough and he did something wrong. So we keep trying things that are along the same line until it does work. The fact that it works by accident isn’t something that we’re supposed to just divine. If you look at the documentation for SC_MONITORPOWER there is no indication that you are allowed to send the message anywhere. If fact, looking at the documentation for WM_SYSCOMMAND there is no indication that you’re supposed to be allowed to send any WM_SYSCOMMAND messages anywhere. The contract doesn’t specify it being allowed, so it must be forbidden – and therefore "hacking." Yet i find no shortage of people sending fake SC_MINIMIZE and SC_CLOSE messages. So maybe it’s not so bad after all – just poorly documented. Next, i receive a SC_MONITORPOWER notification when the monitor is powering down. It is a chance for me to know this is happening, and i can react accordinly (e.g. make my program do less graphical stuff.) Windows will know i’ve seen the message when i call DefWindowProc. Fair enough. But now someone is suggesting that if i simply generate a SC_MONITORPOWER notification message, and pass it to the DefWindowProc, and that will magically shut down the monitor is crazy. That is not only counter-intuitiave, but not mentioned in the documentation (even taking into account the line "The DefWindowProc function carries out the window menu request for the predefined actions specified in the previous table."). And as we’ve seen before simply mimicing the thing doesn’t make it so. Just because you decide to generate a SC_MONITORPOWER message doesn’t mean that the monitor was in the process of turning off, or that the process will suddenly startup – at least the documentation provides no indication of that. And yet other people used to do it and it worked. So either it was a bug in Windows that it didn’t react as documentation, or it was an undocumented feature – subject to change in future versions of Windows, or it was a poorly documented feature. Yet i find no shortage of people sending fake SC_MONITORPOWER messages. So maybe it’s not so bad after all – just poorly documented Finally, he didn’t broadcast a message, he sent a message to WM_TOPMOST. Apparently if i fake the symptoms of a monitor in the process of shutting down, then it shuts down. So one gets the idea that if i send a SC_MONITORPOWER message, the monitor will start to power down. Well obviously i can’t send it to myself, because i have no ability to initiate a monitor shutdown. i need to send it to some central "windows" authority who will then initiate the monitor shutdown procedure, and in turn broadcast a message to every window on the system, letting them know the monitor is about to turn off, as part of the normal preparation for monitor shutdown. Sounds perfectly reasonable and logical. But how shall we tell Windows the monitor should be shut off? Well, who starts the process in the first place? Windows of course. What is our way in to talk to "the sytem"? SendMessage(0, … SendMessage(-1, … SendMessage(GetDesktopWindow(), … One of those outta work. No? Dammit, wish this stuff was documented. Any other Windowsy hWnd’s we can find? SendMessage(HWND_TOPMOST, … Documentation doesn’t say to can send WM_SYSCOMMANDS, but you can. Documentation doesn’t say you can initiate a monitor shutdown by calling yourself, but you can. If i generate a WM_POWER message and pass it to DefWindowProc will the computer turn off? Documentation doesn’t really say. It says "Notifies applications that the system is about to enter a suspended mode." Still ambigious. So, in the absense of documentation as good as i want it to be, couse code please. Perhaps you could just redo windows in CLR, so we would decompile it ourselves into readable source code. I agree that the guy with the crazy solution is wrong. But I can’t help to sympathize. As Sean W mentioned: How many times when you want to do something and you don’t know how? Documentations doesn’t help. You can spend hours on MSDN unless you have the right keywords. So you google it or ask your friends how – but this likely leads to a buggy solution from the last guy who experienced the same problem. And this is mainly the fault of Microsoft for being so big. When you have millions of developers, you’re bound to get many wrong solutions for the same problem – especially when the "right" solution isn’t obvious. The only way to improve this situation is for Microsoft to address the needs for all these developers. Have a database for how-to-do-stuff that’s easy to find. **Advertise it**. 3rd parties websites have already done it (only with possibly buggy solutions). Either this, or put up with sketchy code and compatibility problems from now until windows become obsolete. @OldNewThing, As the author of the ‘offending’ articles, it seems only logical to respond to this. Let me state that this is oooold code and medio 2004 it was a pain in the ass to find code or a hint about powering down a monitor. First of all, some background. I needed the monitor power off/on functionality for a TFT monitor in a painting frame on the wall at home. This monitor displays photos I shoot during holidays, cats etc, but also serves another role. The monitor is touch enabled and shows me a TV guide, photo browsing, movies at the local movie theatre and – important – trafic jam info (as I live near Amsterdam). The monitor power off occurs at 11:30PM and the monitor is switched on again at 6:30AM. At least the living room is not lit up anymore during the night. As soon as I published the code, Raymond Chen (you!?) notified me that this was not the ‘best’ way to do this. Unfortunately, Chen’s comments on my blog did not survive upgrade to Community Server… <EDIT> Googling for Raymond Chen leads me to OldNewThing, so you HAVE been helping me in 2004! <large grin> </EDIT> Enfin, on Chen’s comments, I modified the code to NOT use HWND_TOPMOST anymore, but just post the message to the handle of the .NET form, as Chen indicated. Unfortunately, I did not conclude the article series with the proper version (because it was already in the comments) and thus serves a happy bashing session two years later. This is the code I use after Chen’s suggestion: PostMessage((int)this.Handle, WM_SYSCOMMAND, SC_MONITORPOWER, MONITOR_OFF); The ‘this.Handle’ is the handle of the WinForm showing the photos and has been working splendidly for two years. And this is also what you and others are suggesting. Actual code is (after refactor sessions): User32.NativeMethods.PostMessage((int)this.Handle, User32.Constants.WM_SYSCOMMAND, User32.Constants.SC_MONITORPOWER, User32.Constants.MONITOR_OFF); And yes, I was ‘fumbling’ to find a solution. Documentation about the messages was not usable, no code existed on blogs or article sites like Codeproject. And thus you experiment until a ‘solution’ appears and publish your findings. By publishing, Chen educated me and helped me refine the solution. The more eyes look at code, the better. My published search for a better impersonation (in SharePoint) also yielded better versions of de solution. The least you could have done in this blog post, is provide some code of a correct solution, instead of letting the readers guess for it. This also helps any other developer in finding the best solution for powering down a monitor. I hope you will be as helpful as in 2004 next time I have a hardcore windows problem. ;-) "Making up intentionally invalid parameters and seeing what happens falls into the category of malicious goofing around, not in the realm of software engineering and design." I’d argue that this *is* part of software engineering. It’s reverse engineering and you’ve touched on it before ( and other compatibility posts). Someone must have said this before, but if not, I claim it: "Reverse engineering, the 2nd oldest [software] profession". [BTW I like the inline comments better than replies to your own posts. I wonder sometimes why you persevere, though. I hope the good outweighs the bad]. @Victor — the correct solution was posted in the last paragraph of the post. Granted, it’s not code, but it also shouldn’t be hard to translate "make a window and post the message there" into workable code. Re: [Picky, picky.] Raymond, I was honsetly asking if those were the same things. I didn’t know. (I thought maybe the monitor would BLANK instead of power off. I *know* that blanking a screen and powering it off aren’t the same.) Now that you have yelled at me, I now know. Sorry for questioning it, because it’s not obvious that those are the same things. So passing intentionally invalid parameters is a form of civil disobedience. Remember that another cornerstone of civil disobedience is the expectation of being arrested. Do you expect Windows to block your program when you do these things? -Raymond Yes. In fact, that would have stopped the original code you complain about. If Windows would have thrown a BadParam exception, the original user would never have posted it. The first test run would have made it clear it was illegal. I’m not sure if this would require a special developers build of XP, but even if, Microsoft could also sell that as a ‘hardened’ version just like Sun created hardened Solaris. Just wanted to point out that code IS documentatoin (code by contract anyone?) so yes, in this way open source is better (a little promo: ReactOS 0.3.0 RC1 just came out, so try it out if you want an open source windows :) ). Also when you are complaining about people adressing you questions which are not your responsibility: Microsoft is for us a big opaque thing, it’s not like on the bottom of every page there is a link: if you want to provide feedback, click here! For example you say that we should address our complaints regarding the blog software to the “makers of the software”. But who are they? In every open source forum / blog software you would have found the link to the author at the bottom of the page, but not here. IMHO a more suitable response would be: it’s not my responsability / area of expertise, contact X (with a mail address or a link to the contact page) for it. Remember, you are not just yourself, you represent the entire company, like it or not. (Just in a sidedone, although I’m sure I’ll start a flame war: if open source isn’t better, how is it possible that in 4 years we have built a browser which is more secure, and the IE team still couldn’t plug all the holes and almost every month they come out with a remote code execution attack? I know that rewriting software from scratch is bad, but this is embarrasing) Regarding IE: > Can you open your My Computer folder from your favorite non-IE browser? No, but on the other hand, I don’t see the point of being able to do this, either. That’s what Windows Explorer is for. (No matter how much Microsoft marketing wanted us to believe that the Internet was no different from the local machine 5 years ago, it wasn’t true then, and it’s even less true now. The local machine can be trusted much more than the Internet can be.) (Note that I can do it (or its equivalent, view the root directory) in Konqueror. It’s just that Konqueror isn’t my *favorite* non-IE browser.) > How about an FTP site in icon mode so you can drag/drop files to upload/download? What’s the point of this? It makes it slightly easier for new users, or something? If you have a browser, why would you expect to be able to drag-and-drop to download a file? Wouldn’t users expect to click a file to download it, just like when you’re browsing via HTTP? Except, oh, that’s right, marketing says that the Internet is no different from the local filesystem again. OTOH, most third-party FTP programs have a drag-and-drop mode (and it’s possible that some of them will accept drops from Windows Explorer too, I don’t know). Not sure how many of them are free, though. And when you’re using a different program for FTP, it’s a little more obvious that a different action may be required to download a file, so the interface “cost” of going fully DnD isn’t as high. (Actually, Konqueror may be able to do this also, but I’ve never tried it so I don’t know for sure. Certainly the infrastructure is there; someone would just need to write an FTP kioslave.) > Or view a PDF file inside the browser window instead of using an external plug-in? “External plug-in” like the one from Adobe that’s required for IE too (at least IE6SP1)? No, I can’t view PDFs without that. But with it, I can view them “inside the browser window”. (But I doubt that Adobe has a plug-in for Konqueror. However, KDE does have a PDF viewer (named, appropriately enough, kpdf), and it’s possible that kpdf can be invoked inside the Konqueror window. Never tried that either, though, so I don’t know for sure.) Originally posted by Raymond: [Or view a PDF file inside the browser window instead of using an external plug-in?] That reminds me of an interesting thing my PC does… I have several toolbars set up as a sidebar on the desktop, including a "My Computer" bar which lets me browse my filesystem via drop-down menus. One interesting side-effect of this method is that when I open a PDF from this menu, it opens in IE! I’m guessing this is because the ability to make toolbars out of folders came with IE4, which was the first version of IE to be integrated into Windows, and that somehow they’re treated as being part of the "IE side" of Explorer, which naturally opens PDFs inline? … Will this be bugfixed in Vista? ;) BryanK — Yes, that’s the toolbar. (I was quite gratified when they started making it one of the default toolbars, since I’ve been using this particular feature since Win98.) I’m running XP Pro SP2, and IE is not my default browser either. In fact, I told Windows never to use it – yeah, I know, like that ever made a difference… Specifically, my PC is opening PDFs with the Acrobat plugin in an IE window. If I open then via a normal Explorer window, my application preferences are honoured. Just out of interest: was your "My Computer" toolbar part of the taskbar when you tested, or part of a separate toolbar? If it was part of the taskbar, you might want to give it a go as part of a sidebar and see what happens. You’ll need another (blank or otherwise) toolbar on the same screen edge to make it use the menu function. Oh, you’re right, it was part of the taskbar. But I put it on the right side of the screen (well, I put the Address toolbar on the right side of the screen, then added My Computer to it), and it didn’t open up IE, just Acrobat. I also tried a file located on a mapped network drive (so it’s from a different zone), and that didn’t seem to make any difference. What happens if you open the PDF from the Start->Run box? (Are ShellExecute and ShellExecuteEx working, in other words?) I wish to duplicate this followup in this thread because it follows up to this thread and there’s no way to do newsgroup-style crossposts. > It so happens that the numerical value -1 > for a window handle is suspiciously close to > the value of HWND_BROADCAST: > #define HWND_BROADCAST ((HWND)0xffff) > It so happens that internally, the window > manager supports (HWND)-1 as an alternative > value for HWND_BROADCAST. (I leave you to > speculate why.) We can leave it to MSDN to say why. * Usually, the client broadcasts this message * by calling SendMessage, with –1 as the * first parameter. The reason why -1 still can be used as a substute for 0xFFFF is because, in 16-bit Windows, -1 = 0xFFFF because the message value was 16-bit. It no longer equals 0xFFFF because the message value is increased to 32-bit in Win32. PingBack from PingBack from PingBack from
https://blogs.msdn.microsoft.com/oldnewthing/20060613-05/?p=30893
CC-MAIN-2018-26
refinedweb
6,902
61.87
The msvcr71.dll is needed to run python 2.4 programs. This dll is not part of older windows releases (Windows98). Py2exe copies the file python24.dll to the dist folder, it should also copy msvcr71.dll to the dist folder. Nobody/Anonymous 2005-01-12 Logged In: NO Indeed - as a MS knowledgebase article (;en-us;326922) notes: "...the Msvcr71.dll/Msvcr70.dll is no longer considered a system file, therefore, distribute Msvcr71.dll/Msvcr70.dll with any application that relies on it. Because it is no longer a system component, install it in your applications Program Files directory with other application-specific code..." Thomas Heller 2005-01-12 Logged In: YES user_id=11105 That is good to know - thanks for the research. So, the technical side is solved, although there is also a legal side - you need a license to distribute these dlls. It may be that only MSVC owners have the right to distribute the files. Although the latter is not py2exe's problem ;-) Nobody/Anonymous 2005-06-26 Logged In: NO Maybe there should be a tip on the py2exe and ctype wiki concerning this problem with an example like the following (quick'n dirty hack?): --- snip ---- from distutils.core import setup import py2exe import sys,os # add the msvcrt etc from the python installation pythonDir = os.path.dirname(sys.executable) data_files = [(".", [pythonDir+"/msvcr71.dll", pythonDir+"/msvcp71. dll"])] setup(console=["myProg.py"], data_files=data_files, ) --- snip --- Georg Kster georgk on bnet minus ibb dot de Thomas Heller 2005-06-28 Logged In: YES user_id=11105 Currently I'm thinking that I should take a different approach in the next release: py2exe should assume that each dll in the windows system directory is a system dll which should not be copied, with a few exceptions: pythonXY.dll, msvcr71.dll, pythoncomXY.dll, pywintypesXY.dll.
http://sourceforge.net/p/py2exe/bugs/66/
CC-MAIN-2014-15
refinedweb
305
63.39
In my task list for spe, a large item has been how to tie it into the current code base - you might have seen me reference it as translating data path to guid. To do that, I've had to understand what the current code is doing and the limitations in that code. I've also had to question exactly what it is we want done. The Simple Policy Engine (spe) tells the pNFS MetaData Server (MDS) how to layout the stripes on the data servers (DS) at file creation time. If you think of RAID, a file is striped across disks and we need to know how many disks it is striped across and what is the width of the stripe. Then to determine which disk a particular piece of data is on, we can divide the file offset by the stripe width to get the disk. This is simplistic, but is also the basic concept behind layout creation in pNFS. A huge difference is that we need to tell the client not only the stripe count and width, but the machine addresses of the DSes. It is a little bit more complex than that as each DS might have several data stores associated with it, a data store might be moved to a different DS, etc. We capture that complexity in the globally unique id (guid) assigned to each data store. But conceptually, lets consider just the base case of each DS having only one data store and it is always on that DS. So the NFSv4.1 protocol defines an OPEN operation and a LAYOUTGET operation. It doesn't define how an implementation will determine which data sets are put into the layout. In the current OpenSolaris implementation, these two operations result in the following call chains: "OPEN" -> mds_op_open -> mds_do_opennull -> mds_create_file "LAYOUTGET" -> mds_op_layout_get -> mds_fetch_layout -> mds_get_flo -> fake_spe In my development gate, a call to spe_allocate currently occurs in mds_create_file. The relevant files to look at are: usr/src/uts/common/fs/nfs/nfs41_srv.c and usr/src/uts/common/fs/nfs/nfs41_state.c. Note: I will be quoting routines in the above two files. Over time, those files will change and will not match up what I quote. The interesting stuff in layout creation occurs in mds_fetch_layout: Note that we have starting with nfs41_srv.c. 8320 if (mds_get_flo(cs, &lp) != NFS4_OK) 8321 return (NFS4ERR_LAYOUTUNAVAILABLE); And in mds_get_flo: 8269 mutex_enter(&cs->vp->v_lock); 8270 fp = (rfs4_file_t *)vsd_get(cs->vp, cs->instp->vkey); 8271 mutex_exit(&cs->vp->v_lock); 8272 8273 /* Odd.. no rfs4_file_t for the vnode.. */ 8274 if (fp == NULL) 8275 return (NFS4ERR_LAYOUTUNAVAILABLE); Which basically states that the file must have been created and in memory. These is not a panic for at least the following reasons: 8277 /* do we have a odl already ? */ 8278 if (fp->flp == NULL) { 8279 /* Nope, read from disk */ 8280 if (mds_get_odl(cs->vp, &fp->flp) != NFS4_OK) { 8281 /* 8282 * XXXXX: 8283 * XXXXX: No ODL, so lets go query PE 8284 * XXXXX: 8285 */ 8286 fake_spe(cs->instp, &fp->flp); 8287 8288 if (fp->flp == NULL) 8289 return (NFS4ERR_LAYOUTUNAVAILABLE); 8290 } 8291 } Note that an odl is a on-disk layout. And the statement on 8278 is how I will tie the spe in with this code. During an OPEN, I can simply set fp->flp and bypass this logic. If there is any error, then this field will be NULL and we can grab a simple default layout here. So I'll probably rename fake_spe to be mds_generate_default_flo. So understanding what fake_spe does will help me understand what the real spe will have to do: 8236 int key = 1; ... 8241 *flp = NULL; 8242 8243 rw_enter(&instp->mds_layout_lock, RW_READER); 8244 lp = (mds_layout_t *)rfs4_dbsearch(instp->mds_layout_idx, 8245 (void *)(uintptr_t)key, &create, NULL, RFS4_DBS_VALID); 8246 rw_exit(&instp->mds_layout_lock); 8247 8248 if (lp == NULL) 8249 lp = mds_gen_default_layout(instp, mds_max_lo_devs); 8250 8251 if (lp != NULL) 8252 *flp = lp; The current code only ever has 1 layout in memory. Hence, the key is 1. We'll need to see how that layout is generated. And that occurs in mds_gen_default_layout. Note how simplistic this code is - if for any reason the layout is deleted from the table, it is simply added back in here. Right now, the only reason the layout would be deleted is if a DS reboots (look at ds_exchange in ds_srv.c). This is the code builds up the layout and stuffs it in memory: Note that we have switched into nfs41_state.c. 1046 int mds_default_stripe = 32; 1047 int mds_max_lo_devs = 20; ... 1052 struct mds_gather_args args; 1053 mds_layout_t *lop; 1054 1055 bzero(&args, sizeof (args)); 1056 1057 args.max_devs_needed = MIN(max_devs_needed, 1058 MIN(mds_max_lo_devs, 99)); 1059 1060 rw_enter(&instp->ds_addr_lock, RW_READER); 1061 rfs4_dbe_walk(instp->ds_addr_tab, mds_gather_devs, &args); 1062 rw_exit(&instp->ds_addr_lock); 1063 1064 /* 1065 * if we didn't find any devices then we do no service 1066 */ 1067 if (args.dex == 0) 1068 return (NULL); 1069 1070 args.lo_arg.loid = 1; 1071 args.lo_arg.lo_stripe_unit = mds_default_stripe * 1024; 1072 1073 rw_enter(&instp->mds_layout_lock, RW_WRITER); 1074 lop = (mds_layout_t *)rfs4_dbcreate(instp->mds_layout_idx, 1075 (void *)&args); 1076 rw_exit(&instp->mds_layout_lock); We first walk across the instp->ds_addr_tab and look for effectively 20 entries. Note that max_devs_needed is always 20 for this code and so will be args.max_devs_needed. I think the check on 1067 is incorrect and a result of the current implementation normally being on a community with 1 DS. It should be the case that args.dex is greater than or equal to max_devs_needed. Actually, we need to be passing in how many devices we will have D (the ones assigned to a policy) and how many we need to use S, with S <= D. The args.dex will have to be >= S. Note that on 1070, we assign it the only layout id which will ever be generated. And if we play things right, we could store this layout id back in the policy and avoid regenerating the layout if at all possible. Finally we stuff the newly created layout into the table. So mds_gather_devs does the work of stuffing the layout. It gets called for every entry found in the instp->ds_addr_tab:? You can look for ds_addr_idx over in usr/src/uts/common/fs/nfs/ds_srv.c, but basically, for each data store that a DS registers, one of these is created. The upshot of all this is that if a pNFS community has N data stores, then the layout generated for the current implementation will have a stripe count of N. Note and nfs41_srv.c. Okay, we've generated the layout and start to generate the otw (over the wire) layout: 8332 8333 mds_set_deviceid(lp->dev_id, &otw_flo.nfl_deviceid); 8334 Crap, it is sending the device id across the wire! I'm going to have to rethink my approach. Instead of storing a policy as a device list and picking which devices I want out of that list (i.e., a Round Robin (RR) scheduler), I'm going to have to store each generated set as a new device list. I don't understand the process like I thought I did. Going back to mds_gather_devs, it is not stuffing data stores into a table as I thought. Instead, it is stuffing DS network addesses into a table. What I'm missing is how the ds_addr entries map back to data stores. Okay, this code in mds_gen_default_layout does it: mds_layout_lock, RW_WRITER); 1074 lop = (mds_layout_t *)rfs4_dbcreate(instp->mds_layout_idx, 1075 (void *)&args); 1076 rw_exit(&instp->mds_layout_lock); We have just gotten the device list via the walk over mds_gather_devs. And now we effectively call mds_layout_create on 1074. 1104 ds_addr_t *dp; 1105 struct mds_gather_args *gap = (struct mds_gather_args *)arg; 1106 struct mds_addlo_args *alop = &gap->lo_arg; ... 1119 lp->layout_type = LAYOUT4_NFSV4_1_FILES; 1120 lp->stripe_unit = alop->lo_stripe_unit; 1121 1122 for (i = 0; alop->lo_devs[i] && i < 100; i++) { 1123 lp->devs[i] = alop->lo_devs[i]; 1124 dp = mds_find_ds_addr(instp, alop->lo_devs[i]); 1125 /* lets hope this doesn't occur */ 1126 if (dp == NULL) 1127 return (FALSE); 1128 gap->dev_ptr[i] = dp; 1129 } Okay, alop->lo_devs is the array we built in mds_gather_devs. Yes, yes, that is true. I just figured out where all of my confusion is coming from - the code has struct ds_addr and ds_addr_t. In the xdr code, struct ds_addr is just an address (usr/src/head/rpcsvc/ds_prot.x): 338 /* 339 * ds_addr - 340 * 341 * A structure that is used to specify an address and 342 * its usage. 343 * 344 * addr: 345 * 346 * The specific address on the DS. 347 * 348 * validuse: 349 * 350 * Bitmap associating the netaddr defined in "addr" 351 * to the protocols that are valid for that interface. 352 */ 353 struct ds_addr { 354 struct netaddr4 addr; 355 ds_addruse validuse; 356 }; But in the code I've been looking at, ds_addr_t is a different structure (see usr/src/uts/common/nfs/mds_state.h): 133 /* 134 * ds_addr: 135 * 136 * This list is updated via the control-protocol 137 * message DS_REPORTAVAIL. 138 * 139 * FOR NOW: We scan this list to automatically build the default 140 * layout and the multipath device struct (mds_mpd) 141 */ 142 typedef struct { 143 rfs4_dbe_t *dbe; 144 netaddr4 dev_addr; 145 struct knetconfig *dev_knc; 146 struct netbuf *dev_nb; 147 uint_t dev_flags; 148 ds_owner_t *ds_owner; 149 list_node_t ds_addr_next; 150 } ds_addr_t; This is pure evil because we typically equate foo_t as being typedef struct foo foo_t. As you can see, I've been fighting that in the above analysis. I'm going to file an issue on this naming convention and leave the analysis here. I'll come back to it and rewrite it as if I knew all along that I was using a ds_addr_t and not a struct ds_addr.
http://blogs.sun.com/tdh/entry/understanding_layout_creation_to_understand
crawl-002
refinedweb
1,602
62.58
Fun with encryption and randomness One way to look at how good encryption is to check how much the encrypted data lacks patterns (looks like random noise). An interesting way to check for patterns in a file is to convert it to a picture and look at it. I got the idea for this from spectra. The tool for converting data to images is ImageMagick. Additionally, making a histogram (counting the occurance of every possible byte value) and calculating the entropy is informative too. First we look at a plain text file. I’m using a logfile. Since that is quite big, we isolate a portion using the dd command. > dd if=procmail.log of=data.raw bs=1k count=1k 1024+0 records in 1024+0 records out 1048576 bytes transferred in 0.007041 secs (148924777 bytes/sec) Next we use that data to make a set of two pictures. First, a black and white one (1 bit/pixel) from which we crop a block of 100×100 pixels from the left-top and enlarge it by a factor of three. Second a grayscale picture (8 bits/pixel) that is scaled to 300×300 pixels. > convert -size 2896x2896 mono:data -crop 100x100+0+0 -scale 300x300 data-mono.png > convert -size 1024x1024 -depth 8 gray:data -scale 300x300 data-gray.png The -size argument for creating the monochrome pictures is the nearest integer to the square root of 1048576×8, which is also a multiple of 8. Here is what the pictures looks like: There are clear patterns visible in both images. If you look closely at the left picture, there are vertical white stripes every eight pixels. The text is in ISO-8859 encoding, which is 8-bit but seldom has the eight bit set. (So to see these kinds of patters, it is a good idea to make the picture width a multiple of 8) In the picture on the right, which shows the whole file, there are several things that strike the eye. There is a horizontal band of a distinctly different pattern in the top of the picture. There are several white stripes in the bottom of the picture, and there are dark patterns running from top to bottom. The entropy of the data is 5.2120 bits/byte. This is calculated according to the definition of entropy in information theory: 255 -Σ p(x)×log₂₅₆(p(x)) x=0 Where p(x) is the chance that a byte has the value x. Next is the histogram. To interpret this picture you should know that in the ISO-8859 character set, the newline character has the integer value 10, the space 32 and the most common letter in English (‘e’) 101. This explains some of the high peaks in the histogram. If this data was completely random, and if every value of a byte was equally probable (known as a continuous uniform distribution), all values would occur 1/256×100% = 0.390625% of the time. This is the green line in the histogram. The following python program was used to generate the histograms and calculate entropy. It uses the excellent matplotlib library. #!/usr/bin/env python '''Make a histogram of the bytes in the input files, and calculate their entropy.''' import sys import math import subprocess import matplotlib.pyplot as plt def readdata(name): '''Read the data from a file and count how often each byte value occurs.''' f = open(name, 'rb') data = f.read() f.close() ba = bytearray(data) del data counts = [0]*256 for b in ba: counts[b] += 1 return (counts, float(len(ba))) def entropy(counts, sz): '''Calculate the entropy of the data represented by the counts list''' ent = 0.0 for b in counts: if b == 0: continue p = float(b)/sz ent -= p*math.log(p, 256) return ent*8 def histogram(counts, sz, name): '''Use matplotlib to create a histogram from the data''' xdata = [n for n in xrange(0, 256)] counts = [100*c/sz for c in counts] top = math.ceil(max(counts)*10.0)/10.0 rnd = [1.0/256*100]*256 fig = plt.figure(None, (7, 4), 100) plt.axis([0, 255, 0, top]) plt.xlabel('byte value') plt.ylabel('occurance [%]') plt.plot(xdata, counts, label=name) plt.plot(xdata, rnd, label='continuous uniform') plt.legend(loc=(0.49, 0.15)) plt.savefig('hist-' + name+'.png', bbox_inches='tight') plt.close() if __name__ == '__main__': if len(sys.argv) < 2: sys.exit(1) for fn in sys.argv[1:]: hdata, size = readdata(fn) e = entropy(hdata, size) print "entropy of {} is {:.4f} bits/byte".format(fn, e) histogram(hdata, size, fn) What happens when we encrypt this file? As a test we will be using openssl with Blowfish and AES encryption. For both we will use a long, pseudo-random password, generated like this: > openssl rand -base64 21 HDtWa0yeOfy0r19IXgQeVRQjrN6W > openssl bf -in data -out data.bf -pass pass:HDtWa0yeOfy0r19IXgQeVRQjrN6W Next we will convert it to pictures, exactly as the raw data above. These look quite random to the naked eye, doesn’t it? The same goes for the histogram for the Blowfish encrypted data. The graph looks nicely centered around the horizontal green line that indicates pure randomness with a continuous uniform distribution. It has a calculated entropy of 7.9998 bits/byte. Next we encrypt the same data with AES. Both the 1 bit/pixel and 8 bits/pixel images look pretty random again, without obvious patterns. The histogram for the AES encrypted data looks almost the same as that of the Blowfish encrypted data. The entropy in this case is also 7.9998 bits/byte. Two different and relatively modern encryption algorithms, used with a strong and salted key produce output that looks quite random. So, for comparison we look at the output of FreeBSD’s pseudo-random number generator. This uses the Yarrow algorithm and gathers entropy from lan traffic and hardware interrupts: > dd if=/dev/random of=random bs=1k count=1k 1024+0 records in 1024+0 records out 1048576 bytes transferred in 0.025319 secs (41414817 bytes/sec) > convert -size 2896x2896 mono:random -crop 100x100+0+0 -scale 300x300 random-mono.png > convert -size 1024x1024 -depth 8 gray:random -scale 300x300 random-gray.png The histogram for the pseudo-random data: Again the calculated entropy is 7.9998 bits/byte. Both the encrypted and random data look pretty much the same in their grayscale images and their histograms. So it looks like algorithms like Blowfish and AES are pretty good. There are no discernable pattern in the data, and the histograms are pretty close to a continuous uniform distribution. This illustrates that modern ciphers can be very resistent to known ciphertext attacks if: - keys are long enough (and use a salt) - algorithms were carefully researched (preferably in the open) - the operator doesn’t make mistakes. - procedures aren’t flawed
http://rsmith.home.xs4all.nl/howto/fun-with-encryption-and-randomness.html
CC-MAIN-2017-17
refinedweb
1,148
65.93
How on the screen, and there are two ways to do so: puts() and printf(). puts() Puts probably stands for put string, where a string is a bit of text you put to the screen. Regardless, here's how it works: puts("Greetings, human!"); The text to display — the string — is enclosed in the function's parentheses. Furthermore, it's enclosed in double quotes, which is how you officially create text inside the C language, and how the compiler tells the difference between text and programming statements. Finally, the statement ends in a semicolon. Here's how puts() might fit into some simple source code: int main() { puts("Greetings, human!"); return(0); } The puts() function works inside the main() function. It's run first, displaying the text Greetings, human! on the screen. Then the return(0); statement is run next, which quits the program and returns control to the operating system. printf() Another C language function that displays text on the screen is printf(), which is far more powerful than puts() and is used more often. While the puts() function merely displays text on the screen, the printf() function displays formatted text. This gives you more control over the output. Try the following source code: #include <stdio.h> int main() { printf("Sorry, can't talk now."); printf("I'm busy!"); return(0); } Type this code into your editor and save it to disk as HELLO.C. Then compile it and run it. Sorry, can't talk now.I'm busy! You probably assumed that by putting two printf() statements on separate lines, two different lines of text would be displayed. Wrong! The puts() function automatically appends a newline character at the end of any text it displays; the printf() function does not. Instead, you must manually insert the newline character (\n) into your text. To "fix" the line breaks in the preceding HELLO.C file, change line 5 as follows: printf("Sorry, can't talk now.\n"); The escape sequence \n is added after the period. It's before the final quotation marks because the newline character needs to be part of the string that's displayed. So save the change, recompile HELLO.C, and run it. Now the output is formatted to your liking: Sorry, can't talk now. I'm busy!
http://www.dummies.com/how-to/content/how-to-display-text-onscreen-in-c-with-puts-and-pr.navId-323181.html
CC-MAIN-2015-27
refinedweb
383
76.01
Declaring Member Variables Member variables in a class are also called fields or instance variables. The member variables are used to keep the state of the created object. Member variable are declared in the same way as normal variables, the same rules and conventions apply. Field declaration are composed of three components, in order: - Zero, or more modifiers, such as publicor privateaccess modifier. - The field's type - The field's name Example Here is a sample Message class from a chat application. The member variables are id, author, createdAt, channelId, and text. Notice we had to import the Date class. import java.util.Date; public class Message { String id; String author; Date createdAt; String channelId; String text; } Here is another example of how we could model a User class in our chat application. public class User { String id; String email; String nickname; String profilePictureUrl; } And a model Room class for the chat application could be represented as follows: public class Room { String channelID; String[] owners; String[] members; String title; } Notice in the Room class, owners is an array of Strings. The members field is also an array of Strings. Controlling Access to Member Variables Access level modifiers determine whether other classes can use use a particular field or method of a class. Class Fields Access Modifiers Example Access Modifiers public modifier public class User { public String profilePictureUrl; } The profilePictureUrl field variable will be accessible from within a class, package, subclass and the world. protected modifier public class User { protected String profilePictureUrl; } With the protected keyword, the profilePictureUrl field is accesible from the class, package, subclass but not the world. default, no modifier public class User { String profilePictureUrl; } With no modifier, the field is a assigned a default modifier. The field is accesbile from the class and package, but not from the subclass and the world. private modifier public class User { private String profilePictureUrl; } The field is only accesbile from within the class and not accesible anywhere else.
https://java-book.peruzal.com/class/declaring_member_variables.html
CC-MAIN-2018-22
refinedweb
327
50.97
Directory Node structure. More... #include <fiff_dir_node.h> Directory Node structure. Replaces _fiffDirNode struct fiffDirNodeRec,*fiffDirNode Definition at line 75 of file fiff_dir_node.h. Const shared pointer type for FiffDirNode. Definition at line 78 of file fiff_dir_node.h. Shared pointer type for FiffDirNode. Definition at line 77 of file fiff_dir_node.h. Constructors the directory tree structure. Definition at line 58 of file fiff_dir_node.cpp. Copy constructor. Definition at line 67 of file fiff_dir_node.cpp. Destroys the fiffDirTree. Definition at line 80 of file fiff_dir_node.cpp. Copies directory subtrees from fidin to fidout Definition at line 90 of file fiff_dir_node.cpp. Find nodes of the given kind from a directory tree structure Definition at line 176 of file fiff_dir_node.cpp. Try to explain... Refactored: fiff_explain (fiff_explain.c) Definition at line 289 of file fiff_dir_node.cpp. Try to explain a block... Refactored: fiff_explain_block (fiff_explain.c) Definition at line 276 of file fiff_dir_node.c. Definition at line 281 of file fiff_dir_node. Get textual explanation of a tag Refactored: fiff_get_tag_explanation (fiff_explain.c) Definition at line 303 of file fiff_dir_node.cpp. Checks whether a FiffDirNode has a specific kind Definition at line 219 of file fiff_dir_node.cpp. Definition of the has_tag function in fiff_read_named_matrix.m Definition at line 209 of file fiff_dir_node.cpp. Returns true if directory tree structure contains no data. Definition at line 121 of file fiff_dir_node.h. Returns the number of child nodes Definition at line 322 of file fiff_dir_node.cpp. Returns the number of entries in this node Definition at line 315 of file fiff_dir_node.cpp. Prints elements of a tree. Refactored: print_tree (fiff_dir_tree.c) Definition at line 234 of file fiff_dir_node.cpp. Child nodes. Definition at line 255 of file fiff_dir_node.h. Directory of tags in this node. Definition at line 248 of file fiff_dir_node.h. Directory of tags within this node subtrees as well as FIFF_BLOCK_START and FIFF_BLOCK_END Definition at line 250 of file fiff_dir_node.h. Id of this block if any. Definition at line 247 of file fiff_dir_node.h. Number of entries in the directory tree node. Definition at line 252 of file fiff_dir_node.h. Parent node. Definition at line 253 of file fiff_dir_node.h. Newly added to stay consistent with MATLAB implementation. Definition at line 254 of file fiff_dir_node.h. Block type for this directory. Definition at line 246 of file fiff_dir_node.h.
https://mne-cpp.github.io/doxygen-api/a02451.html
CC-MAIN-2022-27
refinedweb
385
55.2
Python library to transform objects into positional flat string lines Project description pyflat The pyflat package aims to provide easy serialization of Python objects into positional fixed-width strings. These kind of structures are useful when integrating with old system which depends on positional flat files. Quick Start Instalation To install Pyflat, use pip or pipenv: $ pip install pyflat Example Usage from pyflat import Base, Field, RIGHT class User(Base): name = Field(size=20) income = Field(size=10, sep='0', justify=RIGHT) user = User() user.name = 'John Doe' user.income = 4500.35 user.dump() # => John Doe 0000450035 Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pyflat/
CC-MAIN-2021-10
refinedweb
120
61.87
Just another simple developer's thoughts! Assume for a moment that you need to monitor a specific folder and its subfolders for incoming files such as a FTP transfer and you wanted to accomplish such a task without an type of action from the system administrator. To accomplish this I will employ the FileSystemWatcher Class which is located in the System.IO NameSpace. First off open Visual Studio 2005 and create a new solution. Next add a new Class Library project to this solution. Next add a new item to this project and select Windows Service. Provide a meaningful name, in this case I am using FileMonitor. Once this service is created click “click here to switch to code view”. Add the following using statement: using System.IO; Just to confirm the service actually works we will be writing entries to the system event log. Now we need to declare the EventLog object and FileSystemWatcher object that we will consume. private EventLog _eventLog = null;private FileSystemWatcher _fileSystemWatcher = null; Now we are ready to establish the EventLog. What we are going to do is wrap this logic in a method and we will in turn call this method from the OnStart event of our Windows Service. private void InitializEventLog(){ string _sourceName = "MyFsoWatcher"; string _logName = "Application"; if (!EventLog.SourceExists(_sourceName)) { EventLog.CreateEventSource(_sourceName, _logName); } _eventLog = new EventLog(); //Create Event Log _eventLog.Log = _logName; //Assign Event Log Name _eventLog.Source = _sourceName; //Assign Event Source Name} The next step is to create the logic to define the FileSystemWatcher and how we are going to use. Once again I will wrap this logic in a method and initialize it from the OnStart event. private void IntializeFileSystemWatcher(){ string _fileStore = "c:\temp"; string _fileType = "*.doc"; try { _fileSystemWatcher = new FileSystemWatcher(_fileStore, _fileType); } catch (Exception ex) { _eventLog.WriteEntry("Problem in IntializeFileSystemWatcher: " + ex.Message.ToString()); } _fileSystemWatcher.EnableRaisingEvents = true; // Begin watching. _fileSystemWatcher.IncludeSubdirectories = true; // Monitor Sub Folders // Add event handlers for doc files and deletion of doc files. _fileSystemWatcher.Created += new FileSystemEventHandler(OnDocFileCreated); _fileSystemWatcher.Deleted += new FileSystemEventHandler(OnDocFileDeleted);} If you recall I referenced the OnStart event a couple of times. To initialize the event log and file monitor the OnStart Method looks like: protected override void OnStart(string[] args){ InitializEventLog(); //Initialize Event Log IntializeFileSystemWatcher(); //Initialize The FileSystemWatcher} At this point your project should compile and we have established the event log and the FileSystemWatcher will monitor for files that are of type “doc” in the temp folder on the C: drive. Now it is time to create a installer for this service. Go to the design view of FileMonitor.cs and right click and select “Add Installer”. At this point you will have created ProjectInstaller.cs and you should be looking at the following window: Bring up the properties of serviceProcessInstaller1 and change the account type to LocalSystem. Now bring up the properties of serviceInstaller1 and change the StartType to “Automatic”. Also provide a description and display name. For real world use you would want to modify these service names to provide a unique meaningful name. The last thing to accomplish is to go back to the code view of FileMonitor.cs and wire up the service. static void Main(){ System.ServiceProcess.ServiceBase[] ServicesToRun; ServicesToRun = new System.ServiceProcess.ServiceBase[] { new FileMonitor() }; System.ServiceProcess.ServiceBase.Run(ServicesToRun);} At this point your service is complete and ready for installation. To install this service open a Visual Studio Command Prompt and change to the folder where your compiled executable resides and execute the following: installutil -u MyFileSystemWatcher.exe If all has gone as planned you may now open you services control panel and see this newly installed service. Go ahead and start the service and place a word document in the c:\temp folder which this service is monitoring and view the results in your event log. Since there is nothing worse the reinventing the wheel I turned to GotDotNet and looked at the user samples for a tool to allow me to edit a configuration file and to stop and start this service. I found "How To: Write, Modify, Remove, Read Application Settings From Configuration File" by Pankaj Banga which is a Windows Form that provvided the ability to edit a configuration but it did not include the ability to stop and start a service which was easily added. All I did was add two new buttons to the Windows form with the title of "Stop Service" and "Start Service". Here is the code for thos who may be interested. Private Sub btnStopService_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnStopService.Click Dim controller As New ServiceController controller.MachineName = "." controller.ServiceName = "FSWService" Dim status As String = controller.Status.ToString ' Stop the service controller.Stop()End Sub Private Sub BtnStartService_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles BtnStartService.Click Dim controller As New ServiceController controller.MachineName = "." controller.ServiceName = "FSWService" Dim status As String = controller.Status.ToString ' Start the service controller.Start()End Sub Here is a screenshot of the form itself:
http://aspadvice.com/blogs/sswafford/archive/2006/10/14/Monitor-a-specified-file-location-for-changes.aspx
crawl-002
refinedweb
835
58.28
mbrlen - get number of bytes in a character (restartable) Synopsis Description Return Value Errors Examples Application Usage Rationale Future Directions See Also #include <wchar.h> size_t mbrlen(const char *restrict s, size_t n, mbstate_t *restrict ps); If s is not a null pointer, mbrlen() shall determine the number of bytes constituting the character pointed to by s. It shall be equivalent to: mbstate_t internal; mbrtowc(NULL, s, n, ps != NULL ? ps : &internal); If ps is a null pointer, the mbrlen()len(). The behavior of this function is affected by the LC_CTYPE category of the current locale. The mbrlen() function shall return the first of the following that applies: The mbrlen() function may fail if:The following sections are informative. None. None. None. None. mbsinit() , mbrtowc() , .
http://www.squarebox.co.uk/cgi-squarebox/manServer/usr/share/man/man3p/mbrlen.3p
crawl-003
refinedweb
125
51.18
Archived - Game Development with Unity* and Intel® RealSense™ Camera By Biswanger, Gregor, published on May 28, 2015 The Intel® RealSense™ SDK has been discontinued. No ongoing support or updates will be available. Interactive games all have some functionality that game developers must implement: game objects should adhere to the laws of physics and collide with other game objects; a game object should trigger events, such as playing a sound and counting the score; and it should respond to user inputs like the joystick, mouse, and keyboard. Oftentimes, developers must re-implement this functionality for every target platform, which can be time-consuming. To make life easier, developers can choose a game engine that includes functions to perform common tasks, freeing them to focus on the more creative aspects of their game. The cross-platform game engine Unity* 3D from Unity Technologies is an attractive solution. Its targeted platforms are computers, game consoles, mobile devices, and web browsers. Unity 3D also supports different programming languages like C++, C#, Unity Script (similar to JavaScript*), and Boo. This article is appropriate for both beginner and experts. For those who have never worked with Unity before, there is a short introduction in the first part of the article. Following that, I demonstrate how to use hand tracking with the Intel® RealSense™ SDK to develop a little game in C# with the Intel RealSense Camera. To participate you`ll need: Visual Studio and Unity installed on your PC, the Intel RealSense SDK, and the Intel RealSense camera. Download, Install, and Configure Visual Studio* You can download Unity for free. Unity runs on the Microsoft Windows* OS and Apple Mac* OS X*. After Unity is installed, you should complete the free registration process. The code shown in this article is modelled on the C# programming language. The default cross-platform IDE Unity provides is MonoDevelop. If you are a .NET developer, you’ll be happy to know that Unity and Microsoft Visual Studio work perfectly together. The first step is to connect Unity with Visual Studio. To do this, you have to select Edit -> Preferences… from the drop-down menu. A Unity Preferences dialog will open. Choose External Tools on the left side (Figure 1). For the external script editor click on MonoDevelop within the ComboBox and find the Visual Studio .exe file using the Browse… button. Install Visual Studio using the default settings; the location of the program should be: C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\devenv.exe. These are all necessary steps to successfully connect. Figure 1. Connect Visual Studio* with Unity* The Environment Now is a good time to configure certain features of the Unity environment. The main screen consists of different windows. To find your ideal arrangement either choose the settings manually or use the default settings. Let's choose the popular setting Tall from the Layout drop-down menu in upper right. Figure 2. Choose the position of the window The Scene Window and the Game Window The first big scene window (Figure 3) is used for positioning and to lay out your game world. The standard for visualizing content is in 3D. Game objects can be positioned easily via drag & drop. You can see the game tab directly above the main screen. The game tab displays a preview of the rendered game, as it will be shown on the desired platform. Figure 3. The scene window Figure 4. The game window The Hierarchy Window The hierarchy window (Figure 5) contains the relevant content objects, for example, game objects, camera views, light objects, UI elements, and audio. New content objects are created in the menu under "GameObject" or directly in the window with "Create". Figure 5. The hierarchy window The Inspector Window When a content object in the hierarchy window is selected, the inspector window displays, showing its properties (Figure 6). It’s possible to extend functions by adding components that are available on the 'add component' button. A possible component would be a rigid body for example, which gives a game object its gravity. Figure 6. The inspector window The Project Window Project files such as game scenes, GameObjects, audio, images, and script files are structured in the project window (Figure 7). When selecting a file, the inspector displays the appropriate properties and preview information. Applied script files can be opened directly with their own development environment by double-clicking the file name. Figure 7. The project window Developing a simple game In the early 1970s Atari founder Nolan Bushnell published the successful game named “Pong.” The principle of the game is simple and similar to that of table tennis: a ball moves back and forth on the screen. Each player controls a vertical paddle. If a player fails to return the ball, the opponent receives a point. The next steps show how to develop a Ping Pong game in Unity. With File -> New Project create a new project named PingPongUnity. Figure 8. Create a new project. The Paddle First, to add the paddles, click on Create -> 3D Object -> Cube in the hierarchy window. To change the properties for the shape and position of the paddles, you have to select the generated cube and change it in the inspector window. We change the name to Paddle1 and customize the shape in the transform section. The necessary scale-values are: X: 0.5, Y:3, Z:1. Set the Position to: X: -9, Y: 1, Z: 0. The first paddle is ready now. Right click on the paddle in the hierarchy window and then click Duplicate in the context menu. This creates an identical copy of the paddle. Now you only have to rename the copy to Paddle2. Then change the X-value of Position to 9. For the paddles to have the correct physical behavior, we need to add a RigidBody component by clicking the “Add Components” button for each paddle. This allows us to move the paddles within the code, allowing the paddles to collide with the ball. If the ball collides with a paddle now, it would be hurled out of the field. That is not the behavior we want. To prevent that, we need to set the constraints property of each added RigidBody. The paddles are just allowed to move up and down, so we need to select x, y and z at the freeze rotation and only x at the freeze position. Figure 9. The generated paddles The Walls For ping pong we need two walls, one above and one below, so that the ball bounces off them and remains in the area. For this, we use the cube object again. Click the Create -> 3D Object -> Cube in the hierarchy window again. The name will be WallBelow. The shape of the wall is now slightly longer and gets the scale values: X: 23, Y: 1, Z: 1.The position gets: X: 0, Y: -5, Z: 0. To generate the second wall you only have to duplicate the current wall again and rename it to WallAbove. Now just change the Y-Position to 7. Figure 10. The generated walls The ball The last and most important object of the game is, of course, the ball. Unity already offers one in the desired shape. Just click Create -> 3D Object -> Sphere in the hierarchy window. Change the name to Ball and set the Position to X: 0, Y: 1, Z: 0. To add the appropriate physics, you need to add the rigid body component using "Add Components" and remove the property selection "Use gravity”. Figure 11. The generated ball Change the camera perspective and let there be light The scene display and the game window are still not ideal for our game. We need to change the camera settings. In the hierarchy window, select the standard camera Main Camera. Then change the current 3D view to a 2D view. This is done in the camera Projection properties. Change from Perspective (3D) to Orthographic (2D). So we get a perspective from the front. Also change the new Size property to 10. If the representation of the scene window is still not ideal, you can rotate the view by clicking the coordinate’s axis. Figure 12. The rotation of the view will be enabled by clicking the coordinate’s axis. Now, the camera is equipped with an ideal setting. The game objects themselves are still quite dark. To add some light, switch to the hierarchy window and add the light by clicking on Create -> Light -> Directional Light. You can see the changes in the game window now. Figure 13. The game in the game window Let’s move the ball The field is ready, but it still needs some action. If you run the game, the ball falls to the ground and that’s it. What we need are a few lines of C# code to bring the ball in motion. To do this, we create a C# script file named “Ball” by clicking the Create button in the project window. Then we create a binding between the script file and the game object. This is done by dragging & dropping the file to the game object in the hierarchy or scene window. If you select the ball object, the script properties are shown in the inspector window. Double-clicking the C# file automatically starts the development environment you set up, either MonoDevelop or Microsoft Visual Studio. The generated C# class begins with a standard structure. This includes the MonoBehavior base class, which is an interface to the Unity API. The “Start” method is used for a one-time initialization and preparation. Game frameworks are working with several pictures per second, just like when you’re watching TV or a flip-book. A single picture is called a frame. The “Update” method is called every time before a frame is drawn. This repetition is what we call a game loop, which means that other game objects can act irrespective of the player actions. In our game we want to set the direction in which the ball may fall, once at the start of the game. The “Start” method seems to be perfect for this. You can find the required code in Listing 1. using UnityEngine; public class Ball : MonoBehaviour { private float _x; private float _y; // Use this for initialization void Start () { _x = GetRandomPositionValue(); _y = GetRandomPositionValue(); GetComponent<Rigidbody>().velocity = new Vector3(Random.Range(5,10) * _x, Random.Range(5,10) * _y, 0); Debug.Log("x: " + _x); Debug.Log("y: " + _y); } private float GetRandomPositionValue() { if (Random.Range(0, 2) == 0) { return -1; } return 1; } // Update is called once per frame void Update () { } } Listing 1. Let the ball fall in one direction with C# The real magic of the game object is hidden behind the components section. There is one component for every possible game logic. The C# code in Listing 1 accesses the Rigidbody component of the current game object by using the GetComponent method. Rigidbody expands the game object with the gravity functionality. When setting the velocity property, a new Vector3 instance is assigned. This defines a random direction of the current gravity. Testing the game again shows that the ball randomly falls in different directions. But if the ball collides against a wall, it slides up off along the screen, which is not how we want it to behave. It should be more like a rubber ball that bounces off the walls. For this behavior, the Physic Material is suitable. We create a Physic Material by clicking the Create button in the project window. We change the name to Bouncy. There are some adjustments to the properties required for the desired bounce behavior. In the inspector window, the properties Dynamic Friction and Static Friction are given the value 0. The Bounciness is set to 1. The Friction Combine requires the minimum value, and Bounce Combine, the maximum. Figure 14. Adding and setting up the Physic Material For all game objects to use our generated Physic Material, you must set the general physic settings. To do this click on the menu Edit -> Project Settings -> Physics. The Physic Material Bouncy is dragged & dropped in the Default material property. The property Bounce Threshold needs to be set to 0.1. If you start the game now, the ball behaves like a rubber ball. Figure 15. Set “Bouncy“ within the PhysicsManager Control the paddle with your hand The game is close to being finished. The game objects are where they belong and the ball is in motion. The only thing that’s missing now is the logic to control the paddles. For this, there’s nothing better than using an innovative input device, namely a 3D camera, which became popular on the Microsoft Xbox* with Kinect*. Some of the latest tablets and Ultrabook™ devices are being equipped with 3D cameras. If you’re don’t already own one, these devices are definitely affordable. For development you’ll need to download the free Intel RealSense SDK. After the installation you can find some Unity samples in the Intel RealSense SDK folder. Figure 16 – The Intel RealSense SDK samples Unity 3D supports third-party plug-ins, and one has been created for the Intel RealSense SDK. To include the Intel RealSense SDK it’s necessary to add the SDK DLL files to the project. In Unity select Assets -> Import package -> custom package.... Then choose the RSUnityToolkit.unitypackage file, which is under C:\Program files (x 86) \Intel\RSSDK\framework\Unity. A dialog window like the one shown in Figure 17 will then open. All of the files are selected by default. Click None to clear the selections and select only the plug-ins directory. If you’re developing on a 64-bit computer, you still need to replace the imported DLL files manually. You’ll find them in the following directory: C:\Program Files (x86)\Intel\RSSDK\bin\x64 Figure 17 – Integrate the Intel RealSense SDK into Unity* by using "import package" After the integration, it’s time to start the development of the paddle logic. We need another C# file, which can be generated by clicking the Create button in the project window. The name of the file is Paddle. Only one general instance needs to be generated during the game to respond to both paddles. This is very important because the Intel RealSense camera can only connect with one instance, not multiple. For this reason, we’ll connect the new file by dragging & dropping it to the "main camera"game object in the hierarchy window. Next double-click the Paddles.cs file. The paddle game objects are required, even though the file is connected with the main camera. The connection between multiple game objects in Unity is simply working via the properties. In C# you need to declare the additional properties using “get” and “set”, which is not necessary in Unity. All we need is to create one public variable of type GameObject for each paddle, as shown in Listing 2. The game objects can now be dragged & dropped to the created paddle properties to connect them. public class Paddles : MonoBehaviour { public GameObject Paddle1; public GameObject Paddle2; .. Listing 2 – Self written properties Figure 18. Declare game objects for the Paddles.cs file with Unity* To create a permanent connection with the camera, the Intel RealSense SDK offers a class named PXCMSenseManager. The start function will only be called once from the Unity engine, so it’s ideal for the general preparation of an instance. That’s why we initialize the PXCMSenseManager class within the start function. For that purpose, we’ll give it a QueryHand module first, which allows us to read hand gestures. To recognize the face and voice, a separate module is needed. The instance to the camera is declared outside the start function as _pxcmSenseManager field. The code is shown in Listing 3. public class Paddles : MonoBehaviour { public GameObject Paddle1; public GameObject Paddle2; private PXCMSenseManager _pxcmSenseManager; private PXCMHandModule _pxcmHandModule; // Use this for initialization private void Start() { _pxcmSenseManager = PXCMSenseManager.CreateInstance(); if (_pxcmSenseManager == null) { Debug.LogError("SenseManager Initialization Failed"); } else { pxcmStatus pxcmResult = _pxcmSenseManager.EnableHand(); if (pxcmResult != pxcmStatus.PXCM_STATUS_NO_ERROR) { Debug.LogError("EnableHand: " + pxcmResult); } else { _pxcmHandModule = _pxcmSenseManager.QueryHand(); _pxcmSenseManager.Init(); PXCMHandConfiguration configuration = _pxcmHandModule.CreateActiveConfiguration(); configuration.EnableAllGestures(); configuration.ApplyChanges(); configuration.Dispose(); } } } ... Listing 3. Connect permanently with the Intel RealSense camera A direct and up-to-date query of the camera data takes place in the update function (game loop) so we can determine if the camera sees a hand or not. The left hand is used for the left paddle and the right hand for the right paddle. Therefore, you need to assign the computed value to the respective paddle. Because the logic is very similar, we create a private function named MoveBall for it. If the player quits or restarts the game, the direct instance to the camera needs to be disconnected. The OnDisable function, which will be called automatically by the Unity engine, is perfect in this case. // Update is called once per frame private void Update() { if (_pxcmSenseManager == null) { return; } _pxcmSenseManager.AcquireFrame(false, 0); _pxcmHandModule = _pxcmSenseManager.QueryHand(); PXCMHandData handData = _pxcmHandModule.CreateOutput(); handData.Update(); MoveBall(handData, PXCMHandData.AccessOrderType.ACCESS_ORDER_LEFT_HANDS, Paddle1); MoveBall(handData, PXCMHandData.AccessOrderType.ACCESS_ORDER_RIGHT_HANDS, Paddle2); _pxcmSenseManager.ReleaseFrame(); } private void MoveBall(PXCMHandData handData, PXCMHandData.AccessOrderType accessOrderType, GameObject gameObject) { PXCMHandData.IHand pxcmHandData; if (handData.QueryHandData(accessOrderType, 0, out pxcmHandData) == pxcmStatus.PXCM_STATUS_NO_ERROR) { PXCMHandData.JointData jointData; if (pxcmHandData.QueryTrackedJoint(PXCMHandData.JointType.JOINT_CENTER, out jointData) == pxcmStatus.PXCM_STATUS_NO_ERROR) { gameObject.GetComponent<Rigidbody>().velocity = new Vector3(-9, jointData.positionWorld.y*100f, 0); } } } private void OnDisable() { _pxcmHandModule.Dispose(); _pxcmSenseManager.Dispose(); } } Listing 4. Move a paddle with your hand Restart the game The game objects are generated, the logic is all coded, and now it’s time to test the game. If a hand is recognized in a 1 to 5 meter range from the camera, the paddles will move. If the ball hits one of the paddles, it will bounce off and move in the opposite direction. If the ball leaves the field without touching a paddle, there’s nothing the desperate player can do. In that case, the game should restart automatically if the ball leaves the game field. Only a few lines of code are necessary to implement that logic in the Ball.cs file. The Update function should check if the ball is still in sight. If the ball is out of sight, the Application.LoadLevel function restarts the game, as shown in Listing 5. private bool _loaded; // Update is called once per frame void Update () { if (GetComponent<Renderer>().isVisible) { _loaded = true; } if (!GetComponent<Renderer>().isVisible && _loaded) { Application.LoadLevel(Application.loadedLevel); } } Listing 5. Ball.cs: Restart the game, if ball is out of sight Publish the game The first version of the game is ready and of course we want to share it with others. The menu item File | Build Settings (Ctrg + Shift + B) opens the Build dialog. You can choose which platforms you want your game to run from a huge selection in the Platform box. We will choose Windows and set x86_64 as architecture. Simply click Build to start the compiler and generate an .exe file for Windows. To play the game, you need to have an Intel RealSense camera including the pre-installed driver. Figure 19. Create an .exe file to publish the game Now it’s your turn I’ve shown you how to create a fun game, but it could still use some perfecting. So show me what you’ve got. Build in a high score, sounds, and some cool special effects. Upload a video on YouTube*, and send me an e-mail so I can see what you’ve created. I’m excited about your ideas. And now, have a lot of fun with your first Unity game and the Intel RealSense camera. Figure 20. Playing the game Figure 21. Game UI About the Author Gregor Biswanger (Microsoft MVP, Intel® Black Belt Software Developer, and Intel® Software Innovator) is the founder of CleverSocial.de and a freelance lecturer, consultant, trainer, author, and speaker. He advises large and medium-sized companies, organizations, and agencies on software architecture, agile processes, XAML, Web- and hybrid-app development, cloud solutions, content marketing, and social media marketing. He has published several video trainings about these topics at video2brain and Microsoft. He’s also a freelance author and writes online for heise Developer as well as articles for trade magazines. He has penned the book “cross-platform development” from entwickler.press. You can find Gregor often on the road attending or speaking at international conferences. In addition Intel asked him to be a technology consultant for the Intel® Developer Zone. He’s the leader of the INdotNET (Ingolstaedter .NET Developers Group).You can find his contact information here: 1 commentTop Pooja Baraskar said on May 30,2015 Very helpful, thanks for sharing :) Add a CommentSign in Have a technical question? Visit our forums. Have site or software product issues? Contact support.
https://software.intel.com/en-us/articles/game-development-with-unity-and-intel-realsense-3d-camera
CC-MAIN-2019-43
refinedweb
3,485
57.87
.net programmatically in - Create Excel (.XLS and .XLSX) file from C# How can I create an Excel Spreadsheet with C# without requiring Excel to be installed on the machine that's running the code? And what about using Open XML SDK 2.0 for Microsoft Office? A few benefits: - Doesn't require Office installed - Made by Microsoft = decent MSDN documentation - Just one .Net dll to use in project - SDK comes with many tools like diff, validator, etc Links: - Download SDK - Main MSDN Landing - "How Do I..." starter page - blogs.MSDN brian_jones announcing SDK - blogs.MSDN brian_jones describing SDK handling large files without crashing (unlike DOM method) The commercial solution, SpreadsheetGear for .NET will do it. You can see live ASP.NET (C# and VB) samples here and download an evaluation version here. Disclaimer: I own SpreadsheetGear LLC An extremely lightweight option may be to use HTML tables. Just create head, body, and table tags in a file, and save it as a file with an .xls extension. There are Microsoft specific attributes that you can use to style the output, including formulas. I realize that you may not be coding this in a web application, but here is an example of the composition of an Excel file via an HTML table. This technique could be used if you were coding a console app, desktop app, or service. You actually might want to check out the interop classes. You say no OLE (which this isn't), but the interop classes are very easy to use. You might be impressed if you haven't tried them. Please be warned of Microsoft's stance. Here's a completely free C# library, which lets you export from a DataSet, DataTable: CreateExcelFile.CreateExcelDocument(myDataSet, "C:\\Sample.xlsx"); It doesn't get much simpler than that... And it doesn't even require Excel to be present on your server. Syncfusion Essential XlsIO can do this. It has no dependency on Microsoft office and also has specific support for different platforms. - ASP.NET - ASP.NET MVC - UWP - Xamarin - WPF and Windows Forms - Windows Service and batch based operations Code sample: //Creates a new instance for ExcelEngine. ExcelEngine excelEngine = new ExcelEngine(); //Loads or open an existing workbook through Open method of IWorkbooks IWorkbook workbook = excelEngine.Excel.Workbooks.Open(fileName); //To-Do some manipulation| //To-Do some manipulation //Set the version of the workbook. workbook.Version = ExcelVersion.Excel2013; //Save the workbook in file system as xlsx format workbook.SaveAs(outputFileName); The whole suite of controls is available for free through the community license program if you qualify (less than 1 million USD in revenue). Note: I work for Syncfusion. Just want to add another reference to a third party solution that directly addresses your issue: (Disclaimer: I work for SoftArtisans, the company that makes OfficeWriter). Here's a way to do it with LINQ to XML, complete with sample code: Quickly Import and Export Excel Data with LINQ to XML It's a little complex, since you have to import namespaces and so forth, but it does let you avoid any external dependencies. (Also, of course, it's VB .NET, not C#, but you can always isolate the VB .NET stuff in its own project to use XML Literals, and do everything else in C#.) public class GridViewExportUtil { public static void Export(string fileName, GridView gv) { HttpContext.Current.Response.Clear(); HttpContext.Current.Response.AddHeader( "content-disposition", string.Format("attachment; filename={0}", fileName)); HttpContext.Current.Response.</param>); } } } } Hi this solution is to export your grid view to your excel file it might help you out: - Install the Open XML SDK with the tool. - Create an Excel file using the latest Excel client with desired look. Name it DesiredLook.xlsx. - With the tool open DesiredLook.xlsxand click the Reflect Code button near the top. - The C# code for your file will be generated in the right pane of the tool. Add this to your C# solution and generate files with that desired look. where you can paste the generated code. If you go back one version of the file, it will generate an excel file like this: Have you ever tried sylk? We used to generate excelsheets in classic asp as sylk and right now we're searching for an excelgenerater too. The advantages for sylk are, you can format the cells.
http://code.i-harness.com/en/q/24ddd
CC-MAIN-2019-09
refinedweb
721
58.08
MCP23008-RK (community library) Summary Particle driver for 8-port I2C GPIO Expander MCP23008 Example Build Testing Device OS Version: This table is generated from an automated build. Success only indicates that the code compiled successfully. Library Read Me This content is provided by the library maintainer and has not been validated or approved. MCP23008-RK Particle driver for MCP23008 8-port I2C GPIO expander Pinouts One Side: - 1 SCL (to Photon D1, blue) - 2 SDA (to Photon D0, green) - 3 A2 - 4 A1 - 5 A0 - 6 /RESET - 7 NC - 8 INT (output) - 9 VSS (GND, black) Other Side: - 10 GP0 - 11 GP1 - 12 GP2 - 13 GP3 - 14 GP4 - 15 GP5 - 16 GP6 - 17 GP7 - 18 VDD (3.3 or 5V, red) Note that the address lines are not biased so you must connect them to GND or VDD to set the address! Normally you'd connect all to GND to set address 0. Same for /RESET, though you probably want to connect that to VDD to keep the device out of reset. Here's my test circuit: [image unavailable] Important: Remember the pull-ups on the SDA and SCL lines (4.7K or 10K typically)! While many breakout boards like you'd get from Adafruit or Sparkfun includes the pull-ups on the board, you must add external resistors when using a bare MCP23008. Using the Library This library has an API that looks remarkably like the regular GPIO calls. The full API documentation is available in the docs/index.html file or in the browsable API reference. Initialization Typically you create an global object like this in your source: MCP23008 gpio(Wire, 0); The first parameter is the interface. It's typically Wire (D0 and D1). The second parameter is the address of the device (0-7). This corresponds to the value set on the A0, A1, and A2 pins and allows up to 8 separate MCP23008 devices on a single I2C bus. On the Electron, you can also use Wire1 on C4 and C5: MCP23008 gpio(Wire1, 0); begin void begin(); You must call begin(), typically during setup(), to initialize the Wire interface. pinMode void pinMode(uint16_t pin, PinMode mode); Sets the pin mode of a pin (0-7). Values for mode include: - INPUT (default) - INPUT_PULLUP - OUTPUT Note that it does not support INPUT_PULLDOWN, as the MCP23008 only supports internal pull-ups. Also, they're 100K vs. the 40K (-ish) pull-ups in the STM32F205. digitalWrite void digitalWrite(uint16_t pin, uint8_t value); Sets the value of a pin (0-7) to the specified value. Values are typically: - 0 (or false or LOW) - 1 (or true or HIGH) digitalRead int32_t digitalRead(uint16_t pin); Reads the value of the pin (0-7). This will be HIGH (true, 1) or LOW (false, 0). If used on an output pin, returns the current output state. getPinMode PinMode getPinMode(uint16_t pin); Returns the pin mode of pin (0-7), which will be one of: - INPUT (default) - INPUT_PULLUP - OUTPUT pinAvailable bool pinAvailable(uint16_t pin); Returns true if 0 <= pin <= 7. More Functions The full documentation is available in the docs/index.html file or in the browsable API reference. This documentation is generated automatically from comments in MCP23008-RK.h using the Doxygen tool. Example Programs 02-Really-Simple-Output #include "Particle.h" #include "MCP23008-RK.h" MCP23008 gpio(Wire, 0); void setup() { Serial.begin(9600); gpio.begin(); gpio.pinMode(0, OUTPUT); gpio.digitalWrite(0, HIGH); } void loop() { } It's simple! 01-simple-gpio This example outputs a square wave on pins 0 - 3: - GP0: 1000 ms. period (1 Hz) - GP1: 666 ms. period - GP2: 200 ms. period (5 Hz) - GP3: 20 ms. period (50 Hz) This should result in the following: [image unavailable] You can also connect a jumper from GP7 to one of those pins. It echoes the value on the GP7 input to the blue D7 LED on the Photon, so you can see the different frequencies. 03-interrupts Uses the INT pin and a handler to be notified of changes in a GPIO on the MCP23008 efficiently. 04-interrupts-class Uses the INT pin and has a handler as a C++ class member function. Version History 0.0.4 (2021-02-01) - Added documentation 0.0.3 (2021-01-28) - Adds support for interrupt mode, to be further documented later. - Adds Wire lock() and unlock() for thread safety Browse Library Files
https://docs.particle.io/cards/libraries/m/MCP23008-RK/
CC-MAIN-2021-21
refinedweb
732
65.73
My ten year old has learnt quite a lot of java and wants a few BEGINNER projects. Any ideas? Okay... Get your (son/daughter?) to write a control component. It should have the following buttons; Stop, Rewind (or Previous), Play/Pause, Fast Forward (or Next), Record. It should have a display panel above the buttons to display time. There should be a toggle to display either Time Elasped, or Time Remaining. The reason that I suggested this project is... I plan on doing the same thing on my vacation (in two weeks). I am willing (after my vacation, I'm not going to steal their work) to critique the work. Son. Thanks, but that is far too advanced for him. He knows all of this:- Objects Instance variables Bit of swing File io Threads and networking(well, a little) and more... He DOES NOT KNOW user io. user io is a lot easier to learn/code than File io just let him take a look at the Scanner class and JOptionPane. I know this sounds antiquated, but I learned a lot by writing a text adventure game -- like a Scott Adams text adventure game back in the early 1980s. Something like that would expand his knowledge of user interaction and could be great for learning code re-use. It could be done on the console or graphically or in an applet like at the link above. I know this sounds antiquated why would this sound antiquated? no matter in what century they were born, all Olympic medal sprinters learned to crawl, then walk. every one has to get to know the basics before going 'expert', so it's a good advice that 'll probably stay just that (good and solid advice) as long as people try to start learning java coding. Tic-tac-toe - some Swing, some arrays, some logic... but nothing too hard. I'd personally recommend a file/directory copy utility. This way, he'll get more acquainted with how the CLI clients work and get a taste of implementing a fun utility from scratch. The good thing is that this project can be beefed up if required by adding more options like the way a real copy command works. Bonus points for progress bar etc. I gave him these ideas and here is his response. 7:30 pm : "Yay! Learnt Scanner!" 7:45 pm : Checked out the Scott Adams game 8:00 pm : Scott Adams--Hmm... how could I do that? 8:15 pm : tic-tac-toe:"EUREKA!!!! JLabels..." 8:30 pm : file copy-way too complicated ? JPanel myPanel = new JPanel(); onButtonPressed myPanel = new JPanel(); something like that? seems possible to me, never actually coded it myself, though. ? Yes. But of you explain why, there may be a better approach (eg CardLayout) no, this is what i meant:- import java.io.*; import java.awt.event.*; public class --- implements Serializable, ActionListener { JFrame frame; public static void main(String[] args) { new ---().go(); } public void go() { JPanel panelone = new JPanel(); frame = new JFrame("Double panel swing test"); frame.setContentPane(panelone); JButton button = new JButton("Click me"); button.addActionListener(this); panelone.add(button); //other swing-related code...now for the actionPerformed() } public void actionPerformed(ActionEvent ev) { //all...or...nothing... JPanel panel2 = new JPanel(); frame.setContentPane(panel2);//THIS is what i meant } } Yes, I understood that. You skeleton code is OK. But if you want to switch between multiple sets of content in the same frame, there are Swing classes to handle that for you, that's all. It just depends on where you-re going next with this... 8:00 pm : Scott Adams--Hmm... how could I do that? Rock on! One of the great benefits of something like that is also USER RESPONSE. People evaluate what the program does rather than how it is written. I sometimes get more excitement out of the coolness factor of the code more than what the code produces. Getting feedback from users will serve him well. A simple hangman game should be easy enough :) Find some simple games which can keep him excited But if you want to switch between multiple sets of content in the same frame, there are Swing classes to handle that for you import java.awt.event.*; import javax.swing.JButton; import javax.swing.JFrame; import javax.swing.JPanel; public class Jpaneltest implements ActionListener { JFrame frame; JPanel panelone; public static void main(String[] args) { new Jpaneltest().go(); } public void go() { panelone = new JPanel(); frame = new JFrame("Double panel swing test"); frame.setContentPane(panelone); JButton button = new JButton("Click me"); button.addActionListener(this); panelone.add(button); frame.setVisible(true); frame.setSize(300, 300); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } public void actionPerformed(ActionEvent ev) { JPanel pe = new JPanel(); frame.setContentPane(pe); } } How? When I tested my code, all that happened:- 1. Click the button. 2. Went out of the button. 3. It still looks as if it has just been clicked, nothing happens either. If you want to have multiple Jpanels and use a Button to show one at a time the I think you should use CardLayout can a cardLayout use a JButton instead of a JComboBox to navigate? yes, here's a link for an example Related Articles
http://www.daniweb.com/community-center/threads/412452/any-projects-for-the-java-beginner
CC-MAIN-2013-48
refinedweb
867
67.35
Support » Pololu Jrk USB Motor Controller User’s Guide View document on multiple pages. You can also view this document as a printable PDF. - 1. Overview - - 2. Contacting Pololu - 3. Configuring the Motor Controller - - 4. Using the Serial Interface - - 5. Setting Up Your System - 6. Writing PC Software to Control the Jrk. 1.a. Module Pinout and Components The Pololu jrk USB motor controller can connect to a computer’s USB port via a USB A to mini-B cable (not included). The USB connection is used to configure the motor controller. It can also be used to send commands to the motor controller, get information about the motor controller’s current state, and send and receive TTL serial bytes on the TX and RX lines. Power for the motor must be supplied to the jrk on the VIN and GND lines pictured on the right side of the diagram above. Your power source must be capable of delivering the current your motor will draw. The jrk has reverse power protection on the motor power input lines, so the board will not be damaged if the motor power inputs are accidentally switched. If the VIN supply is not present, the jrk’s microcontroller can be powered directly from USB and perform all of its functions except for driving the motor. For the jrk 21v3, the input voltage should be 5–28 V (the recommended operating voltage is 8–28 V, but the jrk 21v3’s motor driver has derated performance down to 5 V and transient protection to 40 V). The jrk 21v3’s motor driver can supply a continuous 3 A with peaks up to 5 A. For the jrk 12v12, the input voltage should be 6–16 V. The jrk 12v12’s motor driver can supply a continuous 12 A with peaks up to 30 A. The jrk has a linear voltage regulator that derives 5 V from the VIN supply. The 5 V supply is used as the internal logic supply for the jrk and is also available at several pins for powering devices such as external microcontrollers and feedback sensors (such as potentiometers). Because the regulator must dissipate excess power as heat, the available output current is dependent on the input voltage: 50 mA is available for VIN up to 12 V; the available current drops off linearly from 50 mA at 12 V to zero at 30 V. The jrk has three indicator LEDs: - The green USB LED indicates the USB status of the device. When the jrk is not connected to a computer via the USB cable, the green LED will be off. When you connect the jrk to USB, the green LED will start blinking slowly. The blinking continues until the jrk receives a particular message from the computer indicating that the jrk’s USB drivers are installed correctly. After the jrk gets this message, the green LED will be on, but it will flicker briefly when there is USB activity. The configuration utility constantly streams data from the jrk, so when the configuration utility is running and connected to the jrk, the green LED will flicker constantly. - The red error LED indicates an error. If there is an error stopping the motor (besides the Awaiting Command error bit), then the red LED will be on. The red LED is tied to the active-high output ERR, so when there is an error, ERR will be driven high, and otherwise it will be pulled low through the LED. - The yellow output status LED indicates the status of the motor. If the yellow LED is off, then an error (other than the Awaiting Command error bit) is stopping the motor. If the yellow LED is flashing slowly (once per second), then either the motor is off (the Awaiting Command Error bit is set) or the jrk is in speed control mode and the duty cycle is zero. If the yellow LED is on solid, then the motor is on and the motor has reached the desired state. For analog and pulse width feedback modes, this means that the target is within 20 of the scaled feedback. For speed control mode, this means that the duty cycle equals the duty cycle target. If the yellow LED is flashing quickly (16 times per second), then the motor is on and the motor has not reached its desired state. The ERR line is an optional output that is tied to the red error LED described above. It is driven high when the red LED is on, and it is a pulled low through the red LED when the red LED is off. Since the ERR line is never driven low, it is safe to connect the ERR line of multiple jrks together. Please note, however, that doing this will cause the error LEDs of all connected jrks to turn on whenever one jrk experiences an error; the ERR output of the jrk experiencing the error will drive the LEDs of any other jrks it is connected to, even though they are not experiencing error conditions themselves. For more information on the possible error conditions and response options, please see Section 3.f. The TX line transmits non-inverted, TTL (0 – 5 V) serial bytes. These bytes can either be responses to serial commands sent to the jrk, or arbitrary bytes sent from the computer via the USB connection. For more information about the jrk’s serial interface, see Section 4. The RX line is the jrk’s control input. In serial input mode, the RX line is used to receive non-inverted, TTL (0 – 5 V) serial bytes. These bytes can either be serial commands for the jrk, arbitrary bytes to send back to the computer via the USB connection, or both. For more information about the jrk’s serial interface, see Section 4. In analog input mode, RX is the analog input line used to determine the system’s target output. In pulse width input mode, the jrk measures the duration of pulses on the RX line to determine the system’s target output. Please see Section 3.b for more information on control input signals. The FB line is the jrk’s feedback input. In analog feedback mode, the voltage on the FB line is used as a measurement of the output of the system. In frequency feedback mode, the frequency of low-to-high transitions on the FB line is used as a measurement of the output of the system. Please see Section 3.c for more information on feedback signals. The AUX line is an output that is generally high whenever the jrk has power. The line will only go low for two reasons: - If the jrk’s microcontroller goes to sleep (because there is no VIN supply and the device has entered USB suspend mode), the pin is tri-stated and pulled low through a resistor. - If the Detect disconnect with AUX option is enabled for either the feedback or the input, then the jrk will drive AUX low for about 150 μs each PID period to check if the feedback and/or analog inputs are disconnected. The RST pin can be driven low to perform a hard reset of the jrk’s microcontroller, but this should generally not be necessary for typical applications. The line is internally pulled high, so it is safe to leave this pin unconnected. 1.b. Supported Operating Systems The jrk works under Microsoft Windows 10, Windows 8, Windows 7, Windows Vista, Windows XP, Linux, and Mac, but it does work with later versions. 1.c. PID Calculation Overview The jrk is designed to be part of a control system in which the output (usually a motor position or speed) is constantly adjusted to match a specified target value. To achieve this, it constantly measures the state of the system and responds based on the latest information. The information processing performed by the jrk is outlined in the diagram below: In this diagram, each arrow represents a specific number measured or computed by the jrk, and the blue boxes represent the internal computations that each occur once per PID period. The PID period can be set in 1 ms increments and is one of about 50 configurable parameters that affect the behavior of the system. For more information about configuring the jrk, see Section 3. The jrk uses the following measurements to determine the output: - The input is measured as a value from 0 to 4095. In analog voltage input mode, this represents a voltage level of 0 to 5 V. In RC mode, the number is a pulse width in units of 2/3 μs. The input is adjusted according to input scaling parameters to determine the target, also a value from 0 to 4095 (see Section 3.b). - The feedback is measured as a value from 0 to 4095. In analog voltage feedback mode, this represents a voltage level of 0 to 5 V. In digital frequency mode, it is a representation of the output speed (see Section 3.c.) The jrk uses this value to compute the scaled feedback, which is a representation of the output of the entire control system. A scaled feedback of 0 should represent the minimum position of the system, and 4095 should represent the maximum position. - The current through the motor is measured as a number from 0 to 255. A calibration value relates this to an actual current in amps. Every PID cycle, the jrk performs the following computations to determine the behavior of the motor (see Section 3.d for more information): - The error is computed as the difference of scaled feedback and target (error = scaled feedback − target). - An implementation of the PID algorithm is applied to the error. PID stands for the three terms that are added together: proportional (proportional to the error), integral (proportional to the accumulated sum of the error over time), and derivative (proportional to the difference of the error relative to the previous PID period.) The three constants of proportionality are the most important parameters determining the behavior of the control system. The result of the PID algorithm is a number from -600 to +600 called the duty cycle target. - The duty cycle target is reduced according to various configurable limits, including acceleration, current, and maximum duty cycle limits (Section 3.e). The limits are intended to prevent the system from causing damage to itself under most circumstances. The resulting value becomes the duty cycle of the PWM (pulse width modulation) signal applied to the motor. A value of +600 corresponds to 100% duty cycle in the forward direction, a value of -600 corresponds to 100% duty cycle in the reverse direction, and a value of 0 corresponds to 0% duty cycle or off. Various parameters and commands have an effect on the steps described above. For example, feedback may be turned off so that the jrk can become a simple speed controller; in this case the PID calculation is bypassed and the duty cycle target is just equal to the target minus 2048. In this mode, limits applied to the duty cycle continue to provide a useful way of preventing damage to the system. As another example, a command to turn the system off prevents the motors from being driven, but all measurements and calculations continue to occur normally.! 3. Configuring the Motor Controller 3.a. Installing Windows Drivers and the Configuration Utility If you use Windows XP, you will need to have Service Pack 3 installed before installing the drivers for the jrk. See below for details. Before you connect your Pololu jrk USB motor controller to a computer running Microsoft Windows, you must install its drivers: - Download the jrk drivers and configuration software (5MB zip) - Open the ZIP archive and run setup.exe. If the installer fails, you may have to extract all the files to a temporary directory, right click setup.exe, and select “Run as administrator”. The installer will guide you through the steps required to install the Pololu Jrk Configuration Utility, the jrk command-line utility (JrkCmd), and the jrk drivers on your computer. - During the installation, Windows will ask you if you want to install the drivers. Click “Install” (Windows 10, 8, 7, and Vista) or “Continue Anyway” (Windows XP). - After the installation is finished, your start menu will have a shortcut to the Jrk Configuration Utility (in the Pololu folder). This is a Windows application that allows you to change all of the settings of your motor controller, as well as see real-time information about its state. Windows 8, Windows 7, and Windows Vista users: Your computer should now automatically install the necessary drivers when you connect a jrk. No further action from you is required. Windows XP users: Follow steps 5-9 for each new jrk you connect to your computer. - Connect the device to your computer’s USB port. The jrk shows up as three devices in one so your XP computer will detect all three of those new devices and display the “Found New Hardware Wizard” three jrk. Follow steps 6-9 for each wizard. After installing the drivers and plugging the jrk in via USB, if you go to your computer’s Device Manager, you should see three entries for the jrk that look like what is shown below: COM ports After installing the drivers, if you go to your computer’s Device Manager and expand the “Ports (COM & LPT)” list, you should see two COM ports for the jrk: jrk into your computer before installing our drivers for it. In that case, Windows will set up your jrk using the default Windows serial driver (usbser.inf), jrk drivers you already installed, which contain the correct name for the port. If you jrk. configuration software will work even if the serial port drivers are not installed properly. Native USB interface There should be an entry for the jrk in the “Pololu USB Devices” category of the Device Manager. This represents the jrk’s native USB interface, and it is used by our configuration software. 3.b. Input Options The Input tab of the jrk configuration utility contains settings for how the feedback system (consisting of the jrk, a motor and a feedback sensor) is externally controlled and monitored. Most importantly, there are three Input modes: - Serial indicates that the jrk gets its target setting over a serial interface, either a virtual COM port or the TTL-level serial port of the jrk, as explained in detail in Section 4. - Analog voltage is used when an analog voltage source, such as a potentiometer, connected to the RX line is used to set the target. A signal level of 0 V on this line corresponds to an input value of 0, and signal level of 5 V correponds to an input value of 4092. - Pulse width is used when the system is to be controlled by the width of digital pulses, such as those output by a radio-control (RC) receiver, measured on the RX line. In this input mode, the input value is the width of the most recent pulse, in units of 2/3 μs. For example, a pulse width of 1500 μs corresponds to an input value of 2250. This input interface accepts pulses from 400 to 2600 μs at a frequency between 10 and 150 Hz. The jrk will only update the input value if it has received four valid pulses in a row, and it will generate the Input invalid error if it goes more than 120 ms without updating the input value. The voltage of the high pulses must be between 2 and 5 V. Version 1.3 of the firmware for the Jrk 21v3 and the Jrk 12v12 contains a bug fix that improves the reliability of the Pulse width input. The update is recommended for devices with an earlier firmware version number, including all devices shipped before August 25, 2009. See Section 3.h for upgrade information. Input scaling The scaling options in this tab determine how the raw input values map to target values, which determine the output of the system. The parameters Maximum and Minimum should be set to the maximum and minimum possible values of the input device; these will be scaled to the target values specified in the right column. For input devices with a clearly defined neutral position, such as joysticks, parameters Neutral Max and Neutral Min are provided. Any input between Neutral Max and Neutral Min will be scaled to the neutral value specified in the right column. Setting the two neutral values to be different allows for a “dead zone”, which is especially desirable in speed control mode. If the input leaves the range specified by the Absolute Max and Absolute Min parameters, an Input disconnect error will occur. For convenience, the Invert input direction option is provided. Select this option to switch the positive and negative input directions. By default, the scaling is linear, but you can change the Degree parameter to use a higher-degree polynomial function, which gives you better control near the neutral point. Clicking the button labeled “Learn…” allows scaling values to be determined automatically: with the motor off, the program will request that the input be set to its minimum, maximum, and neutral positions, and the resulting values will be recorded. After learning, if the neutral position is not important for your system, you may uncheck “Asymmetric” to automatically center the neutral values between minimum and maximum. Input analog to digital conversion In analog mode, the analog to digital conversion panel lets you specify the number of analog samples to average together each PID cycle, which determines the precision and speed of the analog to digitial RX pin becomes disconnected from the analog voltage input device or shorted to 5 V. This option is intended for use in analog voltage input mode with a potentiometer connected between AUX and ground. When the option is selected, the jrk will periodically drive the AUX pin low, verifying that this results in a 0 V signal at RX. If the line does not respond as expected, the Input disconnect error will occur. Serial interface This panel allows the serial ports of the jrk to be configured, including specifying a fixed baud rate and enabling or disabling a CRC byte for all commands. The Device Number setting is useful when using the jrk with other devices in a daisy-chained configuration, and the Timeout specifies the duration before which a Serial timeout error will occur (a Timeout of 0.00 disables the serial timeout feature). For more details on the serial interface, especially for selecting the appropriate mode for your system, see Section 4.a. Manually set target (serial mode only) This section is provided for debugging and testing systems without using an input device. The target may be specified directly with the scrollbar or numerical input.. 3.d. PID Options The PID tab of the jrk configuration utility controls the central calculation performed by the jrk: The integral is computed as the sum of the error over all PID cycles, and the derivative is the current error minus the previous error. The error itself is the difference of the scaled feedback and the target (error = scaled feedback – target). Each of the PID coefficients is specified as an integer value divided by a power of two. The proportional and derivative coefficients can have values from 0.00003 to 1024, and any value above 0.0152 can be approximated within 0.5%. To get the closest approximation to a desired value, type the number into the box after the equal sign, and the best possible numerator and denominator will be computed. In the case of the integral coefficient, the range of the denominator is actually 23 to 218; this is a more useful range, since the integral is usually much larger than the error or derivative. The PID period can be adjusted here; this sets the rate at which the jrk runs through all of its calculations. Note that a higher PID period will result in a more slowly changing integral and a higher derivative, so that the two corresponding PID constants might need to be adjusted whenever the PID period is changed. Preventing integral wind-up Three options are provided for limiting “integral wind-up”, which is the uncontrolled growth of the integral when the feedback system is temporarily unable to keep the error small. This might happen, for example, when the target is changing quickly. One option is the integral limit, a value from 0 to 32767 that simply limits the magnitude of the integral. Note that the maximum value of the integral term can be computed as the integral coefficient times the integral limit: if this is very small compared to 600 (maximum duty cycle), the integral term will have at most a very small effect on the duty cycle. Another option causes the integral to reset to 0 when the proportional term exceeds the maximum duty cycle parameter. For example, if this option is selected when the proportional coefficient is 15 and the maximum duty cycle is 300, the integral will reset whenever the error is larger than 20. Additionally the Feedback dead zone option sets the duty cycle target to zero and resets the integral whenever the magnitude of the error is smaller than this amount. This is useful for preventing the motor from driving when the target is very close to scaled feedback. The feedback dead zone uses hysteresis to keep the system from simply riding the edge of the dead zone; once in the dead zone, the duty cycle and integral will remain zero until the magnitude of the error exceeds twice this value. 3.e. Motor. Detect motor direction. PWM frequency%. Limits motor is off. 3.f. Error Response Options There are several errors that can stop the jrk from driving its motor. For information about what each error means, see Section 4.f. The jrk’s response to the different errors can be configured. Each error has up to three different available settings. - Disabled: The jrk will ignore this error. You can still determine whether the error is occurring by checking the “Occurrence count” column in the configuration utility, or by using the Get Error Flags Occurred serial command (Section 4.f). - Enabled: When this error happens, the jrk will turn the motor off. When the error stops happening, the motor can restart. - Enabled and Latched: When this error happens, the jrk will turn the motor off and set the Awaiting Command error bit. The jrk will not drive the motor again until it receives one of the serial set target commands. The motor can also be restarted from the configuration utility. 3.g. The Plots Window The Plots window of the configuration utility displays real-time data from the jrk, scrolling from right to left. To access this window, select “Plots” from the Window menu, or click on the small plot displayed in the upper-right corner of the main window. All of the variables discussed in Section 1.c are available. Each variable can be independently scaled to a useful range. For example, the Error can be from -4095 to +4095, but for well-tuned feedback systems, it will usually have a much smaller value, so setting the range to ±100 might provide a more useful plot. The plot shows all variables on a scale from -100% to 100%, where 100% represents the high end of the variable’s range. The percentage range displayed on the plot can also be adjusted, using the Range settings at the bottom of the plot window. By default, the plot shows data from the past 5 seconds, with the most recent values on the right and the older values on the left. The time scale of the plot can be shortened using the Time (s) setting at the bottom of the window. The color of each variable in the graph can be selected by double clicking on the colored box next to the variable’s name. Each variable can be independently shown or hidden using the checkbox next to the variable’s name. Here is an example showing all variables plotted simultaneously: 10, Windows 8, Windows 7, and Vista: the driver for the bootloader will automatically be installed. - Windows XP: follow steps 6-8 from Section 3.a to get the driver working. - Once the bootloader’s drivers are properly installed, the green LED should be blinking in a double heart-beat pattern, and there should be an entry for the bootloader in the “Ports (COM & LPT)” list of your computer’s Device Manager. -. 4. Using the Serial Interface 4.a. Serial Modes The jrk has three different serial interfaces. First, it has the RX and TX lines. The jrk can send bytes on the TX line. If the jrk is in serial input mode, the RX line can be used to receive non-inverted, TTL (0 – 5 V) serial bytes (Section 4.b). If the jrk is not in serial input mode, it can not receive bytes on RX because the line is used for analog voltage or pulse width measurement. Secondly, the jrk shows up as two virtual serial ports on a computer if it is connected via USB. One of these ports is called the Command Port and the other is called the TTL port. You can determine the COM port numbers of these ports by looking in your computer’s Device Manager. See Section 3.a for information. The jrk can be configured to be in one of three basic serial modes: USB Dual Port In this mode, the Command Port can be used to send commands to the jrk and receive responses from it. The baud rate you set in your terminal program when opening the Command Port is irrelevant. The TTL Port can be used to send bytes on the TX line and (if the jrk is in serial input mode) receive bytes on the RX line. The baud rate you set in your terminal program when opening the TTL Port determines the baud rate used to receive and send bytes on RX and TX. This allows your computer to control the jrk and simultaneously use the RX and TX lines as a general purpose serial port that can communicate with other types of TTL serial devices. USB Chained In this mode, the Command Port is used to both transmit bytes on the TX line and send commands to the jrk. The jrk’s responses to those commands will be sent to the Command Port but not the TX line. If the input mode is serial, bytes received on the RX line will be sent to the Command Port but will not be interpreted as command bytes by the jrk. jrks, or a jrk and other devices that have a compatible protocol. UART In this mode, the TX and RX lines can be used to send commands to the jrk jrk when a 0xAA byte is received on RX, or it can be set to a fixed value ahead of time. This mode is only available when the input mode is serial. This mode allows you to control the jrk (and send bytes to a serial program on the computer) using a microcontroller or other TTL serial device. 4.b. TTL Serial If the jrk is in serial input mode, then its serial receive line, RX, can receive bytes when connected to a logic-level (0 to 4.0–5 V, or “TTL”), non-inverted serial signal. The bytes sent to the jrk on RX can be commands to the jrk or an arbitrary stream of data that the jrk passes on to a computer via the USB port, depending on which Serial Mode the jrk is in (Section 4.a). The voltage on the RX pin should not go below 0 V and should not exceed 5 V. The jrk provides logic-level (0 to 5 V) serial output on its serial transmit line, TX. The bytes sent by the jrk on TX can be responses to commands that request information or an arbitrary stream of data that the jrk is receiving from a computer the USB port and passing on, depending on which Serial Mode the jrk is in. If you aren’t interested in receiving TTL serial bytes from the jrk,, (Vcc) and logical zeros are transmitted as low (0 V), which is why this format is referred to as “non-inverted” serial. The byte is terminated by a “stop bit”, which is the line going high for at least one bit time. Because each byte requires a start bit, 8 data bits, and a stop bit, each byte takes 10 bit times to transmit, so the fastest possible data rate in bytes per second is the baud rate divided by ten. At the jrk’s maximum baud rate of 115,200 bits per second, the maximum realizable data rate, with a start bit coming immediately after the preceding byte’s stop bit, is 11,520 bytes per. 4.c. Command Protocols: Compact Protocol. Pololu Protocol: Therefore, the command packet is: 0xAA, device number byte, command byte ,. 4.d. Cyclic Redundancy Check (CRC) Error Detection For certain applications, verifying the integrity of the data you are sending and receiving can be very important. Because of this, the jrk has optional 7-bit cyclic redundancy checking, which is similar to a checksum but more robust as it can detect errors that would not affect a checksum, such as an extra zero byte or bytes out of order. Cyclic redundancy checking can be enabled by checking the “Enable CRC” checkbox in the configuration utility. In CRC mode, the jrk expects an extra byte to be added onto the end of every command packet. The most-significant bit of this byte must be cleared, and the seven least-significant bits must be the 7-bit CRC for that packet. If this CRC byte is incorrect, the jrk will generate its Serial CRC error and ignore the command. The jrk does not append a CRC byte to the data it transmits in response to serial commands. A detailed account of how cyclic redundancy checking works is beyond the scope of this document, but you can find a wealth of information using Wikipedia. The CRC computation is basically a carryless long division of a CRC “polynomial”, 0x91, into your message (expressed as a continuous stream of bits), where all you care about is the remainder. The jrk uses CRC-7, which means it uses an 8-bit polynomial and, as a result, produces a 7-bit remainder. This remainder is the lower 7 bits of the CRC byte you tack onto the end of your command packets. The CRC implemented on the jrk is the same as the one on the qik motor controller but differs from that on the TReX motor controller. Instead of being done MSB first, the computation is performed jrk with CRC enabled). If you have never encountered CRCs before, this probably sounds a lot more complicated than it really is. The following example shows that the CRC-7 calculation is not that difficult. For the example, we will use a two-byte sequence: 0x83, 0x01. with CRC enabled is: 0x83, 0x01, 0x17. 4.e. Motor Control Commands. Motor Off). Set Target High Resolution. Set Target Low Resolution Forward. Set Target Low Resolution Reverse). If the Feedback Mode is Analog or Tachometer, then the formula is Target = 2048 − 16×magnitude. If the Feedback Mode is None (speed control mode), then the formula is Target = 2048 − (600/127)×magnitude. This means that a magnitude of 127 will set the duty cycle target to full-speed reverse (-600).. 4.g. Variable Reading Commands Compact protocol: read variable command byte Pololu protocol: 0xAA, device number, read variable command byte with MSB clear The jrk has several serial commands for reading its variables. Most of the variables are two bytes long. For each of those variables, three variable reading commands are provided: - Two bytes: These commands will result in a two-byte serial response from the jrk containing both bytes of the variable. All variables are little endian, so the first byte transmitted will be the least-significant byte, and the second byte transmitted will be the most-significant byte. For variables that can have negative values, the two’s complement system is used (a response of 0xFE, 0xFF means -2). - Low byte: These commands will result in a one-byte serial response from the jrk containing just the least-significant byte of the variable. - High byte: These commands will result in a one-byte serial response from the jrk containing just the most-significant byte of the variable. The command bytes are listed in the table below. The meaning of the variables is described below: - Input: The input is the raw, un-scaled input value, representing a measurement taken by the jrk of the input to the system. In serial input mode, the input is equal to the target, which can be set to any value 0–4095 using serial commands. In analog input mode, the input is a measurement of the voltage on the RX pin, where 0 is 0 V and 4092 is 5 V. In pulse width input mode, the input is the duration of the last pulse measured, in units of 2/3 μs. See Section 3.b for more information. - Target: In serial input mode, the target is set directly with serial commands. In the other input modes, the target is computed by scaling the input. The scaling can be configured in the “Scaling” box of the Input tab in the configuration utility. - Feedback: The feedback is the raw, un-scaled feedback value, representing a measurement taken by the jrk of the output of the system. In analog feedback mode, the feedback is a measurement of the voltage on the FB pin, where 0 is 0 V and 4092 is 5 V. In no feedback mode (speed control mode), the feedback is always zero. - Scaled feedback: The scaled value of feedback. The feedback scaling can be configured in the “Scaling” box of the Feedback tab in the configuration utility. - Current: The value of this variable is proportional to the current running through the motor: on the jrk 21v3, a value of 1 nominally represents 38 mA of current in the motor, while on the jrk 12v12 a value of 1 nominally represents 149 mA of current in the motor . However, the circuitry on the motor driver chips varies between different units, and they can vary depending on which direction the motor is driving, so these calibration values will not always be right for every jrk. The value of this variable will always be zero unless a current limit is enabled. See Section 3.e for more information about current measurement and calibration. - Error sum (integral): Every PID period, the error (scaled feedback minus target) is added to the error sum. The error sum gets reset to zero whenever the jrk is not driving the motor, and can optionally be reset whenever the proportional term of the PID calculation exceeds the maximum duty cycle. There is also a configurable integral limit that the integral can not exceed. - Duty cycle target: Represents the duty cycle that the jrk is trying to achieve. A value of -600 or less means full speed reverse, while a value of 600 or more means full speed forward. A value of 0 means braking. In no feedback mode (speed control mode), the duty cycle target is the target minus 2048. In other feedback modes, the duty cycle target is the sum of the proportional, integral, and derivative terms of the PID algorithm. - Duty cycle: Represents the duty cycle that the jrk is driving the motor with. A value of -600 or less means full speed reverse, while a value of 600 or more means full speed forward. A value of 0 means braking. The absolute value of the duty cycle will always be less than the absolute value of the duty cycle target. The duty cycle is different from the duty cycle target because it takes in to account all of the jrk’s configurable limits: maximum acceleration, maximum duty cycle, maximum current, and also brake duration (Section 3.e). - PID period count: This is the number of PID periods that have elapsed. It resets to 0 after reaching 65535. The duration of the PID period can be configured (Section 3.d). Note: All command bytes from 0x81 to 0xBF that are not listed in this section or Section 4.f are undocumented variable reading commands that will result in a serial response from the jrk and not generate a serial protocol error. These commands are not useful, but they are not harmful.. 4.i. Serial Example Code 4.i.1. Cross-platform C The example C code below works on Windows, Linux, and Mac OS X 10.7 or later. It demonstrates how to set the target of a jrk by sending a Set Target command to its Command Port, and how to read variables from the jrk. The jrk’s input mode must be set to “Serial” and the serial mode must be “USB Dual Port” for this code to work. You will also need to modify the line that specifies the name of the COM port device. This code will work in Windows if compiled with MinGW, but it does not work with the Microsoft C compiler. For Windows-specific example code that works with either compiler, see Section 4.i.2. // Uses POSIX functions to send and receive data from a jrk. // NOTE: The jrk's input mode must be "Serial". // NOTE: The jrk's serial mode must be set to "USB Dual Port". // NOTE: You must change the 'const char * device' line below. #include <fcntl.h> #include <stdio.h> #include <unistd.h> #ifdef _WIN32 #define O_NOCTTY 0 #else #include <termios.h> #endif // Reads a variable from the jrk. // The 'command' argument must be one of the two-byte variable-reading // commands documented in the "Variable Reading Commands" section of // the jrk user's guide. int jrkGetVariable(int fd, unsigned char command) { if(write(fd, &command, 1) == -1) { perror("error writing"); return -1; } unsigned char response[2]; if(read(fd,response,2) != 2) { perror("error reading"); return -1; } return response[0] + 256*response[1]; } // Gets the value of the jrk's Feedback variable (0-4095). int jrkGetFeedback(int fd) { return jrkGetVariable(fd, 0xA5); } // Gets the value of the jrk's Target variable (0-4095). int jrkGetTarget(int fd) { return jrkGetVariable(fd, 0xA3); } // Sets the jrk's Target variable (0-4095). int jrkSetTarget(int fd, unsigned short target) { unsigned char command[] = {0xC0 + (target & 0x1F), (target >> 5) & 0x7F}; if (write(fd, command, sizeof(command)) == -1) { perror("error writing"); return -1; } return 0; } int main() { // Open the Jrk's virtual COM port. const char * device = "\\\\.\\USBSER000"; // Windows, "\\\\.\\COM6" also works //const char * device = "/dev/ttyACM0"; // Linux //const char * device = "/dev/cu.usbmodem00000041"; // Mac OS X int fd = open(device, O_RDWR | O_NOCTTY); if (fd == -1) { perror(device); return 1; } #ifndef _WIN32 struct termios options; tcgetattr(fd, &options); options.c_lflag &= ~(ECHO | ECHONL | ICANON | ISIG | IEXTEN); options.c_oflag &= ~(ONLCR | OCRNL); tcsetattr(fd, TCSANOW, &options); #endif int feedback = jrkGetFeedback(fd); printf("Current Feedback is %d.\n", feedback); int target = jrkGetTarget(fd); printf("Current Target is %d.\n", target); int newTarget = (target < 2048) ? 3000 : 1000; printf("Setting Target to %d.\n", newTarget); jrkSetTarget(fd, newTarget); close(fd); return 0; } 4.i.2. Windows C For example C code that shows how to control the jrk using its serial interface in Microsoft Windows, download JrkSerialCWindows.zip (4k zip). This zip archive contains a Microsoft Visual C++ 2010 Express project that shows how to send a Set Target command and also a Get Position command. It can also be compiled with MinGW. The jrk’s input mode needs to be set to “Serial” and the serial mode needs to be set to “USB Dual Port” for this code to work. This example is like the previous example except it does the serial communication using Windows-specific functions like CreateFile and WriteFile. See the comments in the source code for more details... jrk. You will need to set the jrk’s serial mode to be either “USB Dual Port” or “USB Chained”.
https://www.pololu.com/docs/0J38/all
CC-MAIN-2018-26
refinedweb
6,760
60.65
Conventions for writing code in the notebook. Project description pidgin is a collection of IPython magics for creating computable essays. if __name__ == '__main__': %load_ext pidgin Markdown Mode %pidgin markdown --- With `pidgin.markdown`, code cells accept markdown. Any indented code blocks are executed. foo = 42 print(f"foo is {foo}") > Accepting the `pidgin.markdown` convetion means the author agrees to indent all their code at least once; and sometimes more in nested lists. --- With pidgin.markdown, code cells accept markdown. Any indented code blocks are executed. foo = 42 print(f"foo is {foo}") Accepting the pidgin.markdownconvetion means the author agrees to indent all their code at least once; and sometimes more in nested lists. foo is 42 Template Mode With templates real data can be inserted into the computational essay. An author should desire their notebook restart and run all during template mode. %pidgin template Skipping the first line suppresses the markdown output. --- In template mode, `jinja2` may be invoked to template markdown and code. We already know that `foo` is 42, but can test that assertion with assert foo is {{foo}} is 42 {% for i in range(3) %}print({{i}}) {% endfor %} --- In template mode, jinja2 may be invoked to template markdown and code. We already know that foo is 42, but can test that assertion with assert foo is 42 is 42 print(0) print(1) print(2) 0 1 2 # Turning off magics %pidgin --off template markdown Turning off magics %pidgin --off template markdown Yaml Start code with --- %pidgin conventions --- a: 42 assert a == 42 Graphviz Start code with graph or digraph !conda install -y graphviz graph { {Ipython Julia R}--Jupyter} File "<ipython-input-9-1661b3d05729>", line 1 graph { {Ipython Julia R}--Jupyter} ^ SyntaxError: invalid syntax Notebooks as source pidgin uses notebooks as source; line numbers are retained so that the notebook source produces semi-sane tracebacks. from pidgin import markdown, template, conventions The pidgin loader allows an author to import notebooks directly as source. This means all of the pidgin documents are importable. %%pidgin markdown template conventions import readme assert all(file.__file__.endswith('.ipynb') for file in (markdown, template, conventions)) Everything Should Compute Convert a document into other formats; Restart, Run All, nbconvert. %%pidgin markdown template Use pidgin a cell magic to temporarily employ any convetions. if __name__ == '__main__': !jupyter nbconvert --to markdown readme.ipynb Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pidgin/
CC-MAIN-2019-47
refinedweb
416
56.76
table of contents <command>... -e, --event= - 'name' : User defined event name. Single quotes (') may be used to escape symbols in the name from parsing by shell and tool like this: name=\'CPU_CLK_UNHALTED.THREAD:cmask=0x1\'.. When processing a '.c' file, perf searches an installed LLVM to compile it into an object file first. Optional clang options can be passed via the '--clang-opt' command line option, e.g.: perf record --clang-opt "-DLINUX_VERSION_CODE=0x50000" \ -e tests/bpf-script-example.c Note: '--clang-opt' must be placed before '--event/-e'. --filter=<filter> Event filter. This option should follow an event selector (-e) which selects either tracepoint event(s) or a hardware trace PMU (e.g. Intel PT or CoreSight). In the case of tracepoints, multiple '--filter' options are combined using '&&'. be calculated from the first and last symbols, i.e. to trace the whole file. If symbol names (or '*') are provided, they must be surrounded by white space. The filter passed to the kernel is not necessarily the same as entered. To see the filter that is passed, use the -v option. The kernel may not be able to configure a trace region if it is not within a single mapping. MMAP events (or /proc/<pid>/maps) can be examined to determine if that is a possibility. Multiple filters can be separated with space or comma. --exclude-perf -a, --all-cpus -p, --pid= -t, --tid= -u, --uid= -r, --realtime= --no-buffering -c, --count= -o, --output= -i, --no-inherit -F, --freq= --strict-freq -m, --mmap-pages= --group -g --call-graph -v, --verbose -s, --stat -d, --data --phys-data -T, --timestamp -P, --period --sample-cpu -n, --no-samples -R, --raw-samples -C, --cpu -B, --no-buildid -N, --no-buildid-cache -G name,..., --cgroup name,... If wanting to monitor, say, cycles for a cgroup and also for system wide, this command line can be used: perf stat -e cycles -G cgroup_name -a -e cycles. -b, --branch-any -j, --branch-filter. --weight --namespaces --transaction --per-thread -D, --delay= -I, --intr-regs --user-regs --running-time -k, --clockid -S, --snapshot --proc-map-timeout --switch-events --clang-path=PATH --clang-opt=OPTIONS --vmlinux=PATH --buildid-all --aio[=n] --affinity=mode --mmap-flush=number The maximal allowed value is a quarter of the size of mmaped data pages. The default option value is 1 byte which means that every time that the output writing thread finds some new data in the mmaped buffer the data is extracted, possibly compressed (-z) and written to the output, perf.data or pipe. Larger data chunks are compressed more effectively in comparison to smaller chunks so extraction of larger chunks from the mmap data pages is preferable from the perspective of output size reduction. Also at some cases executing less output write syscalls with bigger data size can take less time than executing more output write syscalls with smaller data size thus lowering runtime profiling overhead. -z, --compression-level[=n] --all-kernel --all-user --kernel-callchains --user-callchains Don’t use both --kernel-callchains and --user-callchains at the same time or no callchains will be collected. --timestamp-filename Append timestamp to output file name. --timestamp-boundary --switch-output[=mode] --switch-max-files=N --dry-run perf record --dry-run -e can act as a BPF script compiler if llvm.dump-obj in config file is set to true. --tail-synthesize --overwrite. SEE ALSO¶ perf-stat(1), perf-list(1)
https://manpages.debian.org/experimental/linux-perf-5.3/perf_5.3-record.1.en.html
CC-MAIN-2022-21
refinedweb
568
56.45
Jake Mauve wrote:Hello all ! I have a small problem with the repaint() method. I have a graphical component consisting of about 6 filled rectangles and 3 filled ovals, each with a random color. Which is why I'd like to have it 'repaint itself' every X units of time. I tried by entering long time=7000; repaint(time); after drawing all my figures, within the same component method. It compiles and works, but the only problem is that it isn't taking my time parameter into account, and repaints at light speed :p (An instance of the component is then called into a JFrame since I hadn't mentioned it) Second question, what is best between JFrame and Applet? if they're both good for different cases, could you specify which? thanks! xD Jake Mauve wrote:ok, I've got import * public class abc extends JComponent{ void paintComponent(Graphics g){ .. g2.setColor(randomColor); g2.fillRect(x,..,height); } } other java file/class import javax.swing.JFrame; public class cde { public st. v. main ( JFrame frame=new JFrame(); abc component=new abc(); frame.add(component); .. could you please tell me where I 'put it in'? and, please don't quote a message when its already printed right up, what else would you be answering to? :p and please dont answer a question with another one =) cause the point is, in both cases the answer depends on context rather than taste xD I like using applets cause I find it makes it a bit easier not having to call my component object the same way and that in practical terms it lets we have more than one applet window, as in, it opens a new one each time. But, I dont know enough about either applets or frames to know their advantages/disadvantages. I've read a bit on it, even had a lecture on it this past week but hearing from other programmers is good too xD Thanks! when defining my JFrame to 200 in height, it comes out 'too short' repaint() method in the actionPerformed() method wasn't being recognized Please give the final, concise result, without confusing me with ...
http://www.coderanch.com/t/559868/GUI/java/graphics
CC-MAIN-2014-15
refinedweb
360
68.5
Where?" Rollin' with your homies pdr answers (13 votes): I think you'd be surprised how often operator overloads are implemented in some form. But they're not commonly used in a lot of communities. Why use ~ to concatenate to an array? Why not use << like Ruby does? Because the programmers you work with are probably not Ruby programmers. Or D programmers. So what do they do when they come across your code? They have to go and look up what the symbol means. I used to work with a very good C# developer who also had a taste for functional languages. Out of the blue, he started introducing monads to C# by way of extension methods and using standard monad terminology. No one could dispute that some of his code was terser and even more readable once you knew what it meant, but it did mean that everyone had to learn monad terminology before the code made sense. Fair enough, you think? It was only a small team. Personally, I disagree. Every new developer was destined to be confused by this terminology. Do we not have enough problems learning a new domain? On the other hand, I will happily use the ?? operator in C# because I expect other C# developers to know what it is, but I wouldn't overload it into a language that didn't support it by default. Let me count the ways mikera answers (4 votes): I can think of a few reasons: - They aren't trivial to implement—allowing arbitrary custom operators can make your compiler much more complex, especially if you allow user-defined precedence, fixity, and arity rules. If simplicity is a virtue, then operator overloading is taking you away from good language design. - They get abused—mostly by coders who think it is "cool" to redefine operators and start redefining them for all sorts of custom classes. Before long, your code is littered with a load of customized symbols that nobody else can read or understand because the operators don't follow the conventional well-understood rules. I don't buy the "DSL" argument, unless your DSL happens to be a subset of mathematics :-) - They hurt readability and maintainability—if operators are regularly overridden, it can become hard to spot when this facility is being used, and coders are forced to continually ask themselves what an operator is doing. It is much better to give meaningful function names. Typing a few extra characters is cheap and long term maintenance problems are expensive. - They can break implicit performance expectations.—For example, I'd normally expect lookup of an element in an array to be O(1). But with operator overloading, someobject[i] could easily be an O(n) operation depending on the implementation of the indexing operator. In reality, there are very few cases where operator overloading has justifiable uses compared to just using regular functions. A legitimate example might be designing a complex number class for use by mathematicians, who understand the well-understood ways that mathematical operators are defined for complex numbers. But this really isn't a very common case. Some interesting cases to consider: - Lisps—in general don't distinguish at all between operators and functions, - + is just a regular function. You can define functions however you like (typically there is a way of defining them in separate namespaces to avoid conflict with the built-in +), including operators. But there is a cultural tendency to use meaningful function names, so this doesn't get abused much. Also, in Lisp, prefix notation tends to get used exclusively, so there is less value in the "syntactical sugar" that operator overloads provide. - Java—disallows operator overloading. This is occasionally annoying (for stuff like the Complex number case) but on average it's probably the right design decision for Java which is intended as a simple, general-purpose OOP langauge. Java code is actually quite easy for low/medium-skilled developers to maintain as a result of this simplicity. - C++—has very sophisticated operator overloading. Sometimes this gets abused (cout << "Hello World!" anyone?) but the approach makes sense given C++'s positioning as a complex language that enables high level programming while still allowing you to get very close to the metal for performance, so you can e.g. write a Complex number class that behaves exactly as you want without compromising performance. It is understood that it is your own responsibility if you shoot yourself in the foot. Find more answers or leave your own at the original post. See more Q&A like this at Programmers, a site for conceptual programming questions at Stack Exchange. And of course, feel free to login and ask your own. You must login or create an account to comment.
http://arstechnica.com/information-technology/2013/05/why-arent-user-defined-operators-more-common/
CC-MAIN-2014-41
refinedweb
795
55.03
I am using the Sublime text 3 dev version now. Its awesome! The question I have is that how can i know that which of my packages is running while I am editing? Obviously I dont want my python plug-in running in the background while i am coding in php or html. If this question is dumb, sorry about that. But I still want to know lol I have no idea how this works Nice day guys! Wether a package ignores certain filetypes depends on the author of the package. There are various ways in which a plugin author can accomplish this (e.g. check the syntax and bail out, use a ViewEventListener with an appropriate is_applicable, etc). Usually the console is a good place to check if there are any packages that do something unintended. (View -> Show Console). By default all packages are "running". That means all of their commands get loaded into memory, all of their EventListeners get instantiated (once) and all of their ViewEventListeners get instantiated if applicable (per view), all of their snippets get loaded into memory, all of their keybindings get registered, etc. The plugin author has to make sure that their snippets and keybindings get masked correctly. sounds making sense... But can i know if there is any place I can check for console commands? Dont see it on Support page lol The console is a Python interpreter. You can, for example, type this: >>> sublime.version() >>> sublime.platform() >>> view.substr(view.line(0)) >>> view.name() >>> view.settings().get("syntax") >>> view.settings().get("color_scheme") >>> window.views() To log all of the commands, you can type: >>> sublime.log_commands(True) You can also import os and import sys and use those modules, just like in a python script. import os import sys hm.. so can i check these scripts? so for the example you gave me. How can i know where's class sublime? no idea how to find those methods under it. My python sucks btw. lol Here is the complete API reference, enjoy There's a bit more stuff about commands that you should understand. Read that here. This link might also be useful. Oh thank you!!! thats exactly what i want! btw I guess there is a bug on replying messages... No idea why it doesnt show the message i posted was replying to you... That's one of it's problem: it's not complete OverrideAudit is completely ST3 only. Did you know, that I know absolutely nothing! This plugin show which plugins are enabled or disabled
https://forum.sublimetext.com/t/how-to-know-which-package-is-running-in-my-sublime-text-3/30324
CC-MAIN-2018-22
refinedweb
425
76.72
Implicits The Implicit Modifier. Example Monoid } } Implicit Parameters An implicit parameter list (implicit $p_1$,$\ldots$,$p_n$) of a method marks the parameters $p_1 , \ldots , p_n$ as implicit. A method or constructor can have only one implicit parameter list, and it must be the last parameter list given. of type $T$ fall into two categories. First, eligible are all identifiers $x$, $T$. The implicit scope of a type $T$ consists of all companion modules of classes that are associated with the implicit parameter's type. Here, we say a class $C$ is associated with a type $T$ if it is a base class of some part of $T$. The parts of a type $T$ are: - if $T$ is a compound type $T_1$ with $\ldots$ with $T_n$, the union of the parts of $T_1 , \ldots , T_n$, as well as $T$ itself; - if $T$ is a parameterized type $S$[$T_1 , \ldots , T_n$], the union of the parts of $S$ and $T_1 , \ldots , T_n$; - if $T$ is a singleton type $p$.type, the parts of the type of $p$; - if $T$ is a type projection $S$#$U$, the parts of $S$ as well as $T$ itself; - if $T$ is a type alias, the parts of its expansion; - if $T$ is an abstract type, the parts of its upper bound; - if $T$ denotes an implicit conversion to a type with a method with argument types $T_1 , \ldots , T_n$ and result type $U$, the union of the parts of $T_1 , \ldots , T_n$ and $U$; - the parts of quantified (existential or universal) and annotated types are defined as the parts of the underlying types (e.g., the parts of T forSome { ... }are the parts of T); - in all other cases, just $T$ itself. Note that packages are internally represented as classes with companion modules to hold the package members. Thus, implicits defined in a package object are part of the implicit scope of a type prefixed by that package. If there are several eligible arguments which match the implicit parameter's type, a most specific one will be chosen using the rules of static overloading resolution. If the parameter has a default argument and no implicit argument can be found the default argument is used. Example Assuming the classes from the Monoid example, which matches the implicit formal parameter type Monoid[Int] is intMonoid so this object will be passed as implicit parameter. This discussion also shows that implicit parameters are inferred after any type arguments are inferred. Implicit $T$ is searched, the “core type” of $T$ is added to the stack. Here, the core type of $T$ is $T$ dominates a type $U$ if $T$ is equivalent to $U$, or if the top-level type constructors of $T$ and $U$ have a common element and $T$ is more complex than $U$. The set of top-level type constructors $\mathit{ttcs}(T)$ of a type $T$ depends on the form of the type: - For a type designator, $\mathit{ttcs}(p.c) ~=~ {c}$; - For a parameterized type, $\mathit{ttcs}(p.c[\mathit{targs}]) ~=~ {c}$; - For a singleton type, $\mathit{ttcs}(p.type) ~=~ \mathit{ttcs}(T)$, provided $p$ has type $T$; - For a compound type, $\mathit{ttcs}(T_1$ with $\ldots$ with $T_n)$$~=~ \mathit{ttcs}(T_1) \cup \ldots \cup \mathit{ttcs}(T_n)$. The complexity $\operatorname{complexity}(T)$ of a core type is an integer which also depends on the form of the type: - For a type designator, $\operatorname{complexity}(p.c) ~=~ 1 + \operatorname{complexity}(p)$ - For a parameterized type, $\operatorname{complexity}(p.c[\mathit{targs}]) ~=~ 1 + \Sigma \operatorname{complexity}(\mathit{targs})$ - For a singleton type denoting a package $p$, $\operatorname{complexity}(p.type) ~=~ 0$ - For any other singleton type, $\operatorname{complexity}(p.type) ~=~ 1 + \operatorname{complexity}(T)$, provided $p$ has type $T$; - For a compound type, $\operatorname{complexity}(T_1$ with $\ldots$ with $T_n)$$= \Sigma\operatorname{complexity}(T_i)$ Example. Example. Views Implicit parameters and methods can also define implicit conversions called views. A view from type $S$ to type $T$ is defined by an implicit value which has function type $S$=>$T$ or (=>$S$)=>$T$ or by a method convertible to a value of that type. Views are applied in three situations: - If an expression $e$ is of type $T$, and $T$ does not conform to the expression's expected type $\mathit{pt}$. In this case an implicit $v$ is searched which is applicable to $e$ and whose result type conforms to $\mathit{pt}$. The search proceeds as in the case of implicit parameters, where the implicit scope is the one of $T$ => $\mathit{pt}$. If such a view is found, the expression $e$ is converted to $v$($e$). - In a selection $e.m$ with $e$ of type $T$, if the selector $m$ does not denote an accessible member of $T$. In this case, a view $v$ is searched which is applicable to $e$ and whose result contains a member named $m$. The search proceeds as in the case of implicit parameters, where the implicit scope is the one of $T$. If such a view is found, the selection $e.m$ is converted to $v$($e$).$m$. - In a selection $e.m(\mathit{args})$ with $e$ of type $T$, if the selector $m$ denotes some member(s) of $T$, but none of these members is applicable to the arguments $\mathit{args}$. In this case a view $v$ is searched which is applicable to $e$ and whose result contains a method $m$ which is applicable to $\mathit{args}$. The search proceeds as in the case of implicit parameters, where the implicit scope is the one of $T$. If such a view is found, the selection $e.m$ is converted to $v$($e$).$m(\mathit{args})$. The implicit view, if it is found, can accept its argument $e$). Example Ordered. Context Bounds and View Bounds TypeParam ::= (id | ‘_’) [TypeParamClause] [‘>:’ Type] [‘<:’ Type] {‘<%’ Type} {‘:’ Type} A type parameter $A$ of a method or non-trait class may have one or more view bounds $A$ <% $T$. In this case the type parameter may be instantiated to any type $S$ which is convertible by application of a view to the bound $T$. A type parameter $A$ of a method or non-trait class may also have one or more context bounds $A$ : $T$. In this case the type parameter may be instantiated to any type $S$ for which evidence exists at the instantiation point that $S$ satisfies the bound $T$. Such evidence consists of an implicit value with type $T[S]$. $v_i$ and $w. Evidence parameters are prepended to the existing implicit parameter section, if one exists. For example: def foo[A: M](implicit b: B): C // expands to: // def foo[A](implicit evidence$1: M[A], b: B): C Example The <= method from the Ordered example can be declared more concisely as follows: def <= [B >: A <% Ordered[B]](that: B): Boolean Manifests $M[T]$ of class OptManifest[T], a manifest is determined for $M[S]$, according to the following rules. First if there is already an implicit argument that matches $M[T]$, this argument is selected. Otherwise, let $\mathit{Mobj}$ be the companion object scala.reflect.Manifest if $M$ is trait Manifest, or be the companion object scala.reflect.ClassManifest otherwise. Let $M'$ be the trait Manifest if $M$ is trait Manifest, or be the trait OptManifest otherwise. Then the following rules apply. - If $T$ is a value class or one of the classes Any, AnyVal, Object, Null, or Nothing, a manifest for it is generated by selecting the corresponding manifest value Manifest.$T$, which exists in the Manifestmodule. - If $T$ is an instance of Array[$S$], a manifest is generated with the invocation $\mathit{Mobj}$.arrayType[S](m), where $m$ is the manifest determined for $M[S]$. - If $T$ is some other class type $S$#$C[U_1, \ldots, U_n]$ where the prefix type $S$ cannot be statically determined from the class $C$, a manifest is generated with the invocation $\mathit{Mobj}$.classType[T]($m_0$, classOf[T], $ms$)where $m_0$ is the manifest determined for $M'[S]$ and $ms$ are the manifests determined for $M'[U_1], \ldots, M'[U_n]$. - If $T$ is some other class type with type arguments $U_1 , \ldots , U_n$, a manifest is generated with the invocation $\mathit{Mobj}$.classType[T](classOf[T], $ms$)where $ms$ are the manifests determined for $M'[U_1] , \ldots , M'[U_n]$. - If $T$ is a singleton type $p$.type, a manifest is generated with the invocation $\mathit{Mobj}$.singleType[T]($p$) - If $T$ is a refined type $T' { R }$, a manifest is generated for $T'$. (That is, refinements are never reflected in manifests). - If $T$ is an intersection type $T_1$ with $, \ldots ,$ with $T_n$where $n > 1$, the result depends on whether a full manifest is to be determined or not. If $M$ is trait Manifest, then a manifest is generated with the invocation Manifest.intersectionType[T]($ms$)where $ms$ are the manifests determined for $M[T_1] , \ldots , M[T_n]$. Otherwise, if $M$ is trait ClassManifest, then a manifest is generated for the intersection dominator of the types $T_1 , \ldots , T_n$. - If $T$ is some other type, then if $M$ is trait OptManifest, a manifest is generated from the designator scala.reflect.NoManifest. If $M$ is a type different from OptManifest, a static error results.
https://www.scala-lang.org/files/archive/spec/2.12/07-implicits.html
CC-MAIN-2020-16
refinedweb
1,536
53.31
servlets servlets hi i am using servlets i have a problem in doing an application. in my application i have html form, in which i have to insert on date value, this date value is retrieved as a request parameter in my servlet there is are requirement of sending data appended to a query string in the URL... on the amount of data you can send and because the data does not show up servlets to a jsp page for the remaining process(ie viewing the data) 4) the information...-data-list.shtml Servlets Servlets when i am compiling the follow program it gives the error.so help me to resolve the problem import java.io.*; import javax.Servlet.*; import javax.Servlet.http.*; import java.sql.*; import java.sql.connection Servlets +"')"); out.println("Data is inserted successfully Servlets { System.out.print("failed to insert the data Servlets { System.out.print("failed to insert the data Servlets - JSP-Servlet program which connects to Microsoft sql server and retrieves data from it and display... .. and there is a problem with connection string in oracle 8 can you tell me the url path... visit the following link: java servlets with database interaction java servlets with database interaction hai friends i am doing a web... "not enough values" can you please assist how to get rid of this problem what problem... Servlet with database and insert data import java.io.*; import problem in database problem in database thanks for web site. I want change this code to insert data into PostgreSql database using jsp,servlets. but i getting output "Record has been inserted", 1.no data in the table(sample) public Servlet problem problem from last three month and now i hope rose india developers will definitely help me. I built a web application using jsp, servlets . My web application...: [unixODBC][Driver Manager]Data source name not found, and no default Servlets Books that is both easier to write and faster to run. Servlets also address the problem... Servlets Books  ... Courses Looking for short hands-on training classes on servlets Retrieve data from xml using servlets Retrieve data from xml using servlets Hi plz send me the code for retrieving the data from xml File using Servlets. Hi, Do you want... Thanks Hi, Learn Get Data From the XML File. Thanks Servlets Program Servlets Program Hi, I have written the following servlet: [code] package com.nitish.servlets; import javax.servlet.*; import java.io.*; import... [/code] The problem I am facing is when I tried to compile the code, it gave me Java Servlets Java Servlets If the binary data is posted by both doGet and doPost.... It postprocess a request ,i.e, gather data from a submitted HTML form and doing some...). It is safe and doest not display the data. It can send large amount of data. Servlet jTable data problem jTable data problem Hello. I have a code that read file and store in arraylist and then convert to array(To use for table model) My class extends...","Number"}; List<String> data=new ArrayList<String>(); String jsp and servlets the request submitted from browser and process the data and redirect it to JSP Problem in inserting clob data in jsp Problem in inserting clob data in jsp how to insert any rich text editor data (which have more than 32766 characters) in a clob type column of oracle database with jsp Problem in accessing data from Database Problem in accessing data from Database hi..... i'm making a project... is text and all others are of currency data type. If i enter 0 or null value in currency column and then try to retrieve data using my servlet coding java j s p and servlets - Servlet Interview Questions java j s p and servlets how to handle data from a select box store them back to database through servlet Hi friend, For solving the problem visit to : servlets - JSP-Servlet servlets how to upload images in servlets Hi friend, For solving the problem : Thanks Problem with appending data to the end of file Problem with appending data to the end of file MY JSP CODE..." import="java.io.*,data.*" %> <% String name=request.getParameter("name...: package data; import java.io.*; import java.util.*; public class musicja Servlets - JSP-Servlet Servlets Hi,im d beginner to learn servlets and jsp.please can u... with the host server.It also allows the servlets to write events to a log file. ServletConfig stores initial parameters and ServletContext stores global data ajax problem - Ajax connection in servlets. public class Data extends HttpServlet { public void doGet...[form.first.selectedIndex].text) var url = "../Data?country= "+source /*+ "&r=" + new Date...]); } } } xmlObj.send(null); } On Select change data of another Select servlets - Servlet Interview Questions servlets i am using servlets. in that servlet i have an option "logout" button. when ever i press this "logout" button the browser page...; Hi friend, Code to solve the problem : Close window servlets servlets what is the duties of response object in servlets servlets servlets why we are using servlets data retrivel code - XML data retrivel code Can someone help me in retriving data from MySql... the following link given for solving your problem. I hope this will help you a lot..... to display the text box based on the seletion made in the dropdownlist box. My problem JSP-Servlets-JDBC JSP-Servlets-JDBC Hi all, 1, Create - i want sample code... should be navigated to a Servlet page and then the data should be passed..... It will be helpful if it's made into sub modules, JSP, Driver Constants, Servlets, Java Beans servlets and jsp - JDBC servlets and jsp I want to display textboxes dynamically in my page using JSP and servlets (javascript for validation). For eg, consider... select the manager, it has to get the data for Name, Address, tel no, +2 and UG servlets - JSP-Servlet servlets. Hi friend, employee form in servlets...;This is servlets code. package javacode; import java.io.*; import java.sql.... = response.getWriter(); System.out.println("Insert data Example what is the architecture of a servlets package what is the architecture of a servlets package The javax.servlet package provides interfaces and classes for writing servlets. The Servlet Interface The central Doubt in servlets - JSP-Servlet , I want to add data dynamically for that which methods i have to use to retrieve the dynamic data . Hi Friend, Try the following code: 1... into data(name,address) values('"+name+"','"+address+"')"); st.close u Hi friend, For solving the problem visit to : http Problem in enctype="multipart/form-data" in JSP Problem in enctype="multipart/form-data" in JSP im using a page... the file itself when i click the submit button. im using enctype="multipart/form-data... the uploaded value. but the problem is the uploaded file is not stored servlets servlets what are different authentication options available in servlets There are four ways of authentication:- HTTP basic authentication HTTP digest authentication HTTPS client authentication Form-based Servlets - JSP-Servlet , We check your code and it can be successfully compile : Your problem Servlets - Servlet Interview Questions (fileOut); fileOut.close(); out.println("Data is saved in excel file here we r getting the problem with ut data and get data???????????? here we r getting the problem with ut data and get data???????????? private ArrayList keys; private ArrayList values; public Menus() { keys = new ArrayList(); values = new ArrayList(); } public - JSP-Servlet in inserting data: "+(s2-s1) +"seconds "); } } Thanks The Advantages of Servlets programs can't do. Servlets can share data among each other, they even make... Advantages of Java Servlets  ... that the servlets are written in java and follow well known standardized APIs so SERVLETS servlets the servlets Servlets
http://www.roseindia.net/tutorialhelp/comment/99712
CC-MAIN-2014-42
refinedweb
1,287
64.2
Introducing the Task Parallel Library Techniques for distributing tasks across multiple computers have been around for decades. Unfortunately those techniques have mostly been cumbersome, requiring lots of skill and patience to code correctly. They also had a fair amount of overhead, so a net performance gain occurred only if the pieces of your problem were relatively large. If you divide an easy task into two trivial subtasks, the overhead of calling to remote computers to calculate the subtasks and then reassembling the results often took longer than just doing the work on a single CPU. (If you're a parent, you probably know that it can take longer to make your kids clean up their rooms than it would to clean them yourself. In that case, however, there's a principle involved.) In contrast, the multi-core computers that are becoming ubiquitous require much less overhead to make and return calls between cores, which makes multi-core programming far more attractive. And the Task Parallel Library simplifies the process by providing routines that can automatically distribute tasks across the computer's available CPUs quickly and painlessly. The TPL's methods have relatively little overhead, so you don't pay a huge penalty for splitting up your tasks. If your computer has a single CPU, the library pays a small penalty for breaking the tasks up and running them one at a time. If you have a dual-core system (like I do), TPL spreads the tasks across the two CPUs. If you have a quad-core system, TPL spreads the tasks across the four CPUs. In the future, if you have a network of 19 Cell CPUs scattered around your family room and kitchen, TPL will spread the tasks across those 19 CPUs. At least that's the theory. This scenario is still a bit down the road so don't be surprised if the details change before it becomes a reality. Getting familiar with using TPL now, however, will help you with any future developments in parallel programming later. One of TPL's additional advantages is that it automatically balances the load across the available CPUs. For example, suppose you run four tasks without TPL and you assign two to run on one CPU and two to run on a second. If the first two tasks take longer than the second two tasks, the second CPU finishes its work early and then sits there uselessly while your program waits for the first CPU to finish. TPL automatically prevents that sort of unbalanced scheduling by running tasks on whatever CPU is available for work. In this example, your program uses TPL to start the four tasks but doesn't decide where they execute. TPL runs one task on each CPU. When a CPU finishes its current task, TPL gives it another one. The process continues until all of the tasks complete. If some task is particularly long, other tasks will run on other CPUs. If some tasks are short, one CPU may run many of them very quickly. In any case, the TPL balances the workload, helping to ensure that none of the CPUs sit around idly while there is work to do. Using TPL Enough background and theory; it's time for some details. TPL is part of the System.Threading namespace. To use it, first download and install the latest version of the Microsoft Parallel Extensions to .NET Framework 3.5 (the June 2008 Community Technology Preview as of this writing). After you've installed the TPL, start a new Visual Basic or C# project and add a reference to System.Threading. Open the Project menu, select Add Reference, select the .NET tab, and double-click the System.Threading entry. To make working with the namespace easier, you can add an Imports statement in Visual Basic or a using statement in C# to your code module. Now you're ready to work with TPL. The following sections describe some of the most useful TPL methods: Parallel.Invoke, Parallel.For, and Parallel.ForEach. They also describe two useful parallel objects, Tasks and Futures, and some simple locking mechanisms provided by the System.Threading namespace. Parallel.Invoke The Parallel class provides static methods for performing parallel operations. The Parallel.Invoke method takes as parameters a set of System.Action objects that tell the method what tasks to perform. Each action is basically the address of a method to run. Parallel.Invoke launches the actions in parallel, distributing them across your CPUs. The type System.Action is just a named delegate representing a subroutine that takes no parameters and doesn't return anything—in other words, you can simply pass in the addresses of the subroutines that you want to run. The Parallel.Invoke routine takes a parameter array (ParamArray in Visual Basic) so you can simply list as many subroutines as you like in the call. The following code shows how you can use Parallel.Invoke in Visual Basic: Parallel.Invoke(AddressOf Task1, AddressOf Task2, AddressOf Task3) What could be easier than that? Actually, the following C# version is syntactically a little shorter: Parallel.Invoke(Task1, Task2, Task3); That's all you need to do! And you thought using multiple CPUs was going to be a big chore! Naturally there are a few details that you need to consider, such as: Many classes are not "thread-safe," so you cannot safely use their properties and methods from multiple threads. For now, ignore these issues and just admire the elegant simplicity of Parallel.Invoke. As long as the tasks mind their own business, it's amazingly easy. The Parallel.Invoke routine can also take an array of actions as a parameter. That's useful if you don't know at design time exactly which actions you might need to execute. You can fill an array with the tasks at run time and then execute them all. The following Visual Basic code demonstrates that approach: Dim actions() As Action = _ {AddressOf Task1, AddressOf Task2, AddressOf Task3} Parallel.Invoke(actions) Here's the C# version: System.Action[] actions = new System.Action[] { Task1, Task2, Task3 }; Parallel.Invoke(actions); The sample program InvokeTasks, which is available with the downloadable code for this article in Visual Basic and C# versions, uses Parallel.Invoke to run three tasks in parallel. In that program, the three tasks simply use Thread.Sleep to sleep for 1, 2, and 3 seconds, respectively. As expected, when the program calls these three routines sequentially (one after another), it takes six seconds to execute. It's a simple program, and if you have a multi-core machine, it provides a simple way to determine whether your code is using more than one core, because when the program uses Parallel.Invoke, it takes only three seconds on my dual-core system. While the application isn't particularly impressive, the results are. The program can access both my CPUs with very simple code. Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled. Your name/nickname Your email WebSite Subject (Maximum characters: 1200). You have 1200 characters left.
http://www.devx.com/dotnet/Article/39204/0/page/2
CC-MAIN-2015-48
refinedweb
1,188
65.12
2020/12/16 DataFrame API Preview now Available!Brian Hulette [@BrianHulette] & Robert Bradshaw We’re excited to announce that a preview of the Beam Python SDK’s new DataFrame API is now available in Beam 2.26.0. Much like SqlTransform (Java, Python), the DataFrame API gives Beam users a way to express complex relational logic much more concisely than previously possible. A more expressive API Beam’s new DataFrame API aims to be compatible with the well known Pandas DataFrame API, with a few caveats detailed below. With this new API a simple pipeline that reads NYC taxiride data from a CSV, performs a grouped aggregation, and writes the output to CSV, can be expressed very concisely: from apache_beam.dataframe.io import read_csv with beam.Pipeline() as p: df = p | read_csv("gs://apache-beam-samples/nyc_taxi/2019/*.csv", usecols=['passenger_count' , 'DOLocationID']) # Count the number of passengers dropped off per LocationID agg = df.groupby('DOLocationID').sum() agg.to_csv(output) Compare this to the same logic implemented as a conventional Beam python pipeline with a CombinePerKey: with beam.Pipeline() as p: (p | beam.io.ReadFromText("gs://apache-beam-samples/nyc_taxi/2019/*.csv", skip_header_lines=1) | beam.Map(lambda line: line.split(',')) # Parse CSV, create key - value pairs | beam.Map(lambda splits: (int(splits[8] or 0), # DOLocationID int(splits[3] or 0))) # passenger_count # Sum values per key | beam.CombinePerKey(sum) | beam.MapTuple(lambda loc_id, pc: f'{loc_id},{pc}') | beam.io.WriteToText(known_args.output)) The DataFrame example is much easier to quickly inspect and understand, as it allows you to concisely express grouped aggregations without using the low-level CombinePerKey. In addition to being more expressive, a pipeline written with the DataFrame API can often be more efficient than a conventional Beam pipeline. This is because the DataFrame API defers to the very efficient, columnar Pandas implementation as much as possible. DataFrames as a DSL You may already be aware of Beam SQL, which is a Domain-Specific Language (DSL) built with Beam’s Java SDK. SQL is considered a DSL because it’s possible to express a full pipeline, including IOs and complex operations, entirely with SQL. Similarly, the DataFrame API is a DSL built with the Python SDK. You can see that the above example is written without traditional Beam constructs like IOs, ParDo, or CombinePerKey. In fact the only traditional Beam type is the Pipeline instance! Otherwise this pipeline is written completely using the DataFrame API. This is possible because the DataFrame API doesn’t just implement Pandas’ computation operations, it also includes IOs based on the Pandas native implementations ( pd.read_{csv,parquet,...} and pd.DataFrame.to_{csv,parquet,...}). Like SQL, it’s also possible to embed the DataFrame API into a larger pipeline by using schemas. A schema-aware PCollection can be converted to a DataFrame, processed, and the result converted back to another schema-aware PCollection. For example, if you wanted to use traditional Beam IOs rather than one of the DataFrame IOs you could rewrite the above pipeline like this: from apache_beam.dataframe.convert import to_dataframe from apache_beam.dataframe.convert import to_pcollection with beam.Pipeline() as p: ... schema_pc = (p | beam.ReadFromText(..) # Use beam.Select to assign a schema | beam.Select(DOLocationID=lambda line: int(...), passenger_count=lambda line: int(...))) df = to_dataframe(schema_pc) agg = df.groupby('DOLocationID').sum() agg_pc = to_pcollection(pc) # agg_pc has a schema based on the structure of agg (agg_pc | beam.Map(lambda row: f'{row.DOLocationID},{row.passenger_count}') | beam.WriteToText(..))}') ... Caveats As hinted above, there are some differences between Beam’s DataFrame API and the Pandas API. The most significant difference is that the Beam DataFrame API is deferred, just like the rest of the Beam API. This means that you can’t print() a DataFrame instance in order to inspect the data, because we haven’t computed the data yet! The computation doesn’t take place until the pipeline is run(). Before that, we only know about the shape/schema of the result (i.e. the names and types of the columns), and not the result itself. There are a few common exceptions you will likely see when attempting to use certain Pandas operations: - NotImplementedError: Indicates this is an operation or argument that we haven’t had time to look at yet. We’ve tried to make as many Pandas operations as possible available in the Preview offering of this new API, but there’s still a long tail of operations to go. - WontImplementError: Indicates this is an operation or argument we do not intend to support in the near-term because it’s incompatible with the Beam model. The largest class of operations that raise this error are those that are order sensitive (e.g. shift, cummax, cummin, head, tail, etc..). These cannot be trivially mapped to Beam because PCollections, representing distributed datasets, are unordered. Note that even some of these operations may get implemented in the future - we actually have some ideas for how we might support order sensitive operations - but it’s a ways off. Finally, it’s important to note that this is a preview of a new feature that will get hardened over the next few Beam releases. We would love for you to try it out now and give us some feedback, but we do not yet recommend it for use in production workloads. How to get involved The easiest way to get involved with this effort is to try out DataFrames and let us know what you think! You can send questions to user@beam.apache.org, or file bug reports and feature requests in jira. In particular, it would be really helpful to know if there’s an operation we haven’t implemented yet that you’d find useful, so that we can prioritize it. If you’d like to learn more about how the DataFrame API works under the hood and get involved with the development we recommend you take a look at the design doc and our Beam summit presentation. From there the best way to help is to knock out some of those not implemented operations. We’re coordinating that work in BEAM-9547.
https://beam.apache.org/blog/dataframe-api-preview-available/
CC-MAIN-2021-31
refinedweb
1,021
56.15
How to Test Everything [This post also appears on Dustin's github blog]. I recently had a Membase user point out a sequence of operations that led to an undesirable state. I’ve got a lot of really good engine tests I’ve written, but not this case: The bug is pretty straightforward – expiry is lazy and it turns out I’m not checking for expiry in this case. It was pretty easy to write this test, but immediately made me think about what other cases weren’t being run. Now, I know there are countless tools out there to aid in testing. I’ve written another one. I probably spent an hour or so writing a framework to write and run all of the tests I needed. The difference between what I’m describing here and, for example, quick check is that I want something very simple to express actions that expect their environment to be in a particular state and will leave the environment in another state. Then I want to hit every possible arrangement of these actions to ensure they don’t interfer with each other in unexpected ways. This blows up very quickly – specifically the number of tests generated for a test sequence of n actions from a possible actions is approximately an. Consider three defined actions permuted into sequences of two. That blows out to nine possibilites as shown in the diagram on the right. The actions in the diagram are defined with memcached semantics on a single key, so add has a prerequisite that the item must not exist and del has a prerequisite that the item must exist. The generated test expects success at each white box, failure at each red box, and tracks the expected state mutations to build assertions. My first test… um, test ran with 11 actions in sequences of 4 actions. I have more actions to go, but 4 is a pretty good length, so the chart at the left is going to demonstrate my growth rate. The awesome part is that it pointed out the original bug quite easily and another couple of bugs with limited effort. How Do I Use This? The API is so far pretty simple and composable. There are basically five classes (three are shown in the image on the right). Condition A Condition is a simple callable that is used for preconditions and postconditions. A given class doesn’t care which one it’s used for, and in many cases will be used for both. For example, consider my implementation of DoesNotExist: def __call__(self, state): return TESTKEY not in state Effect An Effect changes our view of the state (and depending on the driver, may actually cause something in the world to change with it). For example, the StoreEffect works as follows: def __call__(self, state): state[TESTKEY] = '0' Action An Action brings an Effect and one or more Condition classes as pre and post conditions. For example, we’ll look at two actions, an Add action and a Set action: effect = StoreEffect() postconditions = [Exists()] class Add(Action): preconditions = [DoesNotExist()] effect = StoreEffect() postconditions = [Exists()] The interesting part of this is that Set and Add have different semantics, but are expressed as different compositions of the same conditions and effects. Driver Driver is kind of a larger part (seven defined methods!). It does enough that I can do anything from generate a C test suite for memcached engines all the way to actually executing tests across a remote protocol. I won’t describe the entire thing here since it’s documented in the source. I will however, close the loop by showing you some example code that it generated that demonstrated the error we failed to find in the first place: ENGINE_HANDLE_V1 *h1) { add(h, h1); assertHasNoError(); // value is "0" add(h, h1); assertHasError(); // value is "0" delay(expiry+<span class="mi">1); assertHasNoError(); // value is not defined add(h, h1); assertHasNoError(); // value is "0" checkValue(h, h1, "0"); return SUCCESS; } </span> That demonstrates how much information you know at each step of the way. From there, we can do all kinds of stuff with our stubs (delay above is implemented with the memcached testapp “time travel” feature, for example). From here, it’s less exciting. We provide constraints, it writes tests, and makes sure that there’s another area that it’s impossible for users to encounter something we haven’t seen before.
https://blog.couchbase.com/how-test-everything
CC-MAIN-2014-10
refinedweb
741
57.3
Hey everyone, I'm pretty new with Java and am preparing for a class I'm taking later this summer. I've been doing a lot of self study and have been attempting to get as far as I can in some Java books by myself. Right now I'm working on a carpet calculator problem and I'm a bit stuck. I'm attempting to create a RoomDimension class to set the area of a room, then use a method from that class to establish an argument in a RoomCarpet class which calculates the total cost of of carpeting for the room. public class RoomCarpet { private double carpetCost; private double roomSize; private RoomDimension yourSize; // This is what I'm having trouble with public RoomCarpet(double cost, double roomsize){ // I'm not sure if I should use double or a RoomDimension object as the parameter carpetCost = cost; yourSize = new RoomDimension(); // I'm not sure what to pass as arguments here roomSize = yourSize.getArea(); } ... and here's the RoomDimension class: public class RoomDimension { private double length; private double width; public RoomDimension(double len, double w){ length = len; width = w; } public RoomDimension(RoomDimension object2){ length = object2.length; width = object2.width; } public double getArea(){ return length * width; } public String toString(){ String str; str = "This rooms area is " + getArea(); return str; } } I'd appreciate any help I can get. Thanks!
http://www.javaprogrammingforums.com/object-oriented-programming/16264-trouble-using-aggregate-classes.html
CC-MAIN-2015-32
refinedweb
226
53.44
#include <stdio.h> #include <windows.h> #include "portpriv.h" #include "hyportpg.h" Determine if DLPAR (i.e. the ability to change number of CPUs and amount of memory dynamically) is enabled on this platform. Determine the maximum number of CPUs on this platform. Determine the character used to separate entries on the classpath. Determine the CPU architecture. See for good values to return. Query the operating system for environment variables. Obtain the value of the environment variable specified by envVar from the operating system and write out to supplied buffer. Determines an absolute pathname for the executable. Determine the number of CPUs on this platform. Determine the OS type. Determine version information from the operating system. Determine the size of the total physical memory in the system, in bytes. Determine the process ID of the calling process. Determine the collective processing capacity available to the VM in units of 1% of a physical processor. In environments without some kind of virtual partitioning, this will simply be the number of CPUs * 100. Query the operating system for the name of the user associate with the current thread. Obtain the value of the name of the user associated with the current thread, and then write it out into the buffer supplied by the user PortLibrary shutdown. This function is called during shutdown of the portLibrary. Any resources that were created by hysysinfo_startup should be destroyed here. PortLibrary startup. This function is called during startup of the portLibrary. Any resources that are required for the system information operations may be created here. All resources created here should be destroyed in hysysinfo_shutdown. Determine if the platform has weak memory consistency behaviour. Genereated on Tue Dec 9 14:12:59 2008 by Doxygen. (c) Copyright 2005, 2008 The Apache Software Foundation or its licensors, as applicable.
http://harmony.apache.org/externals/vm_doc/html/hysysinfo_8c.html
CC-MAIN-2013-48
refinedweb
301
52.76
Sublime Text is simple but powerful source code editor with a Python application programming interface. It natively supports many programming languages and markup languages, and its functionality can be extended by users with plugins, themes and packages, typically community-built and maintained under free-software licenses. It is Featureful: It is Stable: It is Mainained: It is Customizable: Sublime Text 2 is built from the ground up for being highly customizable. From the apparent simple configuration things like displaying line number, code folding etc to some deep software architecture with "vi mode" enabling and a huge list of user settings, it is what you make of it. The fact that the entire user preference is a plain text config file with comments rather than a deeply nested menu also appeals to a lot of programmers/designers. It is Innovative: Ctrl+) or the ability to navigate the command list by name ( Cmd+Shift+P) in the "command palette" or the multiple cursors as mentioned before are really innovative and gorking just one of these is enough for any user for a huge productivity boost. It is Cross Platform: It is Extendable by Plugins: | 2.0 | 8th July 2013 | | 3.0 Beta | 29th January 2013 | | 3.0 | 28th June 2013 | To customize Sublime Text (including themes, syntax highlighting etc) you must have package control installed. To install package control visit. Instead of following the above link, you can open the Sublime console to install it. The console is accessed via the ctrl+ shortcut or the View > Show Console menu. Once open, paste the following Python code for your version of Sublime Text into the console. import urllib.request,os,hashlib; h = 'df21e130d211cfc94d9b0905775a7c0f' + '1e3d39e33b79698005270310898eea76';) This code creates the Installed Packages folder for you, and then downloads the Package Control.sublime-package into it. The download will be done over HTTP instead of HTTPS due to Python standard library limitations, however the file will be validated using SHA-256. After this has finished, open Sublime Text and press shift-command-p on OSX or control-p on windows to open the package search function. Start typing in "Package Control" and select the Package Control: Install Package Once this has loaded, search through each package/theme and double click to install one. Once this has been installed, open the search function again ( shift-command-p on OSX or control-p on windows) and search for the package/theme you have just installed. Most packages come with an automatic activation, for example the Boxy theme shows Boxy Theme: Activation . Simply select this to install the theme. Your UI will now look different depending on the theme you picked. See the image below for an example of the Spacegray theme: To download Sublime Text, visit the download section on their website. There are various builds for different operating systems including: Once Sublime Text has been successfully downloaded, simply open the .dmg file to start the installation process. After completion of the installation process, navigate to the Applications folder and open Sublime Text. You have successfully installed the free version of Sublime Text! To obtain a licence, you must purchase Sublime Text by visiting this purchase link.
https://riptutorial.com/sublimetext
CC-MAIN-2019-22
refinedweb
529
53.71
FileInfo.OpenWrite Method () Creates a write-only FileStream. Assembly: mscorlib (in mscorlib.dll) Return ValueType: System.IO.FileStream A write-only unshared FileStream object for a new or existing file. The OpenWrite method opens a file if one already exists for the file path, or creates a new file if one does not exist. For an existing file, it does not append the new text to the existing text. Instead, it overwrites the existing characters with the new characters. If you overwrite a longer string (such as "This is a test of the OpenWrite method") with a shorter string (like "Second run"), the file will contain a mix of the strings ("Second runtest of the OpenWrite method"). The following example opens a file for writing and then reads from the file. using System; using System.IO; using System.Text; class Test { public static void Main() { string path = @"c:\Temp\MyTest.txt"; FileInfo fi = new FileInfo(path); // Open the stream for writing. using (FileStream fs = fi.OpenWrite()) { Byte[] info = new UTF8Encoding(true).GetBytes("This is to test the OpenWrite method."); // Add some information to the file. fs.Write(info, 0, info.Length); } // Open the stream and read it back. using (FileStream fs = fi.OpenRead()) { byte[] b = new byte[1024]; UTF8Encoding temp = new UTF8Encoding(true); while (fs.Read(b,0,b.Length) > 0) { Console.WriteLine(temp.GetString(b)); } } } } //This code produces output similar to the following; //results may vary based on the computer/file structure/etc.: // //This is to test the OpenWrite method. // // // // // // // // // // // for reading and writing files. Associated enumerations: FileIOPermissionAccess.Read, FileIOPermissionAccess.Write Available since 10 .NET Framework Available since 1.1 Silverlight Available since 2.0 Windows Phone Silverlight Available since 7.0
https://msdn.microsoft.com/en-us/library/system.io.fileinfo.openwrite.aspx
CC-MAIN-2017-09
refinedweb
283
71.61
I’m stretching the limits of what these software platforms were designed to to, but I’m impressed such a haphazard hacked-together code as this produces fast, functional results. The code below is the simplest case code I could create which would graph the audio spectrum of the microphone input (or a WAV file or other sound as it’s being played). There’s some smoothing involved (moving window down-sampling along the frequency axis and sequence averaging along the time axis) but the darn thing seems to keep up with realtime audio input at a good 30+ FPS on my modest maching. It should work on Windows and Linux. I chose not to go with matplotlib because I didn’t think it was fast enough for my needs in this one case (although I love it in every other way). Here’s what the code below looks like running: NOTE that this program was designed with the intent of recording the FFTs, therefore if the program “falls behind” the realtime input, it will buffer the sound on its own and try to catch up (accomplished by two layers of threading). In this way, *EVERY MOMENT* of audio is interpreted. If you’re just trying to create a spectrograph for simple purposes, have it only sample the audio when it needs to, rather than having it sample audio continuously. import pyaudio import scipy import struct import scipy.fftpack from Tkinter import * import threading import time, datetime import wckgraph import math #ADJUST THIS TO CHANGE SPEED/SIZE OF FFT bufferSize=2**11 #bufferSize=2**8 # ADJUST THIS TO CHANGE SPEED/SIZE OF FFT sampleRate=48100 #sampleRate=64000 p = pyaudio.PyAudio() chunks=[] ffts=[] def stream(): global chunks, inStream, bufferSize while True: chunks.append(inStream.read(bufferSize)) def record(): global w, inStream, p, bufferSize inStream = p.open(format=pyaudio.paInt16,channels=1,\ rate=sampleRate,input=True,frames_per_buffer=bufferSize) threading.Thread(target=stream).start() def downSample(fftx,ffty,degree=10): x,y=[],[] for i in range(len(ffty)/degree-1): x.append(fftx[i*degree+degree/2]) y.append(sum(ffty[i*degree:(i+1)*degree])/degree) return [x,y] def smoothWindow(fftx,ffty,degree=10): lx,ly=fftx[degree:-degree],[] for i in range(degree,len(ffty)-degree): ly.append(sum(ffty[i-degree:i+degree])) return [lx,ly] def smoothMemory(ffty,degree=3): global ffts ffts = ffts+[ffty] if len(ffts)< =degree: return ffty ffts=ffts[1:] return scipy.average(scipy.array(ffts),0) def detrend(fftx,ffty,degree=10): lx,ly=fftx[degree:-degree],[] for i in range(degree,len(ffty)-degree): ly.append(ffty[i]-sum(ffty[i-degree:i+degree])/(degree*2)) #ly.append(fft[i]-(ffty[i-degree]+ffty[i+degree])/2) return [lx,ly] def graph(): global chunks, bufferSize, fftx,ffty, w if len(chunks)>0: data = chunks.pop(0) data=scipy.array(struct.unpack("%dB"%(bufferSize*2),data)) #print "RECORDED",len(data)/float(sampleRate),"SEC" ffty=scipy.fftpack.fft(data) fftx=scipy.fftpack.rfftfreq(bufferSize*2, 1.0/sampleRate) fftx=fftx[0:len(fftx)/4] ffty=abs(ffty[0:len(ffty)/2])/1000 ffty1=ffty[:len(ffty)/2] ffty2=ffty[len(ffty)/2::]+2 ffty2=ffty2[::-1] ffty=ffty1+ffty2 ffty=scipy.log(ffty)-2 #fftx,ffty=downSample(fftx,ffty,5) #fftx,ffty=detrend(fftx,ffty,30) #fftx,ffty=smoothWindow(fftx,ffty,10) ffty=smoothMemory(ffty,3) #fftx,ffty=detrend(fftx,ffty,10) w.clear() #w.add(wckgraph.Axes(extent=(0, -1, fftx[-1], 3))) w.add(wckgraph.Axes(extent=(0, -1, 6000, 3))) w.add(wckgraph.LineGraph([fftx,ffty])) w.update() if len(chunks)>20: print "falling behind...",len(chunks) def go(x=None): global w,fftx,ffty print "STARTING!" threading.Thread(target=record).start() while True: graph() root = Tk() root.title("SPECTRUM ANALYZER") root.geometry('500x200') w = wckgraph.GraphWidget(root) w.pack(fill=BOTH, expand=1) go() mainloop() Tsuki April 15, 2010 at 2:54 PM (UTC -5) Link to this comment Heya would u midn helping me make a modification to this code so i find in at what frequence the graph is at the highest? Scott April 21, 2010 at 6:40 PM (UTC -5) Link to this comment 0.000 is the highest. bc May 15, 2010 at 7:24 AM (UTC -5) Link to this comment A few suggestions: -You can do all this with numpy, rather than scipy. Numpy is less of a dependancy. -Since your input data is real, an rfft (rather than fft) would be more appropriate (more efficient). -You can avoid all your global state if you use iterators. E.g. implement your smoothMemory function as a generator: it can then store it’s state as a local variable. for example (taken from the python deque documentation:) Peter Wang May 18, 2010 at 11:29 PM (UTC -5) Link to this comment Chaco () also has an example similar to this, except it also includes a spectrograph image: The code is at: The plots are interactive, i.e. can be panned and zoomed in realtime as they are updating. PK October 19, 2011 at 7:04 AM (UTC -5) Link to this comment i have a problem integrating wckgraph into python 2.6?? any suggestions? Jorge Orpinel December 17, 2011 at 4:01 AM (UTC -5) Link to this comment Hey, this code was very helpful for my final project in the Music Information Retrieval at NYU. Thanks! Anonymous August 24, 2012 at 4:50 AM (UTC -5) Link to this comment can i get this code in c++ Emdad April 7, 2013 at 2:05 AM (UTC -5) Link to this comment Why cant I find WCKGRAPH for python2.7 or later versions? achayan September 16, 2013 at 4:22 PM (UTC -5) Link to this comment Hi Scott, Is there any way we can change this to a mp3 file, like when playing a mp3 file I want to plot the frequency, is it possible ? Thanks abby November 21, 2013 at 4:23 AM (UTC -5) Link to this comment i want this code in c# Noha el kammah March 25, 2014 at 1:02 PM (UTC -5) Link to this comment Hey Scott, I was wondering, I have this EEG headset and It sends discrete input to my computer. I want to use your code to perform a spectrum analysis on this data, but I have no idea how to modify your code. Instead of audio, where can I insert my class function( that sends discrete data all the time) in your code and just let it do the rest? appreciate the help, Noha
http://www.swharden.com/blog/2010-03-05-realtime-fft-graph-of-audio-wav-file-or-microphone-input-with-python-scipy-and-wckgraph/
CC-MAIN-2014-41
refinedweb
1,101
61.97
x:Name Directive Uniquely identifies XAML-defined elements in a XAML namescope. XAML namescopes and their uniqueness models can be applied to the instantiated objects, when frameworks provide APIs or implement behaviors that access the XAML-created object graph at run time. XAML Attribute Usage <object x:Name="XAMLNameValue".../> XAML Values Remarks After x:Name is applied to a framework's backing programming model, the name is equivalent to the variable that holds an object reference or an instance as returned by a constructor. The value of an x:Name directive usage must be unique within a XAML namescope. By default when used by .NET Framework XAML Services API, the primary XAML namescope is defined at the XAML root element of a single XAML production, and encompasses the elements that are contained in that XAML production. Additional discrete XAML namescopes that might occur within a single XAML production can be defined by frameworks to address specific scenarios. For example, in WPF, new XAML namescopes are defined and created by any template that is also defined on that XAML production. For more information about XAML namescopes (written for WPF but relevant for many XAML namescope concepts), see WPF XAML Namescopes. In general, x:Name should not be applied in situations that also use x:Key. XAML implementations by specific existing frameworks have introduced substitution concepts between x:Key and x:Name, but that is not a recommended practice. .NET Framework XAML Services does not support such substitution concepts when handling name/key information such as INameScope or DictionaryKeyPropertyAttribute. Rules for permittance of x:Name as well as the name uniqueness enforcement are potentially defined by specific implementing frameworks. However, to be usable with .NET Framework XAML Services, the framework definitions of XAML namescope uniqueness should be consistent with the definition of INameScope information in this documentation, and should use the same rules regarding where the information is applied. For example, the Windows Presentation Foundation (WPF) implementation divides various markup elements into separate NameScope ranges, such as resource dictionaries, the logical tree created by the page-level XAML, templates, and other deferred content, and then enforces XAML name uniqueness within each of those XAML namescopes. For custom types that use .NET Framework XAML Services XAML object writers, a property that maps to x:Name on a type can be established or changed. You define this behavior by referencing the name of the property to map with the RuntimeNamePropertyAttribute in the type definition code. RuntimeNamePropertyAttribute is a type-level attribute. Using.NET Framework XAML Services, the backing logic for XAML namescope support can be defined in a framework-neutral way by implementing the INameScope interface. WPF Usage Notes Under the standard build configuration for a WPF application that uses XAML, partial classes, and code-behind, the specified x:Name becomes the name of a field that is created in the underlying code when XAML is processed by a markup compilation build task, and that field holds a reference to the object. By default, the created field is internal. You can change field access by specifying the x:FieldModifier attribute. In WPF and Silverlight, the sequence is that the markup compile defines and names the field in a partial class, but the value is initially empty. Then, a generated method named InitializeComponent is called from within the class constructor. InitializeComponent consists of FindName calls using each of the x:Name values that exist in the XAML-defined part of the partial class as input strings. The return values are then assigned to the like-named field reference to fill the field values with objects that were created from XAML parsing. The execution of InitializeComponent make it possible to reference the run time object graph using the x:Name / field name directly, rather than having to call FindName explicitly any time you need a reference to a XAML-defined object. For a WPF application that uses the Microsoft Visual Basic targets and includes XAML files with Page build action, a separate reference property is created during compilation that adds the WithEvents keyword to all elements that have an x:Name, to support Handles syntax for event handler delegates. This property is always public. For more information, see Visual Basic and WPF Event Handling. x:Name is used by the WPF XAML processor to register a name into a XAML namescope at load time, even for cases where the page is not markup-compiled by build actions (for example, loose XAML of a resource dictionary). One reason for this behavior is because the x:Name is potentially needed for ElementName binding. For details, see Data Binding Overview. As mentioned previously, x:Name (or Name) should not be applied in situations that also use x:Key. The WPF ResourceDictionary has a special behavior of defining itself as a XAML namescope but returning Not Implemented or null values for INameScope APIs as a way to enforce this behavior. If the WPF XAML parser encounters Name or x:Name in a XAML-defined ResourceDictionary, the name is not added to any XAML namescope. Attempting to find that name from any XAML namescope and the FindName methods will not return valid results. x:Name and Name Many WPF application scenarios can avoid any use of the x:Name attribute, because the Name dependency property as specified in the default XAML namespace for several of the important base classes such as FrameworkElement and FrameworkContentElement satisfies this same purpose. There are still some common XAML and WPF scenarios where code access to an element with no Name property at the framework level is important. For example, certain animation and storyboard support classes do not support a Name property, but they often need to be referenced in code in order to control the animation. You should specify x:Name as an attribute on timelines and transforms that are created in XAML, if you intend to reference them from code later. If Name is available as a property on the class, Name and x:Name can be used interchangeably as attributes, but a parse exception will result if both are specified on the same element. If the XAML is markup compiled, the exception will occur on the markup compile, otherwise it occurs on load. Name can be set using XAML attribute syntax, and in code using SetValue; note however that setting the Name property in code does not create the representative field reference within the XAML namescope in most circumstances where the XAML is already loaded. Instead of attempting to set Name in code, use NameScope methods from code, against the appropriate namescope. Name can also be set using property element syntax with inner text, but that is uncommon. In contrast, x:Name cannot be set in XAML property element syntax, or in code using SetValue; it can only be set using attribute syntax on objects because it is a directive. Silverlight Usage Notes x:Name for Silverlight is documented separately. For more information, see XAML Namespace (x:) Language Features (Silverlight). See Also FrameworkElement.Name FrameworkContentElement.Name Trees in WPF
https://docs.microsoft.com/en-us/dotnet/framework/xaml-services/x-name-directive
CC-MAIN-2018-05
refinedweb
1,176
50.16
Track ---- New Paradigms? Object-Oriented Code? No problem: let's do Drupal 8! Object Oriented Programming (Course 1)Login or register to track your progress! OOP (Course 2): Services, Dependency Injection and ContainersLogin or register to track your progress! OOP (course 3): Inheritance, Abstract Classes, Interfaces and other amazing thingsLogin or register to track your progress! PHP Namespaces in Under 5 MinutesLogin or register to track your progress! Dependency Injection and the art of services and containersLogin or register to track your progress! Drupal 8: Under the HoodLogin or register to track your progress! Twig Templating for Friendly Frontend DevsLogin or register to track your progress! Prerequisites After this track, what will my level be? Track Summary Coming from a Symfony background, Drupal 8 looks like a developers playground. The code you write is less magic, you can override anything, and the debugging tools are amazing. And to make things sweeter, all the skills you'll need to master Drupal 8 - OO, namespaces, services, etc - are global skills that will make you more dangerous in anything you use. The goal of this track is simple: prepare you for the new object-oriented paradigm and keep going until you've positively mastered the nuts and bolts behind how Drupal 8 actually works. The Plan 1) OO Prerequisites If you're getting your feet wet with OO code, PHP namespaces or dependency injection, start here. By the time you get to the Drupal 8-specific stuff, you will fly! 2) Dive into D8 Now we get to the good stuff. Not how to use Drupal 8, but deeper: how D8 works. And for the themer in your life, let them learn Twig and code along with the coding challenges. And don't forget the extra credit!
https://symfonycasts.com/tracks/drupal
CC-MAIN-2022-33
refinedweb
294
65.42
Okay,! Do nothing Make no accomodation for this particular buggy protocol implementation. People who are running that particular implementation will get incomplete directory listings. Publish a Knowledge Base article describing the problem and directing customers to contact the vendor for an updated driver. Advantages: - Operating system remains "pure", unsullied by compatibility hacks. Disadvantages: - Customers with this problem may not even realize that they have it. - Even if customers notice something wrong, they won't necessarily know to search for the vendor's name (as opposed to the distributor's name) in the Knowledge Base to see if there are any known interoperability problems with it. - And even if the customer finds the Knowledge Base article, they will have to bypass their distributor and get the driver directly from the vendor. This may invalidate their support contract with the distributor. - If the file server software is running on network attached storage, the user likely doesn't even know what driver is running inside the sealed plastic case. Upgrading the server software will have to wait for the distributor to issue a firmware upgrade. Until then, the user will experience temporary data loss. (Those files beyond the first hundred are invisible.) - If the customer does not own the file server, the best they can do is ask the file server's administrator to upgrade their driver and hope the administrator agrees to do so. - Since Windows XP didn't use fast queries, it didn't have this problem. Users will interpret it as a bug in Windows Vista. Auto-detect the buggy driver and put up a warning dialog.) Advantages: - Users are told why they are getting incomplete results. Disadvantages: - There's not much the user can do about the incomplete results. It looks like a "Ha ha, you lose" dialog. - Users often don't know who the administrators of a file server are, so telling them to contact the administrator merely leads to a frustrated, "And who is that, huh?", or even worse, "That's me! And I have no idea what this dialog box is telling me to do." (Consider the network attached storage device.) - The administrator of that machine might have his/her reasons for not upgrading the driver (for example, because it voids the support contract), but they will keep getting pestered by users thanks to this new dialog. - Since Windows XP didn't use fast queries, it didn't have this problem. Users will interpret it as a bug in Windows Vista. Auto-detect the buggy driver and work around it next time.) Advantages: - Windows auto-detects the problem and works around it. Disadvantages: - The first directory listing of a large directory from a buggy server will be incorrect. If that first directory listing is for something that has a long lifetime (for example, Explorer's folder tree), then the incorrect data will persist for a long time. - If you regularly visit more than 16 (say) buggy servers, then when you visit the seventeenth, the first one falls out of the cache and will return incorrect data the first time you visit a large directory. - May also have to develop and test a mechanism so that network administrators can deploy a "known bad list" of servers to all the computers on their network. In this way, servers on the "known bad list" won't have the "first directory listing is bad" problem. - Since Windows XP didn't use fast queries, it didn't have this problem. Users will interpret it as a bug in Windows Vista. Have a configuration setting to put the network client into "slow mode". Advantages: - With the setting set to "slow mode", you never get any incomplete directory listings. Disadvantages: - Since the detection is not automatic, you have many of the same problems as "Do nothing". Customers have to know that they have a problem and know what to search for before they can find the configuration setting in the Knowledge Base. Until then, the behavior looks like a bug in Windows Vista. - This punishes file servers that are not buggy by making them use slow queries even though they support fast queries. Have a configuration setting to put Explorer into "slow mode". Advantages: - With the setting set to "slow mode", you never get any incomplete directory listings. Disadvantages: - Every program that uses fast queries must have their own setting for disabling fast queries and running in "slow mode". - Plus all the same disadvantages as putting the setting in the network client. Disable "fast mode" by default Stop supporting "fast mode" in the network client since it is unreliable; there are some servers that don't handle "fast mode" correctly. This forces all programs to use "slow mode". Optionally, have a configuration setting to re-enable "fast mode". Advantages: - All directory listings are complete. Everything just works. Disadvantages: - The "fast mode" feature may as well never have been created: It's off by default and nobody will bother turning it on since everything works "well enough". - People will accuse Microsoft of unfair business practices since the client will run in "slow mode" even if the server says it supports "fast mode". "Obviously, Microsoft did this in order to boost sales of its competing product which doesn't have this artificial and gratuitous speed limiter." Something else Be creative. Make sure to list both advantages and disadvantages of your proposal. something to add to tweakui :) Disable "fast mode" by default Since Vista is a new product, the last thing it needs is to arrive with "problems". Let people turn fast mode on and get and error message if the server has a problem. That way, the user is in control. Couldn’t you auto detect the problem and refresh the list using slow mode? This way you don’t have to keep a list. The downside would be that the file list would be retrieved even slower than slow mode. Automatically fix results by rerunning query in slow mode for explorer. Log an event in the system or application event log once per whatever indicating the problem and linking to the knowledgebase article. Advantages: Everything works Administrators who monitor event logs will become aware of the problem and can fix if desired. Similar behaviour to windows XP. Disadvantages: Difficult to implement? Hack there just for compatibility Can’t "Auto-detect the buggy driver and work around it next time" be modified such that when Explorer detects the error code it immediatly asks for the directory listing using the slow mode again, therefore avoiding the display of any truncated list ever? Something like: string[] GetDirList() { bool weirdErrorHappened = false; string[] fileList = GetFileListUsingFastMode(out weirErrorHappened); if(weirdErrorHappened) { fileList = GetFileListUsingSlowMode(); } return fileList; } have a filter in the network stack that can overide slow/fast for all apps! Ok, I guess you need to let us know, at what point you get the error code that allows you to detect the bad server. I take it now that this only happens AFTER some app might have received the first parts of the file list during enumeration? Perhaps this sounds a little harsh, but i’m a fan of the Do Nothing method. I personally feel (and feel free to tell me I’m wrong) like its problems like these that lead to the windows codebase being so hard to maintain for backwards compatibility. Where do these "hacks" to the windows code stop? Why isn’t the responsibility put on the 3rd party vendors to make their software work properly? I think the fact that you’ve recognized the problem, found out that there IS a solution (the update from the vendor) and could put out a KB article describing the problem and solution in detail is PLENTY for Microsoft to be responsible for. And in fact, I’d argue its probably a lot more than other vedors might do… David, Will, Scott: David’s suspicion is correct. The shell has already returned a partial file list to its caller by the time the error is detected. An application calls IShellFolder::EnumObjects and the shell issues a fast query. Each time the application calls IEnumIDList::Next, the next result is returned. After returning about 100 items, oops, it turns out that the server is one of those bad servers whose fast query is broken. Now what? It can’t go back in time and “un-return” all those items that it had returned up until now in response to IEnumIDList::Next… Sorry I didn’t explain why you can’t “refresh”. I didn’t think it was important. Considering Vista apparently won’t ship this decade, why are you worrying about it? If you must create bugs in your own code, do what David suggests, otherwise tell the vendor to fix their distribution. Of course, instead of putting the message in a message box you could always put it in a log which would be monitored by sysadmins. Nah… Also, I don’t like any of the options that show specific UI for that to the user (a la "contact your admin"). But what would be nice is something in the event log. So that at least in well managed environments admins would get tipped off (without involvement of the user). Why not contact the vendor if you know they have a problem? Depends on what exactly the other server is. If it’s Samba, for instance, then report it as a bug with Samba (or fix it yourself and give them the code). Users will eventually upgrade, and the problem will eventually go away. But this is only an option if the server is open-source. If the server is not open-source, then there’s really nothing you can do but maintain the back-compat hack database that none of the commenters here seem to like. ;-) As for "an update to the server software was released that claims to fix the bug": Does it actually fix the bug? Presumably you have a testcase that shows the issue; have your testers tried applying the update to see if the problem is still there? If it does work, then whether or not people have applied the update right now, they will eventually. And once they do, the back-compat hack is useless. (And an open-source program like Samba will get the fixed version rolled out to more places more quickly, as well — not least because the average Linux distro seems to update packages much more frequently than the average closed-source OS. For instance, Debian updates (on average) 5-6 packages every day: they will eventually pick up this fix, if it works as advertised. Users who have auto-update mechanisms in place (which is most of them) will then get it installed. Yes, NAS boxes are still an issue — maybe contacting the known vendors of NAS boxes using this server would be an option. Not sure on that though.) If the vendor is in Europe, refer it to the European Commission for advice. The first question that must be answered is, just how much difference is there between fast mode and slow mode? If it’s actually worth having fast mode, then create a new return code from the find file API for "server gave up and needs you to start again". The network driver need then only remember the error occurred for that session (and mark the session as "slow"), and Explorer will know to requery the directory. Maybe log to the Event Log at the same time, and maybe have a mechanism to globally force the driver to slow mode. Advantages: It uses fast mode if it can It uses slow mode if it must It doesn’t need any persistent memory Disadvantages Needs adding a new error code which not all apps will deal with; "legacy" apps which don’t know they need to resubmit will just be left with a broken find, especially as they probably don’t check error codes properly Will slow down the first directory listing of a broken server DO NOTHING!!! Changing your projduct to fixed someone elses problem? You are just asking to open another bug or security hole. AND… Why not work with the offending vendor and offer to pay them(for their time) to fix there product? (and if it’s a problem with the client-side driver, simply refuse to let the user load it, saying it’s unsigned or incompatible or some other similar complaint) Scrap ‘fast’ mode and add an autodetectable ‘faster’ mode and fallback to ‘slow’ mode when it isn’t available. Ok, so clients call IEnumIDList::Next numerious times to fetch all the names of the files. During one of those calls you guys realise "damn, we can’t list the complete dir, because we got the strange error back". Isn’t that just VERY similiar to say the network connection being dropped in the middle of the enumeration? So, why not return some error from IEnumIDList::Next that would have the meaning "Sorry, can’t retrieve all items, due to network problem". Then, if an app wants to try again, you could use your known bad server list to use slow mode the next time. Tremendous hack: Do the fast query, remember the files returned (up to 100, or the maximum number of files that a bugged fast query can return). If it fails, do the slow one, and don’t pass the files already reported. If the server is known to work well, don’t remember the returned files anymore for that server. If the serves is known not to work, use slow mode always for that server. So Do nothing BUT engage the vendor to a get real solution! Example of failure: Remember the issue with Write Back Cacheing and Exchange 5.5? Compaq and Microsoft could not agree on a standard on how to report the way memory was written to the disk from cache on the controller card. Exchange kept expecting to find the data in the last place it wrote the data (the cache) but the data had already gone to disk. If Microsoft had engaged Compaq just a little more this could have been fixed by Compaq. Microsoft needs an escalation process to work with the outside Vendors to fix their issues. I second the motion on the modified "work around it next time" scenario that David proposed in the fifth comment. Hacking just explorer would be my recommended option; leave the poor network driver unsullied. First we’ll need a new option in explorer which has three states: 1. Use slow queries 2. Use fast queries 3. Not set yet (this is the default) Dropdown only needs 2 entries and should be blank if has not been set yet. If the option has not been set yet explorer auto detects a truncated listing and automatically changes the option to use slow mode and re-issues the query in slow mode. All queries from now on will be done in slow mode. If the users change this option to using fast mode then they will see truncated results on buggy servers and a dialog box warning them of such. The dialog box is acceptable in this case since the user had to manually turn on fast queries (if the server claims to support them of course). Explorer can now note this as a bad server and retain X number of bad servers where it should use slow queries instead of bad ones. Online help for this option should explain why this option may have been set to use slow queries by the system. Disclaimer: This is all theoretical. I don’t know if it’s possible for explorer to re-issue a query without causing issues internally which would invalidate part of my solution. Idea 1:. Idea 2: If previous is not possible (why? most of the servers identify their version, or their version can be easily recognized) then even if you have some "get first/get next" you can still "revert to slow" after the error. There’s a chance that the "list" changed, but if the first 100 entries still exist, you can still deliver the whole list without the error probably much more often than in any other scenario. In my opinion, it also depends on how common the particular server is. If it’s going to cause a problem for say (abitrary low ‘insignificant’ number goes here) of Vista users it’s different than something that affects half your users. I’d also vote for "Do nothing exceptt issue a KB" and try to persuade the company/organization to issue their equivalent of a KB to let their users know they need the fix. Less code paths, less testing, more time available to test your own product rather than fix a problem that isn’t your fault. Other than that I would go for "Auto-detect the buggy driver and work around it next time" (+ logging in the event log) except there should be some kind of feedback for the user that lets them know the current operation failed. Raymond: what keeps IEnumIDList::Next from returning say E_FAIL which causes Explorer to clear the items it received so far and pop up a dialog box saying the operation failed? No results are better than inaccurate ones. Great real-world question, Raymond. The fix / workaround is going to depend pretty highly on the technical details of how things work under the covers. My two initial ideas (both mentioned by others above) were: 1) Before attempting a Fast Query, poll the server OS to see if it falls in a list of versions with this known issue. If it does, use Slow Query instead. Maybe store this on a temporary "good" and "bad" list so later calls to a known server. Advantages: It just works Disadvantages: Is this even technically possible? Is there a way to identify this system besides actually getting the error itself? Does this extra call make the Fast Query call SLOWER than just using a Slow Query to begin with? 2) If you get the "weird error" using a fast query, retry immediately with a slow query. Remember that server on a "slow list" so retries use the slow query from now on. Advantages: It just works Disadvantages: Is this even possible? From your response to the comments above, it sounds like this doesn’t really work: that an application doesn’t do a "getFolderContents" (or whatever) call, but instead does a recursive call to FindNextFile. So any fix along this line would really have to be on the application level. You could fix Windows Explorer, of course, but you have no control over the implementation details of all the millions of other apps out there. What happens if you do a slow query FindNextFile immediately after receiving an error? Does it "reset" and give you the first file in the directory again? If it does, then you’re screwed. If it really does give you the correct next file, then idea #2 above should work. 3) Put the above fix into Windows Explorer, and let FindNextFile calls by other applications always use the slow query. Advantages: Makes Explorer faster than competitors It just works Disadvantages: Heh. Don’t let anyone see you do this! :) Actually, that gives me another idea: 4) Always do a slow query when FindNextFile is called. Make a new API called FindNextFileFast that does the same thing in Fast Query mode, BUT also returns a new error code if this particular problem (or some other similar one)is encountered. Retool the Explorer application with the proper logic to use the fast version when possible, but fail over to the slow version if the error is encountered. Let the application worry about issues like keeping a list of bad servers, etc. Advantages: Pushes the logic up a layer to the application Old apps will just work the way they did in XP (using slow query mode) Newer apps can be tooled with the proper logic to failover correctly, and will work faster. Disadvantages: Probably the most difficult technically Requires rewriting some deep, important parts of Explorer The Windows API for Vista is probably already locked, this may not be a valid option. Please keep us posted on this one, Raymond!!! Me personally, I like "Have a configuration setting to put the network client into ‘slow mode.’" The idea of auto-detecting this error makes me cringe. What if you auto-detect it wrong, then what happens? What if in the future the error code you’re looking for ends up being a valid code, now your "workaround" may cause problems. You could go with "Disable ‘fast mode’ by default" but I don’t like the idea of dumbing it down to the lowest common denominator just because a minority have a problem. I’d rather see the minority use a workaround rather than the majority use a workaround to enable the full product capabilities. "The shell has already returned a partial file list to its caller by the time the error is detected. An application calls IShellFolder::EnumObjects and the shell issues a fast query." Why not, if the error that the server returns is obvious enough, then issue the slow query and don’t return the items that have already been sent to the app? Raymond, is it possible to modify IEnumIDList::Next? I know you shouldn’t handle such a problem on that level, but you make the method behave normal until you get the error after file #100. Instead of returning the error, you’d recreate the file list using IShellFolder::EnumObjects (but in slow mode), run ::Next a hundred times and return item #101. Advantages – Works on all clients – Doesn’t bother the user with bugs/contact some administrator – Doens’t cause problems on other file servers, so they can take full advantage of the new fast query. Disadvantages – Ugly hack on the "wrong" level in the API – Will probably cause a performance hit when querying the problem servers, since the first 100 files will be queried twice (in fast and slow mode). I don’t think the last disadvantage is such a problem: most people probably won’t notice, and you can still post a mskb article about the correct drivers to "improve performance". Bad performance is still better than missing files.. The "Do nothing" approach would make a lot of favour to the world and MSFT itself, especially on the long run. Let those vendors and developers fix their own stuff. If this cannot work, I’d choose this: "Have a configuration setting to put the network client into slow mode" + Put an Event Log entry from Explorer if the bug is detected. And finally: Document your protocols, so that 3rd parties could possibly avoid these kind of bugs in the first place. I personally am a fan of the "Auto-detect the buggy driver and put up a warning dialog" method although I would suggest also creating an event log message each time the error occurs. Obviously all the alternatives suck, otherwise you would have picked one and not asked! I can’t see why there’s two different "speed" queries in the first place. Is the "fast" query really that much faster? Why? What does it short-circuit? Does the "slow" query just do "FOR I IN 1 TO 100; NEXT I" after each disk read? :) Seriously though, what sort of performance hit are we seeing here? I guess it must be significant, otherwise you’d just not use fast queries and be done with it. It seem nothing really uses the fast query anyway, since the issue is showing up so late in the game. I guess the best action would be "disable fast mode by default" since it’s just plain buggy and nobody seems to care to beat enough ass to get it fixed… otherwise you’d pop up a window saying "get yer buggy drivers fixed" Would MS ever consider moving towards a ‘certified works on Vista’ model? Only drivers/code/apps/components etc that pass MS tests get the certification. This buggy driver would obviously not get certified. Perhaps if an uncertified component was installed, Vista could display that info to the user in a obvious but unobtrusive manner. Getting back to this specific instance, I would think most users would favor correctness over performance. A complete directory listing is desireable over an incomplete listing. An error message should not be presented to the user. Windows should handle the problem transparently so that users get what they expect in terms of the data – forget the speed issue. If speed is a problem for the user, they’ll find the kb article detailing the issue and apply the appropriate patch or force Windows to override the compatability fix. Just remember, those missing files could represent mission critical data where limited testing did not test with more than the threshold number of files. So, why can’t Windows internally issue a new slow request and merge the slow results with the incomplete results when it receives the fast error message – the app waiting for the next enumeration may stall for a short while as the new results are returned and merged (but only after file c. 100). At this particular point, all files identified in the internal slow request as not appearing in the incomplete fast list are appened to the, as yet unterminated, original query. FindFirst… (1) FindNext… (2) FindNext… (3) .. .. FindNext… [internally, error returned from server to Windows] (don’t tell the caller – issue an internal slow request now) .. [internal (got that one)] [internal (got that one)] [internal (a new one)] FindNext(101) [internal (got that one)] FindNext(102) [internal (got that one)] .. .. [internal (a new one)] FindNext(n) [internal (no more files)] FindNext (no more files) Benefit. XP users are used to slow network listings and can continue to work without issue when using Vista. Scheme seems to be backwards compatible for all applications using the internally-patched FindFirst.. FindNext… APIs. Users don’t appear to ‘mislay’ data. Microsoft don’t get reports of ‘Vista can’t see my files’. Downside. Vista is not as fast as the developers know it could be. Competing OSes that don’t care about the end user will win in directory listing benchmarks. I would do exactly what Andy suggests. Admitedly it is an ugly hack, but it’s already in an error handling routine so that’s not that big an issue. In general ugliness is acceptable when you are already in the "Something bad happened" state. The requery might have to be a little smarter than just "loop 100", since the returned lista may have chnaged between the two calls. This is really the best fix, as it allows fast mode to be enabled, and allows it for others. It also provides a nice "forward compaitibility" so that when the file server is fixed things start working fine. Am I the only one who is breaking into a cold sweat reading the majority of these suggestions? ‘ Have a configuration setting to put the network client into "slow mode" ‘ is the right thing to do! When we upgraded to XP Sp1 (hope I remember correctly) our Samba server made problems: Clients could not join the domain anymore. We had to prepare the image with the "RequireSignOrSeal" registry key. Overall this was not a big problem. Especially consider that people will roll out Vista (while the old NAS still is there) and can prepare to include that fix in it! Auto-detect the buggy driver version and put-up a warning dialog with the option to re-run the query in slow-mode and the ability to always do that in this particular scenario ("Always perform this action/Don’t show again" setting). Default to fast mode. Note though that the average user won’t know what this means anyway. Just out of curiosity, why did Windows XP always use the slow mode? If Vista works around this problem transparently, then how could you force the buggy driver developer to fix the driver? What is the point of having this fast query if it can’t be used? The only right solution is to tell the developer and not change your code. The developer then has several months to address this with their customer base. If they don’t do a satisfactory job by the time Vista ships, also create a knowledgebase article saying they are broken and point to them for the solution. Any other route will make things a lot worse. Any form of workaround will end up with you deciding to go in slow mode even after they have fixed the problem. At that point it won’t be possible to tell if Microsoft is using the slow mode as an anti-competitive practise or for "compatibility". I would do a combination of ‘Have a configuration setting to put the network client into "slow mode"’ and ‘Auto-detect the buggy driver and put up a warning dialog’ The dialog should say something like this: The server you are accessing only returns the first 100 files when using fast queries. Do you want to turn fast queries off? Y/N. 2. Make the mode (fast/slow) user changeable and if after the first pull the data is incomplete force the mode to slow and inform the user of the change and the reason -> link to article Detect the buggy driver, put up a warning dialog asking them to ask the administrator to upgrade the driver. Give the user the option to ignore the error and continue with buggy behavior, or retry in degraded mode. Explain what degraded mode is. RECORD THE ISSUE TO THE EVENT LOG, ALSO REPORT WHAT THE USER CHOOSE, AND RECORD HOW TO UNDO THE CHOICE. The first time you get the weird error code for a particular server, let the user choose what to do: either try again (once) in Slow Mode, turn off Fast Mode for this server (i.e. add it to the local Slow Mode cache), turn off Fast Mode for all servers, or Cancel (i.e. live with the reduced list, and ignore any redirects that may allow a DoS attack). I’m going to take the not-so-wild guess that we are talking about Samba here. The Samba folks themselfves are cool and are always interested in fixing problems, and it sounds like they have. However vendors that sound like ‘DeadRat’ aren’t so cool and aren’t so interested in fixing problems. Given how the DeadRat (and friends) community works socially, I’ll be reading on /. how this was a MSFT conspiracy to break compatibility with everybody with DeadRat installed. Also the number of embedded systems with Samba installed is scary, some of which have very old versions of Samba that the Samba team would probably rather went away as well. Perhaps some sort of additional attribute could be added to the server code. If this attribute is present that say identifies the server as W2K3ServerR2SP1 use the fast query mode, if not, act like XP. Someday the Samba folks will implement this new attribute as well and then all ‘distributors’ of Samba when they pick upp the fixed Samba bits will automatically start using the fast query mode. (joke) Exploit the bug and make the server crash. Then only 100 items will have appeared because the server crashed :-) Seriously, what I’d expect Windows to do is to detect the error, repeat the query in slow mode, fast forward the 100-items-or-so behind the scenes, and replace the iterator under the program’s nose :-) (Sorry, haven’t read all the comments above.) Make the network client, not Explorer, automatically re-query dir list in slow mode when the problem is detected. This should be 100% transparent to every program including Explorer itself. AND Store last "16" (I would suggest more) buggy servers not to issue the first fast query at all, to make it all work as fast as you can. Advantages: – everything just works – it works faster, that XP – user is not asked questions he has no idea about Disadvantages: – buggy servers, if "17" or more, work even slower then they could if slow mode would have been always enabled (hey, but it’s a buggy server, but it still works – cool!) – you need to use couple of bytes of windows registry to store the "16"-server list As far as I can see, advantages beat out the disadvantages completely. Oh, and I’m not sure about implementation issues regarding my solution, just have no idea :-) How would the HTTP redirect DOS attack work? HTTP doesn’t have a "fast mode", so being redirected through a billion web pages can’t cause this bug to be tickled. It’s a different protocol to the one that we’re talking about here, isn’t it?? Besides, if an app/the HTTP stack doesn’t have an HTTP redirect limit that stops you being redirected a billion times anyway, you’ve already got a DOS vulnerability on your hands. My vote: Explorer should recognize the strange error code and display an error message to the user saying, "The server \servername returned an unexpected error. Contact the administrator of \servername." Then, of course, have a KB with THAT EXACT ERROR TEXT in it that people could use to find the root cause of the problem and get the fix. Yeah, let me underscore that regardless of which camp wins (user’s blissful ignorance camp vs. punish for bad drivers camp), full logging of this condition is key. It would be of enormous help to the admin who is going to get a vague enough problem report from the user one day if they check the event log and it has useful information about the condition. My choice would be to auto-detect, then pop up an error dialogue explaining there’s a bug in the server. In the same dialogue, propose to work around it by switching a global setting to "slow mode". Advantages: – Simple to implement. – Doesn’t hide the bug, and clearly assigns responsability. – Provides a practical workaround when the problem appears. Disadvantages: – A one-time nuisance dialogue. – Doesn’t allow for the use of fast mode with good servers, if the user has met just one bad one. – Users will tend to forget the setting is changed and never use fast mode again, even if the server is fixed. About those clever schemes people are suggesting, where you try fast mode first, then automagically fall back on slow mode? Well, it just ain’t worth the complexity. Remember that such workarounds tend to stick around longer than the original bug. Solution below is way too complicated for reality but maybe it will inspire something else. Default to slow mode to maintain accuracy of the Directory Listing. That 101st file that is inaccessible is unacceptable! If an error dialog and Event Logging are used as well as disabling Fast mode per Network drive maybe the default can be fast mode. (Probably usability test the difference.) If fast mode is the default the user is more likely to see this situation. Then allow the user to configure the setting similar to "Optimize for Removal" that is available for USB drives. This can be available on the "General" Tab of network drive Properties. The wording would be similar to the text used on the "Troubleshoot" tab of Advanced Display Properties. "Disabled all accelerations. Use this setting only if your computer (…) has (…) severe problems". "All accelerations are enabled. Use this setting if your computer has no problems. (…)" Otherwise maybe another feature in TweakUI is more appropriate. The first time the error value is received then fast mode is disabled with a message written to the Eventlog. The user should also get an "OK" style dialog because that 101st file may not be visible. Explorer should then Refresh the directory list using slow mode. Disadvantage: This really is a full compatibility hack. Multiple Code paths complicate testing. If a Drive letter gets assigned to a different drive. Advantage: Data integrity. User confidence. They will be protected as much as possible from data loss. Low annoyance. (Especially defaulting to slow mode.) Don’t write to the eventlog for every occurance because the option is disabled automatically. Raymond, can you please explain a little bit more what this fast mode is? Can I as developer also use it? Is there anything magical about the Win32 FindFirst, findnext things? Or the .NET Directory.GetFiles? I would have to agree with the comments suggesting/agreeing-with "Do nothing". It’s noble to want to ensure a reliable user experience; but, when you’re dealing with third-party software, at what point do you "draw the line". By adding something to compensate for buggy third-party software not only do you increase the code base with code that (hopefully) will eventually be entirely unrequired, you also bloat the code base with code that not all customers will need. You’re also setting a policing precedent suggesting that Explorer will work with all drivers even if they are buggy. Do you really want that weight on your shoulders? What about third-party software bugs you can’t compensate for? Should a customer be expected to understand that you tweaked explorer for one bug and not another? I would suggest the "Do nothing" approach, continuing to detect and inform the user of errors. If a user encounters the error they can find a KB article that describes the error and what to do about it. Then, have the ability to force explorer into slow mode via some configuration option. What about situations where the error occurs and has nothing to do with fast-mode? Compensating for a third-party bug is just a nightmare: design a fix, create unit tests, create test cases, test the fix, perform full regression testing with the fix in non-failure situations to make sure the fix didn’t affect anything else, manage the fix, create a task to re-address the fix at a later date when its deemed all third-party software is stable, remove the fix, perform full regression test (again) to make sure removable didn’t break anything, etc. etc. etc. "Give me the strength to accept the things I cannot change and change the things I can and give me the wisdom to know the difference." Depends. If the buggy server is common (say, has >10% marketshare of servers speaking this protocol in same price range?) then make slow mode default with a configuration to turn fast mode on. It’s the third rule of optimisation – Make sure what you have is correct. Then make it fast, if making it fast doesn’t make it less correct. If the buggy server is not common, do nothing. Of the remaining options, Autodetect + warning dialog + syslog message + auto-work-around next time would be next, because it’s not that bad an option. Making the user dig around to make things work properly just sucks. It should work properly by itself, and if it notices it’s not working properly then it should notice and tell you and ask you there and then (which is the previous option) Doing that per-application is the worst, as the user now has to dig around all over the place to get all their apps to do the right thing, and some apps won’t even have that option, and those that do might have a buggy "go slow" switch, etc, etc, etc… I would suggest the last option; disable fast mode by default, the logic being that since that’s how it was in XP, nobody knows any different. It’s not a good solution, but it’s the most reliable one, most people will be none the wiser, and anyone who knows and cares enough to complain will be able to fix it themselves. There is no chance that any of the ‘break things’ options (including the ‘break things, but allow expert users who can use the registry or search the KB to fix it’) will be acceptable if the software in question is likely to be in use by any but the smallest number of average consumers, particularly if it is inside unpatchable hardware. Blacklist the specific version of software which does this and automatically set it to "slow-mode" – and output a warning to the event log. Being a bit slower is not a critical failure, a popup warning it is overkill for this kind of bug. But you can prepare your software to work against a buggy version using a blacklist and still get it wrong: other buggy servers may be released in the future, so autodetecting the buggy server and reconnect/relist the directory automatically – and add a warning to the event log – is a must aswell. Storing a list of recent buggy servers is stupid. What if the administrator realizes that the server is broken and fixes it? The client would get it wrong. Either do a blacklist of broken servers or detect and fix it somehow dinamically. And hey, the "future buggy server" may be one from Microsoft or a broken security upgrade, so you really want to autodetect broken servers regardless of the avalability of broken servers anyway. Is about robustness. (Also add registry keys to tweak all this ie: keys to allow clients to autodisconnect from buggy servers Easy. If the server returns an error code, report the error. If you recognize the error code, provide a link to the knowledge base article that describes the problem. In the knowledge base article, describe possible solutions (e.g. how to disable fast mode in Explorer.) Case closed. —- Even non-technical users try to track down problems like this on a regular basis. The worst thing to do is hide the fact that there is a problem. Inform people, provide a possible workaround, and put the onus on the 3rd party to fix their software. "About those clever schemes people are suggesting, where you try fast mode first, then automagically fall back on slow mode? Well, it just ain’t worth the complexity." I don’t know whether the resulting race condition applies to this problem, but I’m not sure people suggesting this approach have considered it. If you take 2 different snapshots at 2 different times and then combine the snapshots, you may get an impossible combination of data items. This would happen if you make a fast query, detect the error, make a slow query, and just don’t return the items that you’ve already returned. What if between the fast and slow query, the directory snapshot has changed? If your directory listing snapshot becomes invalid the microsecond after you take it, that’s fine. But if your snapshot shows that File A and File B are existing at the same time, you may run into problems. I don’t know enough about the problem domain to know whether this is a big deal or not. Unfortunately I have to concur with the suggestions which state to cache the first 100 and re-run the query on the 101st if necessary concurrent with a list of "known bad" It should be unnecessary but the advantages definitely outweight the downsides: – You get fast mode when you can – Only the stupid servers are punished – Stupid servers are not punished as badly when found in the cache – You’ve had to do this kind of crap before, what’s one more hack? (e.g. the lists of known-bad optical drives, usb devices, etc that I’ve found in the registry) Again, it is important that it be written to the event log. It’s unfortunate that this crap is necessary but it’s a necessary evil if you want to play with the retarded kids in the sandbox. If you get the wierd error code log it in the event log (being careful not to spam the event log to much of course). Popping up UI that nobody understands (and they won’t) sucks. Having done that doing nothing (else) would be best if you can get away with it. If not then starting fast and falling back when the error is detected is good. However you said that was problematic because you’ve already returned partial results. The solution to that would be to just ask for 100 items up front regardless of what the app actually asked for and then return results back from this cache. My first choice: auto-detect the buggy driver and put up a warning dialog. My second choice: do nothing. But in either case, I’d also disable fast mode in explorer (only) by default. I think the proponents of "do nothing, it’s the file server’s fault" and "show an error message to the user" do not fully appreciate the way users view Windows. If there’s a silent data loss, it WILL be considered by most to be a bug in Windows, and if users are exposed to an error dialog they don’t understand and probably can’t fix, it WILL (rightfully) be considered an abusive interface design, regardless of the fact that both are "correct" from a strictly developer’s point of view. I was initially thinking along the lines of "start with a fast query, and if you get the error silently switch to the slower query" but now I see that this presents insurmountable issues with the possibility that the directory has changed. I think the only option is to modify the API. Calls to IShellFolder::EnumObjects using the current syntax (meaning legacy programs) should continue to use the slow queries. As <a href="#564887">Brad Corbin suggested above</a>, you should make a new function that allows the application to explicitly request a fast query and that explicitly defines an error for "directory listing truncated" or something (I’m not sure whether the error should be specific to this case or should describe any case where the listing can’t be completed). It’s possible that you could accomplish this with an extra parameter to the existing IShellFolder::EnumObjects function, but this might be too much of a change in what is rightfully considered a stable API by developers. Advantages: – Does not break any existing applications – Does not require complex caching, more user-level configuration, abusive error messages, or re-querying of directories – Allows new applications to leverage the fast queries, but only on the condition that they are able to handle the possible side effects Disadvantages: – Legacy programs will not benefit from the possible speed increases in Vista (although they will not run any slower) – this could be perceived as bias on the part of Microsoft – Expands the API How about adding a parameter or overloaded version of IShellFolder::EnumObjects to implement a fast mode query, have an error code that describes the bug, and let whomever implements this version deal with the bug? Advantages: Moves the problem to the application layer, which would need updated anyway. Less ugly hackery. No worries for current code. Disadvantages: Is it possible? (My gut says that it wouldn’t make any sense putting the option up at this layer.) Apps need updated to use fast queries. Makes one vendor’s buggy driver everyone’s problem. Some basic tenets: Do what’s expected. Work by default. Clearly fall back to slow mode always is a nice floor for this, so we can use that. One question is how much "better" fast mode is than slow mode. Obviously there must be some reason to use one mode or the other. Is a user or application ever going to notice that slow mode is actually slow? If there’s no appreciable difference and fast mode vs slow mode was a design fubar, then "just stop using fast mode" is a possible solution. Now let’s say that isn’t an option. Why can’t you just restart the enumeration if the fast mode operation fails? Are you displaying the results on the fly and that’s why the enumeration can’t be restarted? Perhaps you could buffer the results if that is the case, to allow a restart in slow mode to be possible if a specific error code is detected. I don’t think merging the results is a good idea due to possible race conditions that the enumerator might be accounting for. Another idea is to use slow mode all the time initially, and probe when an opportunity presents itself for fast mode compatibility. This would ensure compatibility and allow the speed up to occur pretty quickly. Can you probe the server via OS prior to any enumerations? That might help. I do not believe enabling or toggling slow/fast mode via the registry is appropriate. This requires manual user intervention and will still break certain people by default. It’s our job as programmers and people shipping actual product to take work away from our users, not give it to them. In all cases, you should notify the vendor and request a fix to be made. And add a regression test if possible. Richard Kuo If this software has any reasonable market share, then IMO it is absolutely not acceptable to "do nothing." Vista would be introducing a regression. It doesn’t matter whose fault it is, the directory listing would work with XP and not work with Vista. That is an unacceptable result from a performance optimization. Now, if servers running this software are very rare, or you could work with the distributors to get the fix to customers preemtively, then it might be worth the tradeoff to "do nothing." Also, several people supporting "do nothing" suggested something along the lines of "get the vendor to fix their product." Apparently they missed the fact that the bug has already been fixed, but distributers are still shipping the broken version to customers. Getting the developer of the server software to fix the bug doesn’t magically make that fix available to every customer, for reasons mentioned in the article. I don’t like the way this problem has been approached. There are really only two possible outcomes: Option A: The user requested a directory listing; give them their files. This is just a simple matter of coding. Option B: Fail. The thing is, this is a largely non-technical decision that no one here is qualified to make. It depends on all sorts of factors we know nothing about: how common the problematic server is, where it’s used, is it worth the risk to add this specical case in Explorer, etc. Now let’s say we want to discuss option B, how to fail. Well, this really becomes a generic discussion, doesn’t it? The bottom line is that this is a problem for institutions that have a diverse computer network, and solution is to make it as straight forward as possible for IT departments to diagnose and solve to this network problem. For them, that would be either disabling Fast mode on their users’ machines, or updating the problematic servers. If the other software contains a bug, the onus should be on that vendor to fix their software. Incorporating any kind of workaround into the system means there’s more code for the system to carry forward, more code in the system that could itself contain bugs, and more behaviours to be honoured when implementing future changes. Ignoring CS-level OS research, the two most important developments in deployed operating systems over the last 20 years have been the move to abstraction (don’t expose the implementation, so it can be changed later) and minimising side effects (an API to do X should only do X, not X/Y/Z). Both Microsoft and Apple started off at one of this spectrum, but over time both are moving to the other. It’s a fact of life that you can’t resolve every interoperability bug, and there will always be situations where buggy software that’s already shipped has to be broken. You will obtain a far more reliable system in the long run if the default policy is that broken software can be allowed to break rather than supporting it by any means necessary (there are obviously exceptions, but those should be reserved for "affects 95% of our customers" issues). So I would do nothing, and treat this as any other kind of error (log it so that it can be noticed, but if the server is faulty then the server is faulty). If you add extra code to make this particular case work, you’re just deferring the problem – the real question is what would you do if, after shipping the workaround, you discover the workaround causes a conflict with something else? Add another workaround? If Vista is going to be thrown away again in 5 years and rebuilt, that approach is fine. However it’s intended to be around for a decade or longer, bugs need to be fixed. You need a solid foundation for any project, and in software terms that means short term pain ("your new OS doesn’t work with my old server") for long term gain (fast mode becomes standard, and slow mode can be retired). The solution should be: -Report the error to the user when it occurs. Simply tell them the result set is incomplete and give a KB #. -Supply a reg key or something to use the slow (call it high-compatability) mode. Document this in the KB or even the error message. -Notify the vendor(s) that you will not be fixing it in Vista. You could always send this to your marketing department for a solution. I predict they’d recommend having Vista servers identify themselves as such, and allow clients to use fast querying. Anything else is stuck in slow mode. One more bullet point for the feature list. Seriously, I assume we’re talking about a Linux/Unix system that is causing the problem based on your use of the term "distributors". Unfortunately you cannot be the "Good guy" in this scenario and fix it for them and get all that nice publicity for fixing a hole for them since the distributors would need to integrate your fix. If this is an open source system we’re discussing, you could always analyze the code and find if there’s any way to detect this particular system based on other abnormalities that may be more in line with your existing process for handling queries. Hard to say if that would be worthwhile though. You guys may just end up spinning your wheels taking the time to do that. Are we talking about Samba here? If so, then lets not forget that you guys don’t allow the Samba team useful access to the SMB documentation. Microsoft clearly don’t want to interoperate with Samba, so isn’t this more of an opportunity than a problem? The most consistent action would be to detect and blacklist any such servers, refusing to talk to them at all. I know no specifics, but let’s assume the following: 1. Error can be detected while in slow mode. 2. There’s no significant performance difference between the two modes while querying directories with a limited number of files. If that’s so, why not try the other way around? Start in slow mode, and keep using it until a query is met that could trigger the error. If the query gives no errors, mark the server as fast-mode-enabled and use fast mode in subsequent queries. And if the list can have no arbitrary limit in length, even better. But most probably I have no clue and I’m just speaking nonsense. I think this could absolutely be done transparently (save for a mention in the event log). If the fast iterator fails with this weird error, replace it with a slow iterator and simply refresh the view. It seems unnecessary to record the bad server. Assuming fast queries really are faster, performance will be faster for small queries and the hurt for >100 file results will be unnoticeable. Raymond: I think a lot of the commenters are confused about the scenario. Some believe the problem is with a local driver on Vista supplied by the vendor and most do not know what protocol is being used. Presumptions: The server is from another company, likely a non-OSS competitor such as Novell. They are emulating SMB for MS clients. SMB natively includes slow and fast modes but the competitors server has a bug in its SMB driver so there is nothing MS can do on the client side to fix or work around the actual bug. Suggestions: If a server reports an error to the client, display the error in a dialog. In this dialog, include a the text "The server has reported an error that may be resolved by using a slower access method. Would you like to attempt to access the server using the slower method?" with buttons "Yes", "No" and "Cancel". If they click "Yes", switch to slow mode, store the name of the server for future reference (but only if the server share is mounted to a drive letter; otherwise slow mode is a session only setting) and requery. Pros: -This resolves the problem for any server from any vendor that may have a bug in their fast query implementation. -The user is now aware of the problem and where the problem occurred (i.e. not in Vista). -A knowledgable user/administrator can now contact their vendor with a complaint and the exact error text. -Windows has given the user an option that may fix the problem NOW from their perspective. Cons: -Nobody likes getting dialogs reporting an error. -Nontechnical users are more likely to press cancel initially (but will probably try yes eventually and their experience will improve). -On some other buggy implementation of SMB, the change in setting may not improve the user experience. This reminds me of one of the Key Diffrences between the NT & Windows teams. The NT team would be more likely to stay "pure": if they got the odd error, they probably wouldn’t return any data. Instead, an error to the extent of "Your server is incorrectly reporting the files available. Please contact the vendor for an updated version" or some such would be displayed. Or NT would have refused to work with the server anyway. The Windows team would take the "if it worked before the customer installed Vista it should keep working" and would likely disable "fast mode" by default. Have I mentioned how much less stressful life is now that I’m not on the shell team? :D Let’s not overlook the fact that this problem is not something to be fixed (detected: maybe) on the app level (Explorer), as any other (future) applications will be able to use fast directory lookup, introducing the same problem for their end-users (or keeping away developers from using this feature). Windows/Kernel API level falls out too, because such a single vendor/version related 3rd party bug is not something to be handled by a stable API. So if Windows OS would like to provide a solution for this, IMO it should be solved on a lower level. Network client code looks like a good candidate here, or maybe a filter driver for those who need it. The latter can even be implemented outside Microsoft. …"Do Nothing" still looks the most tempting ;) Make Explorer requery in slow mode if fast mode gives an error, and turn fast mode off by default for all other applications. Make an application that wants to use fast mode explicity recognize this problem. Result: Explorer, the most ubiquitous file-related application, can still benefit from fast mode. Other applications that don’t know about fast mode will be just as fast (or slow) as they were before. New applications that do know about fast mode can take advantage of it. Advertise all over the internet this bug making sure all vendors, distributors, admins and support people know about it. I bet in this condition distributors would switch to the new fixed version. Given that Vista won’t be released for about a year it’s a huge time. In short, learn how Open Source community deals with these issues by making the problem public. I don’t think any senerio where the user sees an incomplete list is ever acceptable. That is important. What is M.S.’s stance on that? My comment got eaten, i’ll try again. Fix: Don’t make fast the default. Make it something that can be enabled, by install option, sms whatever. Document the hell out of the problem. Use something akin to the HCL? Good: Least surprising behavior for the user. Works the way the expect. They NEVER see incorrect data/listings. managed environments, where the "admins" have a clue about the services on the network, or "power users" who are on their own but know enough to get this stuff can enable on their own terms. Also the act of enabling it can present a disclaimer of known issues with venders/versions. Bad: someone has to take a bullet point off their powerpoint. variant of do nothing. If the bad driver is already signed, blacklist it in the driver installation process so it never gets installed (or generates lots of additional warnings). If it isn’t signed, well, then it won’t run on Vista x64. (And I hope x32 will complain bitterly) Make sure this test case is added to any certification process (so any future drivers don’t get signed by MS without testing). For Vista to work well, we need to raise the bar on what we expect vendors to provide. Is the server in question running Windows? If so perhaps a patch issued through WindowsUpdate would allow windows to detect the dodgy driver and always run in slow mode. Advantages: – Everything works. – For 99% cases Vista is faster than XP. Disadvantages: – Assumes server is running windows. – Assumes server is updated regularly. My second choice would be for fast mode to be disabled by default. Correctness is much much much more important than performance. Remember those blue-screen-of-death t-shirts? Ever seen a t-shirt that says "My Linux server is 1.5% faster than my Windows server"? The problem with adding a bunch of code to handle this special situation is that the code will likely end up hanging around as long as Microsoft is still in the business of making Operating Systems. I don’t know what the best solution is, but the one that sounds good to me is to keep fast on by default, make it user configurable, detect the error and display a message about how to resolve it. As someone pointed out, it doesn’t have to be a modal dialogue (how annoying!), but it should be present. When Vista becomes more popular, there will be more pressure on vendors or distributors to start using the fixed version of their software. For everyone who can’t get updates for whatever reason, slow mode still works. If fast query returns an error, populate the first 100 with an arrow-hourglass cursor and then do a slow query to populate the rest of the list. Then reset the cursor. Cache the name (is 256 any more of a security risk than 16 names?) and slow query from then on. Write the KB article showing where the server list is cached (large networks might want to prepopulate it on Vista rollouts). And rejoice, because for once there’s an error code that you don’t have to bubble up to the user. To the ones who think Microsoft should not try to work around bugs in third-party software and punish those bugmakers this way. You see, Microsoft’s business is not punishing someone, but making software as good as it can, for users (well, as well-sellable as it can, of course). Reporting wrong file listings in fast mode is a bug, no doubt. But can Microsoft do anything for user happiness? Finally, users pay money for Windows. This bug is not such a big deal Microsoft can’t work around. Why not make Windows work even with buggy servers? I understand that my computer is running tons of useless code, which checks for this and that to ensure everything works right. Maybe 0,1% of this code really does something for me. But for you, another 0,1% does something. You will be the first to shout at Microsoft when you find that Windows doesn’t work with whatsoever, even if it’s buggy. I, personally, support this practice of working around others’ bugs. Escecially, if every hack is thought-out as thoroughly as this one. Disable Fast mode by default. It’s a pain for a few people, you need extra documentation but the option will be there. It might not be worth having the fast mode because of it, but it will at least be there. Also keep in mind you can’t assume the 100 file rule will always be 100 files. What in the server fixes the bug, but it happen again and 400 files. If your looking for it to happen only with the 100th file you will be out of luck (and the user will be missing data). You have to assume if it can happen it can happen at any time, at least until it has happened and you can mark it as bad. This also means you can never know if a server is actually good or not, it’s either bad or not known to be bad, never good. And remember, popping up a dialog box for the user for this is a very bad thing. You get the combination of an unexpected message that means nothing to them, so the default is to cancel it (probably without reading it). The only really workable solution is to disable it by default. I agree with what has been said before: providing an incomplete file listing is not an option. If it is possible get all the files to the user by requerying (even if it is a lot of work) then do so. It doesnt sound like that is possible so the only options I see are to either return an error to the user or to not use fast querying by default. I prefer using slow querying by default with the option to use fast querying as a configuration setting. In future versions of Windows when servers have better support for fast querying then turn fast querying on by default. I believe this solution is similar to what has happened <a href="">with Windows File Protection</a>. Plan for the better solution to be implemented in the future but for now implement the one that is best for today. After thinking about it a bit more and reading the additional comments here, this turns out to be a pretty tricky problem. Up to now, I was leaning toward the option that I mentioned above, and a few other people have discussed: 4) Always use slow mode for "standard" calls, and introduce a new API for "New Fast Mode", that is aware of this particular issue, and throws a new error message. Advantages: Old apps just work New apps can be coded to use fast, but failover to slow when necessary User isn’t faced with the issue The more I think about it, though, the more I think that this is a pretty ugly hack. Imagine what the programmer will be thinking 3 years from now when he’s trying to code his network-aware application: "Ok, I can do a standard query in slow mode, or I can add a flag for Fast Mode (from the original API), but because of one weird corner case, that perfectly good API flag doesn’t work! So to REALLY do fast mode I have to use this new flag called PleasePrettyPleaseUseFastModeIReallyMeanItThisTime?? What the heck is this??? MS sucks!!!!" Ah, I think I got it. In Raymond’s original article he uses the quote: "If somebody asks whether a server supports fast queries, always say No, even if the server says Yes." Ok, does that mean that the proper way to use the current API is to always ask first if the server supports fast mode?? Then its easy! Just change the CanIUseFastMode routine with additional checks that look for this particular problem! If there is an easy way to query the exact build number of this particular server OS, then that should be easy! If that kind of version query is not possible, then you’d probably be stuck. :( With just a server name, you couldn’t test it experimentally (by reading a big folder and actually see if you hit this error), because everyone sets up their fileshares differently! You could test it experimentally once the application passes in the network path to query, but that sounds like a massive amount of overhead (read 101 files first to see if we hit the error, then decide whether to use the current results or re-query in slow mode). I have newfound respect for the work you guys do. Good luck with this one. Thanks for a great interview question :) Dunno if anyone’s posted this yet, but I would do something like: – Use fast mode by default – If you get the weird error code, add the server to the list of bad servers and remember what the last file returned was – Then, re-try with the slow mode, and give the application the rest of the file listing (starting at the file after the last one returned with the fast list) Is that unfeasible for some reason? — I see Wesha just posted this solution, in code nonetheless! I like it the best. But I’d also earmark it to be removed from the next version of Windows after Vista (circa, what, 2012?), cause no one likes cruft, and by 2012 the problem of dealing with one buggy version of an obscure decade old third-party server implementation server will be beyond worrying about. Do Nothing I don’t believe you should taint the OS just because somebody else didn’t do their job properly. And if you cover it up for them they are a lot less likely to fix the error or even try to avoid making such errors in the future. To me this is missing the larger issue – the logic should not be based on a single weird code specific to this case. If explorer gets an error, any error, during directory listing it needs to handle it appropriately, not simply list what it received so far. So my take on this is simple – try fast mode, if it fails, for whatever reason, make an attempt at slow mode and if that one fails as well just show an error message and go on with an unavailable directory. There’s no point in building a special logic to handle one specific case. Dunno if this is possible: If SMB has a version number, make a certain version number on the server side a requirement for fast queries. Then ship a patch for Windows Server 2003 and 2000 that bumps up this version number, so Vista Clients can use fast queries if they see such an SMB server with high version number. Lots of people here suggest that a FindFirstFile query should always start in fast mode and query again if it fails with this weird error and match the queries’ results against each other and only issue the diff between the two to the client. Is this at all feasible with regards to the memory that is required to keep at least 100x_MAX_PATHx2 Bytes for each FindFirstFile call in memory on the client side? I guess FindFirstFile is nothing but a function that returns an RPC context handle. So if the client does the same job as the server (keep the state) and then update its state with state that might be completely different after the second call, this would be a perversion of the idea behind context handles and it would need lots more of memory than necessary. I recommend different things for different layers of the software. As near as I can tell, you are going to have to work around this problem, as any failure to do so will be a regression in Vista. (By definition: a change was made that breaks existing functionality.) The question is what layer should work around the problem to make everybody happy. 1. Default to "slow mode" below IShellFolder::EnumObjects, i.e., do not change the default behavior of the object from the way it behaved in XP. This allows existing clients to continue to work as written. 2. Provide a "fast mode" switch on the object that implements IShellFolder. This mode issues fast queries. It is otherwise no different. Specifically, it does not attempt to remember whether or not fast mode worked in the past. 3. Allow explorer specifically to use fast mode until fast mode breaks. Teach explorer that sometimes fast mode breaks against certain servers, and it needs to fall back to slow mode in those cases. Presumably explorer is in a position to refresh any data that it has when it encounters this problem. Use caches as required to keep from making too many redundant requests. 4. (Optionally) Teach the same tricks to as many clients of IShellFolder in Vista as you can. Despite this, I would not make a public object model or API set that would automatically work around this particular glitch 5. (Optionally) Document the problem in a KB article somewhere, and possibly in the documentation for the new "go fast mode" API in MSDN. Here is a wild idea. Sell two versions of Windows….its not like Microsoft does not sell enough versions. Version 1: Hack free. No buggy third party compatiblity crap compiled in. I am sure you could do something in the source like #ifdef _VENDOR_HACK buggy_workaround(); #endif You guys would then push the crap out of this to the corps that have all of these buggy custom written in house applications. Sell this at a reduced cost via the OpenLicense program you guys have to try to get more businesses to fix the broken apps. Of course some systems would still need the buggy backwards compat version so you have….. Version 2: Hacked version This would still have all of the compatiblity tweaks for the end users Backyard Baseball type games. In fact let me take it a step further. You could make this a simple compatiblity pack if you did not want to invent all new SKUs for another version. Dialogs, switches, and remembering buggy servers are too complex. There’s a simple solution. Always try the fast method. If it returns any error that might possibly be fixed by using the slow method, try that. In that case it always works fast for servers which aren’t buggy. Buggy servers are a little slower, but still correct. Perhaps log the problem for sysadmins who are wondering why their servers are slow. Buggy servers will be slower that they would if you bothered to remember to skip the fast method, but it’s much simpler to implement. Also, my guess is that most people will blame the slowness on the server — if they realize it’s a different machine. People don’t generally blame slow websites on their own machines. Assumptions: 1. You have a complete list of known vendors/drivers that suffer from this fault. 2. At some point in the future you will be distributing a fixed driver for the vendor. Possibly via some automatic update system. Solution: 1. Distribute Vista with ‘the list’. 2. Whenever a driver is installed, check it against the list, if it’s a known broken driver, set it to slow mode, otherwise set it to fast mode. (this implies a fast/slow mode option per driver) 3. Allow users to enable/disable fast mode at will. 4. If in fast mode, and you get the error, popup a dialog telling them the problem, disable fast mode, and tell them to close and restart the application. Pros: 1. The ‘workaround’ code runs only during driver install/upgrade and when detecting the error. It doesn’t run on every list/mount and it doesn’t need extra memory/processing etc. 2. Users don’t need to know about fast/slow mode, they will automatically get the best possible mode at all times. No errors. No new APIs. Consequently… run in slow mode unless you can identify a fast mode server prior to requesting data. 112 comments wow. Anyway if you didn’t see in my original comment I said ‘in explorer’. I propose fixing only explorer, not the api’s themselves. Explorer can certainly detect the error and refresh the directory listing. Maybe other MS programs need to do this too, those that don’t use the standard file open dialogs and the like. As for the api’s used by other apps, that is up to the application developer to fix. But what about the old applications no longer supported? They wouldn’t be using fast mode anyway, correct? But say they did, maybe then there would need to be a compatibility option for running the program that forced slow mode. 2. If you have to keep compatibility, the first time you connect to a new server that claims to support fast query and might be a candidate of having this bug, prompt the user "Fast query is a new feature of Windows Vista to make directory listing faster over network. Server xxxx claims to support fast query but there is a chance it might have bug described in KBxxxxyy. Do you want Windows to verify it? [Yes] [no] [Never prompt again]" There you go. Funny, seems that nobody has thought about this the other way around. For each connected server, initially set a registry value SupportsFastMode=2 (uncertain). Then run in slow mode until you happen to see more than 100 files returned. If that happens, immediately re-run the same query in fast mode and check for The Error. If you get >100 files set SupportsFastMode=1. If you get The Error set SupportsFastMode=0. If you get <=100 files then presumably you lost a race, and will need to try again next time. Advantages: – Don’t have to waste time searching for a valid test, you wait until the test finds you. – Performance will always be as good or better than XP. – Users never experience a fault, thus do not need to be notified. – Power users/Admins can set the key manually, if they know that their server is good/bad. – Relatively simple implementation. Disadvantages: – Performance will be sub-optimal if user never browses to folder with more than 100 files (but still as good as XP). More than 100 comments, wow. Everyone wants to solve a windows bug (is this a commercial for OSS or what! ;-) Generally: make it work. I’m with Richard Kuo here, when he says "Some basic tenets: Do what’s expected. Work by default." As a user, I don’t want to go upgrade stuff so it’ll work with Vista, I want Vista to work with my stuff. I could not imagine any one who has ‘managed’ more than one computer thinking different, in practice. (I really don’t want to waste my time and frustration on such a problem.) I see that totally I’m wrong here based on the ‘do nothing’ comments. I was thinking like everyone else: switch to slow mode (impossible, you then specified), or return the rest of the list using slow mode, by keeping the first 100 or by re-querying for them, if a fast query always returns the same 100 files. Another option, and I saw this one above, but only once: how about whitelisting, instead of blacklisting? Always start with the slow query for a machine not seen before. Evaluate if the conditions for testing this bug are fullfilled (e.g. slow query returns >100 files). Send a message to some the system component asking it to evaluate this machine for fast access, giving it a path this testing can be based on (the one you just found more than 100 files in). Continue using slow mode, until the system component discovers that fast mode is ok. (Variation: do this testing in the same thread that does the slow mode accesses, at the same time, if fast mode is really so fast, it won’t make a dent…) Optionally, optimize by whitelisting ‘known’ systems (MS OS, Novell OS, Good samba versions if you can detect them, whatever…) automatically. Advantages: – always works. No weird, non-reproducable behaviour (I swear it returned only 100 files last time! wait I’ll restart my program and show you.. now it does return them all! <Next day>Turn on computer, there it is again, only 100 files. Go get admin, doesn’t happen again, … repeat at infinitum). – still uses fast mode most of the time – not much slow down at all, not on the first query (except that it uses slow mode, but it doesn’t have to restart, doesn’t have to do several querys, doesn’t have to try a fast query, wait for the error (or not), then do an appropriate query which actually returns results to the suer, or anything like that that would slow down the first query) – Vista user knows nothing about it Disadvantages – very ugly hack – ungodly amount of code for such a thing, probably – ‘unexplainable’ directory accessess seen on target machine (and the Vista machine too), possibly by a strange user (if not using impersonation for the test) which you will have to explain after Mark Russinovich fires up filemon and has an hour to kill looking for strange accessess ;-). Then again, this blog entry explains it already! – possibly ‘brittle’ (is that even a correct english word?), i.e. this may easily go wrong. then again, it’s fail safe: if it does go wrong, you’re in slow mode, and everything still just works. – Vista user knows nothing about it (=> this will remain unpatched with all the distributors) And I see JI has thought of the same thing. (His post wasn’t there when I started on mine.) My opinion is that off-the-shelf distributed code should be as low on vendor specific hacks and fixes as possible. I’ve had trouble, from time to time, with files being deleted/truncated instead of overwritten when saving them to smb shares. This as a result of the smb connection being reset at an awkward time during the update. In most of these cases I’ve been able to track it to some hardware failure (misbehaving firewall), and in others it’s been caused by some random VPN client. Annoying? Yes, of course, but hardly something one expects the OS to handle ("connection dropped? never fear, we’ll generate random files for you to browse!"). The same goes for this matter, IMHO. Leave the fast query on by default, and provide a registry setting to switch it off. Perhaps even let users switch it off for specific IPs or NETBIOS names. Show an alert dialog if the server returns the weird error code you describe, with a link to the proper KB article. This is after all an error, so it should be noted as such. If necessary, drop another registry setting in there to disable specific error messages/codes. I’m guessing other errors can/will be shown like that, so being able to disable them would be a peach for the advanced users. The mentioned KB article should of course either contain the description of how to resolve this by using the registry settings, or maybe even have a patch applying some other solution described somewhere else on this page. Disadvantages: – The user will see the error popup, if he’s as unlucky to browse the flawed servers, and might be puzzled for a minute or two Advantages: – The user will know why it happened, and how to fix it – Patches may be distributed by network administrators if it’s a company wide problem – By being told what happens, the (advanced) users can patch or upgrade their file server software to some decent version – Most users and file server providers will benefit from the fast query mode So you can’t test the server for its version or for some kind of flag saying that it has been certified for compatability with Vista? In that case, make the API behave identically to the Windows XP case, and introduce an EX version of the API with a flag to enable the faster query. Document the bug in a KB article, and put the link to the KB article in the MSDN documentation of the EX API. Existing code won’t suddenly change behavior. New code (such as the Vista Shell) can work around the issue if it encounters it. The end user doesn’t need to be bothered with these details. I know it sucks that you can’t get perf gain for backwards compat with existing hardware in this case, but it only works if it’s well tested. That’s why we have the WHQL. Better safe than sorry. You never know – if you don’t fix this bug, you may end up with a QFE request later. Raymond: Does the error code happen every time a directory listing is gotten, or just when a directory listing is too long? If it happens every time, then do a quick directory listing of a root directory when the server is mounted. If you get the error or don’t have permissions to do a directory listing on any of the roots, silently enter slow mode. Otherwise, you can safely use fast mode. Advantages: * Everything just works. Disadvantage: * Slightly slower mounting time. Fixing other people’s errors is just wrong, unless there are virtually no chances of the error to be fixed, so I’d chose "Auto-detect the buggy driver and put up a warning dialog" Also write an event log essage stating that the remote server is faulty and needs to be updated; create a new error number for this exact issue. As a workaround, suggest reverting to the slow mode by default (of course, there should be the registry setting to allow that). However, knowing about multiple various hacks employed by Microsoft in order to fix wrong code, I would suggest the workaround already proposed by Andy, Joe Butler, Michael Cook and probably others. That is, keep last 100 recent entries in the internal list (or maybe in the file cache) for network requests, and re-query the server in slow mode upon discovery of the error. Then comparie new listing to the saved one so only changed items will be submitted in the next batch. Write the error in the event log, as mentioned above, and provide the means to disable this workaround through the registry. Advantages: applications get the complete directory listing and gets no error messages Disadvantages: OS nneeds to cache at least 100 items for every directory listing request to every network server, which impacts performance (the cache could probably be discarded after first succesful 100 items, but still needs to be recreated on every new request) Anyway, my suggestion is to release an optional update that makes Windows XP run in fast mode. Add a note to the update that there are more details in the KB article as to how the administrator could test their third-party devices for compatibility. Since people will want to switch to fast mode hopefully people will complain to brand X about their incompatible server. > (If the list of "known bad" servers were unbounded, then an attacker could consume all the memory on your computer by creating a server that responded to a billion different names and using HTTP redirects to get you to visit all of those servers in turn.) How is this supposed to work? Redirecting from http:// to file:// perhaps? Does the security team know about this feature? (If it happened for every listing, not just "long" ones, then you could just make the network driver do the listing itself; this would have the bonus of not breaking anything, at the expense of *slightly* worse mount performance, and would be far and away the best fix. But I don’t think that’s what you’ve described.) How about this. I can see that it has "issues" but… When you run the query in Fast Mode, keep a hold of all the items you’ve been given until you’ve received enough that you can be confident that the server isn’t one that has this bug. If you get the error message that indicates it *is* the buggy version, rerun the query in slow mode. For each result that you get from the slow mode, check to see whether it’s in the list that you already received and filter it out of the results if it is. Return any results that *weren’t* in that list as if they were just a continuation of the result set of the original query. The biggest issue is, of course, what happens if the directory contents actually change in the time between the fast request and the slow one? You could end up with a listing that didn’t accurately represent the contents at any real point in time. Whether that’s a serious problem will depend on the nature of the API and what it’s typically used for. The other problem is that it introduces significantly more overhead even beyond what the "slow mode" would normally be – doing all the filtering over intermediate results etc. IMHO that’s fine – "this server performs excessively slowly when serving to Vista clients" is probably a more compelling upgrade reason for users of the buggy server than any number of theoretical arguments about bugginess. Finally, there’s the issue that you need to keep track of the results for *every* fast mode query until you’re sure the server is okay. Again, the nature of the API will determine whether that’s a problem or not. Wow, I’ve scrolled through a huge number of suggestions, and most of them would move Windows’s usability firmly towards that of lesser (and cheaper;-) OSs. Hope you ignore them. I think the code has to be reliable with any possible server, including ones with the bug. That’s an absolute must. It shouldn’t bother the user with information they most likely won’t understand either. If possible, check the server capabilities when you connect – maybe SMB has a way to get OS version/Driver version or something, so you could use that to work out if Fast Mode was usable. Then you’d only use Fast Mode if you’d verified the server supported it, and use Slow Mode otherwise. If not, and there isn’t a way to recover from Fast Mode fails on the fly without telling the application, I’d vote for only using Slow Mode. Or there’s another possibility. You could start off using Slow Mode. When you find (via a Slow Mode query) a directory big enough to benefit from Fast Mode, do a check to see if Fast Mode gives the same answers. If it does, use Fast Mode from then on. But don’t return bad data, and don’t tell the user to upgrade his server software. People that want to work with that kind of OS won’t use Windows anyway. /*joke*/ Every time new file server is found you just create there TEST directory with >100 files and read it. If it fails – mark it as slow :))) It seems obvious to me that the right thing to do is test the version of the server or driver and use slow mode if it’s old and fast mode if it’s new. This approach is almost all upside. The only downside is that Windows contains a hack which recognizes a particular version of a particular server. I’ve worked in operating systems before and committed many much worse compatibility hacks than this. This is so obvious to me that I assume RC would have listed it as a possibility if it were. That must mean it’s impossible to test such a version. Barring a version check, I’d run the query slowly, write a KB entry, and contact the developer to tell them what happened. Tell them to add a way to query the version of the server so you can use fast mode when appropriate. Otherwise, their most important client will ignore all the hard work they did adding fast mode. Tell them their competitors run X% faster because their fast mode doesn’t have the bug. I’m not sure there is a downside here because this will motivate them to distribute the fix and you can roll the version hack into a service pack you were going to ship anyway. And if you get lucky they will have distributed the version capability before Vista freezes so you don’t have to deal with the service pack. Solution: Build in a setting to enable/disable Fast Mode. Ship with the setting disabled. Include the code to detect the problem when Fast Mode is enabled. If the problem is detected, tell the user to disable "Fast Mode". Since they enabled it, they’ll know where to go to turn it off. Followup: If the status of the issue changes (i.e. distributors update the code from the vendor and problem goes away), enable Fast Mode in a future update. Result: Windows Vista ships working. If the situation changes, then people get a "free" speed increase in a later Service Pack. In an above-average environment – say a large corporation or knowledgeable person – they can enable the feature in their installation if they know it will not cause any problems. We’re all looking at the problem from the wrong end. We really should be looking at the machine on which the file server runs. What about a "Compatibility Catalogue" of apps / drivers on that machine which haven’t passed compatibility testing for some reason? That, combined with a Microsoft-side Compatibility News/Update webcenter would be the real way to get Administrators to pick up on the issue and deal with it *as soon as* there is a patch out. With an official-enough framework to deal with the issue, perhaps you might even find that the Compatibility webcenter becomes a Hall-of-Shame for apps…so much so that they feel the pressure to fix their issues, and do so. Nice post! I like guillermov’s auto-whitelist idea. It combines many of the advantages of the other approaches. To reiterate: The difference between the fast and slow methods probaby isn’t significant for small numbers of files. Always try the slow method until a query returns more than 100 results (using the slow mode). Then (even asynchronously) execute the same query using fast mode. The directory changing between isn’t a concern because the results of the second query aren’t actually used. If the fast query returns less than 100 results, because the directory changed in the interim, do nothing. If it returns 100 or more results, mark the server (using some of the schemes discussed above) as fast-capable, and use the fast method for all subsequent queries. If the test query returns the error, mark the server as being slow and don’t attempt to test it again for some number of days. I also don’t see why the length of the server list has to be so restrictive — it only has to be consulted at connect time, so why not store it on the filesystem (or in the registry) and consult the list once, when connecting? The cost of doing that would be lost in the noise. You’d still need some limit for the size of the server list, but it could be much higher than 16. OK, here are my ideas: 1. When a directory list is required, do it with the fast method. If you don’t get the error message from the server (lets say there are only 60 files, or there are 200 and the server doesn’t have the problem) you are done. If you DO get the error message, you know there are more files and the server is buggy. So as soon as you get the error message you re-query the server using the slow method. The user will already have a list of files (about 100) to look through while the rest are retrieved. Pros: * When people don’t run the buggy server, you use fast mode thus things are faster and there is no penalty (like using slow mode for everything) * If a user is just browsing in Windows Explorer, they’d probably never notice you were doing this * Uses fast mode if there are only a handful of files Cons: * It is a double hit on the server when this issue is there. That could be a big problem depending on how often people do this. To fix this, I would propose a registry setting to simply force slow mode that could be documented in a Knowledge Base article for those who need it. * More complex than just always use one mode or the other 2. Keep an internal cache of server names and whether you can use fast or slow mode. After a certain period of time (haven’t accessed the server in a week, perhaps) remove the entry of that server (to prevent it from being to big). You can also re-check the server once a month or something to see if it has been updated (or something like that). When you first connect, you do like in solution #1 to find out if the server suffers from the issue. You save the determination so that next time you can just use fast or slow mode as appropriate and skip the check. Pros: * Again, will use the fastest mode possible * Avoids double-hitting the server every time if it suffers from the issue Cons: * Requires a cache, not as elegant (code wise) and just always using one mode If you like either of my ideas, I’d love to hear about it. My e-mail address is on my website. — Michael Cook PaulM: > Am I the only one who is breaking into a cold sweat reading the majority of these suggestions? No, you’re not the only one. Speaking as a part-time ECMAscript developer, many of these suggestions (all the ones that involve blacklisting or whitelisting certain software versions) absolutely SUCK. The canonical example of this kind of thing going wrong is with DOM detection in JS — if the JS code does its "what feature set can I use?" detection by looking at the user-agent string (in this case, that’d be the server name and version), then it may work most of the time. But if that same user-agent string (or server name/version) starts working properly in the future, your code suddenly needs to be updated. Whereas just checking for DOM features that you need before using them, and including a fallback if necessary, is a *much* cleaner solution. Yes, the code is a bit bigger, but often the differences can be abstracted away in a wrapper function. (See the myriad attachEvent / addEventListener JS libraries that are available, for instance.) In this case, the only two decent options (given the frequency of OS updates from Microsoft) are do nothing, and detect the error *WHEN* *IT* *HAPPENS*. Microsoft can’t blacklist or whitelist certain server software, because if those servers get fixed in the future, the blacklists will need to be fixed too. Whereas if it just detects the error (or does nothing), it won’t care about future updates. After seeing that you’ve hit this bug, it doesn’t matter (much) what you do. But you *can’t* write code that expects to see the bug with certain combinations of software without actually seeing it, because that check will be WRONG in the future. (Now, blacklisting certain server *DNS names* (or NetBIOS names, either way) isn’t so bad, as long as the entries get maintained. A server that gets fixed can’t stay in the list too long.) Well, I’m in favor of "do the right thing" and "don’t give the user a problem they can’t do anything about." 1. I think Patrick Correia’s thinking, along with Mike Fried, about versioning the API is interesting. It doesn’t come at the real issue, but it is an interesting idea just the same. It holds a clean interface contract in terms of previous behavior. I do like that. 2. Along with Andy, Joe Butler, Jstin Bowler, Michael Cook, Mike, Richard Kuo, and Arnt Witteveen, I’m interested in seeing the fast case, if used, deliver correct results whenever possible. 2.1 For me, I would relegate this to a pacing and dataloss problem (even though it’s really a bug) and solve it as a pacing problem. Caching a flag against a failing fast-responder is fair to avoid incurring restart overheads, just like fixing a transmission window size to curb packet losses. 2.2 For splicing in the tail of a slow query with the lead of a fast one, I’m assuming that query responses aren’t atomic and are also immediately stale, so having a potentially inaccurate return is not that frightening in terms of minor discrepancies. There’s an explosion of new failure modes, though, and that makes me nervous, because they become rarely-seen and easily screwed-up cases. 2.3 I am concerned about how the user experiences what happens. Why is the fact of premature ending of the return not visible to the user now? If a connection is lost or a distant server fails in the middle of a query return (directory enumeration?), what happens now? 2.4 I think I would want to warn an user when a retry is in progress (with some non-modal indicator) so they don’t think the situation is exactly hung. I’d want to provide some indication that results may be inaccurate/incomplete if the strange error code occurs and recovery was not attempted or was unsuccessful. I suspect that this depends on what can get through an enumerator interface and what an application does with it. Not encouraging. 3. I keep coming back to wondering what happens now when an enumeration is truncated for any reason. How does that get dealt with, and how is it perceived by users? If it is application specific (don’t know why not), how does application code learn what it is there is an opportunity to be application-specific about? If the enumeration is not informative enough, maybe reving the API is ideal for that reason too. You could have a refresh/retry case there, perhaps. 4. OK, I can’t resolve myself about this, so no pros or cons. Thanks for challenging us with this. I am amazed at the number of people who advocate doing nothing special and letting the directory enumerations fail when querying the buggy servers. Everyone should (but apparently don’t) realize that Microsoft (and by extension, Raymond) don’t really care about the producer/vendor/whatever of the server software. If that were the only entity negatively impacted by the failure, then the answer would be easy – force the buggy server implementation to be fixed in order to get correct results. However, Microsoft does care about the people who are currently *successfully using* this buggy server implementation. Many, maybe even most, of those people don’t even know they’re using it and have no control over how, when, or if the server gets updated. So even if the buggy server implementation gets fixed by the vendor, that doesn’t mean the deployed buggy implementations will get fixed – those users are still screwed. If the directory queries fail, that leaves these people with 2 options: 1) don’t upgrade to Vista (that’s if they even realize there’s a problem) 2) lose data Neither of which is an acceptable solution for Microsoft, even if it makes Vista’s directory query implementation impure. So, please stop suggesting that these directory queries just fail (particularly silently). I was just reminded of the way XP deals with UDMA errors. If a device acts acts up a set number of times, XP will degrade performance by dropping down to PIO from then on. Enabling fast by default, obviously at some point an enumaration might fail and from then only slow is ever used again for everything. As someone else pointed out: why not deal with it in a similar way as a sudden network outage? It wouldn’t be terribly accurate, but it would simply cause the user to retry the action in almost all causes, and the next time around it succeeds so they will shrug and be none the wiser as to what happened behind the scenes. Advantage: * no confusing error messages and no user interaction required at any point * fair tradeoff between using the fast mode and workaround (which is no slower than what’s in XP today) Disadvantage: * the user will still encounter a problem once (but only once) * even if the server gets patched, the user might not know or remember to enable fast queries I would say that the first disadvantage is acceptable. After all, the server is defective in a way and as long as the failure doesn’t happen silently, no harm is done. For the second disadvantage I think the UDMA behaviour is a valid precedent. Argueably using PIO when a device supports UDMA is a terrible performance decision, but if it acted up in the past, you have little choice until the user fixes the problem and tells the OS to revert to the default – faster – setting. My solution is to run a diagnostic at driver installation time. Test for a failure situation after the relevent driver is installed. If there’s a failure set that box to do slow mode. If it passes, set the box to be fast. Simplicity :) Oh boy, a chance to give advice! It might or might not be a good idea to leave out the list of buggy servers, and just try every single call in fast mode. (If you get the error, retry in slow mode, as explained in detail by everyone else.) Someone should find out how often directories have more than 100 files. Also, put this bug fix all in one file, except for the calls to the functions in that file. That way the developers won’t forget about it. "What’s in this file?" is a more common question than, like, "What happens for this error code in this function in this 30K file?" :P It is unacceptable to nag the user regarding some driver issue which they do not care about. Not their problem. Don’t make it theirs. So, no $#%* modal user-dialog. I mean it! Don’t make me come over there! Nothing wrong with an Event Log entry though, so that a tech-savvy user can investigate. Next, what to do about the error. Again, we don’t want to bother the user. Not their problem, don’t make it theirs. Detect the problem (this is not a hack, it is just making robust code), and re-act. Switch to slow mode if necessary. Display a (non modal!) message if you like, just keep it unobtrusive and clear. "Switched to slow network mode. See this url for help." Think outside the box. Don’t tell me anything about "well half the result is already processed and um…". Deal with it. If only we lived in a perfect world! Just do what you can in the time you have. But don’t kid yourself into thinking your problem is a buggy 3rd party driver, because that’s just a symptom. Your problem is your design’s inability to handle protocol errors in a transparent and well-defined way. And while I’m ranting, what’s with the whole "shell and networking teams cooperated to find the problem" thing. If some stupid component ate an error, then *thats* a problem. Has anyone even suggested addressing that? I’m in a bad mood. Has anyone got any chocolate? If the error displays after a mere 100 (or so) entries, why not simply cache them (if doing a "fast query"), and on failure do a non-"fast query" and simply cull the already "fast query" gotten results already returned to the "client"? I mean, it’s not like Explorer is the hallmark of lean-and-mean anyway, why adding something like (up to) 25-30KB for each "live" Explorer dodah wouldn’t even be measureable in the megabytes after megabytes it consumes. I do however wonder what a "fast query" is. At first I thought you were talking about SMB, but then you added HTTP to the discussion. I suggest you stop the riddles and simply tell us what protocol we’re dealing with here. Trying to find a solution for a half-specified problem in an unspecified domain is about as easy as lifting yourself by the hair. I would suggest the following: 1. Use whatever method is reliable to detect the buggy driver. 2. If buggy driver detected inform the current user that buggy driver detected and that they should notify the system admin, with a note to check event log. 3. Write a detailed event log message that the buggy driver was detected along with a link to a MSKB article on the problem with links to the fix. 4. Disable "fast mode" It appears the program in question is indeed Samba, and the problem is described at: This bug certainly sounds like the one Raymond is talking about, at least. It was apparently fixed in Samba 3.0.21c, released on 25 February 2006. A few people seemed to have missed Raymond’s comment that requerying isn’t really possible (at least not without a lot of work). There are a few principles at work here: a) The user should *never* get an incomplete list of files from the server without any indication. This is bad. I don’t think there needs to be any special error handling here — there’s probably an existing error code that can be returned to the caller to handle a similar error condition. In addition, adding something more informative to the event log is a good idea. b) The user should have some way to enable "slow mode" so if upgrading the server isn’t possible they can still run Vista. Include a knowledge base article about it. If a server administrator encounters this problem during testing they will be able to figure it out (either by contacting their vendor — who will surely know the fix — or by consulting the MS knowledge base). Since there’s no good way to reasonably detect the problem and retry it — this seems like the best answer. The driver will get upgraded and this problem will eventually be long forgetten — no need to hack some complicated solution to it in the short term. I think the solution here is a combination of possible methods, which have basically all been described here. First: it’s not the user’s fault. He can’t do anything about it, so don’t bother him. Put an entry in the event log, if you want, but leave it at that. Second: It’s obvious the vendor doesn’t want his software to be classified as ‘slow’, so they will fix the problem ASAP. Unfortunately, it will still take time for the new version to filter through to the distributors and finally to the customers. This is not the vendor’s fault, so there is no need to punish him by displaying an error message either. The solution to the problem, under the above circumstances, has already been described here: Use fast mode to query, maintaining a list of the first 100 entries. When you get to entry 101st and receive an error, switch to slow mode and re-query. Match the slow mode entries with the ones previously obtained in fast mode and return only those that do not match. This is an enumeration, btw, so, as with all enumerations, things might change while you are doing it – but that’s a different problem altogether. Now add that server to your ‘last 16 buggy servers’ list. Next time you need to query that server, you will not have the overhead of doing fast mode first and then repeating in slow mode. Time stamp each entry in the ‘bad servers’ list so that each buggy server is removed from the list after one month or so. Why? Because the server software might have been updated in the mean time. Who cares if once per month *one* query will be a little slower than usual? Problem: Third party software implements a Microsoft protocol incorrectly. Solution: Microsoft fully documents the protocol. Advantages: Third party software doesn’t have to be developed in the dark. Users get to live in a blissful world of system interoperability. Happy users won’t be moaning that Vista doesn’t work come 20??. Might get the EU of Microsoft’s back. Disadvantages: Users can use non-Microsoft solutions with less fear, uncertainty and doubt in their minds. People will want NTFS documented next! I’d favour a combination of the suggestions from some previous posters. 1. Document this compatibility problem in the Microsoft Knowledge Base. 2. Have a setting for "always use slow mode", which is turned off by default. This is controlled through Group Policy (on/off/not configured). 3. If "always use slow mode" is on, then do what it says on the tin, and there’s no error. 4. If "always use slow mode" is off, and an error occurs, don’t display an error message to the user, but do write an error in the system log, which includes a link to the aforementioned KB article. a) If Group Policy for this setting says "not configured", then set "always use slow mode" to on (writing another entry to the system log), and repeat the query. b) If Group Policy says "stay off", then do what it says, i.e. stay off and leave the user with an incomplete list of files. Advantages: * The user interface doesn’t get cluttered with extra options. * A home user can use Group Policy to control their local machine. * A corporate admin can avoid schlepping round to 100 machines making this change; they can turn on slow mode when they know about the problem (which avoids other PCs on the network having to repeat the test), then turn it off when they know that the NAS box has been patched. * If people aren’t affected by this bug, then they get an immediate speed improvement, without having to go out and buy one of those magazines with dubious "tips and tricks to boost your PC!" * Anyone who knows what they’re doing will notice the problem in Event Viewer. Anyone who doesn’t know what they’re doing will (hopefully!) not have been screwing around with this setting in the first place, so they’ll get the default behaviour of silently succeeding after a small delay. * This doesn’t delay a roll-out of Vista for anyone who is affected by this bug. Disadvantages: * It is possible for someone to sabotage their own (or someone else’s) machine, if they have admin privileges. This could be done on their behalf by malware. In this situation, they will probably blame Microsoft. * Vista has to be sullied by code to work around someone else’s bodge job, and this may lead to accusations of "bloatware". — Personally, I would like to go for the option of "do nothing". However, as a customer, I wouldn’t do a rollout of Vista if I knew that it was incompatible with one of my servers. So, I’d push for a bug-fix from the server vendor/manufacturer/whoever, but in the meantime I would delay taking up Vista. Wow, what an ugly bug.. This may have been said, but it seems to me that any solution that solves "…Users will interpret it as a bug in Windows Vista." will involve extra code or workarounds that will be a liability and eventually become a bug in another version or two of Windows. (then again I love a day where I manage to end up with a negative net line count) /* oversimplified for clarity */ static int currentPos; static mode = MODE_FAST; static struct dir_item *buffer[101]; static struct dir_item *nextItem[1]; function MoveFirst() { int res = ReadUsingFastMode(&buffer, 101); if (res == WEIRD_ERROR_CODE) mode = MODE_SLOW; currentPos = 0; } function MoveNext() { struct dir_item *res; if (mode == MODE_FAST) { if(currentPos <= 100) res = buffer[currentPos]; else { ReadUsingFastMode(nextItem, 1); res = nextItem; } } else { ReadUsingSlowMode(nextItem, 1); res = nextItem; } currentPos++; return res; } How about simply crash? Either explorer, or the calling app, or some dummy process that doesn’t do anything other than die for the purposes of sending in a crash report. Then the user gets to send in an error report, and you can point them to a response page telling them to upgrade the offending driver. You can also set a regkey to use slow mode for the rest of the session, in case the user tries again. Clear the regkey once every few weeks so that people get reminded to upgrade their stuff eventually. Benefits: – users that don’t get to deal with buggy servers always get the fast behavior – users of buggy servers know what the problem is and maybe will fix it (and aren’t totally hosed in the meantime) – kernel stays (relatively) clean – noone sees incomplete results Drawbacks: – Vista crashes sometimes :( – Users that hit a buggy server sometimes will live in slow mode for a few weeks (can it be that bad if that’s all that exists in XP though?) – Vista crashes a lot if there’s a bug in the code that identifies buggy servers Whatever you do, please don’t "do nothing." It might be old samba, which my cheap NAS hdd box maight use. Even if so, I don’t think the vender cares about that. I think, if it can be used by XP, it should be used by Vista. My creative (ahem) solution: Shame the vendor into solving the problem. You’re absolutely right that users will interpret this as a Windows Vista bug. Not solving it leaves a bad impression, even if it’s not your fault. But as you describe it, this fundamentally seems like a social problem rather than a technology problem. The desirable outcome is to have the vendor fix the problem rather than to implement some nasty technical workaround. It’s as if the tires on a car were substandard; the car manufacturer could come up with all kinds of workarounds (limit speed to 65 mph if the temperature is over 70F, except when you turn on the hazard blinkers to indicate an emergency), but the real solution is to get the supplier to fix the underlying problem. Of course, the relationship is rather different here: this isn’t a supplier over whom you wield direct economic power. So what other kinds of power could you bring to bear? Legal is unlikely, but social might be possible. My social engineering skills are somewhat weak so I’m certainly the wrong guy to suggest tactics, but I bet you could find people who could figure out how to apply the right kind of pressure to make this problem go away without writing code on your end. Thinking in purely mercenary terms, perhaps you could even pay the vendor to fix the problem on their end? Yes, you’d have to rely on the installed base actually installing patches and whatnot, which might be totally unrealistic; still, I can’t help but thinking that their might be ways to resolve this that are not fundamentally technical hacks on Microsoft’s part. I don’t think it’s viable to offer a ‘classic’ directory listing and a new improved ‘rapid’ directory listing for apps to choose. All that will happen if this option is given is a lot of application developers will simply choose the ‘fast’ method without realising the implications. This, again, puts the end user at the mercy of sloppy developers. Haven’t we learnt anything from Raymond’s posts – give the developer the ability to do the wrong thing, and will jump right in – just look at how many people here are talking about ‘buggy drivers’. < :-) > An alternative fix would be to release Vista SP2. This would really be XP SP2 with a Vista theme. 99% of Vista users would not notice that they had been regressed. </ :-) > Upon first interaction with a server, run a query to test if they have the fixed code. If no such test exist, write one into the the next version of the server software and consider all current versions as failing. Cache the return value in a short list* (both positive and negative results) and use that to guide the call to the directory query functions. Expire the cache between connection resets or at known times the server software could be changed (I assume connections have to be reset to upgrade/downgrade the file server software). If your cache is full, do not add an entry. If the cache value is missing during a directory operation, use slow mode. Advantages: No user interaction Everything works always (assuming the connection stuff i mentioned) Fast mode preserved in common** use case Disadvantages: Requires potential change to server software*** Slow mode dominant if you regularly interact w/ large (> cache size) numbers of servers. Overhead on establishing a connection. *Expose the cache size param via tools/articles where interaction with a large # of untrusted file servers is expected, maybe the ‘Optimizing Bittorrent for Windows Vista Development Journal’. **Common given my understanding of how people use file servers, if this isn’t the case this solution is suspect. ***If necessary, this might be unreasonable to ask of a group you may/may not have political clout with and would kill this solution. Oh, and kudos to your test/shell/networking and team for finding and identifying this issue. Perhaps step 0 of this solution is giving whoever decided ‘we really need to test against X version of the file system software’ a promotion. My pick: do "Auto-detect the buggy driver and work around it next time", but requery *immediately*, not next time. Remember the server is "bad", but periodically try fast mode on it again anyway, just in case they’ve upgraded. While the requery is going on in the background, pop up a warning dialog saying that a buggy driver was detected and that they should upgrade their server, pointing them at the relevant KB article. I admit I usually side with MS on most things, and I also don’t know whether this is a result of MS not fully opening up its SMB information, but this seems like a case of reaping what you sow. If MS had opened its network protocols in the first place, Samba wouldn’t have had to reverse-engineer them. I feel like a Slashdot troll. It’s a very unpleasant feeling. Here are the ideas and assumptions that shape my point of view. 1. Breaking something by default in a new OS that worked in the prior version in unacceptable. This means "default to fast and do nothing" is out. This option is seductive. It seems like "doing the right thing", but the theory has to be carefully balanced against reality. Doing nothing might be fine, but only if you can get away with it. I don’t think Vista can get away with it. 2. Showing the users a technical error message in response to what would be percieved as an arbitrary condition and when things work fine 99.9% of the time is not acceptable. (If it would confuse my father and cause him to call me, it is not user friendly enough.) It’s even worse if they have to make a decision. Always remember: USERS CAN’T READ. This means that any message box that the end user sees is out. 3. Assuming that the faulty number of files will be constant is a bad idea. Sure, today it might be 100, but there is no way to know if the code might be modified at some point in the future to break in the same way at 50 files or 500 files etc. This means both the "cache the first 100…" and the "default to slow until you come across a directory with more than 100 files…" plans are both out. 4. Black or white listing server versions is a bad idea. While you might be able to build a comprehensive list of all faulty implementations in the wild today, it is possible that tomorrow a new faulty implementation will be released and it can not be assumed that the user will ever update the OS to recieve any additions or subtractions to the blacklist/whitelist. It is fairly irksome when I stumble upon a web page that doesn’t recognize my browser version it tells me to "upgrade" to a prior version because it only knows about browser versions x and y and I have upgraded to z. So, the static black/white lists are out. 5. It can not be assumed that the user can do anything to fix the faulty server version. Expecting the user to fix the server is bad in so many ways, it is hard to know where to start. First of all, many people use computers in enviornments where they have no control over the server(s) they use. For that matter, some sys admins do not have any control over the server version for various reasons ranging from auditing paranoia to black box systems with recalcitrant vendors or out of business vendors. Not to mention cases where the server is under the control of another party that the user cannot force change upon, even if that change is possible. This means suggesting upgrading of the server as the only possible workaround or solution is out. 6. It cannot be assumed that the broken versions will eventually die out or be upgraded in a useful time frame. If this is OSS code then it is possible that a company has a branch of the broken implementation that they might not fix, for whatever reason, but which they may incorporate into a future product. This means we may be stuck with this hack for a long, long time. Not to mention the fact that if the OS siliently works around the problem not only is there no incentive to fix it, but other products may someday intentionally implement the bad behavior for whatever silly reason. If you don’t think this might be true, go read all of the old entries on this site. 7. If performance is really a concern, blacklisting 16 bad servers is not a great plan for a couple of reasons. If the user never accesses more than 16 bad servers, and the servers are fixed later then they will be stuck in slow mode forever. Also, if the problem only exists for lists that are over 100 (well, n, because I don’t believe that 100 is a hard limit) files long, then having one instance on a server of 101 files could forever relegate that server to slow mode even if there is never another instance of a directory with more than 100 files on that server. Even if there is some automatic method to clear out this cache, that will just have a negative performance impact on the people that frequently have to access large directories on a server that is not upgraded. This takes out the option of keeping a black/white list of servers that are known to be good/bad. 8. Logging the error to the event log is acceptable, but putting a url or KB number in the text is not desirable. Describe the error and the problem as well as possible, but don’t assume that the KB number will be useful several years from now. From my point of view, this would be hard coding something that should not be hard coded. There is a possible future where the same exception is raised and logged for a different reason, making the hard coded KB number very misleading. It is also no fun to end up viewing KB404. This doesn’t rule out messages in the event log, but it rules out being overly specific about the assumed cause. 9. It is not acceptable to produce a file list that is not reflective of the directory at one single point in time. This rules out the "cache the first n records and requery merging the new results…" plans. I must admit that this is a somewhat attractive option in that it would make the issue invisible, but the problems outweigh the benefits. So, after all of that (and I’m sure I forgot something along the way), I would have to go with slow mode by default with a configuration option to enable it. Depending on how it is implemented, it may be accompanied by a warning that results may not always be reliable. (This is acceptable, by the way, because it is presented in a context in which it is more likely to make sense to the user and in response to a direct user action that does not have the historical expectation of "just working".) It would also be good to log the error to the event log. This would allow sys admins in controlled enviornments to enable it as well as "power users" (even if it isn’t a good idea, these types of users are more tolerant of broken behavior because they tend to cause it) who want the "speed boost". This keeps normal users from being exposed to strange and confusing behavior in almost all cases, and it limits the number of people who will assume that Vista is at fault. This approach reminds me of DMA settings for IDE devices in Windows 98. Since you’ve bothered to read this far, I’ll punish you with a reminder of an irritating bug in Windows 98. Every time I checked the DMA box for a hard drive for the first time, I had to go back into the properties and set it again for the setting to "stick". I’m not really sure how I ever figured that out. Advantages – No failures by default ("it just works") – Very few failures total – Allows for fast mode, but it is a conscious decision that someone has to make – Doesn’t require a gross hack Disadvantages – Slow by default – Malware could, in theory, tweak the setting and cause problems, intentionally or unintentionally (but I see this as a problem with almost all approaches, and there are bigger malware fish to fry anyway) – Fast mode is less useful because it will be rarely used – I think the anticompetitive disadvantage is a straw man, as long as everything is treated equally. I think it only really becomes a problem when only some specific servers start working slowly for some ineffable reason. I apologize for any spelling or grammar mistakes. In no case is a silently incomplete file list acceptable. The shell must either fail with an error or return the entire list in some way. Is there no place in IEnumIDList::Next()’s contract where an error can be indicated? If so, applications that ignore the possibility of the shell returning an error are already doing so at their own risk, and you can obviously update Explorer to respond to it properly. If not, your contract *requires* a full list, so you must guarantee the existence of such a list before you allow someone to start iterating through results, which may defeat the purpose of a fast query…. (Also, in that case, why does it use the same interface?) Like everyone else here, I’d love to know more about the problem before making wild guesses about what solution is best. You’ve got us cornered here! Whatever you do, you must make it trackable to the respecitve vendor + update. Event log + KB article + link in error message (if any) Fix automatically only if you can fix it perfectly (b detecting the driver version, or executing a requery – which I still don’t understand why this isn’t possible) Try to find a way to tell "good" from "bad" servers. If nothing works, pass through the error in the API. Maybe use a more friendly way than a message box to handle this in explorer (like the "popup blocked" in IE?) but LINK TO THE KB ARTICLE! MHO for the other options: "Do Nothing" camp: In two years there will be equally many shouting "Windows sucks because they still doesn’t use fast mode which has been fixed ages ago" The problem is not the vendor fixing it, but distribution. An Error message does promote distribution, but moreover it is a "It worked in XP, it broke in Vista". Customers don’t care who is guilty. They care about features, and the software industry getting their act together. "Explorer Option" Suitable only if this is a major product. I guess if MS always took this way, we’d have 65032 explorer options by now. One possibility would be a single checkbox "Compatibility Mode", aggregating multiple fixes, with help linking to a KB article that describes which things it fixes in detail "Certification" Please no. Development for Windows is getting harder and harder for small shops. Required certification may be the death for many applications written with passion, and move us toward 9-5 drone software. Voluntary certification (as it is NOW with drivers) will not stop people installing crap and blaming MS – but maybe that’s what it takes. Coax the vendor into a fix that allows you to detect which version is running. If you can’t detect, run in slow mode. Having read the thread, here’s my vote: I’d rather have Windows detect the buggy driver, then popup bubble to alert users about this… much like what you would get when you plug a USB 2.0 device to a USB 1.1 port. I see no problem for this. Just make sure the warning message contains specific string that’ll make it easily searchable in search engines.(With KB article number, maybe) One idea is that when you detect the error, re-enumerate. Then, return the item they asked for (i.e. keep a record of what item you would have returned if the error hadnt been detected and return that from IEnumIDList::Next after you re-enumerate) Only slowdown then is once when the error happens and the directory has to be enumerated again with slow mode. I’d say disable fast mode for the initial release of Vista and have an option to enable it. Put the story why it is disabled in the help for the option. Enable it later on (SP1?) so it will not be blamed on Vista itself but on some service pack. apologies if I’ve misunderstood the problem and for not reading the 153 responses. The problem is temporary based on a 3rd-part bug. Prioritise designing for the ideal – no 3rd party bug. Then prioritise letting the user know that they are not getting standard results in a manner that acts as a pressure for the 3rd party to improve its business by removing the bug/problem and moving towards the ideal. I recommend that you: * Include this issue in the Release Notes or a Read Me file and a KB article * Add a check box somewhere that disable fast mode. Advantages: * People can read the Release Notes or a read me file or search in the KB for it and find the issue. * People can disable fast mode, if nesserary. Disadvantages: * Not everyone read the Release Notes or aware of the KB. A lot of noisy people here. I’ve seen probably a total of 5 largely different solutions, and those are the original ones Raymond put up. Lay off the vendor. Raymond said in his second paragraph that the vendor fixed the problem and the distributors haven’t picked it up yet. If you’re not going to read the comments before replying, at least read the original post! There are a number of things that must be remembered: – For users it must just work – The problem should be limited as much as possible: – Make a difference between new stuff and old stuff. – If you automagically work around issues such that they don’t have a backlash, no-one will fix them. – While things should just work, they can be slow. Users can also be given unobtrusive warnings (in the statusbar, not a dialogbox) If it would be possible to detect the broken servers that would be preferred. If not, I’m in favour of (if possible) having two APIs. The compatible one for existing code that is slow, but works always. The new API should return an error if the fast access fails, that suggests the application to retry. (I suppose that dll’s support versioned symbols). Wow, lot’s of comments on this one. I just browsed through most of them. Anonymous: "Tremendous hack: Do the fast query, remember the files returned (up to 100, or the maximum number of files that a bugged fast query can return). If it fails, do the slow one, and don’t pass the files already reported." That could cause non-obvious strange behaviour years from now, I can see the tont blog entry coming in the far future ("When drivers don’t support something they claim to") AC: ." Almost exactly what I thought. Try to find a signature to detect the broken server. Advantage: It’ll "just work" Disadvantages: Ugly BC hack in the code, slightly slower query time Brad Corbin: number 4 entered my mind in the first part of your solution too, and indeed having a new API is a tremendous bummer, especially now that it’s been partially delayed again. Joe Butler: interesting solution, let’s hope a security bug doesn’t crawl in To those that say the vendor should be contacted or the driver should be fixed and BryanK, read the friendly article, it was already fixed but there’s still need for BC. (How on earth would you track down who uses or is going to use the buggy version?) BTW adding any sort of UI, be it an error or configuration setting is costly (requiring a help text, a translation, a translation of the help, possibly a KB article to explain it, etc.), especially now that Vista is in a late stage of development. And this one is especially funny: Martin: "The dialog should say something like this: The server you are accessing only returns the first 100 files when using fast queries. Do you want to turn fast queries off? Y/N." Some users aren’t gonna read or understand it and just press No, Raymond has pointed that out already: And I agree with JamesW, although bugs can happen while there’s proper documentation provided too. Do "almost nothing." Create a new field in installs that allow the installer to specify who the mystical entities (such as "Network Administrator") are, complete with E-mail or some other contact information. Have this also be built into the framework for Active Directory/whatever you want to call the Vista Server implementation, so enterprises can customize this (and update it) site side, in a single place. This way, the end user will know who to contact in a corporation. At home, they’ll know who to contact as well. If they don’t, I’m going to make the rash assumption that they probably don’t need this server sitting in home. If you do much else, what are you harboring? If I knew that I could make an *almost working* game/driver/program, and publish it, having Microsoft more or less "complete" the code for me, think of all the man hours I’d save. Everybody, take a look at the posts from Mark Sowul. He specifies which calls are in question: NtQueryDirectoryFile returns STATUS_INVALID_LEVEL after first 128 entries. After STATUS_INVALID_LEVEL is received, calling FileBothDirectoryInformation returns STATUS_NO_MORE_FILES. Now both are NTDLL calls, and both are (well of course, why do you ask :) ) undocumented. So it look obvious that such bug should not be "fixed" just in Explorer. Moreover Explorer should not even use these calls, but use "FindFirstFile" that’s the only thing that anybody else can use. Anything else would be cheating. And unfair advantage compared to any iother third party application which can only use the Win32 API. Skip the error messages or dialogs. Add a configuration option to the Network Client to "Allow Fast Mode". Default it to Off, but put it there. Obviously XP’s slow mode wasn’t all that slow, so defaulting people to slow mode isnt a problem. In the mean time, work with the vendor to upgrade their server. At some point in the future, simply change the default for the network client from Off to On for this option – once its deemed that the risk of users hitting old servers is minimized. Cons: Everyone is punished and forced to use slow mode – unless they happen to be aware of fast mode and how to enable it. Why I dont care about the con: XPs exclusive use of slow mode means that slow mode isnt THAT slow. Pro: The ability (and intention) to enable fast mode means the vendors can get fast mode compliant drivers tested and out there. I vote for retrying and continuing using the slow request when you catch this error. There have been some comments saying this is complex and nasty but I would think it is a lot simpler and cleaner to implement than a specific error message. And I think version checking is almost always the wrong way to do things. Check for functionality not a specific version. I think almost everything as been said on this topic. Anyone remeber the infamous OPLocks? They were enabled by default, and what we got..? The Fast Mode (whatever it is) seems something similar. They both have/had compatibility problems. The root causes are different (in most cases) but the point is, the new, a lot faster system was enabled by default. It was not a success. This and other reasonings brings me to say "Disable "fast mode" by default", at least on the business editions. Home editions might not. But let me an easy way to switch it on or off with the correct advises ("This option might…"). The system might even advise if automatically after a while (like performance center does) if no problems encountered. But that’s another problem. And lastly, because this seems to be, only the FIRST appearance of a problem with this new type of query. It might, and might not, be more pervasive that it seems now. The use data preservation is the first priority of an operating system. If I cannot find files, it’s not so secure, whoever the blame is. Laura As a developer in this particular problem space, the solution I prefer for the good of mankind is "do nothing and let the buggy file server get fixed". There are enough hacks and workarounds associated with SMB already. For bonus points, have Someone Important get on the phone right now to the CEO of Electronic Notworking Moving Control Appliances, or whoever they are, right now, and point out how embarrassed they’re going to be when Vista comes out. Care to describe what ‘fast mode’ and ‘slow mode’ really are in SMB terms? I’m curious. would use Centaur’s solution. i know the reason, why microsoft is doing so much "compatibility" code, but for buisiness, where the company is still alive and has a workaround, it should not be "hidden silently" BTW, I do think it’s really cool that Microsoft is actually catering for a bug in Samba. I’d expect MS to do everything possible to undermine Samba, and Samba to do everything possible to stay compatible. This is a major leap forward in that regard, thanks mr Chen! This is good for MS. PingBack from Do a slow mode by default, but try to see whether the fast mode can be properly supported. Keep bounded LRU splay tree with servers that support fast mode Do a lookup and if your server is not in the tree, do a slow mode. This should yield same practical results, as fast mode by default. For the comments that say the following: . " Some of you are just plain killing me. It appears you don’t understand the economics of writing software. James My take is: Do Nothing. All NAS devices are upgradable and all un*x server admins can upgrade their software as well. Can’t see why there should be another backcompat/bugcompat hack in Windows. So… * There is a buggy server that claims to support fast mode but does not. * The bug is fixed in the server software. * Distributors are distributing the old version. * Windows Vista is going to use fast mode. * Some time will pass until Windows Vista ships. Therefore, * Contact distributors, urging them to update to the new version. * Give them a sensible time frame. * After that period passes, issue a public advisory (or cause a public advisory to be issued), probably to a security-related mailing list such as Bugtraq (since the symptoms look like the condition of denial of service to the user who wants the 101st file). State that the bug is fixed in version X.YY. * When the error condition is detected, log an event with a descriptive details text that administrators and advanced users can google for. [Summary: I suggest a mix of "autodetect with dialog", plus "config setting for SMB client", plus "knowledgebase article", plus "syslog" plus…] Allowing developers to turn on (or off) fast mode in their code is important, but won’t fix the problem, so isn’t a relevant response. A "go slow" setting should be provided for the SMB client. Won’t fix the problem when it happens, but allows a fix for the general class of "problems from fast mode", rather than the specific class of "problems from samba returning a bizarre error code". This does not address the user experience, though. Requiring apps to deal with this will still make people say "Vista bug!", which it is. All my apps break except those upgraded by Vista? Vista bug. Anyway, you can’t code for every incompatibility caused by fast mode, whether you put your workaround in explorer or the SMB client. You also can’t rely on the server being fixed unless you tell people it needs fixing. So the answer is probably found in the answer to the question "how does SMB deal with the network being dropped before a findnextfile"? Users are used to networking issues. They don’t (generally) blame networking issues on MS. Do what you normally do then, but with a different message and (imperative) a pointer to the KB article. Log it in the syslog (also imperative). The KB article should list how to turn fast mode off systemwide, how to upgrade the server so they don’t have to, and importantly, how to remove the files you can see in order to see the others. And as people pointed out earlier, open up and document the SMB standard now that there are clear financial and development costs to having it a closed standard. Advantages: * Operating system remains "pure", unsullied by specific compatibility hacks. * SMB client now has the ability to deal with all unexpected error codes and direct people to a relevant KB article, which gives a generic answer, and directs them to a specific subpage for each known unexpected error (this samba bug being the only one, for now). * Customers with this problem will know that they have it. * Customers are given a specific place to look for the solution. * Customers can turn off fast mode clientside, so support contracts are unaffected and the problem can be resolved even without firmware upgrades or talking to an admin. * The customer is told how to reduce the number of files in the folder and recover all data even without resolving the problem on either end, even if they’ve no access to the SMB client settings on their own machine. * A clear enough message causes users not to interpret it as a bug in Windows Vista. * Administrators can choose not to upgrade, and instead tell all users to configure their clients for slow mode, or just to not store more than N files per dir. * None of the disadvantages of the autodetect/blacklist methods. Disadvantages: * Programming and testing required, and it’s very late in the day for Vista. * It doesn’t silently "just work". * Makes all servers use slow queries if there’s a single buggy server (unless the client can be given something like a "slowservers.ini", which accepts wildcards – a manual whitelist). * Users have to read an error dialog, understand it, and follow it. Do the same as described in MSKB article 896427! Stefan A number of people have been saying something like the following: There is no real option. Explorer must either show a list of all the files all of the time, or an error message. Everything else is just details. These people are right. Showing an incomplete list without explaining to the user that something is wrong is broken behaviour, and must be fixed. Further, the order of preference is clearly "show a complete list" followed by "show an error message", because a working tool is better than one that tells you it isn’t working. Version checking doesn’t work. Comments have made it clear that the software at fault is samba; samba has a feature that allows the user to customise the system name and version number returned, hence there are probably thousands of different cases here, many of them indistinguishable from working ones (e.g. I’ve seen samba servers installed that claim to be W2K servers). At the very least, every single NAS appliance manufacturer who uses it will have their own custom system name and version number scheme. So, for me there are only two possible solutions: 1. Make it work. The best way of doing this depends on the precise nature of the problem, but it seems clear from Raymond’s comment above that the current API is not adequate for the purposes. This doesn’t make it impossible to make the file listing work by switching to slow mode on receiving the error: it just makes it harder. Deprecate the current API and make the current version behave identically to XP (i.e. it should use a slow query if that is what XP does); introduce a new version with a new contract that allows fast queries but requires the user to check and reissue a slow query if the fast one fails. Update Explorer to use this new API. This is by far the best solution possible. Advantages: Explorer works and is fast in most cases, other new apps work and are fast in most cases, old apps continue to work if they currently do. Disadvantages: Old apps don’t benefit from the speed increase. C’est la vie. Requires inclusion and maintanence of legacy code (a deprecated API). Windows doesn’t have a strategy for removing deprecated APIs and seems unlikely to introduce one in the foreseeable future, so this is a cost that will be ongoing indefinitely. It’s likely only a small cost: the old API can probably be implemented as a special case of the new API with only a few lines difference. What’s the cost of (e.g.) 50 lines of code? Not a lot, all things considered. 2. The other option is display an error message. This is Raymond’s second option, and by far the best he presents. It can be combined with a config option that makes either Explorer or the system degrade to slow queries, and this option can be mentioned in the text of the error message. Other things that should be considered: should Explorer show some UI to indicate what has happened if option 1 is implemented? I think yes; perhaps something in the status bar like "139 objects (using slow mode – click here for details)". Another alternative is logging the failure in the event log. My opinion is that at least one of these should be done, although I wouldn’t say it is incorrect to do neither. Doing nothing about the whole problem, though, or fixing it only by adding an option to work around it but being silent about its existence, *is* incorrect. use try catch & see the error thrown. PS. :-). Jeremy Allison, Samba Team. As an aside, the Apache Axis SOAP Stack has some patches to deal with .NET 1.0’s SOAP stack which doesn’t handle all forms of XML. It went in, although its slightly more inefficient in terms of bandwidth, because the apache team knew they’d get the support calls for the interop problem. Saying ".NET SOAP doesnt handle all XML" may be true, but it doesnt meet customer needs of having things talk to each other. Pragmatism beats ideology when it comes to network interoperability. It seems to me that yet more "state" is a bad idea. blcklisting buggy SMB servers won’t really scale. The user needs to know that he’s not looking at the whole list, when the server fails on a "fast" list. All these extra error dialogs and such are just a cop-out – "help, it’s gone funny, not my problem any more". So: 1) Only Explorer uses "fast" mode, as it has the wit to work round it. Other programs should be (silently?) made to use slow mode. 2) When the "weird error" happens, the hourglass changes to "pointer-plus-hourglass" – in other words, you can select and work with any files shown, but you get the idea that things aren’t quite done yet. 3) Meanwhile, Explorer re-queries using slow mode. When the number of returned entries exceeds the number displayed (or there are no more entries) the Explorer view refreshes. Which would un-select any current selection, of course, but then you had the idea that things where’s quite done. This simple refresh idea removes any inconsistencies caused by the directory contents changing in between queries. So, what you see is: list, pause, list refreshes bigger. I’d honestly be surprised if there wasn’t a better way to diagnose this problem. That’s what I’d spend time looking into to. Especially if the Samba team is happy to help, I’m sure some sort of solution can be realised so that you can do a version check and solve all the problems. Clearly you can’t "do nothing" and the worst, but most likely, result would be to force fast mode =’s off always. *shrug* I do find it hard to believe you can’t work with the Samba team to figure out a way to do a version check. How about… 1) Default ‘off’ fast mode in Vista, perhaps with a GUI setting to turn it on. 2) Make fast mode have a big bold link "WARNING: This may cause problems with…" that leads to a KB article shaming the vendor(s) involved for their bad code. 3) Work with third-party vendors to get fixed code out and give the broken code some time to exit the marketplace (read: be updated or have its parent device die). 4) Enable fast mode by default when a higher percentage of the marketplace runs non-buggy code, perhaps in the next release. 5) Issue an event log entry (with a cap of once a day or so) that indicates if a buggy server is found while fast mode is on. Advantages: * Vista "just works". * Vista is at the same speed/compat by default as XP. * If the speed differential is significant, some users will complain to their buggy NAS/software vendors to force them to upgrade if it doesn’t work with the new Vista feature. * Users who run into the problem will likely have an idea of the cause and be more able to fix it. * Not as much of a compat hack as re-issuing the query and legitimately adds to the flexibility of the product. * Allows users to adjust a setting if OTHER products have similar problems with the new feature. Disadvantages: * Vista doesn’t use its full speed potential. * No proactive way of handling broken servers — would probably require an administrative change to restore compatibility after fast mode was turned on. I find it funny how people propose that Explorer should use undidclosed API functions… weren’t suspected OS-specific "ties" considered a gravely bad thing and a reason for major criticisms of Microsoft products? Look here for just one single example…; } } Sorry, I haven’t read all the comments. Firstly, did you discuss it with the Samba guys? If it was a misunderstanding of the protocol arising from lack of documentation, I think Windows should work properly with Samba. But if the Samba guys agree it’s a bug in their server, I’m strongly in favor of Do Nothing (or, if that’s not acceptable, Put up a dialog). Windows should not accumulate (even more) cruft to compensate for bugs in third-party apps. Do what M$ is best at: Sue ’em! Disable fast mode by default. Inform the vendor that this is the default setting in Windows and that fast mode can and should be enabled by the vendor’s software installer when a newer (and functional) version of the driver is installed. Slow mode is default. Network driver detects server version. (If there is no way to do this, add it to server code, and e-mail the Samba people how to do it as well). If server version is bad (this should be in Explorer of course, because here’s where we go into kludge territory), add to "Bad server" list (keep 256 or so) so you don’t have to repeat your inquiries, warn the user (once): "Retrieving file listings is not as fast as it could be. Please upgrade the server you are connecting to or contact your administrator" — but only in business versions; this is exactly the kind of message that could freak a home user out, log in the event log (ALWAYS) and stay in slow mode. If server version is good, add to "Good server" list (keep 256 or so) so you don’t have to repeat your inquiries and kick to fast mode. Advantages: – Transparent to user – Never any missing files – No problem if a server Disadvantages: – Slight perf hit for the detection. – If there is no way to detect the server version, all will be slow until the newer, blessed versions of servers roll out. – Clients stay in slow mode even with a server upgrade (although you can work around that by making the Bad Server list entries time out every month or so). Keeping things transparent to the user is all well and good, but if you mask the issue on then how will the administrators know to perform the upgrade unless they happen to read the client event logs? The user needs to know about the issue. Raise a warning which says contact your admin and links to a KB article. Use some sort of reg key to configure it and have an option to set this via group policy for the rest of the clients on the network. Then I’d follow this advice: Sunday, April 02, 2006 8:54 PM by Morrog; } } How many flags do you want? The base note lacks some important information, but I gather from several replies that APIs such as FindNextFile are affected as well as user-visible applications such as Windows Explorer. When the API sees that an error occured, I agree with several replies that the API must inform the caller that the error occured AND the API must log the event. If Windows Explorer gets an error return from the API in retrieving information that it is supposed to display, then Windows Explorer should display a message box to inform the user instead of displaying incorrect results. (More on this later.) If other applications get an error return from the API, their coders can decide what to do, such as getting the error text from the system and displaying the error text, or deciding how they want to do a retry. It should be moderately easy for the user to configure what to do. I agree with another poster that the fallback from DMA/UDMA to PIO is a partial precedent. The obstacles that users encounter in discovering that they’re suddenly running PIO, and finding why, and finding how to undo it, are not good precedents. Good logging is needed. I agree with another poster that the choice of turning write caching on or off is another precedent. A third precedent would be digital audio from CD-ROM drives. By default it’s off, the user can turn it on, and the user can turn it back off if it doesn’t work. A fourth precedent would be the control panel applet that lets users set and store some usernames and passwords for access to various network servers. I think that in principle the above precedents do not limit the system to remembering 16 most recently used devices or servers. If the insufficiently described problem in this case is Samba, then a limit of 16 will likely be too small. There should not be a limit. Now which should be the default, fast queries or slow queries, I cannot really say. The above precedents include good precedents for both possible decisions about the default. Now, here are some precedents which should not be copied. In the command-line version of FTP, if an mget command is used, it stops after retrieving 511 files. Suppose the server’s directory had 520 files including some starting with a capital Z and suppose the server’s sort ordering puts capital Z in the middle of the list. The user will look at the result, see Z-filenames at the bottom, and will not notice that some of the lower-case z-filenames and y-filenames weren’t copied because they came after the 511 point. If the FTP command would display a warning (copying stopped after 511 files) then the user would know to look for the missing files and do a retry, but no, who wants to let command line users have an easy time of it. Windows Explorer should not copy this precedent. When it can’t retrieve the entire list, it should tell the user. If Windows Explorer can’t expand a folder in the tree view in the left-hand pane, it simply deletes the "[+]" box and doesn’t display an error message. The user doesn’t guess that they needed to input a user name and password in order to view the network resource. This precedent should not be copied. When Windows Explorer can’t display the contents, it should tell the user. In Windows XP prior to SP2, and Windows Server 2003 prior to SP1, if the user connected or disconnected a USB hard drive or DVD drive then sometimes Windows Explorer did not update its display of drive letters. This precedent should not be copied. In a few cases (not enough) Windows Explorer overlays the drive’s icon with a red question mark or something like that. When a drive letter is in use, Windows Explorer should show the drive letter, and if Windows Explorer has encountered a problem with that drive letter then it should show the fact. When Virtual PC 2004 is installed, with Windows XP SP2 installed on both the host and the guests, and a directory on the host is shared on a guest using VM Additions, Windows Explorer on the guest often gets a corrupted view of the share. Sometimes the corruption has resulted in virtual BSODs in a guest but usually it’s invisibility of some files or corruption in their contents. Either Samba is not involved or Virtual PC 2004 uses Samba. Either way this precedent should not be copied. The reason why users would tend to blame Vista for bugs is that we’ve seen enough cases where other Windows versions (up to and including XP SP2, 2003 SP1, XP x64 SP1, and Vista betas) have lost files due to their bugs. Part of the way to stop users from blaming Vista for bugs which Microsoft knows to be non-Microsoft bugs is for Vista to log the errors when they occur, and the source of the error. For example maybe "Files could not be listed properly because \someserversomepartition returned invalid error code 993." Another part of the way is for Windows Explorer to display the error so that the user who was expecting to look at a list of files will know not to believe the list, instead of getting a rude surprise months later. Users don’t read errormessages, that has been proven time after time. If the directory contents is wrong, ms will be blamed. Thursday, April 06, 2006 1:42 PM by luser > Users don’t read errormessages, that has > been proven time after time. True. But presenting an incorrect list without an error message is unconscionable. Present an error message that 99% of users won’t read. If the error message is short and to the point ("Files could not be listed properly because \someserversomepartition returned invalid error code 993.") then you’ve done your job. Of course if the error message is 18 pages of gobbledegook then you haven’t done your job. If you don’t say "files could not be listed" then you’re actively deceiving your user when you present a fake list. I’d go for a solution like Auto-detect the buggy driver and work around it next time, but in not "next time". An error code is returnedafter fast? 1) (Optional: make user aware, expecially if this procedure can lead to time waiting, with a "do not display again" checkbox) 2) REDO the query in slow mode. Do this every time, no registry, nothing. It will eventually force admins reconsider upgrading samba, it will work, it will not poison registry, works like XP and Vista uses the new features Have explorer use fast mode, if fast mode fails for some "unusual" reason, then have it try slow mode. Advantages: * it will work. Drivers/versions that aren’t buggy will still work in fast mode. Disadvantages: * queries on buggy drivers will take a little longer due to the failed fast mode attempt. PingBack from > Have a configuration setting to put the network client into "slow mode" It’s not a bug with Windows, it’s not a bug with most file servers, and it’s not a bug with fast mode. It’s a bug with *some* file servers that lie about what they can do. If the error happens to pop up, then they user will get an error message. They google the error, find the KB article, add the registry key to disable fast mode everywhere, and all is well. Otherwise, they get to *SEE* that a device they have purchased is behavingly badly, and get to bitch out the real culprit, demanding updates, or sulking – all while having a temporary workaround. At best they see their Linux box needs to be updated because it’s buggy, at worst they know that the device they bought is buggy and they’ll know it’s inferior and want to get rid of it. Either way, i would rather Windows didn’t keep working around everything else. Go ahead, break it. If you hide it, i don’t get to know if my stuff is broken. If you work around it, things get slower. i don’t want things slower, i want them faster. PingBack from PingBack from Too many incompatible devices. PingBack from
https://blogs.msdn.microsoft.com/oldnewthing/20060330-31/?p=31723
CC-MAIN-2018-05
refinedweb
28,391
69.62
Discover Lightning Web Components Learning Objectives After completing this unit, you’ll be able to: - Explain the Lightning Web Components programming model. - List the benefits of using Lightning web components. - Find what you need to get started developing Lightning web components. An Open Door to Programming with Web Standards It’s time to bring together your Salesforce knowledge and familiarity with standard technologies like HTML, JavaScript, and CSS to build the next generation of Salesforce apps. Use these common standards to build components for your Salesforce org while maintaining compatibility with existing Aura components. Lightning Web Components is focused on both the developer and user experience. Because we’ve opened the door to existing technologies, you use the skill you’ve developed outside of Salesforce to build Lightning web components. All of this is available to you without giving up what you’ve already accomplished with Aura components. Before You Go Further You should have a basic understanding of Salesforce DX projects and Salesforce CLI. You’ll also need to use a properly configured org in your Trailhead account and VS Code with the Salesforce Extension Pack. You can learn about all of this by completing Quick Start: Lightning Web Components. If you’re using a Developer Edition org you’ve attached to your Trailhead account, you need to deploy My Domain in Setup (Trailhead Playground orgs automatically have My Domain deployed). Why Lightning Web Components? Modern browsers are based on web standards, and evolving standards are constantly improving what browsers can present to a user. We want you to be able to take advantage of these innovations.. You’ll find it easier to: - Find solutions in common places on the web. - Find developers with necessary skills and experience. - Use other developers’ experiences (even on other platforms). - Develop faster. - Utilize full encapsulation so components are more versatile. And it’s not like web components are new. In fact, browsers have been creating these for years. Examples include <video>, <input>, and any tag that serves as more than a container. These elements are actually the equivalent of web components. Our goal is to bring that level of integration to Salesforce development. Simple Component Creation The beauty of adhering to web standards is simplicity. You don’t need to ramp up on quirks of a particular framework. You simply create components using HTML, JavaScript, and CSS. Lightning web component creation is one, two, three steps. I’m not kidding. It’s really that simple. You create (1) a JavaScript file, (2) an HTML file, and optionally (3) a CSS file. - HTML provides the structure for your component. - JavaScript defines the core business logic and event handling. - CSS provides the look, feel, and animation for your component. Those are the essential pieces of your component. Here’s a very simple Lightning web component that displays “Hello World” in an input field. HTML <template> <input value={message}></input> </template> The template tag is a fundamental building block of a component’s HTML. It allows you to store pieces of HTML. JavaScript import { LightningElement } from 'lwc'; export default class App extends LightningElement { message = 'Hello World'; } We get into the import statement and class declaration details later, too. CSS input { color: blue; } At minimum, all you really need is an HTML file and a JavaScript file with the same name in the same folder (also with a matching name). You deploy those to an org with some metadata and you’re good to go. Salesforce compiles your files and takes care of the boilerplate component construction for you automatically. Play Time Let's go look at our files. Use an integrated development environment (IDE), to try out your components, tinker with them, and see instant results. If you don't already have an IDE you love, follow along as we use the third-party IDE, WebComponents.dev which includes base components and styles to help you create LWC prototypes. - Navigate to webcomponents.dev and select LWC. (Or go directly to webcomponents.dev/create/lwc.) When you first get there, you see an example you can explore. It includes styles from the Lightning Design System CSS framework. - Use your GitHub account to log in to WebComponents.dev. - Copy the above HTML, JavaScript, and CSS examples into the corresponding app.x files in the example. Replace everything in the app files and save each file. The Stories tab shows the output. If you reopen the initial template page, you'll see the default example again. Lightning Web Components and Aura Components Do Work Together Wondering if you can keep your existing Aura components? Yes you can! You can use Lightning web components without giving up your existing components. You’ll likely migrate your existing components to the Lightning Web Component model eventually, but we’re introducing Lightning web components without taking anything away from the existing support of the Aura components. Aura components and Lightning web components live well together. In fact, Aura components can contain Lightning web components (though not vice-versa). But a pure Lightning web components implementation provides full encapsulation and evolving adherence to common standards. Cool Stuff You Can Use To develop Lightning web components efficiently, use the following tools and environments. - Dev Hub and Scratch Orgs Scratch orgs are disposable Salesforce orgs to support development and testing. Dev Hub is a feature that manages your scratch orgs. Both are part of the Salesforce DX tool set. Salesforce DX is an integrated set of development tools built and supported by Salesforce. - Salesforce Command Line Interface (CLI) The Salesforce CLI provides a quick way to run operations for creating and configuring scratch orgs, and also for deploying components. This is also part of the Salesforce DX tool set. - Lightning Component Library The reference for both Aura and Lightning web components and how to use them is found at. You can view the library through your org’s instance, too, at http://<MyDomainName>.lightning.force.com/docs/component-library. By viewing the library through your instance, you see only the correct version for your org. And, as you create your own custom components, they appear in the library too. - GitHub We share extensions, samples, and more through GitHub repos. Get a GitHub account to make sure you can take advantage of these offerings. - Visual Studio Code Salesforce Extension Pack We’ve focused on Visual Studio as a development tool, providing an integrated environment for you to build your components. The Salesforce Extension Pack for Visual Studio Code provides code-hinting, lint warnings, and built-in commands:. - Lightning Web Components Recipes We provide a GitHub repo to help you see how Lightning web components work. You can clone, tinker, and publish this mix of samples to your own scratch org and see them in action. Get it at. - E-Bikes Demo This GitHub repo is another great way to see how Lightning web components work. The e-bikes demo is an end-to-end implementation of Lightning web components to create an app. Try this example in your own scratch org. Get it at. - Lightning Data Service (LDS) Access data and metadata from Salesforce via Lightning Data Service. Base Lightning components that work with data are built on LDS. Customize your own components to take advantage of LDS caching, change-tracking, performance, and more. - Lightning Locker. A Look Ahead We’ll use the eBikes demo to see what you can do with the HTML and JavaScript files.
https://trailhead.salesforce.com/en/content/learn/modules/lightning-web-components-basics/discover-lightning-web-components
CC-MAIN-2021-21
refinedweb
1,240
58.08
Hello, I am using the elektron platform's category=data/symbology, endpoint=convert and I am also using python3's urllib for making GET/POST request. I can use the following GET method for converting symbols (which means my headers, auth, etc are all good): However when I tried to use the POST method and construct the body with a list of symbols I get 400 error. More specifically, what is the anticipated body for the following data: data = { "universe": [ "IBM.N", "MSFT.O", ] } is it 'universe=IBM.N&universe=MSFT.O', 'universe=IBM.N,MSFT.O', 'universe%5B%5D=%5B%27IBM.N%27%2C+%27MSFT.O%27%5D' or 'universe=%5B%27IBM.N%27%2C+%27MSFT.O%27%5D'? The later two are given by urllib's urlencode method. Thanks, The body of the POST message should be a JSON object. when you pass the dictionary object to the library, the object gets split up and is sent as individual name/value pairs. I used: data = json.dumps(requestData) and it works. Try using a debugging proxy to catch and verify the actual request that your application is sending down and crosscheck it with the capture that I have posted here. Http Status code 400 is a Bad Request. Your request body is not a valid format. Please remove ',' after "MSFT.O". The correct one should be data = { "universe": [ "IBM.N", "MSFT.O" ] } You can also try your JSON request data with APIDocs Playground before using it with your python codes. It still does not work without the ','. python's urlencode treat them the same as an input Dict[str, List[str]] and after encoding, I am looking for the right format of the body your API expects. Again, I am struggling to find whether it is (1) 'universe=IBM.N&universe=MSFT.O' (2) 'universe=IBM.N,MSFT.O' (3) 'universe%5B%5D=%5B%27IBM.N%27%2C+%27MSFT.O%27%5D' (4) 'universe=%5B%27IBM.N%27%2C+%27MSFT.O%27%5D' after the encoding? None of them seems to work for me. This is the raw HTTP capture: POST HTTP/1.1 Host: api.refinitiv.com . . Content-Length: 31 Authorization: Bearer eyJ0eXAiOiJK**** {"universe":["MSFT.O","IBM.N"]} If you post your code snippet, we can help you tweak it. Here is a minimal code snippet I have been using, it employs OpenerDirector from urllib because my company environment needs SSLCertificate, ProxyHandler, etc that goes beyond the functionalities of the request library. Below, 'YOUR TOKEN STR' is the access_token from auth/oauth2, token endpoint. from urllib.parse import urlencode from urllib.request import build_opener, Request opener = build_opener() opener.addheaders = [ ('Authorization', 'Bearer %s' % 'YOUR TOKEN STR'), ('Accept', 'application/json'), ('Content-Type', 'application/json') ] # GET works for me ans = opener.open(Request('')) # which one of them is correct? None works for me data = '{"universe":["MSFT.O","IBM.N"]}' data = urlencode({"universe": ["MSFT.O", "IBM.N"]}) data = urlencode({"universe": ["MSFT.O", "IBM.N"]}, doseq=True) ans = opener.open(Request(''), data=data.encode('utf-8')) Thanks for the tip, it helps solving the problem, json.dumps would give a string that equals the first data I listed above: data = '{"universe":["MSFT.O","IBM.N"]}' The problem is with my function call of Request, my opener has the header of (Content-Type, application/json) but it is not added to the Request object. when I changed it to the following it works req = Request('') req.add_header('Content-Type', 'application/json') ans = opener.open(req, data='{"universe":["MSFT.O","IBM.N"]}'.encode('utf-8'))
https://community.developers.refinitiv.com/questions/40544/urlencode-list-universe-in-datasymbologyconvert.html
CC-MAIN-2019-51
refinedweb
591
52.15
Sounds like a well-thought-out plan to me. If we're going through and changing Guava, it may also be worthwhile to try to eliminate the use of Guava in our "public API". While the shaded guava eliminates classpath compatibility issues, Guava could (at any point) drop a class that we're using in our API and still break us. That could be a "later" thing. The only thing I think differently is that 4.x could (at some point) pick up the shaded guava artifact you describe and make the change. However, that's just for the future -- the call can be made if/when someone wants to do that :) On 6/2/20 10:01 AM, Istvan Toth wrote: > Hi! > > There are two related dependency issues that I believe should be solved in > Phoenix to keep it healthy and supportable. > > The Twill project has been officially terminated. Both Tephra and Omid > depend on it, and so transitively Phoenix does as well. > > Hadoop 3.3 has updated its Guava to 29, while Phoenix (master) is still on > 13. > None of Twill, Omid, Tephra, or Phoenix will run or compile against recent > Guava versions, which are pulled in by Hadoop 3.3. > > If we want to work with Hadoop 3.3, we either have to update all > dependencies to a recent Guava version, or we have to build our artifacts > with shaded Guava. > Since Guava 13 has known vulnerabilities, including in the classpath causes > a barrier to adaptation. Some potential Phoenix users consider including > dependencies with > known vulnerabilities a show-stopper, they do not care if the vulnerability > affects Phoenix or not. > > I propose that we take following steps to ensure compatibility with > upcoming Hadoop versions: > > *1. Remove the Twill dependency from Omid and Tephra* > It is generally not healthy to depend on abandoned projects, but the fact > Twill also depends (heavily) on Guava 13, makes removal the best solution. > As far as I can see, Omid and Tephra mostly use the ZK client from Twill, > as well as the (transitively included) Guava service model. > Refactoring to use the native ZK client, and to use the Guava service > classes directly shouldn't be too difficult. > > *2. Create a shaded guava artifact for Omid and Tephra* > Since Omid and Tephra needs to work with Hadoop2 and Hadoop3 (including the > upcoming Hadoop 3.3), which already pull in Guava, we need to use different > Guava internally. > (similar to the HBase-thirdparty solution, but we need a separate one). > This artifact could live under the Phoenix groupId, but we'll need to be > careful with the circular dependencies. > > *3. Update the Omid and Tephra to use the shaded Guava artifact* > Apart from handling the mostly trivial, "let's break API compatibility for > the heck of it" Guava changes, the Guava Service API that both Omid and > Tephra build on has changed significantly. > This will mean changes in the public (Phoenix facing) APIs. All Guava > references will have to be replaced with the shaded guava classes from step > 2. > > *3. Define self-contained public APIs for Omid and Tephra* > To break the public API's dependency on Guava, redefine the public APIs in > such a way that they do not have Guava classes as ancestors. > This doesn't mean that we decouple the internal implementation from Guava, > simply defining a set of java Interfaces that matches the existing (updated > to recent Guava Service API) > interface's signature, but is self-contained under the Tephra/Omid > namespace should do the trick. > > *4. Update Phoenix to use new Omid/Tephra API* > i.e. use the new Interface that we defined in step 3. > > *5. Update Phoenix to work with Guava 13-29.* > We need to somehow get Phoenix work with both old and new Guava. > Probably the least disruptive way to do this is reduce the Guava use to the > common subset of 13.0 and 29.0, and replace/reimplement the parts that > cannot be resolved. > Alternatively, we could rebase to the same shaded guava-thirdparty library > that we use for Omid and Tephra. > > For *4.x*, since we cannot get rid of Guava 13 ever, *Step 5 *is not > necessary > > I am very interested in your opinion on the above plan. > Does anyone have any objections ? > Does anyone have a better solution ? > Is there some hidden pitfall that I hadn't considered (there certainly > is...) ? > > best regards > > Istvan >
http://mail-archives.apache.org/mod_mbox/phoenix-dev/202006.mbox/%3C6191ba33-bda3-aa19-ab86-055270369346@apache.org%3E
CC-MAIN-2021-31
refinedweb
735
62.98
Hi, I am a very new programmer ( 1 month and counting). I am presented with a problem from a tutorial I recieved online and I have no idea how to even start with it. My only assumption is to use array's, but I am not sure how the function protypes are to work and what not. Also, its just plain complicated! (to me at least) I will write up what I need done, and if you can, give me tips on how and where to start, and what else I can do. It shouldn't be any more than one page of code. Inside the directions for it, I include the most I could come up with as far as psuedocode for it. (I just threw comments in there basically) Here is the problem: Calculate the time it takes a Marathoner to travel the distance traveled (provided below). *Of course you must assume that 8 hours is 1 day of travel* Have the user enter the following (cin): (7 individual Marathoners) Marathoners Name Running Speed Distance Elevation gain/loss The running speed cannot be less than 1 and not greater than 3 (mph). The running speed should be a floating point number (so that "1.5"s can be presented (ie, decimals (sig figures))) Somehow, create a function to calculate the marathoners's time traveled. The time (in hours) is: distance/running speed. Create a function to calculate for elevation gain/loss. For every 1000 feet of elevation gain/loss, add one hour to the running time. (elevation gain/loss, is going up and down a hill in essence) Output (cout) the result of each marathoner to the screen. The Marathoner's name and the time must be separated by a white space. Preferably the output should be as follows: "Name Time" (ie, Joe 2days, 4hours). Or at least thats how i see it. If you have an opinion on how to make it better let me know! Also, how does one go about create a data file? I would like to create a data file called "distance.txt". using the 'marathoner' problem; The file should contain the following on each line, separated by a white space: marathoners name, running Speed (MPH), distance to travel (miles), elevation gain/loss (feet). This one I REALLY have no idea how to start out! If I could recieve some help, that would be great! Please email me some of the coding, if you can, and don't feel like typing it up to post or whatever (^_^ ) Darkninja007@ninjahome.zzn.com Signed In desperate need of help , THE FU FYTER Addendum: (Tell me if i'm getting anywhere) #include <iostream.h> //Function protype...is this correct? void runnername_array (int *, int); void main (void) { }
http://cboard.cprogramming.com/cplusplus-programming/13802-dire-need-help-lots-printable-thread.html
CC-MAIN-2015-14
refinedweb
461
72.97