text: string (length 454 – 608k)
url: string (length 17 – 896)
dump: string (91 classes)
source: string (1 class)
word_count: int64 (101 – 114k)
flesch_reading_ease: float64 (50 – 104)
Sum-up of setting up amp-jekyll in my project

I am working on a Jekyll project and have already asked some questions on Stack Overflow and the Jekyll Talk site. This post is a sum-up of the issue I have setting up amp-jekyll, which is a Jekyll plugin. The full documentation is at this address:

The tree of my project is the following:

My version of Jekyll is jekyll-3.6.2. I installed the plugin using the Gemfile (gem "amp-jekyll", "1.0.2" # installed) and checked that the plugin is installed correctly. Then I followed the documentation:

1) I placed the layout file (amp.html) in the _layouts folder. I have a first question about that: I looked at this file and don't know how to use it. Do I just copy it into the _layouts folder and that's all, or do I need to take part of it and copy it into another file (and if so, where)? I can see in the body part the use of {{ header | amp_images }} and {{ footer | amp_images: false, 24, 24 }}. What do these mean? And what about the part included in the head tag?

2) Then I added the amphtml link to post heads (and the CSS styles to the HTML template). In my case I just copied the following code

{% if page.path contains '_posts' %} <link rel="amphtml" href=""> {% endif %}

with a faked href URL into my footer.html file. I have a second question about this snippet of code: what is this link, and is it only taken into consideration if the page is a post?

3) Then I ran bundle exec jekyll serve. The first page which is published is index.html; its code is

---
layout: default
title: Hank Quinlan, Horrible Cop
lang: fr
---
<div class="blurb">
<h1>This is content</h1>
</div>

The default.html file (in the _layouts folder) is the following:

<!DOCTYPE html>
{% include header.html %}
{{ content }}
{% include footer.html %}

The header.html file is at the following location:

The first images that I display (and that I can't see) are in this file. So my main question is: why can't I see the images? Thank you in advance for your answer.
http://codegur.com/48215672/sump-up-of-setting-up-amp-jekyll-in-my-project
CC-MAIN-2018-05
refinedweb
369
84.88
Walkthrough: Creating a Windows Service Application in the Component Designer

This article demonstrates how to create a simple Windows Service application in Visual Studio that writes messages to an event log. Here are the basic steps that you perform to create and use your service:

1. Create a project by using the Windows Service project template, and configure it. This template creates a class for you that inherits from System.ServiceProcess.ServiceBase and writes much of the basic service code, such as the code to start the service.
2. Write the code for the OnStart and OnStop procedures, and override any other methods that you want to redefine.
3. Implement service pending status.
4. Add the necessary installers for your service application.
5. (Optional) Add startup parameters, specify default startup arguments, and enable users to override default settings when they start your service manually.
6. Install your service on the local machine.
7. Access the Windows Service Control Manager and start your service.

To begin, you create the project and set values that are required for the service to function correctly.

To create and configure your service

In Visual Studio, on the menu bar, choose File, New, Project. The New Project dialog box opens. In the list of Visual Basic or Visual C# project templates, choose Windows Service, and name the project MyNewService. Choose OK. The project template automatically adds a component class named Service1 that inherits from System.ServiceProcess.ServiceBase. On the Edit menu, choose Find and Replace, Find in Files (Keyboard: Ctrl+Shift+F). Change all occurrences of Service1 to MyNewService. You'll find instances in Service1.cs, Program.cs, and Service1.Designer.cs (or their .vb equivalents). In the Properties window for Service1.cs [Design] or Service1.vb [Design], set the ServiceName and the (Name) property for Service1 to MyNewService, if it's not already set. 
In Solution Explorer, rename Service1.cs to MyNewService.cs, or Service1.vb to MyNewService.vb. In this section, you add a custom event log to the Windows service. Event logs are not associated in any way with Windows services. Here the EventLog component is used as an example of the type of component you could add to a Windows service. To add custom event log functionality to your service In Solution Explorer, open the context menu for MyNewService.cs or MyNewService.vb, and then choose View Designer. From the Components section of the Toolbox, drag an EventLog component to the designer. In Solution Explorer, open the context menu for MyNewService.cs or MyNewService.vb, and then choose View Code. Add a declaration for the eventLog object in the MyNewService class, right after the line that declares the components variable: Add or edit the constructor to define a custom event log: To define what occurs when the service starts In the Code Editor, locate the OnStart method that was automatically overridden when you created the project, and replace the code with the following. This adds an entry to the event log when the service starts running: A service application is designed to be long-running, so it usually polls or monitors something in the system. The monitoring is set up in the OnStart method. However, OnStart doesn’t actually do the monitoring. The OnStart method must return to the operating system after the service's operation has begun. It must not loop forever or block. To set up a simple polling mechanism, you can use the System.Timers.Timer component as follows: In the OnStart method, set parameters on the component, and then set the Enabled property to true. The timer raises events in your code periodically, at which time your service could do its monitoring. You can use the following code to do this: Add code to handle the timer event: You might want to perform tasks by using background worker threads instead of running all your work on the main thread. 
For an example of this, see the System.ServiceProcess.ServiceBase reference page.

To define what occurs when the service is stopped

In the next section, you can override the OnPause, OnContinue, and OnShutdown methods to define additional processing for your component.

To define other actions for the service

Locate the method that you want to handle, and override it to define what you want to occur. The following code shows how you can override the OnContinue method: Some custom actions have to occur when a Windows service is installed by the Installer class. Visual Studio can create these installers specifically for a Windows service and add them to your project. Services report their status to the Service Control Manager, so that users can tell whether a service is functioning correctly. By default, services that inherit from ServiceBase report a limited set of status settings, including Stopped, Paused, and Running. If a service takes a little while to start up, it might be helpful to report a Start Pending status. You can also implement the Start Pending and Stop Pending status settings by adding code that calls into the Windows SetServiceStatus function.

To implement service pending status

Add a using statement or Imports declaration for the System.Runtime.InteropServices namespace in the MyNewService.cs or MyNewService.vb file. Add the following code to MyNewService.cs to declare the ServiceState values and a status structure:

[StructLayout(LayoutKind.Sequential)]
public struct ServiceStatus
{
    public long dwServiceType;
    public ServiceState dwCurrentState;
    public long dwControlsAccepted;
    public long dwWin32ExitCode;
    public long dwServiceSpecificExitCode;
    public long dwCheckPoint;
    public long dwWaitHint;
};

Now, in the MyNewService class, declare the SetServiceStatus function by using platform invoke. To implement the Start Pending status, add the following code to the beginning of the OnStart method. Add code to set the status to Running at the end of the OnStart method. (Optional) Repeat this procedure for the OnStop method. 
Set the StartType property to Automatic. In the designer, choose serviceProcessInstaller1 for a Visual C# project, or ServiceProcessInstaller1 for a Visual Basic project. Set the Account property to LocalSystem. This will cause the service to be installed and to run on a local system account. For more information about installers, see How to: Add Installers to Your Service Application.

Adding startup parameters

In the Main method in Program.cs or in MyNewService.Designer.vb, add an argument for the command line. Change the MyNewService constructor as follows:

public MyNewService(string[] args)
{
    InitializeComponent();

    string eventSourceName = "MySource";
    string logName = "MyNewLog";
    if (args.Count() > 0)
    {
        eventSourceName = args[0];
    }
    if (args.Count() > 1)
    {
        logName = args[1];
    }
    eventLog1 = new System.Diagnostics.EventLog();
    if (!System.Diagnostics.EventLog.SourceExists(eventSourceName))
    {
        System.Diagnostics.EventLog.CreateEventSource(eventSourceName, logName);
    }
    eventLog1.Source = eventSourceName;
    eventLog1.Log = logName;
}

This code sets the event source and log name according to the supplied startup parameters, or uses default values if no arguments are supplied. To specify the command-line arguments, add the following code to the ProjectInstaller class in ProjectInstaller.cs or ProjectInstaller.vb: This code modifies the ImagePath registry key, which typically contains the full path to the executable for the Windows service, by adding the default parameter values. The quotation marks around the path (and around each individual parameter) are required for the service to start up correctly. To change the startup parameters for this Windows service, users can change the parameters given in the ImagePath registry key, although the better way is to change it programmatically and expose the functionality to users in a friendly way (for example, in a management or configuration utility). 
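As a concrete illustration of the quoting requirement just described, an ImagePath value carrying the two default parameters would look roughly like this (the path shown is hypothetical):

```text
"C:\Projects\MyNewService\bin\Debug\MyNewService.exe" "MySource" "MyNewLog"
```

Without the quotation marks around the executable path, a path containing spaces would be misparsed by the Service Control Manager and the service would fail to start.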
To build your service project In Solution Explorer, open the context menu for your project, and then choose Properties. The property pages for your project appear. On the Application tab, in the Startup object list, choose MyNewService.Program. In Solution Explorer, open the context menu for your project, and then choose Build to build the project (Keyboard: Ctrl+Shift+B). Now that you've built the Windows service, you can install it. To install a Windows service, you must have administrative credentials on the computer on which you're installing it. To install a Windows Service In Windows 7 and Windows Server, open the Developer Command Prompt under Visual Studio Tools in the Start menu. In Windows 8 or Windows 8.1, choose the Visual Studio Tools tile on the Start screen, and then run Developer Command Prompt with administrative credentials. (If you’re using a mouse, right-click on Developer Command Prompt, and then choose Run as Administrator.) In the Command Prompt window, navigate to the folder that contains your project's output. For example, under your My Documents folder, navigate to Visual Studio 2013\Projects\MyNewService\bin\Debug. Enter the following command: If the service installs successfully, installutil.exe will report success. If the system could not find InstallUtil.exe, make sure that it exists on your computer. This tool is installed with the .NET Framework to the folder %WINDIR%\Microsoft.NET\Framework[64]\framework_version. For example, the default path for the 32-bit version of the .NET Framework 4, 4.5, 4.5.1, and 4.5.2 is C:\Windows\Microsoft.NET\Framework\v4.0.30319\InstallUtil.exe. For more information, see How to: Install and Uninstall Services. To start and stop your service In Windows, open the Start screen or Start menu, and type services.msc. You should now see MyNewService listed in the Services window. In the Services window, open the shortcut menu for your service, and then choose Start. 
Open the shortcut menu for the service, and then choose Stop. (Optional) From the command line, you can use the commands net start ServiceName and net stop ServiceName to start and stop your service. To verify the event log output of your service In Visual Studio, open Server Explorer (Keyboard: Ctrl+Alt+S), and access the Event Logs node for the local computer. Locate the listing for MyNewLog (or MyLogFile1, if you used the optional procedure to add command-line arguments) and expand it. You should see entries for the two actions (start and stop) your service has performed. To uninstall your service Open a developer command prompt with administrative credentials. In the Command Prompt window, navigate to the folder that contains your project's output. For example, under your My Documents folder, navigate to Visual Studio 2013\Projects\MyNewService\bin\Debug. Enter the following command: If the service uninstalls successfully, installutil.exe will report that your service was successfully removed. For more information, see How to: Install and Uninstall Services. You can create a standalone setup program that others can use to install your Windows service, but it requires additional steps. ClickOnce doesn't support Windows services, so you can't use the Publish Wizard. You can use a full edition of InstallShield, which Microsoft doesn't provide. For more information about InstallShield, see InstallShield Limited Edition. You can also use the Windows Installer XML Toolset to create an installer for a Windows service. You might explore the use of a ServiceController component, which enables you to send commands to the service you have installed. You can use an installer to create an event log when the application is installed instead of creating the event log when the application runs. Additionally, the event log will be deleted by the installer when the application is uninstalled. For more information, see the EventLogInstaller reference page.
http://msdn.microsoft.com/en-us/library/zt39148a.aspx
CC-MAIN-2014-42
refinedweb
1,857
57.47
Em does not rollback
Vijay Phagura Aug 3, 2007 3:38 PM

Folks, I have the following code:

@Entity
public class Cruise {
    private Collection<Reservation> reservations;
    ...
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "cruise")
    public Collection<Reservation> getReservations() {
        return reservations;
    }
    public void setReservations(Collection<Reservation> reservations) {
        this.reservations = reservations;
    }
    ...
}

@Entity
public class Reservation {
    private Cruise cruise;
    ...
    @ManyToOne(cascade = {CascadeType.ALL})
    @JoinColumn(name = "cruiseId", nullable = true)
    public Cruise getCruise() {
        return cruise;
    }
    public void setCruise(Cruise cruise) {
        this.cruise = cruise;
    }
    ...
}

@Stateless
public class PersistBean {
    /** Injected EntityManager */
    @PersistenceContext(unitName = "eg")
    protected EntityManager em = null;
    ...
    public void persist(Set reservations, Cruise cruise) {
        ...
        Iterator it = reservations.iterator();
        while (it.hasNext()) {
            Reservation r = (Reservation) it.next();
            r.setCruise(cruise);
            em.persist(r);
        }
        ...
    }
    ...
}

Now, I'm trying to write just 4 reservations, and on the 4th one I get a DB error about a constraint being violated! I find that the first 3 records are written to the Reservation table. My point is that if my persist() method is running in a transaction, which I believe it is, then the first 3 records should have been rolled back following the error on the 4th record!!!! Please help! Any help will be appreciated. Thx in adv!

1. Re: Em does not rollback
Karl Martens Aug 3, 2007 5:47 PM (in response to Vijay Phagura)

I ran into a similar problem when I started using EJB3. As it turned out, it wasn't a problem with EJB3 but a configuration issue with my database. I was using MySQL, which by default uses a non-transactional storage engine (at least in the Linux distribution). Here is some documentation from MySQL. Once I altered the database configuration to use the InnoDB storage engine, my transaction rolled back as expected. 
The default behaviour for the stateless session bean is to wrap each method call in a transaction, or to participate in an existing transaction (should one already exist). In the case where a transaction had not been created when you called the persist method on the entity manager, you would have received a TransactionRequiredException. Not sure if this is your particular problem, but it gives you something to check into.

2. Re: Em does not rollback
Vijay Phagura Aug 4, 2007 12:44 PM (in response to Vijay Phagura)

Thx very much for the response! I think that may be the issue, as you mentioned, since I'm using MySQL on Linux. I may have to read a little about its configuration. I'll post more findings... Thx for your time.

3. Re: Em does not rollback
Andrew Rubinger Aug 6, 2007 11:28 AM (in response to Vijay Phagura)

Also, make sure your database is not set to auto-commit. S, ALR

4. Re: Em does not rollback
Vijay Phagura Aug 6, 2007 12:05 PM (in response to Vijay Phagura)

Okay, here is what is happening: even though the MySQL documentation says that InnoDB is enabled by default from version 4.0 onwards, that is only partially true! On Linux, MySQL version 4.0.18 (which I'm using) does not have InnoDB turned on. It is only true on Windows!! So that is the source of my problem!! This URL may also help anybody in the same boat...
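For readers hitting the same issue, the storage engine of a table can be checked and changed from the MySQL client. The statements below are a sketch (table name taken from the thread; note that very old MySQL versions spell the ALTER option TYPE rather than ENGINE):

```sql
-- Show the current storage engine for the table
-- (look at the Engine column of the output; MyISAM is non-transactional)
SHOW TABLE STATUS LIKE 'Reservation';

-- Convert the table to the transactional InnoDB engine
ALTER TABLE Reservation ENGINE = InnoDB;
```

Only after the tables use a transactional engine such as InnoDB will the container-managed rollback described in this thread take effect.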
https://developer.jboss.org/thread/109634
CC-MAIN-2018-17
refinedweb
532
58.28
roadhouse 0.6

Within roadhouse, a config can be applied to a VPC. This allows the same configuration to be used across multiple VPCs. It's useful in cases where you want to run multiple VPCs with the same configuration, for instance when running across multiple datacenters for fault tolerance.

Config File Syntax

The config file is YAML based. Groups are the top level object. Within a group are options and rules. Rules are specified using a syntax similar to tcpdump (at a very, very trivial level). For the ICMP protocol we use ICMP Type Numbers for the port. More information is available at:

<protocol:optional, tcp by default> <port> <group_or_ip_mask_optional>

It should be easier to understand a valid configuration based on an example:

test_database_group:
  options:
    description: cassandra and redis
    prune: true  # remove rules not listed here
  rules:
    - tcp port 22 166.1.1.1/32  # mysterious office IP
    - tcp port 9160, 6379 test_web_group  # refer to a group by name
    - port 55 192.168.1.1  # /32 by default
    - tcp port 22-50, 55-60 192.168.1.1
    - icmp port 0 192.168.1.1  # ICMP Type 0; Echo Reply

test_web_group:
  options:
    description: web servers
    prune: false  # false by default
  rules:
    - tcp port 80 0.0.0.0/0
    - icmp port 8 192.168.1.1/32  # ICMP Type 8; Echo Request

Usage

from boto import ec2, vpc
from roadhouse.group import SecurityGroupsConfig

v = vpc.connect_to_region('us-west-1')
e = ec2.connect_to_region('us-west-1')

# assuming you only have 1 vpc already created
# otherwise you'll need to pick the right VPC you want
# to apply your changes to
vpc = v.get_all_vpcs()[0]

config = SecurityGroupsConfig.load("roadhouse.yaml")
config.configure(e)
config.apply(vpc)

Development

In a virtualenv, pip install -r requirements

Downloads (All Versions): 11 downloads in the last day, 106 downloads in the last week, 360 downloads in the last month
Author: Jon Haddad
Keywords: aws, yaml, configuration
License: BSD 2 Clause
Package Index Owner: rustyrazorblade
DOAP record: roadhouse-0.6.xml
https://pypi.python.org/pypi/roadhouse
CC-MAIN-2015-40
refinedweb
329
57.98
.NET Framework: Task Parallel Library Dataflow

Introduction

Coupled to the new C# Async Community Technology Preview released during PDC 2010 is a library called Task Parallel Library Dataflow (TDF). TDF is part of a growing list of technologies built on top of the Task Parallel Library (TPL) that are part of Microsoft's Technical Computing Initiative. Like many of the other Technical Computing products, TDF aims to make parallel computing products and patterns more accessible to the Microsoft development community, and like other Technical Computing products it builds on proven industry patterns and practices. TDF couples the Task Parallel Library to a set of classes/interfaces that leverage message passing to coordinate the behavior of a solution. Using a short sample application, I'll demonstrate how to apply TDF.

Overview

Note: While I think it's important to study new products and patterns coming down the pipeline, I would never use a CTP in a production application. Please remember that TDF is not a "production" grade release.

According to the TDF documentation, TDF's architecture gains its inspiration from a number of different sources, among them the Concurrency and Coordination Runtime (CCR). TDF is a new product in CTP, so there are no clear guidelines on where and when to use it. Good TDF candidates would likely be applications that benefit from the composition and decoupling you get with messaging and the execution benefits of TPL. So, for example, intensive workload applications like the ones you find inside of a Windows Service.

TDF is composed of a set of "Blocks". Blocks receive, send, or both receive and send messages to other Blocks. In general, the pattern for a block looks a lot like the graphic below.

Figure 1: Block Architecture, Source: "An Introduction to TPL Dataflow"

TDF messages are instances of the class the particular block is configured to interact with. Messages are stored in TPL data structures. Blocks leverage Tasks. 
I think of task classes as chunks of work. Blocks run tasks on a given TaskScheduler. Each Block exhibits different behavior in how it dispatches to the TaskScheduler and/or how it handles messages. Some concrete examples will demonstrate how a few of these blocks work.

Sample Overview

The full sample code appears below.

using System;
using System.Threading.Tasks.Dataflow;
using System.Threading.Tasks;
using System.Threading;

namespace TPL.Test.DataFlows
{
    class Program
    {
        static void Main(string[] args)
        {
            var writeOut = new ActionBlock<string>(s => { Console.WriteLine(s); });
            var broadcast = new BroadcastBlock<string>(s => s.ToString());
            var transform = new TransformBlock<string, string>(s => s.ToUpper());
            var buffer = new BufferBlock<DateTime>();
            var join = new JoinBlock<string, DateTime>();
            var joinWrite = new ActionBlock<Tuple<string, DateTime>>(
                t => writeOut.Post(t.Item1 + " at " + t.Item2.ToString()));

            broadcast.LinkTo(transform);
            broadcast.LinkTo(writeOut);
            transform.LinkTo(join.Target1);
            buffer.LinkTo(join.Target2);
            join.LinkTo(joinWrite);

            // Begin activating everything
            Task.Factory.StartNew(() =>
            {
                while (true)
                {
                    Thread.Sleep(2000);
                    buffer.Post(DateTime.Now);
                }
            }, TaskCreationOptions.LongRunning);

            var itr = 0;
            while (itr < 15)
            {
                broadcast.Post("New string " + Guid.NewGuid().ToString());
                Thread.Sleep(1000);
                ++itr;
            }

            Console.WriteLine("Execution complete, any key to continue...");
            Console.ReadKey();
        }
    }
}
http://www.codeguru.com/csharp/article.php/c18601/NET-Framework-Task-Parallel-Library-Dataflow.htm
CC-MAIN-2014-41
refinedweb
524
51.34
I have a template which displays a lot of values which are passed from a server. My question is: how do I pass these values to the template file? My handler code is as follows:

class AdminHandler(tornado.web.RequestHandler):
    def get(self, *args, **kwargs):
        # respond to a GET request
        # self.write("AdminHandler:: Inside GET function")
        userName = "Alwin Doss"
        welcomeMessage = "Good evening are you enjoying kids dance"
        items = {}
        items["userName"] = userName
        items["welcomeMessage"] = welcomeMessage
        self.render("web/admin.html", title="Admin Page", items=items)

Here is a demonstration similar to what you seem to be doing. Look into the syntax of the template and note the different uses of the {% %} and {{ }} blocks. This code:

from tornado import template

t = template.Template('''\
{% for user in users %}
{{ user['userName'] }}
{{ user['welcomeMessage'] }}
{% end %}
''')

# create first user and append to a user list
users = []
user = {"userName": "Alwin Doss",
        "welcomeMessage": "Good evening are you enjoying kids dance"}
users.append(user)

# create and append second user
user = {"userName": "John Smith",
        "welcomeMessage": "Good evening, JS"}
users.append(user)

# render the template and output to console
print t.generate(users=users)

Produces this output:

Alwin Doss
Good evening are you enjoying kids dance

John Smith
Good evening, JS

For more on Tornado templates, have a look at this tutorial and of course at the Tornado templates documentation.
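To tie the answer back to the original handler: any keyword argument passed to self.render() (or Template.generate()) becomes a variable of the same name inside the template, so the items dict from the question can be indexed directly. A minimal sketch in Python 3 syntax; the inline template string here stands in for the web/admin.html file:

```python
from tornado import template

# `items` is exposed to the template under the same name it was passed with,
# so the template can index into the dict directly.
t = template.Template(
    '{{ title }}: {{ items["userName"] }} - {{ items["welcomeMessage"] }}'
)

items = {"userName": "Alwin Doss",
         "welcomeMessage": "Good evening are you enjoying kids dance"}

# generate() returns UTF-8 encoded bytes
print(t.generate(title="Admin Page", items=items).decode("utf-8"))
```

The same names would work unchanged in a web/admin.html file rendered via self.render("web/admin.html", title="Admin Page", items=items).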
https://codedump.io/share/38FcwiE5X4js/1/how-to-pass-values-to-templates-in-tornado
CC-MAIN-2017-34
refinedweb
219
53.41
This article is about the use of the Java while loop. The Java while loop has two components: a condition and a statement. The statement may be a single statement or a block of statements. The while loop has a similar counterpart called the do while loop.

Syntax

Below is the general form of a while loop. The while loop first checks the condition, and then decides what to do. If the condition proves to be false, the instructions in the code block will not be executed.

while (condition) {
    // Instructions to be executed
}

Example

A basic example of the Java while loop. Other supporting code was removed to improve readability and keep the code short.

int x = 0;
while (x < 5) {
    System.out.println(x);
    x = x + 1;
}

0
1
2
3
4

As shown above, the while loop printed out all the numbers from 0 to 4. The loop terminated once the value of x reached 5, before the output statement was even reached. Hence 5 was not included in the output.

Using multiple conditions

Using the and operator && we can connect two conditions together. Chaining two conditions together with && ensures the loop only runs when both conditions are true, and it will terminate if either of them becomes false.

public class example {
    public static void main(String[] args) {
        int x = 0;
        int y = 0;
        while (x < 5 && y < 6) {
            x = x + 1;
            y = y + 2;
            System.out.println("X: " + x + " " + "Y: " + y);
        }
    }
}

X: 1 Y: 2
X: 2 Y: 4
X: 3 Y: 6

The above loop stops after the value of y reaches 6, even though x was still less than 5. You can use other operators like or and xor to change things up. See the Java Operators guide to learn more.

Shorthand while loop

If you want to write a quick and small while loop, you can do it all within one line using the following format:

while (condition) statement;

Loop control statements

There are several loop control statements in Java that, as the name implies, can be used to manipulate the while loop during its execution. 
continue statement

The continue statement, when encountered, will skip any remaining lines of code in the current iteration of the loop and jump to the beginning of the next iteration.

while (x < 5) {
    x = x + 1;
    if (x == 3) continue;
    System.out.println(x);
}

1
2
4
5

Since the continue statement is executed when the value of x is equal to three, the System.out statement is skipped, and thus that value of x is not printed to the screen.

break statement

The break statement, when called, terminates a loop during its execution. It is an effective way of forcing your loop to stop if some unexpected outcome has been encountered.

while (x < 5) {
    x = x + 1;
    if (x == 3) break;
    System.out.println(x);
}

1
2

The above code does not make it past the value 2, since as soon as the value of x becomes three, the loop terminates.

This marks the end of the Java while loop article. Any suggestions or contributions for CodersLegacy are more than welcome. You can ask any relevant questions in the comments section below.
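The do while counterpart mentioned at the start of the article differs in one way: it checks its condition after the body runs, so the body always executes at least once, even when the condition is false from the start. A small sketch (class and method names are my own):

```java
public class DoWhileDemo {
    // Returns how many times the loop body ran for a given starting value.
    static int countRuns(int x) {
        int runs = 0;
        do {
            runs++;       // body executes before the condition is checked
            x = x + 1;
        } while (x < 5);
        return runs;
    }

    public static void main(String[] args) {
        System.out.println(countRuns(0));  // condition starts true: body runs 5 times
        System.out.println(countRuns(10)); // condition starts false: body still runs once
    }
}
```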
https://coderslegacy.com/java/java-while-loop/
CC-MAIN-2021-21
refinedweb
526
72.26
A Simple Image Viewer

Fredrik Lundh | May 2003 | Originally posted to online.effbot.org

Here's a simple image viewer widget, designed for use with the Tkinter version of the Widget Construction Kit.

from PIL import ImageTk
from WCK import Widget

class ImageView(Widget):

    ui_option_width = 512
    ui_option_height = 512

    def __init__(self, master, **options):
        self.photo = self.image = None
        self.ui_init(master, options)

    def ui_handle_config(self):
        return (
            int(self.ui_option_width),
            int(self.ui_option_height)
        )

    def ui_handle_repair(self, draw, x0, y0, x1, y1):
        if self.photo is None:
            return
        if self.image is None:
            self.image = self.ui_image(str(self.photo))
        draw.paste(self.image)

    def setimage(self, image):
        self.photo = ImageTk.PhotoImage(image)
        self.image = None
        self.ui_damage()

This widget uses the ui_image method to wrap an image object in a WCK-specific pixmap object. The pixmap is then pasted onto the window surface in the ui_handle_repair method. The image object must be of a type known to the actual WCK implementation. For the Tkinter version of the WCK, the ui_image method currently requires a Tkinter PhotoImage instance, or the corresponding PIL object.

Note that the viewer creates the WCK object when the widget is about to be redrawn (in ui_handle_repair). The reason for this is that the ui_image method requires the widget to exist; if you try to call it on a widget that hasn't yet been displayed, the method may fail (in the current release, doing this may even crash the WCK library).

Using this widget is straightforward. Just create the widget as usual (by calling the widget constructor, passing in the parent widget), and call setimage with a PIL Image object. If necessary, you can also call the config method to resize the widget. 
Here's an example:

from Tkinter import Tk
from PIL import Image

root = Tk()
root.title("viewer")

view = ImageView(root)
view.pack()

image = Image.open(filename)
view.setimage(image)
view.config(width=image.size[0], height=image.size[1])

root.mainloop()

A drawback with the current ui_image interface is that the method requires a platform-specific image object. In future versions, the method will be modified to accept a standard PIL Image as well.
http://www.effbot.org/zone/simple-image-viewer.htm
CC-MAIN-2018-17
refinedweb
360
50.53
I'm writing a function that returns the number of occurrences of the word that appears the most in a list of words.

def max_frequency(words):
    """Returns the count of the word that appears most often in a list of words."""
    words_set = set(words)
    words_list = words
    word_dict = {}
    for i in words_set:
        count = []
        for j in words_list:
            if i == j:
                count.append(1)
        word_dict[i] = len(count)
    result_num = 0
    for _, value in word_dict.items():
        if value > result_num:
            result_num = value
    return result_num

words = ["Happy", "Happy", "Happy", "Duck", "Duck"]
answer = max_frequency(words)
print(answer)

3

While I fully agree with the comments related to your "I don't want to import anything" statement, I found your question amusing, so let's try it. You don't need to build a set. Just iterate directly over words.

words = ["Happy", "Happy", "Happy", "Duck", "Duck"]

words_dict = {}
for w in words:
    if w in words_dict:
        words_dict[w] += 1
    else:
        words_dict[w] = 1

result_num = max(words_dict.values())
print(result_num)  # 3
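For completeness, the same answer can be computed without building a dictionary (still import-free), at the cost of rescanning the list once per distinct word:

```python
def max_frequency(words):
    """Return the highest occurrence count in the list, without imports."""
    if not words:
        return 0
    # words.count(w) rescans the whole list for each distinct word,
    # so this is O(n * k); fine for small inputs, slower than the dict version.
    return max(words.count(w) for w in set(words))

print(max_frequency(["Happy", "Happy", "Happy", "Duck", "Duck"]))  # 3
```

The single-pass dict approach from the answer above scales better, but this one-liner is handy for quick scripts.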
https://codedump.io/share/4AUxfwUIaBp9/1/python---my-frequency-function-is-inefficient
CC-MAIN-2017-09
refinedweb
168
62.98
The ones who are crazy enough to think they can change the world are the ones who do. - Steve Jobs

In general terms, a function is also called a subroutine in some other languages, like PHP, Python, and so on. A function is a self-contained block of code that performs a specific task. A programmer defines a function using its predefined syntax and can then call that function from elsewhere in the program. A function accepts zero, one, or more arguments, which are values passed to the function by the code that calls it. The function can read and work on those arguments. A function may also optionally return a value (of non-void type) that can then be read by the calling code; in this way, the calling code communicates with the function. Think of a function as a black box: the code that calls a function doesn't need to know what's inside the function, it just uses the function to get the job done.

return-type function-name (arguments);

main()
{
    ...
    statements within main function;
    printf("%d", function-name(arguments)); // function call
    ...
}

return-type function-name (arguments) // function definition
{
    ...
    statements within user-defined function;
    ...
    return variablename;
}

There are two types of functions:

1. Pre-defined (library) functions. C provides many pre-defined functions to perform a task; those functions are declared in appropriate header files.
2. User-defined functions. C allows programmers to define a function according to their requirements. The main() function is also a user-defined function, because the statements inside it are written by the user; only its name and signature are fixed by the language. 
The following example program clearly explains the concept of functions:

#include <stdio.h>

int add(int a, int b);       /* function declaration (prototype) */

int main()                   /* main function definition */
{
    int a = 5, b = 10;
    int sum;
    printf("The value of a and b : %d %d ", a, b);
    sum = add(a, b);         /* function call */
    printf("\nsum = %d ", sum);
    return 0;
}

int add(int a, int b)        /* function definition */
{
    int c;
    c = a + b;
    return c;                /* returns an integer value to the calling function */
}

The above C program illustrates a function declaration, a function definition, and a function call in a program.
https://www.2braces.com/c-programming/c-functions
Re: Resource in a dll
From: Fireangel (Fireangel_at_discussions.microsoft.com)
Date: 07/02/04

Date: Thu, 1 Jul 2004 17:36:02 -0700

Got mine to work. Really friggin weird. The name of the project MUST MATCH the name of your root namespace. In my case, my project name was MAP, and the root namespace I was using was ROFMap. The form was ROFMap::frmViewMap. The manifest root (looking in the dll) was MAP. The second exe (that was inheriting from the first) was looking for MAP.frmViewMap.resource. I ended up changing the name of the project from MAP to ROFMap and recompiled. Everything works fine now... I guess the moral of the story is: don't change what MS puts in the form at all ;)

Ok, really: the project name and the namespace root must match in order for everything to work well. Give it a try.

GE

PS> I created two little test programs to try this. If this is EXACTLY your problem, I'll upload them so they can look at it.

"mphanke" wrote:
> Hi Gary,
>
> Sorry, I forgot to post a final reply.
>
> I created a new project and inserted everything manually - afterwards it worked fine. I don't really understand what the difference is, since all the settings are the same...
>
> Thanks a lot for your support. Have a look at the "using Forms in a DLL" thread; I posted a couple of things in there.
>
> It seems like the resource file was included in the manifest, but the images were empty. One of the problems might have been the $(SafeInputName) macro, but I'm not sure at all. If I experience this type of behaviour again, I will email it to you. Further, I will check whether I can extract a copy from the nightly backup; if I can do so, I will email the whole project to you.
> Regards,
> Martin
>
> Gary Chang wrote:
> > Hi Martin,
> >
> > In order to isolate your problem effectively, would you please upload a small standalone project (zipped) to us if possible? (You can send it as an attachment in a reply to this message, or just email it to me; just remove the "online" part of my email address.)
> >
> > Thanks!
> >
> > Best regards,
> >
> > Gary Chang
> > Microsoft Online Partner Support
> >
> > Get Secure! -
> > This posting is provided "AS IS" with no warranties, and confers no rights.
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.vc/2004-07/0051.html
Perl 6 Summary for 2004-10-01 through 2004-10-17

All~ Welcome to my first summary. Since I am relatively new at this game, I will just steal Piers' approach and start with Perl 6 internals. But before that, let me warn you that my ability to make strange characters with accents is not great, so please do not be offended if I don't include them in your name. If you want them to appear in the future, a quick email about how to make them appear using a U.S. qwerty keyboard and Mozilla should suffice. Also, groups.google.com does not seem to have picked up perl6.compiler yet. With that legal disclaimer out of the way, onward to:

Perl 6 Internals

Configure Problems and Improvements
Leo noticed that Configure doesn't rebuild things correctly. Takers welcome. Nicholas Clark added a --prefix option for the make install target.

Non-core Module Dependency
Joshua Gatcomb accidentally introduced a dependency on Config::IniFiles. Since it is implemented in pure Perl, he offered to add it to the repository. Warnock applies.

OpenBSD Troubles
Jens Rieks found and fixed a coredump on OpenBSD. Thanks, Jens.

Threads on Cygwin
Joshua Gatcomb discovered some trouble with threads on Cygwin. It seems that there are problems with both the thread implementation and the tests not being generous enough in accepting out-of-order input. Still unresolved, I think.

Parrot IO Not Quite Threadsafe
Nicholas Clark discovered a double free in the IO system. While his problem was solved by Jens, the underlying problem still remains.

make install Portability Issues
Nicholas Clark asked why the install target was not portable. Steve Fink responded that it was a quick hack at the time and made it better.

Namespaces
The namespace thread continues to churn. It is slowly making progress, but I believe there is a fair amount of people talking past each other. Perhaps Dan could step in and provide the final state of the namespaces?

Parrot Abstract Syntax Tree a.k.a. PAST
Sam Ruby decided to pick up the Python on Parrot ball. To that end he enquired as to what PAST is. Leo provided answers and help. Will asked for more help. Leo once again provided.

JIT for Non-Branching Compare Opcodes
Stephane Peiry provided a patch to JIT some more opcodes. Leo applied it. Stephane then provided a patch with tests. Jens applied that one.

Comparing Pythons
Sam Ruby posted a link comparing various Pythons and their conformance to a test suite. Sam Ruby: Comparing Pythons

Metaclasses?
Dan admitted confusion as to what exactly metaclasses are/do. Papers and explanations were provided by Aaron Sherman, Michael Walter, and Sam Ruby.

Plain Old Hash
William Coleda wondered if Parrot had a basic hash implementation (not a PerlHash). Dan said "D'oh!" and asked for takers. Coleda added a TODO.

Parakeet 0.3
Michael Pelletier's language Parakeet has hit 0.3 and been added to CVS. Everybody should play with it.

make install Thoughts
Leo conjectured about creating a parrot_config.c, which would encode all of Parrot's configuration. Then Parrot would know its own config. Jens suggested letting miniparrot generate it; Leo agreed.

More piethon
Dan's register spilling problems give him free time. He used it to work on piethon a little.

Privilege Implementation
Felix Gallo posted some questions/thoughts with respect to privileges. Leo addressed Felix's question about the location of source files (and provided a nice plug for vim); the others all remain Warnocked.

make in Languages/Tcl
Matthew Zimmerman supplied a patch to fix Tcl's make. William Coleda modified and applied it.

MANIFEST Fix Up
Andy Dougherty provided a patch to remove some old files from the manifest. Steve Fink applied it.

Parrot 0.1.1 "Poicephalus"
There was a little talk about names. Then a little talk about getting it posted to perl.org. In the end the 0.1.1 release did happen and even made it to Slashdot. Thank you to everyone who contributed.
Data::Dumper TODO
Will Coleda added a TODO for Data::Dumper. Apparently it cannot dump FixedPMCArrays. Will conjectures that there are probably other new PMCs it cannot handle either. Patches welcome.

Emacs, XEmacs, and pir-mode
Jerome Quelin kicked off a thread that resulted in emacs getting better pir-mode support. Thanks to all involved, but I will continue to use vim. ;-)

A %= B
Dan noticed that we did not have support for %= in PIR. There was some confusion as several people rushed to the rescue. In the end, the problem was fixed. Thanks all.

Pushing and Popping Arrays
Will Coleda wondered why he could not push onto a Fixed Array. The answer was quickly provided that Fixed means that its size cannot be modified, not that it is bounded. This answer seemed reasonable enough to him. He then went on to ask why he could not pop a Resizable Array. The answer is, of course, because it is not implemented yet; patches welcome.

imcc Reserved Words
Sam Ruby wondered how to create a sub named num in imcc. The answer, "No," and a workaround was provided by Jens.

Quiet vs. Loud Build System
Andy Dougherty provided a patch to improve the build system. Steve Fink applied it. Leo disliked it. Consensus seems to be not to use it.

Cygwin Bugs
Joshua Gatcomb has been fighting with Cygwin to get Parrot to work. Apparently we trip a few of its bugs. Read more if you like.

Python Concerns
Jeff Clites raised some concerns about dealing with Python's bound and unbound methods. There was some discussion about it. In the end it just means a little more work.

Makefile Cleanup
Will submitted a patch to clean up the Makefile. There was some back and forth about what parts of the patch should be kept.

Win32 Issues
Ron Blaschke provided a few patches for Win32, which Jens and Leo applied.

Dynamically Loadable Modules
Steve Fink spent some time chugging away at dynclass and dynamically loaded modules. There was some discussion of proper variable names and some working out of problems.
Friendly reminder -- when in doubt: make clean; perl Configure.pl; make.

ICU Without --prefix Problems
Steve Fink was apparently having troubles with ICU and locating necessary data files. Leo confirmed that it is a problem.

MinGW/MSYS
Jens has apparently had success working with MinGW. Yay!

ICU Issues
Will had some ICU issues. Various suggestions and solutions were attempted. Any success?

Locals Inside Macros
Will wants .locals inside .macros. Apparently it can only be done if your macro only takes PMCs.

t/pmc/signal.t Improvements
Jeff Clites provided a patch with improvements to t/pmc/signal.t. Warnock applies.

Co-routines and --python
Michal at withoutane.com (whose last name I cannot find) wondered about the --python flag. The answers and discussion that followed indicate that it was a quick and dirty hack. Eventually it will be gone and Python will have PMCs of its own (relatively quickly if Sam Ruby has anything to say about it).

Small Patches
Sam Ruby fixed a small problem in PerlInt; Jeff Clites fixed a bug and cleaned some warnings in darwin/dl.c. He also fixed some tests and added a Parrot_memalign function for Mac OS X. Stephane Payrard provided a patch to allow multiple identifiers on one line in pir. Stephane also added a get_representation op. Bernhard Schmalhofer added support for synchronous callbacks. Ion Alexandru Morega added more functionality to the complex PMC. Leo applied the patches.

Link Failure on amd64
Adam Thomason pointed out that amd64 lost a vital link flag somewhere in the shuffle.

Non-vtable Methods on Built-in PMCs
Sam Ruby wondered about allowing PMCs to implement additional non-vtable methods. Leo thought it would be good. Sam provided a patch, chromatic tweaked it, and Leo applied it.

Tinderbox?
Jens wondered what happened to tinderbox.perl.org. Apparently it died a while back and has not yet been resurrected. Robert Spier also noted that it was a little difficult to deal with and said he is interested in creating a new one.
He is also looking for help in this endeavor.

Register Stacks [Again]
Leo has once again pushed forward his idea for register stacks and been Warnocked.

Ncurses Troubles
Andy Dougherty had some ncurses trouble. In particular, one of the error messages was exceptionally unhelpful. Now it is a little more helpful.

Coding Standards
Leo would like to draw everyone's attention to docs/pdds/pdd07_codingstd.pod, and the fact that it should also apply to Perl code where appropriate. Thank you.

Rx_compile and FSAs
Aaron Sherman has some ambitious stuff in the works with respect to regular expressions. Stay tuned for details.

Single Character from STDIN
Matt Diephouse wants to know how to get a single character from STDIN. The answer seems to be no, but this has triggered Dan to go back to the IO/Event doc. Best of luck, Dan.

Parrotcode.org Samples
Paul Seamons noticed that some of the samples on parrotcode.org are really out of date. Will Coleda agreed and suggested that they should all be updated to PIR, too. Takers welcome.

Newcomers
Pratik Roy wondered if he could join in the work on Parrot or if he would remain an outsider looking in. Many people rushed to suggest ways in which he could help. So, please don't feel afraid to contribute. At worst, a patch of yours will be turned away with an explanation as to why. At best, you will start being referred to by first name only in the summaries.

Python and Perl Interop
Sam Ruby wondered about how Python and Perl would interoperate. Leo, Steve Fink, and Thomas Sandlass have ideas and suggestions. I think the answer will be "magically."

Inline::Parrot
Ovid pointed the world at Inline::Parrot. It looks cool.

Problems with 0.1.1 on x86-64
Brian Wheeler had a problem with Parrot on x86-64. He provided a patch, but Leo couldn't apply it and asked for a resend. Silence thereafter...

Bug in Factorial?
James Ghofulpo wondered if the Parrot examples page's factorial program was supposed to truncate output. Warnock applies.
Configure.pl Auto-detection
Sridhar discovered that Configure.pl would fail if he told it to --cxx=/usr/bin/g++-3.3; Jens pointed out that he would also need to tell it --link=g++-3.3.

Parrot Forth 0.1
Matt Diephouse released Parrot Forth 0.1. It has many cool features. Nice work. Perhaps this should go into languages/?

.return Directives
Stephane Payrard wants to remove the multiple meanings of .return to provide a cleaner path forward. Everyone agrees.

YAGCB (Yet Another Garbage Collection Bug)
Will thought he had turned up another GC bug with Tcl. This time he hadn't, and it was his fault. Can't win them all, I guess.

Testing a PMC for Truth
Will wondered how to determine a PMC's Boolean value. Leo pointed him toward istrue.

Grsecurity Problems
Christian Jaeger noticed that grsecurity was stopping Parrot's attempts to execute JITted code. Leo pointed out that we already have support for doing the right thing; we are just failing to detect that we need to.

PMCs and Inheritance
Sam Ruby wanted inheritance for PMCs. Apparently what we really need is a scheme for multiple inheritance of PMCs. Sam Ruby supplied several patches and Leo applied parts of them.

C++ and typedef struct Parrot_Interp *Parrot_Interp Don't Play Well
Jeff Clites discovered that Parrot has troubles on Mac OS X because C++ does not like typedef struct Parrot_Interp *Parrot_Interp. We need C++ to link ICU. Fortunately, that particular typedef is out of the ordinary and violates our normal conventions. Brent "Dax" Royal-Gordon threatened to apply the (fairly large) fix. Will tested Brent's patch and gave it the thumbs up.

JIT, Exec Core, Threads, and Architecture Proposal
Leo proposed mandating that all JIT must be done interpreter-relative. This allowed some happy optimizations, including constant table access. General response was favorable.

pmc2c2.pl --tree Option
Bernhard Schmalhofer submitted a patch fixing the aforementioned --tree option.
No answer yet, but it has not really been long enough to officially invoke Warnock.

Unununium
Jacques Mony asked for help/advice porting Parrot to Unununium. No responses yet...

Perl 6 Compiler

Google Groups still doesn't have Perl 6 Compiler, so this section won't have links. Sorry.

Compilation Paradigms
Jeff Clites put out a request for some basic Perl 6 examples and resultant bytecode so that everyone would be on the same page. Warnock applies.

Internals, Rules, REPL
Herbert Snorrason had some basic questions about Perl 6's interaction with Parrot. He also wanted to know whether he should be playing with the re_tests or if that was wasted effort. Finally, he wanted to know about read-eval-print loops. Sadly, Warnock applies across the board.

Perl 6 Language

Google Groups has nothing for perl6.language between October 2 and 14. Is this really the case? (I had not signed up until shortly before volunteering to summarize.) If there is email that I just can't find, I would be appreciative if someone could produce a summary or a pointer to the missing mail for me. Thanks.

Updating Multiple Rows
Nitin Thakur attempted to de-Warnock himself, apparently unsuccessfully. Sorry, Nitin.

Perl 6 Summaries
Piers raised the white flag after several years as a wonderful summarizer. Having now just finished my first summary, let me say thank you for all of your hard work, and if you ever want the job back, it's yours. ;-) I hope you don't mind my stealing your general format for these things.

The Usual Footer
http://www.perl.com/pub/2004/10/
[This post was written by Dave Bartolomeo and the Clang/C2 feature crew.]

This compiler uses the open-source Clang parser for C and C++, along with the code generator and optimizer from the Visual C++ compiler. This lets you compile your cross-platform code for Windows using the same Clang parser that you use for other targets, while still taking advantage of the advanced optimizations from the Visual C++ optimizer when you build for Windows. Because the new toolset uses the same Clang parser used for non-Windows targets, you won't need to have annoying #ifdefs throughout the code just to account for differences between the compilers. Code compiled with the new toolset can be linked with other code compiled with the Visual C++ 2015 C and C++ compilers. Typically, you would compile the cross-platform parts of your code with Clang with Microsoft CodeGen, and compile any Windows-specific code (e.g. your UI) with the regular Visual C++ toolset.

Note that Clang with Microsoft CodeGen is currently a preview feature. There are areas we know are incomplete or have bugs. We wanted to get this feature into your hands to give you a chance to try it out, understand how it will work in your code, and give us feedback.

Installation

In the feature selection UI of Visual Studio setup starting with Visual Studio 2015 Update 1, you'll see a check box for Clang with Microsoft CodeGen under "Cross-Platform and Mobile Development\Visual C++ Mobile Development". Checking this box will install the new toolset and the corresponding project templates.

Using the New Toolset

To create a new static library or DLL project that uses Clang with Microsoft CodeGen, you can use one of the new project templates under "Visual C++\Cross Platform".

Migrating Existing Projects

To use the new toolset in an existing project, go to your project properties and change the "Platform Toolset" dropdown from "Visual Studio 2015" to "Clang 3.7 with Microsoft CodeGen".
Make sure to hit "Apply" before you switch to editing other properties, to let the project system load the corresponding toolset definition. In this preview release we do not provide any support for automatic migration of the relevant property values between the "Visual Studio 2015" and "Clang 3.7 with Microsoft CodeGen" toolsets, so you will have to fix any invalid values manually. The project system will emit errors when a property value from the old toolset is not valid in the new toolset. You should encounter these errors only on values that were overridden from the toolset's defaults (they are marked with a bold font in project properties); the values that were not changed are automatically switched to the defaults of the new toolset. In most cases, changing the invalid value to "inherit from parent or project defaults" will fix the problem. There are a few cases you might need to fix in other ways:
- Some of the defaults are not what you might expect in this release. We'll be changing defaults in future releases.
- When changing from the MSVC toolset, exception handling is currently off-by-default even though exception handling works. The developer can override this switch manually. The default will change in the next release.
- Precompiled headers are not currently supported, so you must disable their usage. Failure to do so usually manifests itself in an "error : cannot specify -o when generating multiple output files". If you still see this error even with precompiled headers disabled, please make sure that the value of the "Object File Name" property is reset to its default value "$(IntDir)%(filename).obj" in the Clang toolset.
- If source files in the project you are trying to convert are UTF-16 encoded, you will need to resave them as UTF-8 when Clang gives you an error about UTF-16 encoded source with BOM.
- If you get an "error : use of undeclared identifier 'O_WRONLY'", define the macro __STDC__=0 in your project settings.
- If.
- If you plan on using the "VS 2015 x86 Native Tools Command Prompt", please note that in the current release clang.exe is not immediately available via the PATH environment variable. You'll need to modify your environment to use clang.exe directly.

Identifying your Platform and Toolset

Compiler toolsets generally define a set of macros that help your code adapt to different versions of the compiler toolset and target. Here's a table showing some relevant macros that are defined in Clang with Microsoft CodeGen and a code sample you can include in your code to help determine your platform and toolset combination.

// Include (and extend) this code in your project to determine platform and toolset combination
#ifdef _WIN32
puts("I am targeting 32-bit Windows.");
#endif
#ifdef _WIN64
puts("I am targeting 64-bit Windows.");
#endif
#ifdef __clang__
printf("I am Clang, version: %s\n", __clang_version__);
#endif
#if defined(__clang__) && defined(__c2__)
puts("I am Clang/C2.");
#endif
#if defined(__clang__) && defined(__llvm__)
puts("I am Clang/LLVM.");
#endif
// Not tested: __EDG__, __GNUC__, etc.
#if defined(_MSC_VER) && !defined(__clang__)
printf("I am C1XX/C2, version: %02d.%02d.%05d.%02d\n", _MSC_VER / 100, _MSC_VER % 100, _MSC_FULL_VER % 100000, _MSC_BUILD);
#endif

Known Issues in the First Preview

Because this is a preview release, we released even though we're aware of a few issues. We want to get bits into our developers' hands as quickly as possible so that you can evaluate the toolset and see how it works with your code. Unfortunately, that means shipping with a few known bugs.
- No support for ARM. It will be there in the next update via the existing Universal App project template
- PCH build is not supported. Turn off PCH usage for any project where you explicitly set this new Platform Toolset
- No OpenMP support. You will get a diagnostic that says "OpenMP is not supported"
- No inline asm on any architecture.
You will get a diagnostic that says "Inline assembly is not supported" or "GNU-style inline assembly is disabled"
- No LTCG for compilation units compiled by Clang. Such compilation units can still link with other compilation units that were compiled for LTCG by MSVC.
- No PGO instrumentation or optimization for compilation units compiled by Clang
- /bigobj is not currently supported. The number of sections in the object file is limited to 2^16.
- std::atomic_flag is not supported due to silent bad code gen. This will be fixed in the next update.
- No hashing of source files. File hashes are used by the debugger to ensure that the source file is the same version as the one used during compilation.
- Debug type information will always be placed in the .obj file (equivalent to cl.exe /Z7, not /Zi). No functionality is lost, but generated .obj files will likely be larger.
- In some cases IntelliSense may not emulate Clang behavior
- Clang diagnostics are currently only available in English

Contributing Back to Clang and LLVM

Clang with Microsoft CodeGen isn't just a private fork of the open-source Clang compiler. We'll be contributing the vast majority of the Clang and LLVM changes we've made back to the official Clang and LLVM sources. The biggest of these changes is support for emitting debug information compatible with the Visual Studio debugger. This will eventually allow code compiled with the LLVM code generator to be debugged with Visual Studio as well. As Clang releases new versions, we'll update our installer to include the new version.

I must say that I'm super excited about this, and I'm super duper excited to hear that it's going open-source. I have a project based on Clang's code-gen, and one of the biggest problems is that it can't handle Microsoft C++, only Itanium. I would be incredibly excited to see complete Microsoft compatibility with the Clang front-end and to be able to use VS with Clang.

Just to point out, the bullet point: "•If."
There is nothing linked to, or near, "See here".

The link is fixed now. Thank you!

In Visual Studio, C++ development is getting cross-platform as C# did, but what about interoperability between them? Will we be able to call C# code from C++ on Linux or MacOS? How? C++/CLI? Thanks.

@Pierre, I don't believe the cross-platform .NET SKUs support C++/CLI.

Just to clarify, no x64 support at all at the moment? Only 32-bit x86? Can't wait for the full-fledged support, thank you!

@Peter: What they mean is that for VC there are essentially two sets of compilers. The first set are x86/32-bit programs that target the currently supported architectures x86, x64 and ARM. If you look in the VC directory, in bin, these correspond to the bin directory itself and the subdirectories x86_amd64 and x86_arm. These are x86/32-bit hosted compilers because they run as 32-bit applications. The second set are x64/64-bit programs that reside in the subdirectories amd64, amd64_x86 and amd64_arm. These are x64/64-bit hosted because they run as 64-bit applications. What the post is saying is that there is a 32-bit executable for clang, and it can target x86, x64 and ARM. But there is no 64-bit executable for clang. So if you have used one of the methods to tell VS to use the 64-bit compilers, then using the clang/c2 stuff will fail.

This seems like an important step for MSVC… very exciting!

@Darran: Ah, I see. Thank you! I knew all of VS itself was 32-bit, but of course it is able to generate 64-bit code. But this post was just confusing! :)

This code will produce C2248 after Update 1, but no error in RTM, gcc, or clang. Is it a bug? Many third-party frameworks use this pattern in their code, so please fix it.
class a
{
public:
    a() {}
protected:
    ~a() {}
    friend class b;
};

class b
{
public:
    a initA;
};

static b init_b;

Great that the source code for emitting Visual Studio debug information will be made available (will it be complete source code, or will it be glue code talking to some closed-source dll?)

@cokkiy2001: Thank you for reporting the C2248 issue. The fix will be in the Update 2 release. Other than changing the accessibility, an alternate workaround is to make 'b' a non-aggregate class type; one way to do that is to provide a default constructor "b() {}".

@JonWil: check the RFC: lists.llvm.org/…/091847.html. It's source.

@Andrew Pardoe Is this already implemented in Clang/C2?

@Andrew Pardoe, I know very well that Clang does not support C++/CLI. My question was rather: what are your intentions for the future (for cross-platform interoperability between C++ and .NET)? Thanks

I join the commenters in saying that MSVC with full C++14 support is a great tool (however, once it is out of beta, MSVC 2017 or so may support it internally). Being a command-line-attracted developer, I want to ask: how do I use the new compiler from the command line? Also, is inline asm planned, and in what way – MS or Clang style? Will it support x64?

@Pierre, interoperability between C++ and .NET is largely a .NET question. The C++ compiler (c1xx, not Clang) can generate managed code, regardless of the machine target. Support for the managed code generated by the C++ compiler varies on different .NET platforms.

@Ivan: >> Is this already implemented in Clang/C2? Is what already implemented? CodeView debug info? Yes.

@BulatZ: Better x64 support is planned in a future update. Inline asm is more of an issue – either we'd change how Clang emits inline asm when it's targeting c2 (which is an ugly proposition for the Clang codebase) or we'd add a whole new facility to c2 (which is ugly for c2…). So while we want to do better with inline assembly, it's not clear yet what we'll ultimately do.
As for the command line, the tools are there. They aren't in the path set by the VS command prompt, but with a default install you can find the tools in C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\Clang 3.7.

Does it mean that I can build a project using range-v3 (github.com/…/range-v3) with Clang 3.7 with Microsoft CodeGen?

This is very cool and will make a thing I am trying at the moment* much easier! Will it be available for the Community edition too? Is there a way to install this in an already existing install, or do you have to make a fresh install?

* I am (naively) trying to make the Linux Kernel Library (lkl) build with MSVC instead of MinGW for the NT target. I have already made vcxproj and sln files based on the list of objects from a defconfig for MinGW, so the library should build natively on Windows without dependencies on GNU make, bc, … Right now I am just fighting with a lot of GCC-isms in the headers (making some progress, not pushed to github yet). I know for a fact that Clang can compile this source with a few patches from the llvmlinux project (I also have that in a branch). Why build lkl with MSVC? I imagine that this will be far more "native" than a MinGW build, and it might even be possible to combine it with the Windows Driver Kit (WDK) to recompile Linux drivers for native NT. For x86 this might not be such a big deal, since Windows is pretty well covered (but perhaps some legacy device drivers could be found in Linux), but for hobbyists trying to bring up NT / Windows on odd ARM chips it might be interesting.

The clang front-end is a great tool; I really like using it. Now I can use it with VS, really incredible.

@Jens Staal, this should be available for all editions of Visual Studio 2015 Update 1.

What is the preferred way to report clang/c2 bugs? It would be nice if a bug tracker was available, too, to avoid the effort of reporting already-known bugs. Clang/c2 is already speeding the development of multi-platform code for me.
Thanks, –Beman

Hi Beman, The standard way to report bugs would be through. You're also welcome to send bugs to clangc2@microsoft.com. We've been testing Boost 1.55. We're eager to address Boost bugs. I'll send you a mail separately. I'm happy to hear that our Clang/c2 work has helped speed your work! Thanks! Andrew

Hi – this is great news. I am trying to use the clang front-end in my CMake project, but this fails as CMake does not recognise the toolchain. Any tips? Thanks and Regards

Is there any way I can avoid having the compiler define NDEBUG for me? Unfortunately that breaks some sanity checks in my code.

If you are on Windows and compile everything with MinGW (the DLL and the application), linking directly to the DLL is fine. It is perfectly fine to directly link the DLL if the DLL was also created by MinGW. But this is not the common way to work with shared libraries on Windows, because just a few compilers support this – the Microsoft Visual Studio compiler is not among them. On Windows, a DLL usually has a matching so-called "import library". If an executable wants to use functions of a DLL, it does not link the DLL directly but its import library. I hope that VC++ supports MinGW-style shared linking!!!

@Marcel Raad: Even with the Clang frontend, this is in the preprocessor options. If you go to Project Properties->Configuration Properties->C/C++->Preprocessor, then you will find the defined preprocessor definitions there. This works in the same way that VC does it using cl.exe.

@ck: There are a few things that could be said about the choice that they made with the format that they chose. But personally, since you would want to make the interface C-like for compatibility, it is actually quite fast and easy just to produce a .def file and build the import library from that.

@Darran Rowe it's necessary. For example, if you use the MinGW toolchain then you can build clang/llvm shared libraries seamlessly.
But if you use VC++ then you must change the clang/llvm toolchain source code, which requires a great deal of work. The clang/llvm toolchain is a very vast code base composed of C++ templates.

@ck: I'm sorry, I don't quite understand what you are getting at there. There are two important things that you are seemingly missing though. First, the Clang/LLVM project is quite willing to support Windows natively. The Clang compiler and the LLVM backend are quite happy reading and writing Windows-specific things. Secondly, the current setup uses an external linker to do the final linking of the binaries. It is completely possible to use link.exe as this linker. If you use this, and the compiler correctly, then it will happily read VC import libraries. When building dynamic (shared) libraries, the linker will be the thing that creates the import library. So I think you may be worrying about nothing here, since the Clang team are well aware of and committed to Windows compatibility. See clang.llvm.org/…/MSVCCompatibility.html .

Great tool. Can clang/c2 support all the intrinsics that the Visual C++ compiler supports on the x64 platform? x64 (amd64) Intrinsics List: msdn.microsoft.com/…/hh977022.aspx When I use the MemoryBarrier() function, clang/c2 outputs: unresolved external symbol __faststorefence referenced

@Darran Rowe: if you use VC++ then you can make static clang/llvm compiler libraries, but you cannot make shared clang/llvm compiler libraries using VC++. You must modify the clang/llvm compiler toolchain sources, which is a vast amount of work. It is necessary to support MinGW-style shared linking for platform compatibility.

@ck: Well, I personally don't see your problem. I can write a program with Clang/LLVM and it has no issues linking against a DLL created with Visual C++.
I was testing and using the command line: clang -S -fms-compatibility -masm=intel For 32 bit compiles, I was getting mov eax, dword ptr [__imp__test] call eax and for 64 bit compiles, I was getting mov rax, qword ptr [rip + __imp_test] call rax and these match what is in the import library. Using the command line: dumpbin /all If you look through the output, you will find, under public symbols 4 __imp__test 4 _test or 4 __imp_test 4 test So given what code Clang/LLVM generated, this will link happily to the VC import libraries. So, what about the other way around? Well, given the right sets of flags, Clang/LLVM will generate names compatible with VC, and it happily outputs COFF object files. From a c++ source compiled with Clang/LLVM I got 007 00000000 SECT1 notype () External | _test for 32 bit or 00A 00000000 SECT1 notype () External | test for 64 bit. If you compare this to what VC gives 008 00000000 SECT3 notype () External | _test or 008 00000000 SECT3 notype () External | test You notice how the compiler has some differences, but the name ends up the same. The rest of this ends up as a tooling issue. Surprisingly, it all depends on what linker you use to generate the import library, since the DLL generated will export the same symbols regardless. Microsoft's link.exe will obviously generate VC compatible import libraries, but if you use ld from GNU binutils(mingw) then that is known not to create VC compatible import libraries. If you look at the LLVM linker, lld, then there is a lot of information on VC support. lld.llvm.org/windows_support.html But importantly, it says that it uses lib.exe, Microsoft's librarian to create import libraries. So they will be VC compatible. So given Clang/LLVM does support creating and using of DLLs with VC compatibility, and there is such an easy work around for the cases where you use Clang/LLVM with GNU binutils to generate the DLL, then I'm sorry, I really don't see your problem. 
Oh, if you are trying to get Microsoft to support the import libraries generated by the MinGW-backed versions of Clang/LLVM on Windows, say it for what it really is. Clang/LLVM has been getting much better at Windows support for quite a while now, and when it is backed with the VC toolchain, it really has no issues with creating or using import libraries that are VC compatible. But when you are using GNU binutils as the additional toolset for Clang/LLVM, then the problems that you get aren't actually a problem with either Clang/LLVM or VC, it is binutils. If I remember correctly, this was a decision that they made early on, to not generate DLL import libraries using the same format as VC.

@Darran Rowe: if you use MinGW then you can build the clang/llvm compiler toolchain as a library (a static compiler library), but you cannot make a shared clang/llvm compiler library (I mean the compiler components themselves). You must modify all the clang/llvm source code, which is a vast amount of work; the clang/llvm source is a vast body of C++ template code.

@nabi: Ah, I apologise for the confusion, but you are talking about the loadable module support, right? Well, please remember that the Windows support for Clang/LLVM is still heavily under development. Right now, getting it to work and having it generate good Windows-based code has been the highest priority. We will know what will be done next when the major Windows compatibility work is complete. But let's face it, you could class adding support for the PE format, or even full Windows support, as "too much work", but it has been done anyway.

Is it possible to develop a C++ website with clang? Is there any example?

The article describes how to choose the clang frontend via the UI. I would like to use it via CMake. How can I convince CMake to use the clang frontend?

Set the following environment variables: CC=/clang CXX=/clang++ ensure these are visible (export in bash), and when you cmake -G "…" ..
you'll see that CMake picks up your compiler settings (unless you've explicitly set them in CMakeLists.txt). Angle brackets removed… should read: CC=[path_to_clang]/clang CXX=[path_to_clang]/clang++

Y'know, you could just bundle the x86 and x64 versions of clang into the build. It produces better codegen than the Microsoft C/C++ compiler in most situations. Too radical an idea?

@WEB, yes, you could develop a website using clang and C++. I don't know of any examples.

@Roland Bock, you can run clang from the command line. Hooking it up to CMake should be straightforward. Again, I don't have any examples of this.

@Byrd, no, it's not too radical an idea. But it is a bit simplistic. Many VS customers rely on MSVC. If we were to just swap out MSVC and stick in clang we'd break them all. And your claim that it "produces better codegen than the Microsoft C/C++ compiler" appears speculative. Also please note that the code generation is done mostly in c2, which is the Microsoft C/C++ compiler. We added clang to Visual C++ to help customers with cross-platform code. It's not intended as a replacement for MSVC.

Hello, I just made a first try of Clang with my current VC project. At a first glance, it seems to work globally well. I nevertheless have found an internal error because of an "Intrinsic not yet implemented" (BTW it's an AVX intrinsic, "_mm256_permute2f128_ps"). I would like to test the newest (3.8?) version, because it's only 3.7 so far. Is it possible to install a newer Clang version? Otherwise I do not know what to do to solve the missing intrinsic problem. Any idea?

For cross-platform development I prefer to use the Intel C++ compilers. They have the same compilers on Linux, OS X and Windows. The Intel compiler works seamlessly with VC-generated code on Windows. Sure it costs $$$ but you 'get what you pay for'.

@Patz, we still have work to do to implement intrinsics. These will be addressed in future updates. And you can't update the Clang version manually.
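To make the CMake advice above concrete, here is a minimal sketch of the environment-variable approach. The clang paths are placeholders for wherever your front end is installed.

```shell
# Select the compilers before the *first* configure run; CMake caches
# the compiler choice in the build directory, so set these up front.
export CC=/usr/local/bin/clang
export CXX=/usr/local/bin/clang++
# then, from your build directory:
#   cmake -G "Unix Makefiles" ..
# CMake reads CC/CXX from the environment on the initial configure.
```

An alternative that avoids environment variables is passing `-DCMAKE_C_COMPILER=...` and `-DCMAKE_CXX_COMPILER=...` directly on the cmake command line.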
We plan to keep current but there's necessarily a latency between official Clang releases and our support. Check for a Clang 3.8 to ship soon.

@ACU, thanks for the recommendation. I don't see how it's relevant to our Clang/c2 support but Intel does have a fine compiler suite.

ClangC, like Crang (Ninja Turtles). Super.

1. Is there any kind of public roadmap for your work on Clang with MSVC CodeGen? 2. Is there any public repository from which I could check out code and compile my Clang – MSVC CodeGen? 3. Do you intend to go for an open source development workflow in future for Clang – MSVC CodeGen like .NET did? I think you could get a helpful hand from the community in pushing the project to its goals faster. My opinion in general on keeping the VC++ toolchain closed source is that it does not provide any kind of competitive advantage to Microsoft. On the contrary, it makes it much harder to keep up with the pace of technology development, i.e. MSVC is lagging behind other compilers in support for the latest C and C++ standard features. In my opinion, having an open source VC compiler, linker and resource tools would make a huge positive difference for its future.

@Jacek Blaszczynski, there isn't a public roadmap as of yet. We're still in pre-release mode but we expect to become a fully supported product one of these days. As for #2 and #3, we haven't yet open-sourced the bridge between Clang and c2. That's just a matter of doing the engineering work to publish the bridge code (just a few lines of code, frankly) and a c2.lib that can be linked into the project. We intend to publish the bridge and a library-based optimizer but I don't have a timeline for that right now. I appreciate the open-source strategy that the .NET team is following–heck, I was a lead for the .NET runtime for a number of years–but the C++ compiler isn't there quite yet.
Thanks, Andrew

I hope that all intrinsics will be fixed soon :) I'm looking forward to seeing the next Clang version in Visual Studio :)

Just created a project from "Dynamic Library (Win) for Clang…", added source.c:

#include "windows.h"
#include "wingdi.h"

Compiling produces the error:

1> Source.c
1> In file included from Source.c:1:
1> In file included from C:\Program Files (x86)\Windows Kits\8.1\Include\um\windows.h:166:
1>C:\Program Files (x86)\Windows Kits\8.1\Include\um\wingdi.h(2809,5): error : enumerator value is not representable in the underlying type 'int'
1> DISPLAYCONFIG_OUTPUT_TECHNOLOGY_INTERNAL = 0x80000000,

Any solution to compile it as a .c file? (C++ compiles it without error.)

@shrike: I can't reproduce your failure with the January 2016 update of clang/c2. Either I'm not getting all the steps or the update fixed your issue. Can you please try the update and let me know through email (clangc2@microsoft.com) if it's still an issue?

Tried the Jan 2016 update, and the issue still persists. Just try to add a ".c" (not ".cpp") file to a project with "#include <windows.h>" inside and try to compile. And make sure there is no "-x c++" option in the project prefs. I just reproduced it on two different installations of VS. PS: clangc2@microsoft.com doesn't accept messages from me )

@shrike-ru: You're right, that does error when you compile with Clang in C mode. It looks like Clang is being stricter than MSVC here–we should probably also be erroring. 0x80000000 is larger than an int, so the value undergoes a narrowing conversion in C++. Same for 0xFFFFFFFF. In C I think the error is correct. The only way I know to make it compile would be to change the code in wingdi.h.

When will the codeview type info library for clang (the clang portion) land in the clang trunk repository?

Is it possible to compile the C++ code against other platforms, like Linux, with this feature? My question is whether we still need another compiler, like GCC, on Linux.
My customers are just using VC++ to create applications which should be compiled by GCC and run on Linux. I suggested they try the new GDB debugger features; they are happy with them. Not sure whether the Clang feature is useful for their scenario?

@ck, the best way to tell that would be to watch clang commit notices.

@JunQian, no, Clang/C2 is only capable of producing Windows programs. Clang/LLVM can generate Linux programs.

Hi, switching to Clang I get the following error in my project in float.h, line 33:

/* If we're on MinGW, fall back to the system's float.h, which might have
 * additional definitions provided for Windows.
 * For more details see msdn.microsoft.com/…/y0ybw9fy.aspx
 */
#if (defined(__MINGW32__) || defined(_MSC_VER)) && __STDC_HOSTED__ && __has_include_next(<float.h>)
# include_next <float.h>

Translated from German: "Function call is not permitted in constant expression." And there are 146 more errors. Cheers, g

@Gon Solo, thanks for the report. It's hard to get all the predefined macros correct, especially when so much code assumes _MSC_VER means what it meant before we shipped Clang/C2. Note that the code you quoted is in Clang's float.h, it's not a change we made in the header: clang.llvm.org/…/float_8h_source.html I'll ask a dev to investigate what the right behavior is and we'll work on getting this fixed. Andrew

Any updates? I have run into the exact same issue. I am not very familiar with the Clang toolchain but can provide any logs you need. Thanks

Hi @shrike-ru, You can work around the enum 0x80000000 issue in Clang by setting the Microsoft Compatibility Mode flag in the project: Yes (-fms-compatibility). This will allow the code to compile properly as a .c file.

Andrew, I'm trying the clang 3.7 feature from Update 1. I'm getting "no input files". Is there a setting that I've got wrong?

@Gunnar, I haven't seen that. I'd need more information or a repro to help diagnose.
One advantage I see in using Clang with VS is the possibility to use recent OpenMP implementations (3.1 or more recent). This would be a very important feature for cross-platform scientific computing. Is supporting recent OpenMP implementations through LLVM a plan for the near future?

Thanks for the suggestion. It's more in the scope of this project to make OpenMP work through Clang and the C2 code generator. I'll admit I haven't tried it. Where is it failing?

"If you get an "error : use of undeclared identifier 'O_WRONLY'", define in your project settings macro __STDC__=0" Where and how? If you define __STDC__=0 in Configuration Properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, you get the following error:

1> In file included from :360:
1>(5,9): warning : '__STDC__' macro redefined [-Wmacro-redefined]
1> #define __STDC__ 0
1> ^
1> (355,9) : note: previous definition is here
1> #define __STDC__ 1
1> ^
1> 1 warning generated.

(Using undefine Preprocessor Definitions doesn't work.) (I also noticed while looking at the command line that for clang debug builds, the NDEBUG macro is listed as inherited.)

To answer my own question, if I added the following as "Additional Options", it worked, even though the same entries existed as-is in my previous attempts: -U "__STDC__" -D "__STDC__=0"

Using VS 2015 Update 2, trying to compile a project which uses OpenCV. First, I get: Error: enumerator value evaluates to 4294901760, which cannot be narrowed to type 'int' [-Wc++11-narrowing], though I have warnings disabled and treat-warnings-as-errors also disabled :/ Then: Error C1001 An internal error has occurred in the compiler. opencv2\core\mat.hpp line 366 ..But afterwards I just get this. Anyway, thanks for a great job, as clang support in Visual Studio is just awesome news!

Hey! I get the error "clang.exe : error : no such file or directory: '/analyze:quiet'" if I run the code analyzer. What is going wrong?
I tried out how your toolset 'v140_clang_c2' works – I even managed to compile something with it. Added support to syncProj to be able to use this toolset and generate .vcxproj files. However – I noticed quite late that inline assembly is not supported, and that is exactly what I'm missing. There is no warning or anything displayed, but when you start to debug you notice that instead of your written assembly there is only int 3, which stands for a breakpoint. A bit disappointing. Also wondering how the project settings work compared, for example, to the Android clang project settings. If you want to see use cases for inline assembly – here are some links / workarounds – they were done for vlc, in order to compile a plugin using inline assembly. Luckily I can survive for now without compiling that code, but it's possible that my need will return later on when I go chasing performance. Can you estimate, with year precision, when clang inline assembly will work?
https://blogs.msdn.microsoft.com/vcblog/2015/12/04/clang-with-microsoft-codegen-in-vs-2015-update-1/
Type: Posts; User: lch2

- Yes, I am using SetScrollSizes. Also, I'd like to maximize the dialog with the frame box and with that stretch the display area to cover the maximized dialog. Right now only the dialog expands but the...
- How can I enlarge the field of view to show more lines without scrolling or with less scrolling?
- For some reason, I had to call RedrawWindow to update the window properly. Invalidate() would not update it after each buffer update.
- Since I use SetDIBitsToDevice(dc->GetSafeHdc(),.. , then I do not need to call ReleaseDC, right? If the buffer passed to SetDIBitsToDevice is refreshed, I have to call Invalidate to...
- Already added: SetDIBitsToDevice(dc->GetSafeHdc(),.. and Release(dc); Won't it have any performance drawback? OnDraw is called continuously.
- Used SetScrollSizes already. I just can not get the scroll bar sizes right. Probably the OnDraw and SetScrollSizes need changes. Anyone, any suggestion, help?
- There was a bug in filling the buffer. Now, my problem is, I can not scroll to view the entire image. It re-draws the top half only. It needs improvements. Attached.
- Got a dialog-based MFC app that is 100% done. It would be nicer to draw a few raw images continuously using a pop-up dialog. Worked with SDI a long time ago and do not see a major point in porting everything to...
- Used some online code example to put together a little project to display a raw image. The display does not seem to work. For some purposes, I'd like this to work with a dialog-based MFC project. XDVView is...
- What control to use to display it? Got a buffer with raw data. How to display it in an MFC dialog and refresh it? Any source is appreciated. :confused:
- In my VS executable I'd like to be able to access a function of another executable, which may be written in another IDE, and access its memory buffer. Like, to read its memory buffer after it is filled...
- I have an incoming packet of 75 bytes. It is packed with five 10-bit fields and five 5-bit fields, like 10-5-10-5-...
- What is the best way to receive, parse, modify and send the packet back using a struct/union...
- Just started to create a QT application on Win7 / Visual Studio 2008. Has anyone gone through the same thing? What are the best steps? Can I turn this into a cross-platform project easily? :confused:
- How to add zoom to this dialog, so that with a mouse click one can zoom into the paint area?
- Make a res folder, copy the .rc2 and .ico files to it.
- Yes, this is it:

  class MyDraw : public CStatic
  {
  // Construction
  public:
      MyDraw();
      //removed items....

- To understand the MFC dialog and OnPaint better, after reading some online notes, I made a little project. This draws a horizontal axis with a red curve above it. What I can not figure out how to do is to...
- Got an old MFC dialog project that needs a lot of controls added to it. What is the limitation on the number of controls if I do the work under VC2008? How can I bypass such a limitation if there is...
- In MFC, I am not sure how to handle a double-click event on a listbox item. Any code that does it?
- I need a tool with which I can develop Win7/Linux applications at the same time, the most demanded and stable in the market. QT? Is it free?
- Forgive me for the lack of more info. In "start debugging" mode, in case 1 the main never begins (I see a circled cursor), but in case 2 it does. Why? VC2008.

  case 1:
  int main(...)
  {
      std::cout ...

- #include <iostream>

  using std::cout;
  using std::endl;

  int type;

  template<class T> class templatetask {
  public:
      inline T addnumbers (T x1, T y1) {
http://forums.codeguru.com/search.php?s=041c03700bc90331a88a97c765ad8ed3&searchid=6123335
Galaxy User Guide

- Finding collections on Galaxy
- Installing collections
- Finding roles on Galaxy
- Installing roles from Galaxy

Finding collections on Galaxy

To find collections on Galaxy:

- Click the Search icon in the left-hand navigation.
- Set the filter to collection.
- Set other filters and press enter.

Galaxy presents a list of collections that match your search criteria.

Installing collections

Installing a collection from Galaxy

You can also directly use the tarball from your build:

$ ansible-galaxy collection install my_namespace-my_collection-1.0.0.tar.gz

./
├── play.yml
├── collections/
│   └── ansible_collections/
│       └── my_namespace/
│           └── my_collection/<collection structure lives here>

See Collection structure for details on the collection directory structure.

Downloading a collection from Automation Hub

Install multiple collections with a requirements file

The version key can take in the same range identifier format documented above. Roles can also be specified and placed under the roles key. The values follow the same format as a requirements file used in older Ansible releases.

Configuring the ansible-galaxy client

See API token for details.

[galaxy_server.automation_hub]
url=
auth_url=
token=my_ah_token

[galaxy_server.my_org_hub]
url=
username=my_user
password=my_pass

[galaxy_server.release_galaxy]
url=
token=my_token

[galaxy_server.test_galaxy]
url=
token=my_test

- password: The password to use, in conjunction with username, for basic authentication.
- auth_url: The URL of a Keycloak server 'token_endpoint' if using SSO authentication (for example, Automation Hub). Mutually exclusive with username. Requires token.

Get more information about a role

Installing roles from Galaxy

See GALAXY_SERVER.

Installing roles

Use the ansible-galaxy command to download roles from the Galaxy website:

$ ansible-galaxy install namespace.role_name

Setting where to install roles

- Define roles_path in an ansible.cfg file.
- Use the --roles-path option for the ansible-galaxy command.
The following provides an example of using --roles-path to install the role into the current working directory:

$ ansible-galaxy install --roles-path . geerlingguy.apache

See also

- Configuring Ansible - All about configuration files

Installing a specific version of a role

To install a specific version, append a comma and the version tag to the role name:

$ ansible-galaxy install namespace.role_name,v1.0.0

It is also possible to point directly to the git repository and specify a branch name or commit hash as the version. For example, the following will install a specific commit:

$ ansible-galaxy install git+

Installing multiple roles from a file

- src - The source of the role. Use the format namespace.role_name, if downloading from Galaxy; otherwise, provide a URL pointing to a repository within a git based SCM. See the examples below. This is a required attribute.
- scm - Specify the SCM. As of this writing only git or hg are allowed. See the examples below. Defaults to git.
- version - The version of the role to download. Provide a release tag value, commit hash, or branch name. Defaults to the branch set as a default in the repository.

Installing roles and collections from the same requirements.yml file

You can install roles and collections from the same requirements files, with some caveats.

Installing multiple roles from multiple files

- src: yatesr.timezone
- include: <path_to_requirements>/webserver.yml

To install all the roles from both files, pass the root file, in this case requirements.yml, on the command line as follows:

$ ansible-galaxy install -r requirements.yml

Dependencies

Dependencies are specified using the format namespace.role_name. You can also use the more complex format in requirements.yml, allowing you to provide src, scm, version, and name. depending on what tags and conditionals are applied.
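Pulling the role and collection pieces above together, a combined requirements.yml might look like the following sketch. The namespaces, repository URL, and version values are made up for illustration.

```yaml
---
collections:
  # installed with: ansible-galaxy collection install -r requirements.yml
  - name: my_namespace.my_collection
    version: '>=1.0.0,<2.0.0'

roles:
  # a role from Galaxy, pinned to a tag
  - src: geerlingguy.apache
    version: 3.1.4                               # hypothetical tag
  # a role pulled straight from a git repository
  - src: https://github.com/example/some-role.git   # hypothetical repo
    scm: git
    version: master
    name: some_role
```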
If the source of a role is Galaxy, specify the role in the format namespace.role_name:

dependencies:
  - geerlingguy.apache
  - geerlingguy.ansible

Alternately, you can specify the role dependencies in the complex form used in requirements.yml. Galaxy expects all role dependencies to exist in Galaxy, and therefore dependencies to be specified in the namespace.role_name format. If you import a role with a dependency where the src value is a URL, the import process will fail.

List installed roles

Use ansible-galaxy list to show the roles installed in roles_path.

Removing roles

Use remove to delete a role from roles_path:

$ ansible-galaxy remove namespace.role_name

See also

- Using collections - Sharable collections of modules, playbooks and roles
- Roles - Reusable tasks, handlers, and other files in a known directory structure
https://docs.ansible.com/ansible/devel/galaxy/user_guide.html
On Sat, Jun 02, 2001 at 10:35:17AM -0400, Greg Hudson wrote: > Two more notes: > > >> Why doesn't apr's configure script avoid touching apr.h when there > >> is no actual change? > > > AC_OUTPUT() doesn't do that, AFAIK. > > AC_CONFIG_HEADER does. From experience, I believe it would be > possible to use AC_CONFIG_HEADER with apr.h without corrupting the > namespace with autoconf symbols. Hmm. I'll take a look at that. It would be nice not to touch that header on a simple reconfig. > (Even better would be to make apr.h > not vary per platform and per compiler, but that's asking a lot for a > portability layer.) The various type mappings are in there, so it will have to change. > > :-) > > If you don't like those enter/exit messages from gnu make, just use > make -S. Ah. Cool. Dang... where do you find this stuff? :-) (and don't say RTFM) I also realized that SVN users can do "make local-all" to just build the SVN portions and skip APR/Neon. Once you build the two external projects, the local-all trick can keep things much quieter. Cheers, -g -- Greg Stein, --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org For additional commands, e-mail: dev-help@subversion.tigris.org This is an archived mail posted to the Subversion Dev mailing list.
http://svn.haxx.se/dev/archive-2001-06/0025.shtml
Import Sage Worksheets

Hello, I'm looking to be able to modularize my code in the SageMath Cloud. I've done some research on the topic and it seems that one could use the attach syntax to bring in the methods from another worksheet. Unfortunately I get the following error when I try to attach the sagews 'test', which is in the same folder as the worksheet I'm working in.

File "/cocalc/lib/python2.7/site-packages/smc_sagews/sage_server.py", line 1013, in execute
    exec compile(block+'\n', '', 'single') in namespace, locals
File "", line 1, in <module>
File "/cocalc/lib/python2.7/site-packages/smc_sagews/sage_salvus.py", line 3443, in attach
    raise IOError('did not find file %r to attach' % fname)
IOError: did not find file 'test' to attach

Can anyone help me with this issue?

Thanks,
Adam
https://ask.sagemath.org/question/41924/import-sage-worksheets/
Step 1: Go get stuff

Step 2: Remove the movement

Step 3: Hack the movement

The clock movement has a single-coil stepper motor inside. The basic theory here is that we want to disconnect the coil from the clock's timing circuit and then attach wires to the coil so that we can control it ourselves. So, knowing this, open up the clock movement and make careful mental note of where everything is (or take a picture). Take apart the movement until the circuit board is free. Locate the contacts on the circuit board where the motor is located. Notice these two contacts have traces that go off to the chip (hidden under the black blob). The idea is to use a razor blade or knife to scratch away at these traces until the connection with the chip is visibly broken. For good measure, I also cut away the timing crystal, rendering the circuit more or less useless. Lastly, I soldered about 6" of wire to each of the motor terminals. When this was all done I put the whole thing back together. There wasn't a spot in the case where I could conveniently slip the wires through and I needed it to go properly back together, so I ended up cutting a small hole for the wires to pass through.

Step 4: Reassemble the clock

Once your movement is good and hacked, put the clock back together. Important: Make sure the hour, minute and second hand all line up at 12:00. I did not do this the first time around and quickly discovered that the clock would not display right unless all the hands were lined up.

Step 5: RTC Kit

If you haven't done so already, put together your Adafruit DS1307 Real Time Clock Kit. Here are some instructions for getting the job done. Also, while you are at it, set the time on the RTC board. So long as you don't take the battery out, you should only need to do this once (at least for the next 5 years or so until the battery dies). You can get in-depth instructions for setting the time on Ladyada's site.
Step 6: Build the circuit

Step 7: Program the chip

You will need to install the RTClib library for your code to work. Instructions to do this are on Ladyada's page. Download lunchtime_clock.zip, uncompress it and then upload the lunchtime_clock.pde code onto your chip. If you don't feel like downloading the file, here is the code:

// Lunchtime Clock
// by Randy Sarafan
//
// Slows down 20% at 11 and speeds up 20% at 11:48 until it hits 1.
// The rest of the time the clock goes at normal speed
//
// Do what you want with this code. Just make certain that whatever you do, it is awesome.
//

#include <Wire.h>
#include "RTClib.h"

RTC_DS1307 RTC;

int clockpin = 9;
int clockpin1 = 10;

void setup () {
  Serial.begin(57600);
  Wire.begin();
  RTC.begin();
}

void loop () {
  DateTime now = RTC.now();

  TurnTurnTurn(1000);

  if (now.hour() == 11) {
    for (int i = 0; i < 1800; i++) {
      TurnTurnTurn(800);
    }
    for (int i = 0; i < 1800; i++) {
      TurnTurnTurn(1200);
    }
  }
}

int TurnTurnTurn(int TimeToWait){
  analogWrite(clockpin, 0);
  analogWrite(clockpin1, 124); // sets the value (range from 0 to 255)
  delay(TimeToWait);
  analogWrite(clockpin, 124);
  analogWrite(clockpin1, 0);
  delay(TimeToWait);
}

Step 8: Put it all together

Once programmed, transfer your ATMEGA168 chip from the Arduino to your circuit board. Plug your RTC board into the socket. Make sure the pins are lined up correctly before powering it up. Attach your circuit board and battery to the back of the clock. In true last-minute DIY fashion, I used hot glue and gaffers tape to do this. Self-adhesive Velcro would be ideal.

Step 9: Synchronize the clocks

Put a new ATMEGA168 chip into the Arduino. Connect the Arduino once more to the RTC board. Run the sample code from Ladyada's page. Open the serial monitor. The time displayed here is the time you are going to want to sync your clock to. I found it was easiest to set a third clock (my computer clock) to be perfectly in sync with the RTC board.
Then, I powered down the Arduino, transferred the RTC board back to my circuit, and set the Lunchtime Clock to a minute later than my computer time. At just the right moment, when the minute changed on my computer, I powered up the lunchtime clock to achieve synchronicity. The lunchtime clock works extremely well and has thus far surpassed my expectations.
http://www.instructables.com/id/Lunchtime-Clock/?ALLSTEPS
Applications. It is difficult to understand how to use the consumer API without understanding these concepts first. We’ll start by explaining some of the important concepts, and then we’ll go through some examples that show the different ways consumer APIs can be used to implement applications with varying requirements. In order to understand how to read data from Kafka, you first need to understand its consumers and consumer groups. The following sections cover those concepts. Suppose you have an application that needs to read messages from a Kafka topic, run some validations against them, and write the results to another data store. In this case your application will create a consumer object, subscribe to the appropriate topic, and start receiving messages, validating them and writing the results. This may work well for a while, but what if the rate at which producers write messages to the topic exceeds the rate at which your application can validate them? If you are limited to a single consumer reading and processing the data, your application may fall farther and farther behind, unable to keep up with the rate of incoming messages. Obviously there is a need to scale consumption from topics. Just like multiple producers can write to the same topic, we need to allow multiple consumers to read from the same topic, splitting the data between them. Kafka consumers are typically part of a consumer group. When multiple consumers are subscribed to a topic and belong to the same consumer group, each consumer in the group will receive messages from a different subset of the partitions in the topic. Let’s take topic T1 with four partitions. Now suppose we created a new consumer, C1, which is the only consumer in group G1, and use it to subscribe to topic T1. Consumer C1 will get all messages from all four t1 partitions. See Figure 4-1. If we add another consumer, C2, to group G1, each consumer will only get messages from two partitions. 
Perhaps messages from partitions 0 and 2 go to C1, and messages from partitions 1 and 3 go to consumer C2. See Figure 4-2. If G1 has four consumers, then each will read messages from a single partition. See Figure 4-3. If we add more consumers to a single group with a single topic than we have partitions, some of the consumers will be idle and get no messages at all. See Figure 4-4. The main way we scale data consumption from a Kafka topic is by adding more consumers to a consumer group. It is common for Kafka consumers to do high-latency operations such as writing to a database or performing a time-consuming computation on the data. In these cases, a single consumer can’t possibly keep up with the rate at which data flows into a topic, and adding more consumers that share the load, with each consumer owning just a subset of the partitions and messages, is our main method of scaling. Keep in mind that there is no point in adding more consumers than you have partitions in a topic—some of the consumers will just be idle. Chapter 2 includes some suggestions on how to choose the number of partitions in a topic. In addition to adding consumers in order to scale a single application, it is very common to have multiple applications that need to read data from the same topic. In fact, one of the main design goals of Kafka was to make the data produced to Kafka topics available for many use cases throughout the organization. In those cases, we want each application to get all of the messages, rather than just a subset. To make sure an application gets all the messages in a topic, ensure the application has its own consumer group. Unlike many traditional messaging systems, Kafka scales to a large number of consumers and consumer groups without reducing performance. In the previous example, if we add a new consumer group, G2, with a single consumer, this consumer will get all the messages in topic T1 independent of what G1 is doing. G2 can have more than a single consumer, in which case each will get a subset of partitions, just as we showed for G1, but G2 as a whole will still get all the messages regardless of other consumer groups. See Figure 4-5.
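To make the sizing rule above concrete, here is a small, self-contained sketch. The class and method names are ours, not part of the Kafka API; it only computes how many partitions each member of a group would own if partitions are spread as evenly as possible:

```java
import java.util.ArrayList;
import java.util.List;

public class GroupSizing {
    // Hypothetical helper: how many partitions each of `consumers` group
    // members owns when `partitions` are spread as evenly as possible.
    static List<Integer> partitionsPerConsumer(int partitions, int consumers) {
        List<Integer> counts = new ArrayList<>();
        int base = partitions / consumers;
        int extra = partitions % consumers; // the first `extra` consumers get one more
        for (int i = 0; i < consumers; i++) {
            counts.add(base + (i < extra ? 1 : 0));
        }
        return counts;
    }

    public static void main(String[] args) {
        // Topic T1 with four partitions, as in Figures 4-1 through 4-4:
        System.out.println(partitionsPerConsumer(4, 1)); // [4]
        System.out.println(partitionsPerConsumer(4, 2)); // [2, 2]
        System.out.println(partitionsPerConsumer(4, 4)); // [1, 1, 1, 1]
        System.out.println(partitionsPerConsumer(4, 5)); // [1, 1, 1, 1, 0] fifth consumer idle
    }
}
```

The last call shows why adding a fifth consumer to a four-partition topic buys nothing: one consumer always ends up with zero partitions.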
As we saw in the previous section, consumers in a consumer group share ownership of the partitions in the topics they subscribe to. When we add a new consumer to the group, it starts consuming messages from partitions previously consumed by another consumer. The same thing happens when a consumer shuts down or crashes; it leaves the group, and the partitions it used to consume will be consumed by one of the remaining consumers. Reassignment of partitions to consumers also happens when the topics the consumer group is consuming are modified (e.g., if an administrator adds new partitions). Moving partition ownership from one consumer to another is called a rebalance. Rebalances are important because they provide the consumer group with high availability and scalability (allowing us to easily and safely add and remove consumers), but in the normal course of events they are fairly undesirable. During a rebalance, consumers can’t consume messages, so a rebalance is basically a short window of unavailability for the entire consumer group. In addition, when partitions are moved from one consumer to another, the consumer loses its current state; if it was caching any data, it will need to refresh its caches—slowing down the application until the consumer sets up its state again. Throughout this chapter we will discuss how to safely handle rebalances and how to avoid unnecessary ones. The way consumers maintain membership in a consumer group and ownership of the partitions assigned to them is by sending heartbeats to a Kafka broker designated as the group coordinator (this broker can be different for different consumer groups). As long as the consumer is sending heartbeats at regular intervals, it is assumed to be alive, well, and processing messages from its partitions. Heartbeats are sent when the consumer polls (i.e., retrieves records) and when it commits records it has consumed.
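The liveness rule just described is simple enough to state in code. The sketch below is ours (an illustrative model of the decision the group coordinator makes, not broker internals): a consumer whose heartbeat gap exceeds the session timeout is declared dead.

```java
public class LivenessCheck {
    // Illustrative model of the group coordinator's liveness decision:
    // a consumer is alive while the gap since its last heartbeat is
    // within the session timeout.
    static boolean isAlive(long lastHeartbeatMs, long nowMs, long sessionTimeoutMs) {
        return (nowMs - lastHeartbeatMs) <= sessionTimeoutMs;
    }

    public static void main(String[] args) {
        long sessionTimeout = 3000; // matching the 3-second default discussed below

        // Last heartbeat 2 seconds ago: still considered alive.
        System.out.println(isAlive(10_000, 12_000, sessionTimeout)); // true

        // Last heartbeat 4 seconds ago: declared dead, rebalance triggered.
        System.out.println(isAlive(10_000, 14_000, sessionTimeout)); // false
    }
}
```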
If the consumer stops sending heartbeats for long enough, its session will time out and the group coordinator will consider it dead and trigger a rebalance. If a consumer crashed and stopped processing messages, it will take the group coordinator a few seconds without heartbeats to decide it is dead and trigger a rebalance. Later in this chapter we will discuss configuration options that control heartbeat frequency and session timeouts and how to set those to match your requirements. When a consumer wants to join a group, it sends a JoinGroup request to the group coordinator. The first consumer to join the group becomes the group leader. The leader receives a list of all consumers in the group from the group coordinator (this will include all consumers that sent a heartbeat recently and are therefore considered alive) and is responsible for assigning a subset of partitions to each consumer. It uses an implementation of PartitionAssignor to decide which partitions should be handled by which consumer. Kafka has two built-in partition assignment policies, which we will discuss in more depth in the configuration section. The first step to start consuming records is to create a KafkaConsumer instance. Creating a KafkaConsumer is very similar to creating a KafkaProducer—you create a Java Properties instance with the properties you want to pass to the consumer. We will discuss all the properties in depth later in the chapter. To start, we just need to use the three mandatory properties: bootstrap.servers, key.deserializer, and value.deserializer. The first property, bootstrap.servers, is the connection string to a Kafka cluster. It is used the exact same way as in KafkaProducer (you can refer to Chapter 3 for details on how this is defined).
The other two properties, key.deserializer and value.deserializer, are similar to the serializers defined for the producer, but rather than specifying classes that turn Java objects into byte arrays, you need to specify classes that can take a byte array and turn it into a Java object. There is a fourth property, which is not strictly mandatory, but for now we will pretend it is. The property is group.id and it specifies the consumer group the KafkaConsumer instance belongs to. While it is possible to create consumers that do not belong to any consumer group, this is uncommon, so for most of the chapter we will assume the consumer is part of a group. Putting it together (the broker list and group name here are examples):

    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9092,broker2:9092");
    props.put("group.id", "CountryCounter");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

Most of what you see here should be familiar if you’ve read Chapter 3 on creating producers. We assume that the records we consume will have String objects as both the key and the value of the record. The only new property here is group.id, which is the name of the consumer group this consumer belongs to. Once we create a consumer, the next step is to subscribe to one or more topics. The subscribe() method takes a list of topics as a parameter, so it’s pretty simple to use:

    consumer.subscribe(Collections.singletonList("customerCountries"));

Here we simply create a list with a single element: the topic name customerCountries. It is also possible to call subscribe with a regular expression. The expression can match multiple topic names, and if someone creates a new topic with a name that matches, a rebalance will happen almost immediately and the consumers will start consuming from the new topic. This is useful for applications that need to consume from multiple topics and can handle the different types of data the topics will contain. Subscribing to multiple topics using a regular expression is most commonly used in applications that replicate data between Kafka and another system.
To subscribe to all test topics, we can call subscribe with a compiled pattern:

    consumer.subscribe(Pattern.compile("test.*"));

At the heart of the consumer API is a simple loop for polling the server for more data. Once the consumer subscribes to topics, the poll loop handles all the details of coordination, partition rebalances, heartbeats, and data fetching, leaving the developer with a clean API that simply returns available data from the assigned partitions. The main body of a consumer will look as follows:

    try {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                // process the record; a real application would do its work here
                System.out.printf("topic = %s, partition = %d, offset = %d, key = %s, value = %s%n",
                    record.topic(), record.partition(), record.offset(), record.key(), record.value());
            }
        }
    } finally {
        consumer.close();
    }

This is indeed an infinite loop. Consumers are usually long-running applications that continuously poll Kafka for more data. We will show later in the chapter how to cleanly exit the loop and close the consumer. The poll() call is the most important line in the chapter. The same way that sharks must keep moving or they die, consumers must keep polling Kafka or they will be considered dead and the partitions they are consuming will be handed to another consumer in the group to continue consuming. The parameter we pass to poll() is a timeout interval and controls how long poll() will block if data is not available in the consumer buffer. If this is set to 0, poll() will return immediately; otherwise, it will wait for the specified number of milliseconds for data to arrive from the broker. poll() returns a list of records. Each record contains the topic and partition the record came from, the offset of the record within the partition, and of course the key and the value of the record. Typically we want to iterate over the list and process the records individually. The poll() method takes a timeout parameter. This specifies how long it will take poll to return, with or without data.
The value is typically driven by application needs for quick responses—how fast do you want to return control to the thread that does the polling? Processing usually ends in writing a result in a data store or updating a stored record. Here, the goal is to keep a running count of customers from each country, so we update a hashtable and print the result as JSON. A more realistic example would store the updated result in a data store. Always close() the consumer before exiting. This will close the network connections and sockets. It will also trigger a rebalance immediately rather than waiting for the group coordinator to discover that the consumer stopped sending heartbeats and is likely dead, which would take longer and therefore result in a longer period of time in which consumers can’t consume messages from a subset of the partitions. And of course the heartbeats that keep consumers alive are sent from within the poll loop. For this reason, we try to make sure that whatever processing we do between iterations is fast and efficient. You can’t have multiple consumers that belong to the same group in one thread, and you can’t have multiple threads safely use the same consumer; one consumer per thread is the rule. It is useful to wrap the consumer logic in its own object and then use Java’s ExecutorService to start multiple threads, each with its own consumer. The Confluent blog has a tutorial that shows how to do just that. So far we have focused on learning the consumer API, but we’ve only looked at a few of the configuration properties—just the mandatory bootstrap.servers, group.id, key.deserializer, and value.deserializer. All the consumer configuration is documented in the Apache Kafka documentation. Most of the parameters have reasonable defaults and do not require modification, but some have implications on the performance and availability of the consumers. Let’s take a look at some of the more important properties. The first, fetch.min.bytes, allows a consumer to specify the minimum amount of data that it wants to receive from the broker when fetching records.
If a broker receives a request for records from a consumer but the new records amount to fewer bytes than fetch.min.bytes, the broker will wait until more messages are available before sending the records back to the consumer. This reduces the load on both the consumer and the broker, as they have to handle fewer back-and-forth messages in cases where the topics don’t have much new activity (or for lower-activity hours of the day). You will want to set this parameter higher than the default if the consumer is using too much CPU when there isn’t much data available, or to reduce load on the brokers when you have a large number of consumers. By setting fetch.min.bytes, you tell Kafka to wait until it has enough data to send before responding to the consumer. fetch.max.wait.ms lets you control how long to wait. By default, Kafka will wait up to 500 ms. This results in up to 500 ms of extra latency in case there is not enough data flowing to the Kafka topic to satisfy the minimum amount of data to return. If you want to limit the potential latency (usually due to SLAs controlling the maximum latency of the application), you can set fetch.max.wait.ms to a lower value. If you set fetch.max.wait.ms to 100 ms and fetch.min.bytes to 1 MB, Kafka will receive a fetch request from the consumer and will respond with data either when it has 1 MB of data to return or after 100 ms, whichever happens first. The max.partition.fetch.bytes property controls the maximum number of bytes the server will return per partition. The default is 1 MB, which means that when KafkaConsumer.poll() returns ConsumerRecords, the record object will use at most max.partition.fetch.bytes per partition assigned to the consumer. So if a topic has 20 partitions and you have 5 consumers, each consumer will need to have 4 MB of memory available for ConsumerRecords. In practice, you will want to allocate more memory, as each consumer will need to handle more partitions if other consumers in the group fail. max.partition.fetch.bytes must be larger than the largest message a broker will accept (determined by the message.max.bytes property in the broker configuration), or the broker may have messages that the consumer will be unable to consume, in which case the consumer will hang trying to read them. Another important consideration when setting max.partition.fetch.bytes is the amount of time it takes the consumer to process data. As you recall, the consumer must call poll() frequently enough to avoid session timeout and a subsequent rebalance. If the amount of data a single poll() returns is very large, it may take the consumer longer to process, which means it will not get to the next iteration of the poll loop in time to avoid a session timeout. If this occurs, the two options are either to lower max.partition.fetch.bytes or to increase the session timeout. The session.timeout.ms property controls the amount of time a consumer can be out of contact with the brokers while still considered alive; it defaults to 3 seconds. If more than session.timeout.ms passes without the consumer sending a heartbeat to the group coordinator, it is considered dead and the group coordinator will trigger a rebalance of the consumer group to allocate partitions from the dead consumer to the other consumers in the group. This property is closely related to heartbeat.interval.ms, which controls how frequently the consumer sends heartbeats; heartbeat.interval.ms must be lower than session.timeout.ms, and is usually set to one-third of the timeout value. Setting session.timeout.ms lower than the default will allow consumer groups to detect and recover from failure sooner, but may also cause unwanted rebalances as a result of consumers taking longer to complete the poll loop or garbage collection. Setting session.timeout.ms higher will reduce the chance of accidental rebalance, but also means it will take longer to detect a real failure. The auto.offset.reset property controls the behavior of the consumer when it starts reading a partition for which it doesn’t have a committed offset or the committed offset it has is invalid (usually because the consumer was down for so long that the record with that offset was already aged out of the broker).
The default is “latest,” which means that lacking a valid offset, the consumer will start reading from the newest records (records that were written after the consumer started running). The alternative is “earliest,” which means that lacking a valid offset, the consumer will read all the data in the partition, starting from the very beginning. We discuss the different options for committing offsets in depth later in this chapter. The enable.auto.commit parameter controls whether the consumer will commit offsets automatically, and defaults to true. Set it to false if you prefer to control when offsets are committed, which is necessary to minimize duplicates and avoid missing data. If you set enable.auto.commit to true, then you might also want to control how frequently offsets will be committed using auto.commit.interval.ms. We learned that partitions are assigned to consumers in a consumer group. A PartitionAssignor is a class that, given consumers and the topics they subscribed to, decides which partitions will be assigned to which consumer. By default, Kafka has two assignment strategies: Range: Assigns to each consumer a consecutive subset of partitions from each topic it subscribes to. So if consumers C1 and C2 are subscribed to two topics, T1 and T2, and each of the topics has three partitions, then C1 will be assigned partitions 0 and 1 from topics T1 and T2, while C2 will be assigned partition 2 from those topics. Because each topic has an uneven number of partitions and the assignment is done for each topic independently, the first consumer ends up with more partitions than the second. This happens whenever Range assignment is used and the number of consumers does not divide the number of partitions in each topic neatly. RoundRobin: Takes all the partitions from all subscribed topics and assigns them to consumers sequentially, one by one. If C1 and C2 described previously used RoundRobin assignment, C1 would have partitions 0 and 2 from topic T1 and partition 1 from topic T2.
C2 would have partition 1 from topic T1 and partitions 0 and 2 from topic T2. In general, if all consumers are subscribed to the same topics (a very common scenario), RoundRobin assignment will end up with all consumers having the same number of partitions (or at most a one-partition difference). The partition.assignment.strategy property allows you to choose a partition-assignment strategy. The default is org.apache.kafka.clients.consumer.RangeAssignor, which implements the Range strategy described above. You can replace it with org.apache.kafka.clients.consumer.RoundRobinAssignor. A more advanced option is to implement your own assignment strategy, in which case partition.assignment.strategy should point to the name of your class. The send.buffer.bytes and receive.buffer.bytes properties set the sizes of the TCP send and receive buffers used by the sockets when writing and reading data. If these are set to -1, the OS defaults will be used. It can be a good idea to increase these when producers or consumers communicate with brokers in a different datacenter, because those network links typically have higher latency and lower bandwidth. Whenever we call poll(), it returns records written to Kafka that consumers in our group have not read yet. This means that we have a way of tracking which records were read by a consumer of the group. As discussed before, one of Kafka’s unique characteristics is that it does not track acknowledgments from consumers the way many JMS queues do. Instead, it allows consumers to use Kafka to track their position (offset) in each partition. We call the action of updating the current position in the partition a commit. How does a consumer commit an offset? It produces a message to Kafka, to the special __consumer_offsets topic, with the committed offset for each partition. As long as all your consumers are up, running, and churning away, this will have no impact. However, if a consumer crashes or a new consumer joins the consumer group, this will trigger a rebalance.
After a rebalance, each consumer may be assigned a new set of partitions, different from the ones it processed before. In order to know where to pick up the work, the consumer will read the latest committed offset of each partition and continue from there. If the committed offset is smaller than the offset of the last message the client processed, the messages between the last processed offset and the committed offset will be processed twice. See Figure 4-6. If the committed offset is larger than the offset of the last message the client actually processed, all messages between the last processed offset and the committed offset will be missed by the consumer group. See Figure 4-7. Clearly, managing offsets has a big impact on the client application. The easiest way to commit offsets is to allow the consumer to do it for you: if you configure enable.auto.commit=true, then every five seconds the consumer will commit the largest offset your client received from poll(); the five-second interval is the default and is controlled by auto.commit.interval.ms. Just like everything else in the consumer, the automatic commits are driven by the poll loop. Whenever you poll, the consumer checks if it is time to commit, and if it is, it will commit the offsets it returned in the last poll. Before using this convenient option, however, it is important to understand the consequences. Consider that, by default, automatic commits occur every five seconds. Suppose that we are three seconds after the most recent commit and a rebalance is triggered. After the rebalancing, all consumers will start consuming from the last offset committed. In this case, the offset is three seconds old, so all the events that arrived in those three seconds will be processed twice. It is possible to configure the commit interval to commit more frequently and reduce the window in which records will be duplicated, but it is impossible to completely eliminate them. With autocommit enabled, a call to poll will always commit the last offset returned by the previous poll. It doesn’t know which events were actually processed, so it is critical to always process all the events returned by poll() before calling poll() again. (Just like poll(), close() also commits offsets automatically.)
This is usually not an issue, but pay attention when you handle exceptions or exit the poll loop prematurely. Automatic commits are convenient, but they don’t give developers enough control to avoid duplicate messages. Most developers exercise more control over the time at which offsets are committed—both to eliminate the possibility of missing messages and to reduce the number of messages duplicated during rebalancing. The consumer API has the option of committing the current offset at a point that makes sense to the application developer rather than based on a timer. The simplest and most reliable of the commit APIs is commitSync(), which commits the latest offset returned by poll() and returns once the offset is committed, throwing an exception if the commit fails for some reason. It is important to remember that commitSync() commits the latest offset returned by poll(), so make sure you call it after you are done processing all the records in the collection, or you risk missing messages as described previously. When a rebalance is triggered, all the messages from the beginning of the most recent batch until the time of the rebalance will be processed twice. Here is how we would use commitSync after processing the latest batch:

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            // process the record (printing stands in for real processing)
            System.out.printf("topic = %s, partition = %d, offset = %d%n",
                record.topic(), record.partition(), record.offset());
        }
        try {
            consumer.commitSync();
        } catch (CommitFailedException e) {
            log.error("commit failed", e);
        }
    }

Let’s assume that by printing the contents of a record, we are done processing it. Your application will likely do a lot more with the records—modify them, enrich them, aggregate them, display them on a dashboard, or notify users of important events. You should determine when you are “done” with a record according to your use case. Once we are done “processing” all the records in the current batch, we call commitSync to commit the last offset in the batch, before polling for additional messages. commitSync retries committing as long as there is no error that can’t be recovered. If an unrecoverable error does happen, there is not much we can do except log it. One drawback of the synchronous commit is that the application is blocked until the broker responds. An alternative is the asynchronous commit API: instead of waiting for the broker to respond, we just send the request and continue on:

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            // process the record
        }
        consumer.commitAsync();
    }

Commit the last offset and carry on. The drawback is that while commitSync() will retry the commit until it either succeeds or encounters a nonretriable failure, commitAsync() will not retry. The reason it does not retry is that by the time commitAsync() receives a response from the server, there may have been a later commit that was already successful.
Imagine that we sent a request to commit offset 2000. There is a temporary communication problem, so the broker never gets the request and therefore never responds. Meanwhile, we processed another batch and successfully committed offset 3000. If commitAsync() now retries the previously failed commit, it might succeed in committing offset 2000 after offset 3000 was already processed and committed. In the case of a rebalance, this will cause more duplicates. We mention this complication and the importance of correct order of commits because commitAsync() also gives you an option to pass in a callback that will be triggered when the broker responds:

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            // process the record
        }
        consumer.commitAsync(new OffsetCommitCallback() {
            public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception e) {
                if (e != null)
                    log.error("Commit failed for offsets {}", offsets, e);
            }
        });
    }

We send the commit and carry on, but if the commit fails, the failure and the offsets will be logged. A simple pattern to get commit order right for asynchronous retries is to use a monotonically increasing sequence number. Increase the sequence number every time you commit and add the sequence number at the time of the commit to the commitAsync callback. When you’re getting ready to send a retry, check if the commit sequence number the callback got is equal to the instance variable; if it is, there was no newer commit and it is safe to retry. If the instance sequence number is higher, don’t retry, because a newer commit was already sent. Normally, occasional failures to commit without retrying are not a huge problem, because if the problem is temporary, the following commit will be successful. But if we know that this is the last commit before we close the consumer, or before a rebalance, we want to make extra sure that the commit succeeds. Therefore, a common pattern is to combine commitAsync() with commitSync() just before shutdown.
Here is how it works (we will discuss how to commit just before rebalance when we get to the section about rebalance listeners):

    try {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                // process the record
            }
            consumer.commitAsync();
        }
    } catch (Exception e) {
        log.error("Unexpected error", e);
    } finally {
        try {
            consumer.commitSync();
        } finally {
            consumer.close();
        }
    }

While everything is fine, we use commitAsync. It is faster, and if one commit fails, the next commit will serve as a retry. But if we are closing, there is no “next commit.” We call commitSync(), because it will retry until it succeeds or suffers an unrecoverable failure. Committing the latest offset only allows you to commit as often as you finish processing batches. But what if you want to commit more frequently than that? What if poll() returns a huge batch and you want to commit offsets in the middle of the batch to avoid having to process all those rows again if a rebalance occurs? You can’t just call commitSync() or commitAsync()—this will commit the last offset returned, which you didn’t get to process yet. Fortunately, the consumer API allows you to call commitSync() and commitAsync() and pass a map of partitions and offsets that you wish to commit. If you are in the middle of processing a batch of records, and the last message you got from partition 3 in topic “customers” has offset 5000, you can call commitSync() to commit offset 5000 for partition 3 in topic “customers.” Since your consumer may be consuming more than a single partition, you will need to track offsets on all of them, which adds complexity to your code. Here is what a commit of specific offsets looks like:

    private Map<TopicPartition, OffsetAndMetadata> currentOffsets = new HashMap<>();
    int count = 0;

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            // process the record
            currentOffsets.put(new TopicPartition(record.topic(), record.partition()),
                new OffsetAndMetadata(record.offset() + 1, "no metadata"));
            if (count % 1000 == 0)
                consumer.commitAsync(currentOffsets, null);
            count++;
        }
    }
After reading each record, we update the offsets map with the offset of the next message we expect to process. This is where we’ll start reading next time we start. Here, we decide to commit current offsets every 1,000 records. In your application, you can commit based on time or perhaps content of the records. I chose to call commitAsync(), but commitSync() is also completely valid here. Of course, when committing specific offsets you still need to perform all the error handling we’ve seen in previous sections. As we mentioned in the previous section about committing offsets, a consumer will want to do some cleanup work before exiting and also before partition rebalancing. If you know your consumer is about to lose ownership of a partition, you will want to commit offsets of the last event you’ve processed. If your consumer maintained a buffer with events that it only processes occasionally (e.g., the currentRecords map we used when explaining pause() functionality), you will want to process the events you accumulated before losing ownership of the partition. Perhaps you also need to close file handles, database connections, and such. The consumer API allows you to run your own code when partitions are added to or removed from the consumer. You do this by passing a ConsumerRebalanceListener when calling the subscribe() method we discussed previously. ConsumerRebalanceListener has two methods you can implement: public void onPartitionsRevoked(Collection<TopicPartition> partitions) Called before the rebalancing starts and after the consumer stopped consuming messages. This is where you want to commit offsets, so whoever gets this partition next will know where to start. public void onPartitionsAssigned(Collection<TopicPartition> partitions) Called after partitions have been reassigned to the consumer, but before the consumer starts consuming messages. This example will show how to use onPartitionsRevoked() to commit offsets before losing ownership of a partition.
In the next section we will show a more involved example that also demonstrates the use of onPartitionsAssigned():

    private Map<TopicPartition, OffsetAndMetadata> currentOffsets = new HashMap<>();

    private class HandleRebalance implements ConsumerRebalanceListener {
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            // nothing to do when we get a new partition; just start consuming
        }

        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            System.out.println("Lost partitions in rebalance. Committing current offsets:" + currentOffsets);
            consumer.commitSync(currentOffsets);
        }
    }

    try {
        consumer.subscribe(topics, new HandleRebalance());
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                // process the record and track the offset of the next message we expect
                currentOffsets.put(new TopicPartition(record.topic(), record.partition()),
                    new OffsetAndMetadata(record.offset() + 1, "no metadata"));
            }
            consumer.commitAsync(currentOffsets, null);
        }
    } finally {
        try {
            consumer.commitSync(currentOffsets);
        } finally {
            consumer.close();
        }
    }

We start by implementing a ConsumerRebalanceListener. In this example we don’t need to do anything when we get a new partition; we’ll just start consuming messages. However, when we are about to lose a partition due to rebalancing, we need to commit offsets. Note that we are committing the latest offsets we’ve processed, not the latest offsets in the batch we are still processing. This is because a partition could get revoked while we are still in the middle of a batch. We are committing offsets for all partitions, not just the partitions we are about to lose—because the offsets are for events that were already processed, there is no harm in that. And we are using commitSync() to make sure the offsets are committed before the rebalance proceeds. The most important part: pass the ConsumerRebalanceListener to the subscribe() method so it will get invoked by the consumer.
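The monotonically increasing sequence-number pattern mentioned earlier for deciding whether a failed commitAsync() may be safely retried can be sketched as a small, self-contained class. This is an illustration of the bookkeeping only (CommitRetryGuard is our name, not a Kafka class):

```java
import java.util.concurrent.atomic.AtomicLong;

public class CommitRetryGuard {
    // Sequence number of the most recently sent commit.
    private final AtomicLong currentSeq = new AtomicLong(0);

    // Call when sending a commit; capture the returned number in the callback.
    long onCommitSent() {
        return currentSeq.incrementAndGet();
    }

    // Call from a failed commit's callback: retry only if no newer commit was sent.
    boolean shouldRetry(long callbackSeq) {
        return callbackSeq == currentSeq.get();
    }

    public static void main(String[] args) {
        CommitRetryGuard guard = new CommitRetryGuard();
        long first = guard.onCommitSent();            // e.g. commit for offset 2000
        System.out.println(guard.shouldRetry(first)); // true: no newer commit yet
        long second = guard.onCommitSent();           // e.g. commit for offset 3000
        System.out.println(guard.shouldRetry(first)); // false: retrying would reorder commits
        System.out.println(guard.shouldRetry(second)); // true
    }
}
```

This captures exactly the rule from the text: retrying a stale commit could move the committed offset backward, so only the newest outstanding commit is ever retried.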
So far we’ve seen how to use poll() to start consuming messages from the last committed offset in each partition and to proceed in processing all messages in sequence. However, sometimes you want to start reading at a different offset. If you want to start reading all messages from the beginning of the partition, or to skip all the way to the end and consume only new messages, there are APIs specifically for that: seekToBeginning() and seekToEnd(). However, the Kafka API also lets you seek to a specific offset. This ability can be used in a variety of ways; for example, to go back a few messages or skip ahead a few messages (perhaps a time-sensitive application that is falling behind will want to skip ahead to more relevant messages). The most exciting use case for this ability is when offsets are stored in a system other than Kafka. Think about this common scenario: Your application is reading events from Kafka (perhaps a clickstream of users in a website), processes the data (perhaps removing records that indicate clicks from automated programs rather than users), and then stores the results in a database, NoSQL store, or Hadoop. Suppose that we really don’t want to lose any data, nor do we want to store the same results in the database twice. In these cases, the consumer loop may look a bit like this:

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            currentOffsets.put(new TopicPartition(record.topic(), record.partition()),
                record.offset());
            processRecord(record);
            storeRecordInDB(record);
            consumer.commitAsync(currentOffsets);
        }
    }

In this example, we are very paranoid, so we commit offsets after processing each record. However, there is still a chance that our application will crash after the record was stored in the database but before we committed offsets, causing the record to be processed again and the database to contain duplicates. This could be avoided if there was a way to store both the record and the offset in one atomic action. Either both the record and the offset are committed, or neither of them is committed.
As long as the records are written to a database and the offsets to Kafka, this is impossible. But what if we wrote both the record and the offset to the database, in one transaction? Then we'll know that either we are done with the record and the offset is committed or we are not and the record will be reprocessed. Now the only problem is if the record is stored in a database and not in Kafka, how will our consumer know where to start reading when it is assigned a partition? This is exactly what seek() can be used for. When the consumer starts or when new partitions are assigned, it can look up the offset in the database and seek() to that location. Here is a skeleton example of how this may work. We use ConsumerRebalanceListener and seek() to make sure we start processing at the offsets stored in the database:

public class SaveOffsetsOnRebalance implements ConsumerRebalanceListener {
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        commitDBTransaction();
    }

    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        for (TopicPartition partition : partitions)
            consumer.seek(partition, getOffsetFromDB(partition));
    }
}

consumer.subscribe(topics, new SaveOffsetsOnRebalance());
consumer.poll(0);

for (TopicPartition partition : consumer.assignment())
    consumer.seek(partition, getOffsetFromDB(partition));

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        processRecord(record);
        storeRecordInDB(record);
        storeOffsetInDB(record.topic(), record.partition(), record.offset());
    }
    commitDBTransaction();
}

We use an imaginary method here to commit the transaction in the database. The idea here is that the database records and offsets will be inserted into the database as we process the records, and we just need to commit the transactions when we are about to lose the partition to make sure this information is persisted. We also have an imaginary method to fetch the offsets from the database, and then we seek() to those records when we get ownership of new partitions. When the consumer first starts, after we subscribe to topics, we call poll() once to make sure we join a consumer group and get assigned partitions, and then we immediately seek() to the correct offset in the partitions we are assigned to. Keep in mind that seek() only updates the position we are consuming from, so the next poll() will fetch the right messages.
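To make the "store the record and its offset in the same transaction" idea concrete without a running Kafka cluster, here is a self-contained simulation. This is only a sketch of the pattern: OffsetsInDbDemo, FakeDb and its methods are invented names standing in for a real database and transaction, and nothing here is part of the Kafka API.

```java
import java.util.HashMap;
import java.util.Map;

// Simulates committing a processed record and its offset in one atomic step,
// so that after a restart the consumer can seek() to the stored offset.
class OffsetsInDbDemo {
    static class FakeDb {
        final Map<String, String> rows = new HashMap<>();
        final Map<String, Long> offsets = new HashMap<>(); // key: "topic-partition"

        // The "transaction": both writes happen together or not at all.
        void commitTransaction(String key, String value, String tp, long offset) {
            rows.put(key, value);
            offsets.put(tp, offset);
        }

        // What a rebalance listener would consult before calling seek():
        long offsetFor(String tp) {
            return offsets.getOrDefault(tp, 0L);
        }
    }

    public static void main(String[] args) {
        FakeDb db = new FakeDb();
        // Process two "records" from partition customers-0:
        db.commitTransaction("user1", "clicked", "customers-0", 1);
        db.commitTransaction("user2", "clicked", "customers-0", 2);
        // After a crash or rebalance, consumption resumes from the stored offset:
        System.out.println("seek to offset " + db.offsetFor("customers-0")); // 2
    }
}
```

Because the offset travels with the record inside the same commit, a crash can never leave the two out of sync, which is the whole point of the pattern described above.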
If there was an error in seek() (e.g., the offset does not exist), the exception will be thrown by poll(). Another imaginary method: this time we update a table storing the offsets in our database. Here we assume that updating records is fast, so we do an update on every record, but commits are slow, so we only commit at the end of the batch. However, this can be optimized in different ways. There are many different ways to implement exactly-once semantics by storing offsets and data in an external store, but all of them will need to use the ConsumerRebalanceListener and seek() to make sure offsets are stored in time and that the consumer starts reading messages from the correct location. Earlier in this chapter, when we discussed the poll loop, I told you not to worry about the fact that the consumer polls in an infinite loop and that we would discuss how to exit the loop cleanly. So, let's discuss how to exit cleanly. When you decide to exit the poll loop, you will need another thread to call consumer.wakeup(). If you are running the consumer loop in the main thread, this can be done from a ShutdownHook. Note that consumer.wakeup() is the only consumer method that is safe to call from a different thread. Calling wakeup() will cause poll() to exit with a WakeupException, or if consumer.wakeup() was called while the thread was not waiting on poll, the exception will be thrown on the next iteration when poll() is called. The WakeupException doesn't need to be handled, but before exiting the thread, you must call consumer.close(). Closing the consumer will commit offsets if needed and will send the group coordinator a message that the consumer is leaving the group. The consumer coordinator will trigger rebalancing immediately and you won't need to wait for the session to time out before partitions from the consumer you are closing are assigned to another consumer in the group.
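The wakeup-and-exit pattern can be imitated without Kafka to see why it shuts down cleanly: a flag set from another thread breaks a blocking poll loop once the queued work is drained. This is only a sketch using JDK classes; WakeLoop and its method names are invented for illustration and are not part of the Kafka API.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Mimics consumer.wakeup(): a flag checked by the poll loop, so another
// thread (e.g., a shutdown hook) can break the loop cleanly.
class WakeLoop {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
    private final AtomicBoolean wakeup = new AtomicBoolean(false);

    void wakeupNow() { wakeup.set(true); } // safe to call from any thread

    void offer(String msg) { queue.offer(msg); }

    int runUntilWoken() throws InterruptedException {
        int processed = 0;
        while (true) {
            String msg = queue.poll(10, TimeUnit.MILLISECONDS);
            if (msg != null) { processed++; continue; } // process the "record"
            if (wakeup.get()) break; // plays the role of WakeupException
        }
        // In real code: commit offsets and close the consumer here.
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        WakeLoop loop = new WakeLoop();
        loop.offer("msg-1");
        loop.offer("msg-2");
        loop.wakeupNow(); // in real code this comes from the shutdown hook
        System.out.println("processed " + loop.runUntilWoken() + " before exit");
    }
}
```

The key property mirrored here is that the wake-up signal is the only thing touched from the other thread, while all cleanup happens in the polling thread itself.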
Here is what the exit code will look like if the consumer is running in the main application thread. This example is a bit truncated, but you can view the full example online.

Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        System.out.println("Starting exit...");
        consumer.wakeup();
        try {
            mainThread.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
});

...

try {
    // looping until ctrl-c, the shutdown hook will clean up on exit
    while (true) {
        ...
    }
} catch (WakeupException e) {
    // ignore for shutdown
} finally {
    consumer.close();
    System.out.println("Closed consumer and we are done");
}

The ShutdownHook runs in a separate thread, so the only safe action we can take is to call wakeup to break out of the poll loop. Another thread calling wakeup will cause poll to throw a WakeupException. You'll want to catch the exception to make sure your application doesn't exit unexpectedly, but there is no need to do anything with it. Before exiting the consumer, make sure you close it cleanly. As discussed in the previous chapter, Kafka producers require serializers to convert objects into byte arrays that are then sent to Kafka. Similarly, Kafka consumers require deserializers to convert byte arrays received from Kafka into Java objects. In previous examples, we just assumed that both the key and the value of each message are strings and we used the default StringDeserializer in the consumer configuration. In Chapter 3 about the Kafka producer, we saw how to serialize custom types and how to use Avro and AvroSerializers to generate Avro objects from schema definitions and then serialize them when producing messages to Kafka. We will now look at how to create custom deserializers for your own objects and how to use Avro and its deserializers. It should be obvious that the serializer used to produce events to Kafka must match the deserializer that will be used when consuming events. Serializing with IntSerializer and then deserializing with StringDeserializer will not end well. This means that as a developer you need to keep track of which serializers were used to write into each topic, and make sure each topic only contains data that the deserializers you use can interpret.
This is one of the benefits of using Avro and the Schema Repository for serializing and deserializing: the AvroSerializer can make sure that all the data written to a specific topic is compatible with the schema of the topic, which means it can be deserialized with the matching deserializer and schema. Any errors in compatibility, on the producer or the consumer side, will be caught easily with an appropriate error message, which means you will not need to try to debug byte arrays for serialization errors. We will start by quickly showing how to write a custom deserializer, even though this is the less common method, and then we will move on to an example of how to use Avro to deserialize message keys and values. Let's take the same custom object we serialized in Chapter 3, and write a deserializer for it:

public class Customer {
    private int customerID;
    private String customerName;

    public Customer(int ID, String name) {
        this.customerID = ID;
        this.customerName = name;
    }

    public int getID() {
        return customerID;
    }

    public String getName() {
        return customerName;
    }
}

The custom deserializer will look as follows:

import org.apache.kafka.common.errors.SerializationException;

import java.nio.ByteBuffer;
import java.util.Map;

public class CustomerDeserializer implements Deserializer<Customer> {

    @Override
    public void configure(Map configs, boolean isKey) {
        // nothing to configure
    }

    @Override
    public Customer deserialize(String topic, byte[] data) {
        int id;
        int nameSize;
        String name;

        try {
            if (data == null)
                return null;
            if (data.length < 8)
                throw new SerializationException("Size of data received " +
                    "by deserializer is shorter than expected");

            ByteBuffer buffer = ByteBuffer.wrap(data);
            id = buffer.getInt();
            nameSize = buffer.getInt();

            byte[] nameBytes = new byte[nameSize];
            buffer.get(nameBytes);
            name = new String(nameBytes, "UTF-8");

            return new Customer(id, name);
        } catch (Exception e) {
            throw new SerializationException("Error when deserializing " +
                "byte[] to Customer " + e);
        }
    }

    @Override
    public void close() {
        // nothing to close
    }
}

The consumer also needs the implementation of the Customer class, and both the class and the serializer need to match on the producing and consuming applications. In a large organization with many consumers and producers sharing access to the data, this can become challenging.
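The byte layout this deserializer assumes (a 4-byte ID, a 4-byte name length, then UTF-8 name bytes, matching the Chapter 3 serializer) can be exercised without a Kafka cluster. The following round-trip sketch is mine, not the book's; the class CustomerBytesDemo and its method names are invented for illustration.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Round-trips a customer through the byte layout the custom
// (de)serializer assumes: [int id][int nameSize][name bytes, UTF-8].
class CustomerBytesDemo {
    static byte[] serialize(int id, String name) {
        byte[] nameBytes = name.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(4 + 4 + nameBytes.length);
        buf.putInt(id);
        buf.putInt(nameBytes.length);
        buf.put(nameBytes);
        return buf.array();
    }

    // Returns {id, name} as strings, mirroring the deserializer's logic.
    static String[] deserialize(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        int id = buf.getInt();
        int nameSize = buf.getInt();
        byte[] nameBytes = new byte[nameSize];
        buf.get(nameBytes);
        return new String[] { String.valueOf(id),
                new String(nameBytes, StandardCharsets.UTF_8) };
    }

    public static void main(String[] args) {
        byte[] wire = serialize(42, "Jane");
        String[] back = deserialize(wire);
        System.out.println(back[0] + " " + back[1]); // 42 Jane
    }
}
```

Running a round-trip like this is also a cheap way to confirm that producer and consumer agree on the wire format before either touches a broker.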
We are just reversing the logic of the serializer here; we get the customer ID and name out of the byte array and use them to construct the object we need. The consumer code that uses this deserializer will look similar to this example:

Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092,broker2:9092");
props.put("group.id", "CountryCounter");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", CustomerDeserializer.class.getName());

KafkaConsumer<String, Customer> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("customerCountries"));

while (true) {
    ConsumerRecords<String, Customer> records = consumer.poll(100);
    for (ConsumerRecord<String, Customer> record : records) {
        System.out.println("current customer Id: " + record.value().getID() +
            " and current customer name: " + record.value().getName());
    }
}

Again, it is important to note that implementing a custom serializer and deserializer is not recommended. It tightly couples producers and consumers and is fragile and error-prone. A better solution would be to use a standard message format such as JSON, Thrift, Protobuf, or Avro. We'll now see how to use Avro deserializers with the Kafka consumer. For background on Apache Avro, its schemas, and schema-compatibility capabilities, refer back to Chapter 3. Let's assume we are using the implementation of the Customer class in Avro that was shown in Chapter 3.
In order to consume those objects from Kafka, you want to implement a consuming application similar to this:

Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092,broker2:9092");
props.put("group.id", "CountryCounter");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
props.put("schema.registry.url", schemaUrl);

String topic = "customerContacts";

KafkaConsumer<String, Customer> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList(topic));

System.out.println("Reading topic:" + topic);

while (true) {
    ConsumerRecords<String, Customer> records = consumer.poll(1000);
    for (ConsumerRecord<String, Customer> record : records) {
        System.out.println("Current customer name is: " + record.value().getName());
    }
    consumer.commitSync();
}

We use KafkaAvroDeserializer to deserialize the Avro messages. schema.registry.url is a new parameter. This simply points to where we store the schemas. This way the consumer can use the schema that was registered by the producer to deserialize the message. We specify the generated class, Customer, as the type for the record value. record.value() is a Customer instance and we can use it accordingly.
So far, we have discussed consumer groups, which are where partitions are assigned automatically to consumers and are rebalanced automatically when consumers are added or removed from the group. Typically, this behavior is just what you want, but in some cases you want something much simpler. Sometimes you know you have a single consumer that always needs to read data from all the partitions in a topic, or from a specific partition in a topic. In this case, there is no reason for groups or rebalances; you can just assign the consumer the specific topic and/or partitions, consume messages, and commit offsets on occasion. Here is an example of how a consumer can assign itself all the partitions of a specific topic and consume from them:

List<PartitionInfo> partitionInfos = consumer.partitionsFor("topic");

if (partitionInfos != null) {
    for (PartitionInfo partition : partitionInfos)
        partitions.add(new TopicPartition(partition.topic(), partition.partition()));
    consumer.assign(partitions);

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("topic = %s, partition = %s, offset = %d, customer = %s, country = %s\n",
                record.topic(), record.partition(), record.offset(), record.key(), record.value());
        }
        consumer.commitSync();
    }
}

We start by asking the cluster for the partitions available in the topic. If you only plan on consuming a specific partition, you can skip this part. Once we know which partitions we want, we call assign() with the list. Other than the lack of rebalances and the need to manually find the partitions, everything else is business as usual. Keep in mind that if someone adds new partitions to the topic, the consumer will not be notified. You will need to handle this by checking consumer.partitionsFor() periodically or simply by bouncing the application whenever partitions are added. In this chapter we discussed the Java KafkaConsumer client that is part of the org.apache.kafka.clients package. At the time of writing, Apache Kafka still has two older clients written in Scala that are part of the kafka.consumer package, which is part of the core Kafka module. One of these consumers is called SimpleConsumer (which is not very simple). SimpleConsumer is a thin wrapper around the Kafka APIs that allows you to consume from specific partitions and offsets. The other old API is called the high-level consumer or ZookeeperConsumerConnector.
The high-level consumer is somewhat similar to the current consumer in that it has consumer groups and it rebalances partitions, but it uses Zookeeper to manage consumer groups and does not give you the same control over commits and rebalances as we have now. Because the current consumer supports both behaviors and provides much more reliability and control to the developer, we will not discuss the older APIs. If you are interested in using them, please think twice and then refer to the Apache Kafka documentation to learn more. We started this chapter with an in-depth explanation of Kafka's consumer groups and the way they allow multiple consumers to share the work of reading events from topics. We followed the theoretical discussion with a practical example of a consumer subscribing to a topic and continuously reading events. We then looked into the most important consumer configuration parameters and how they affect consumer behavior. We dedicated a large part of the chapter to discussing offsets and how consumers keep track of them. Understanding how consumers commit offsets is critical when writing reliable consumers, so we took time to explain the different ways this can be done. We then discussed additional parts of the consumer APIs, handling rebalances and closing the consumer. We concluded by discussing the deserializers used by consumers to turn bytes stored in Kafka into Java objects that the applications can process. We discussed Avro deserializers in some detail, even though they are just one type of deserializer you can use, because these are most commonly used with Kafka. Now that you know how to produce and consume events with Kafka, the next chapter explains some of the internals of a Kafka implementation.
Write Once, Communicate… Nowhere? Back in August I made the off-hand comment "thank goodness for RXTX" when talking about communicating with serial ports using Java. I've been meaning to revisit that topic in a bit more detail, and now is as good a time as any. I can't decide if I love Java or hate it. I've written books on Java, produced a special DDJ issue on Java, and was a Java columnist for one of our sister magazines, back when we had those. I even edited the Dobb's Java newsletter for a while. There must be something I like. However, I always find I like it more in the server and network environment. For user interfaces, I tend towards Qt these days. For embedded systems, I haven't warmed to Java, even though there are a few options out there, some of which I will talk about sometime later this year. The lines, however, are blurring between all the different kinds of systems. You almost can't avoid Java somewhere. The Arduino uses Java internally. So does Eclipse. Many data acquisition systems need to connect over a network and Java works well for that, either on the device or running on a host PC (or even a Raspberry Pi). Another love-hate relationship I have is with the serial port. My first real job was with a company that made RS-232 devices, so like many people I have a long history with serial communications. We keep hearing that the office is going paperless and serial ports are dead. Neither of those seems to have much chance of being true anytime soon. Even a lot of USB devices still look like serial ports, so it is pretty hard to completely give up on the old standard, at least for now. It isn't that Java doesn't support the serial port.
Sun released the Java Communications API in 1997 and Oracle still nominally supports it. It covers using serial ports and parallel ports (which are mostly, but not completely, dead). The problem has always been in the implementation. When reading the official Oracle page, I noticed that there are reference implementations for Solaris (both SPARC and x86) and Linux x86. I guess they quit trying to support Windows, which is probably a good thing. The last time I tried to use the Windows port, it suffered from many strange errors. For example, if the library wasn't on the same drive as the Java byte code, you couldn't enumerate ports. Things like that. Pretty much everyone I know has switched to using an open project's implementation, RXTX. The Arduino, for example, uses this set of libraries to talk to the serial port (even if the serial port is really a USB port). The project implements the "official" API and requires a native library, but works on most flavors of Windows, Linux, Solaris, and MacOS. The project is pretty much a drop-in replacement unless you use the latest version. The 2.1 series of versions still implement the standard API (more or less), but they change the namespace to gnu.io.*. If you use the older 2.0 series, you don't have to even change the namespace from the official examples. Speaking of examples, instead of rewriting code here, I'll simply point you to the RXTX examples if you want to experiment. One thing I did find interesting, however, is that RXTX uses JNI to interface with the native library (which, of course, varies by platform). I have always found JNI to be a pain to use (although, you don't use JNI yourself if you are just using RXTX in your program). I much prefer JNA. JNA is another way to call native code in Java programs, and it is much easier to manage. Granted, you can get slightly better performance in some cases using JNI, but in general, modern versions of JNA perform well and typically slash development costs. 
I did a quick search to see if there was something equivalent to RXTX but using JNA. There is PureJavaComm. Even if you don't want to switch off RXTX, the analysis of the design of PureJavaComm at that link is interesting reading. Using JNA, the library directly calls operating system calls and avoids having a platform-specific shared library, which is a good thing for many reasons. Have you managed to dump all your serial ports? Leave a comment and share your experiences.
Asked by: Why ILMerge doesn't work if a DLL file contains a XAML user control?

To reproduce the error, follow the steps below.

1) Create a WPF User Control Library.

2) Create a default user control as follows:

<UserControl x:
<Grid>
<Label Content="UserControl2.xaml"/>
</Grid>
</UserControl>

3) Use ilmerge to create a test library:

ilmerge /lib /out:wpfTestLib.dll WpfUserControlLibrary1.dll

4) Add the wpfTestLib.dll to the references of another WPF window application and add the UserControl2 custom control:

<Window x:
<Grid>
<c:UserControl2/>
</Grid>
</Window>

5) You will get the following compiler error:

Could not create an instance of type 'UserControl2'.

I am using VS2008 and I have downloaded the latest version of ILMerge. Thus I wonder what went wrong?

Question

All replies

- You have your xml namespace defined as:

xmlns:c="clr-namespace:WpfControlLibrary1;assembly=tcTestLib"

This is saying to pull it from the assembly "tcTestLib.dll". However, with ILMerge, you created the assembly "wpfTestLib.dll". You'll need to change the corresponding XAML declaration, or it will not find the appropriate control.

Reed Copsey, Jr.

- Hi Reed, thanks for your reply. I made a typing mistake. Even though I change from tcTestLib to wpfTestLib, the error is still there. What seems very strange is that the intellisense of the VS2008 XAML editor shows "UserControl2" right after I finish typing "c:", which means the intellisense knows where to find the custom user control. However, I just can't get it to compile.

- Hi Microsoft team, I am not trying to be difficult. The reason I want to find out whether this is a bug is that I have a lot of XAML user controls which are all in separate project files. I simply want to find out if this is a bug or, maybe, there is a simple solution to fix it. Thanks

- Sarah, I've come across this problem myself in recent days.
From a brief bit of searching it seems ILMerge doesn't work flawlessly with WPF as it can't handle XAML resources. There are a few other applications suggested here: I don't know if you are still having a problem with this - I sort of am. I found that anything saved as a resource seems to appear in the new assembly under the oldassemblyname.g.resources. Take a look in Reflector. Normally you'd have the following:

<oldAssemblyName>.dll: \Resources\<oldAssemblyName>.g.resources

after the merge you have:

<newAssemblyName>.dll: \Resources\<oldAssemblyName>.g.resources

I've been able to load the XAML resources out (resource dictionary) by using the following:

Stream stream = assembly.GetManifestResourceStream(oldAssemblyName + ".g.resources");

// Read the found resource
// Find the correct key in the resource
using (ResourceReader resourceReader = new ResourceReader(stream))
{
    foreach (DictionaryEntry de in resourceReader)
    {
        if (string.Compare((string)de.Key, ResourcePath, true) == 0)
        {
            rd = (ResourceDictionary)XamlReader.Load((Stream)de.Value);
            break;
        }
    }
}

Where the ResourcePath is <oldAssemblyName>.g.resource. I do however still have one problem. I have a reference to a view model inside my resource dictionary, and this bombs out when loading the resource dictionary. I've added the schema as:

[assembly: XmlnsDefinition("", "ResourceSupplierDLL.ViewModels")]

Now, it's the way it bombs out that is funny. If I put my ResourceSupplierDLL.DLL as the first assembly to merge in ILMerge - it works. But if it's anywhere but first, I get the exception "public type TestViewModel cannot be found" - even though after loading the assembly the type does exist (checked it out by calling assembly.GetTypes() on my just-loaded assembly). Very odd problem indeed.

Ben
1. Introduction

First things first. Definitely this is not the only book on data structures and algorithms. There are many great books on the subject. I will mention a few of those. No links to any online shop will be given, as that would show my bias towards that store. The first book is the most authoritative book on the subject, which treats topics in great depth. It is "The Art of Computer Programming" by Donald E. Knuth. The book is available in several volumes. Volume 1 describes fundamental algorithms, 2 describes numerical algorithms, 3 details sorting and searching, and 4A deals with combinatorial algorithms. As of now only these volumes have been published. But these books are not for weak-hearted people and I really mean that. This series is very heavy on mathematics and implementation is done using MMIX, a computer designed by Knuth. However, it is a must-read for advanced readers. The second book is also a classic. It is "Introduction to Algorithms" written by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein. This book is also known as "CLRS". While Knuth is very deep, this book covers topics in breadth. Once again an excellent book, but examples are given in a pseudo programming language. There are many other introductory books on this subject with little difference in quality, and almost all of them are a good read. The subject of this book is data structures and algorithms. That involves three words: data, structure and algorithms. A computer stores and manipulates data, i.e. information. We use computers to store, manipulate and use data. We can present the same information in many ways, which is about structure. The data and structure determine what kind of operations, i.e. algorithms, can be performed on them. For example, we cannot perform addition on two character strings, but we can concatenate them using the + operator in an object-oriented language which supports operator overloading.
Thus, these words summarize the soul of computer programming and software. All programs which we use and all operations we do involve these three basic elements. Oh, you would say that I missed the word mathematics. Well, then you have been observant! That was deliberate. :) The mathematics treated in this book is separate from the main subject, but certain portions of the book, like computer graphics and computational geometry, will require a great deal of mathematics. Certainly the mathematics part could form a new book and can be read as such in isolation, but I am a fan of thicker books so I intend to make this one thick. Since I have no intention of getting it printed, that is all good. Such is the miracle of digital technology. There are specific structures which facilitate specific operations. For example, a stack allows access to data only from the top. A queue is helpful to realize a life-like queue. A linked list allows traversal in the forward or backward direction or both, but not random access. Binary trees are helpful for faster searches. Graphs can be used for path-finding and to solve network problems. Hash maps are very good for finding information quickly. These are just a few cases which I have cited. The area of data structures and algorithms is immense and ever expanding. This book is a natural successor of my first book on C99 programming. Examples will assume only knowledge from the C99 book.

1.1. Problem to Solution

Usually while programming you will face the situation where you have to write a program to solve a problem. You will end up using a few data structures and a few algorithms to solve that problem. This book describes the most common data structures and algorithms, which have evolved over several centuries of mathematical work. Given a problem, we build a model of the solution (read: program) in our mind. The details of that mental model vary from individual to individual.
Once that mental model is built, our brain's orientation is fixed to a certain way of thinking. Because of this we choose certain data structures and algorithms and try to find a solution using the data structures and algorithms chosen. Sometimes we are successful; other times we fail, maybe partially, maybe fully. The most important thing when we fail is to think over the problem again and get fresh thinking. So better take a break. This is the most serious advice I can give: whenever you have difficulty solving a problem for more than 30 minutes, take a break. This will reset your thinking, allow you to think in a new way, and let you try to find the solution afresh.

1.2. Abstract Data Types

While code is definitely the final goal, to reach there we make models in our mind. For example, as humans we can think of numbers without the restriction of range, but in the case of computers we are constrained by the amount of memory available. However, that does not stop us from making abstract models, because typically computers have enough memory available for storing numbers large enough for all practical purposes. An abstract data type specifies the logical properties of a data type. As you know, a data type like int, char or float represents a collection of values, and these types determine what operations can be performed on those types. For example, you should not perform multiplication on characters; although programmatically it is possible, it just does not make sense. Although other meanings can be given to multiplication of a character or string, i.e. repetition, that is certainly not multiplication. Thus, the collection of values and operations forms an abstract mathematical entity that can be implemented using hardware and software. This is what is known as an abstract data type. A formal way to put it: an ADT is a class of objects whose logical behavior is defined by a set of values and a set of operations.
From experience I know that beginners do not really care for ADTs and simply skip to the implementation part, which is not good. Unless you understand the concept at an abstract level, you will not be able to appreciate the semantics of the ADT, and as a result your implementation may suffer. When we define an abstract data type we do not worry about efficiency, whether in time or space. Since it is abstract, those worries come when we implement it. While defining an ADT, we are not worried about implementation details. At times it may not be possible to implement our ADT on a given piece of hardware or software system. For example, infinitely big numbers or strings cannot be implemented in hardware or software, as said earlier, because of memory limitations. But then again, for all practical purposes an ADT will be useful as a help while implementing our problem, provided you are willing to maintain the semantics. There are two main ways of defining an ADT, imperative and functional. The imperative way of defining an ADT is closer to programming languages like C and C++, which allow imperative programming techniques, while the functional style is better suited for functional languages like Erlang, Haskell and OCaml. However, I will deviate from the formal style a bit to make it easy for you to understand ADTs without knowledge of a programming language, so that you can evaluate them as mathematical models. Let us consider an example ADT; we will dissect it after:

ADT UnsignedInteger

Object: An ordered subrange of integers starting at zero and ending at a maximum value of UINT_MAX on the computer.
/* operations */
for all x, y in N (set of natural numbers) and true, false in Booleans

true  ::== 1 (UnsignedInteger)
false ::== 0 (UnsignedInteger)

Zero(): UnsignedInteger => 0

IsZero(x): Boolean => if (x == 0) IsZero = true else false

Add(x, y): UnsignedInteger =>
    if (x + y < UINT_MAX) Add = x + y
    else Add = (x + y) % (UINT_MAX + 1)

Equal(x, y): Boolean => if (x == y) Equal = true else false

Sub(x, y): UnsignedInteger =>
    if (x > y) Sub = x - y
    else Sub = positive of 2's complement with MSB not a sign bit

Mul(x, y): UnsignedInteger =>
    if ((x * y) < UINT_MAX) Mul = x * y
    else Mul = (x * y) % (UINT_MAX + 1)

Div(x, y): UnsignedInteger => Div = quotient of x/y

Mod(x, y): UnsignedInteger => Mod = remainder of x/y

You, my observant reader, would have noticed that this is not an ADT in its purest sense, because we have cared about hardware, i.e. assumed that it implements 2's complement. Your observation is correct. It is not a pure ADT, but I have tried to make sure that this ADT works on modern computers, which work on 2's complement. Certainly this will not work on systems like UNIVAC which implement 1's complement in hardware. However, that is not the important part. The important part is to learn how you specify an ADT so that it works. Let us try to understand what has been described in the ADT. This ADT describes unsigned integers much like those found in a statically typed language like C or C++. This ADT starts at 0 and ends at a specific value given by UINT_MAX. The value of UINT_MAX is not specified, as the optimum value depends on internal details of the hardware. Zero() is an operation which always returns zero. IsZero() is an operation which returns true if its argument is zero, else false. true and false have been specified with their typical Boolean notations of 1 and 0 respectively. Add() adds two unsigned integers if their sum is less than UINT_MAX; if it is more than that, the sum wraps around, which again is based on the behavior of the hardware.
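To make the specification concrete, here is a small sketch of the UnsignedInteger ADT. It is my illustration, not part of the book (whose examples use C99, while this sketch is in Java); the class name UnsignedIntegerAdt is invented, and UINT_MAX is arbitrarily set to 255 so the wrap-around behavior of Add, Sub and Mul is easy to observe.

```java
// A minimal sketch of the UnsignedInteger ADT, using long internally so
// that sums and products up to UINT_MAX * UINT_MAX fit without overflow.
class UnsignedIntegerAdt {
    static final long UINT_MAX = 255; // assumption: an 8-bit unsigned range

    static long zero() { return 0; }

    static boolean isZero(long x) { return x == 0; }

    // Add wraps around modulo UINT_MAX + 1, as in the ADT specification.
    static long add(long x, long y) {
        return (x + y) % (UINT_MAX + 1);
    }

    static long mul(long x, long y) {
        return (x * y) % (UINT_MAX + 1);
    }

    // Sub models the 2's-complement wrap-around when y > x.
    static long sub(long x, long y) {
        return ((x - y) % (UINT_MAX + 1) + (UINT_MAX + 1)) % (UINT_MAX + 1);
    }

    public static void main(String[] args) {
        System.out.println(add(250, 10)); // wraps: 260 % 256 = 4
        System.out.println(sub(3, 5));    // wraps: 254
        System.out.println(mul(16, 16));  // wraps: 256 % 256 = 0
    }
}
```

Notice that the sketch, like the ADT, fixes the semantics of overflow explicitly instead of leaving it to whatever the hardware happens to do.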
Now I leave it up to you to figure out the rest of the ADT.

1.2.1. An ADT for rational numbers¶

To reinforce the ideas of how to specify an ADT, let us consider another example: rational numbers. A rational number is a fraction whose decimal expansion either terminates or repeats upon division, for example, \(\frac{1}{2}, \frac{3}{7}, \frac{7}{9}\). Thus we see that numerators and denominators are integers. We also have to consider the sign of these rational numbers, which may be positive or negative, and which we will represent using a character data type. For completeness let us define a minimalistic character abstract data type as well.

ADT Character
Object: A character on an English PC-104 keyboard which can fit in 8 bits.

    character ::== a-z, A-Z, 0-9, `~!@#$%^&*()-_=+[{]}\|;:'",<.>/?, (space), (TAB), (return)
    c is one of the characters above.

    value(c): UnsignedInteger => if (c is in a-z) return 0-25
                                 else if (c is in A-Z) return 26-51
                                 else if (c is in 0-9) return 52-61
                                 else return the sequential value of c among the remaining
                                 characters in the list above

I have kept the Character ADT minimalistic, just enough to serve our typical usage. (space) denotes the space bar on your keyboard, (TAB) is the tab key and (return) is the return key. The first three commas in the character list are just field separators. We have defined characters in terms of integral values so that we can store them in memory, because memory can contain only sequences of bits; characters as such cannot be stored in memory. This also allows us to apply equality to two characters, as well as the other operations which can be applied to integers, though I have left those for you as an exercise. Let us define our rational number ADT now.

ADT Rational
Object: A rational number which has a finite numerator and denominator.

/* Operations */
for all n1, n2, d1 and d2 as UnsignedInteger with d1 != 0 and d2 != 0; true and false are our usual Booleans; s1 and s2 are signs represented as the Characters '-' and '+'.
    rational ::== <numerator, denominator, sign> where numerator and denominator are
                  UnsignedInteger, denominator != 0, and sign is a Character '+' or '-'

    MakeRational(n, d, s): Rational => return <n, d, s>

    IsEqual(n1, d1, s1, n2, d2, s2): Boolean =>
        if ((n1*d2 == n2*d1) && (s1 == s2)) return true
        else return false

    Greater(n1, d1, s1, n2, d2, s2): Rational =>
        if (s1 == s2)
            if (n1*d2 > n2*d1) return <n1, d1, s1>
            else return <n2, d2, s2>
        else if (s1 == '+') return <n1, d1, s1>
        else return <n2, d2, s2>

    Add(n1, d1, s1, n2, d2, s2): Rational => ...
    Sub(n1, d1, s1, n2, d2, s2): Rational => ...

    Mul(n1, d1, s1, n2, d2, s2): Rational =>
        if (s1 == s2) return <n1*n2, d1*d2, '+'>
        else return <n1*n2, d1*d2, '-'>

    Div(n1, d1, s1, n2, d2, s2): Rational =>
        if (s1 == s2) return <n1*d2, d1*n2, '+'>
        else return <n1*d2, d1*n2, '-'>

It is not at all hard to understand the rational number ADT, and I think it is self-explanatory. I have used this informal style of ADT description for now, but when describing data structures I will stick to a more formal style. One particular operation I would like to point out is IsEqual. Usually two values of an ADT are equal when they have equal representations, but rational numbers can be equal even when the literal fractions are not identical; rather, different fractions are equal when they have the same value in reduced form, for example, \(\frac{1}{2}, \frac{2}{4}, \frac{3}{6}\). Now that we have learned small bits of how to define an ADT, let us turn our attention to more important philosophical questions.

1.2.2. Advantages of ADTs¶

1.2.2.1. Encapsulation¶

An ADT guarantees the properties of and the operations on itself. This assures the programmer of the ADT that only so much is needed to satisfy the requirements posed by the ADT. The implementation may be complex, but it is abstracted behind a very simple interface definition. Thus a great deal of abstraction is achieved for the user of the ADT. As the programmer of an ADT we worry only about satisfying the interface and property requirements of the ADT and nothing more.

1.2.2.2.
Localization of Change and Isolation from Implementation¶

As a user of an ADT we are not worried if the implementation changes internally, as long as the ADT's interface does not change. Since the implementation must adhere to the interface defined by the ADT in question, we as users get a guarantee that we are isolated from the implementation. Thus a change in the implementation of an ADT does not warrant a change in our code as ADT users. For example, a car's accelerator, brake and clutch are always in the same positional order irrespective of changes to the mechanics inside. As you can see, changes in implementation are localized to the implementation details and users of the ADT are not affected, which decouples the ADT's implementation from its usage and allows parallel work on both sides.

1.2.2.3. Flexibility¶

An ADT can be implemented in different ways, as you will see soon when I present implementations of queues and stacks using both an array and a linked list. The users of those queues and stacks are free to switch between the two implementations as they see fit, because the interfaces of the ADTs remain the same. This allows us to use different implementations as required by our problem, giving us flexibility and efficiency.

1.2.3. Complexity Considerations for an ADT¶

As I have said, while defining an ADT we are not worried about the performance characteristics of the implementation. However, there are two schools of thought: one faction thinks that performance guarantees should not be part of an ADT, while the other thinks that an ADT should guarantee a minimum on performance in terms of memory and time. One proponent of the latter view is Alexander Stepanov, the author of the STL. I think complexity considerations should be part of ADTs, because based on these guarantees we can choose what we will use and what we will not; in that sense, you can say that I agree with the opinion of Stepanov.
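To make the flexibility and encapsulation arguments concrete, here is one possible C realization of the rational-number ADT from section 1.2.1. It is only a sketch under our own naming: the struct layout and field names are illustrative, and a different implementation (say, one that stores fractions in reduced form) could satisfy the same interface without users noticing:

```c
#include <assert.h>
#include <stdbool.h>

/* One possible concrete representation of the Rational ADT:
   the triple <numerator, denominator, sign>, with sign '+' or '-'.
   A sketch only: no reduction to lowest terms, no overflow handling. */
typedef struct { unsigned n, d; char s; } Rational;

Rational MakeRational(unsigned n, unsigned d, char s) {
    assert(d != 0);                 /* the ADT forbids denominator == 0 */
    Rational r = { n, d, s };
    return r;
}

/* Cross-multiplication avoids reducing fractions:
   n1/d1 == n2/d2  iff  n1*d2 == n2*d1 (and the signs agree). */
bool IsEqual(Rational x, Rational y) {
    return x.n * y.d == y.n * x.d && x.s == y.s;
}

Rational Mul(Rational x, Rational y) {
    return MakeRational(x.n * y.n, x.d * y.d, x.s == y.s ? '+' : '-');
}

Rational Div(Rational x, Rational y) {
    assert(y.n != 0);               /* cannot divide by zero */
    return MakeRational(x.n * y.d, x.d * y.n, x.s == y.s ? '+' : '-');
}
```

Because callers only ever see `MakeRational`, `IsEqual`, `Mul` and `Div`, the representation could later switch to reduced fractions, or to a numerator/denominator pair of wider integers, without breaking any user code.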
In the trivial ADTs we have seen, I have omitted the complexity considerations, but before we can include them we have to include them in our analysis. Before we can really introduce complexity considerations into our ADTs, we have to learn what complexity is and how we evaluate it. And before we can learn how to compute the complexity of an algorithm, let us have a concrete definition for it.

1.3. What is an Algorithm?¶

The word algorithm comes from the name of the 9th century Persian Muslim mathematician Muḥammad ibn Mūsā al-Khwārizmī. "Algorism" originally referred only to the rules of performing arithmetic using Hindu-Arabic numerals, but evolved via the European Latin translation of al-Khwārizmī's name into "algorithm" by the 18th century. The Latin translation of his book on Hindu reckoning (in English, "Al-Khwarizmi on the Hindu Art of Reckoning") carried the term "Algoritmi" in its title, which led to the term "algorithm". Usage of the word evolved to include all definite procedures for solving problems or performing tasks.

The question is: what is an algorithm? An algorithm is a finite sequence of well-defined operations on some input data which produces an output in a finite amount of time and requires a finite amount of space to hold its data, and which can be presented in a well-defined formal language for the evaluation of a function.

The concept of an algorithm has existed for millennia; however, a partial formalization of what would become the modern algorithm began with attempts to solve the Entscheidungsproblem (the "decision problem") posed in 1928 by David Hilbert, a great mathematician born in Prussia. Coincidentally, John von Neumann (father of the modern computer architecture and inventor of merge sort) was his assistant for some time. The genius of Hilbert and von Neumann is well known.
Hilbert's problems, a list of 23 problems, have fueled much of the mathematical research of the 20th century, while von Neumann contributed to the development of computers, of nuclear bombs (both uranium and hydrogen bombs), as well as of ICBMs. Subsequent formalizations were framed as attempts to define "effective calculability"; those formalizations included the Gödel-Herbrand-Kleene recursive functions, Alonzo Church's lambda calculus, Emil Post's "Formulation 1", and Alan Turing's Turing machines. We will study lambda calculus and Turing machines later in this book.

Algorithm A (Euclid's algorithm). Given two positive integers a and b, find their greatest common divisor, i.e. the largest positive integer which evenly divides (leaves remainder 0 after division) both a and b.

A1. [Find remainder.] Divide a by b and let r be the remainder (we will certainly have \(0\le r<b\)).
A2. [Is it zero?] If r = 0, terminate execution; b is the GCD.
A3. [Exchange values.] Set a = b and b = r. Go to step A1.

I will use the letter A for algorithm; algorithms will carry a monotonically increasing positive-integer suffix. When these algorithms are later referenced, a hyperlink will refer back to the algorithm. Some algorithms will also have flowcharts given for them. For example, given below is the flowchart for Euclid's algorithm.

Euclid's algorithm as a flowchart.

Let us see a C99 program which evaluates the GCD of two numbers. Given below is the sample code.

    #include <stdio.h>

    int main()
    {
        int a = 0, b = 0, r = 1;

        printf("Enter two positive integers separated by space:\n");
        scanf("%d %d", &a, &b);

        while (r != 0) {
            r = a % b;
            if (r == 0)
                break;
            else {
                a = b;
                b = r;
            }
        }
        printf("GCD is %d\n", b);
        return 0;
    }

Note that the terminating condition for our program is that the remainder becomes zero. Typically we initialize variables with the value 0 in C99, but in this case r must start non-zero so that the loop is entered. Now let us look at some desirable properties of an algorithm.

1.4.
Complexity of an Algorithm¶

There can be several algorithms achieving the same effect on a particular set of data. However, two methods may have different requirements in time: one may take more, less or equal time compared to the other. We definitely always want an algorithm which consumes less time. But time may not be the only constraint; sometimes we may be bound by the amount of memory available to us, which may forbid us from using those algorithms which consume more memory even though they run faster. So there are two types of complexity in question: time and space. Before we proceed, let us familiarize ourselves with some special mathematical functions and constants.

If we consider the harmonic sequence defined by the equation \(H_N = 1 + \frac{1}{2} + \frac{1}{3} + ... + \frac{1}{N}\), then it relates to the definite integral \(\ln N = \int_1^N \frac{dx}{x}\) as in the following picture.

Harmonic Series Representation

As you can see, the harmonic sequence is given by the total area of the bars in the image, while the natural logarithm is the area under the curve. The formula \(H_N \approx \ln N + \gamma + \frac{1}{2N}\) is a very good approximation for the harmonic sequence, where \(\gamma\) is the Euler-Mascheroni constant, also known as Euler's constant after the Swiss mathematician Leonhard Euler (Lorenzo Mascheroni was an Italian mathematician).

Similarly, for the Fibonacci sequence 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, ... the nth term is given by the recurrence relation \(F_N = F_{N-1} + F_{N-2}\) for \(N \ge 2\), where \(F_0 = 0\) and \(F_1 = 1\). These Fibonacci numbers have many interesting properties, and the Fibonacci series also occurs in nature. For example, have a look at the following image of a yellow chamomile (courtesy, Wikipedia):

In this picture, the yellow chamomile shows arrangements in 21 (blue) and 13 (aqua) spirals.
Similarly, the arrangement of seeds in a sunflower depicts a similar pattern, as shown below:

A geometrical representation of the Fibonacci numbers is given by the golden spiral. The sizes of the squares are 1, 1, 2, 3, 5, 8, 13, 21 and 34. The spiral converges at the intersection of the two blue lines (P), and the ratio of the lengths of these two lines BP:PD is \(\phi\), the Golden Ratio. In geometry, a golden spiral is a logarithmic spiral whose growth factor is the golden ratio. The polar equation for a golden spiral is the same as for other logarithmic spirals, but with a special value of the growth factor \(b\): \(r = ae^{b\theta}\). An alternative image is given below (courtesy Wikipedia):

Another occurrence of the Fibonacci numbers is in rose petals, as shown below:

There are two ways in which you can categorize complexity. The first categorization is by resource consumption: we measure the consumption of memory and of CPU. Memory consumption is known as space complexity and CPU consumption is known as time complexity. The second categorization is by the method with which we measure the complexity of an algorithm. One popular method is Big-O notation, denoted by \(O\). Big-O notation focuses on an upper bound on the algorithm's cost for huge data sets (sizes tending to infinity) and is thus known as asymptotic complexity. Another, less popular, method is amortized complexity. Amortized analysis is not concerned with the worst-case cost of a single operation but rather with the average running time over a sequence of operations. In an algorithm an operation may be costly, but its frequency may be low; amortized analysis takes this fact into account and balances the complexity value accordingly. Thus we can safely say that Big-O notation is a guarantee, while amortized analysis is a probabilistic, or rather more practical, way of deducing running time.
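A standard illustration of amortized analysis is the growable array: a push that triggers reallocation costs O(n), but doubling the capacity spreads that cost over the n cheap pushes that preceded it, making each push O(1) amortized. A minimal sketch (the `Vec` type and function names are our own, not from any library):

```c
#include <assert.h>
#include <stdlib.h>

/* A growable array of ints.  The doubling strategy below is what makes
   vec_push O(1) amortized even though an individual push may be O(n). */
typedef struct { int *data; size_t size, cap; } Vec;

void vec_init(Vec *v) { v->data = NULL; v->size = 0; v->cap = 0; }

void vec_push(Vec *v, int x) {
    if (v->size == v->cap) {                  /* rare, costly step */
        v->cap = v->cap ? v->cap * 2 : 1;
        v->data = realloc(v->data, v->cap * sizeof *v->data);
        assert(v->data != NULL);              /* sketch: abort on OOM */
    }
    v->data[v->size++] = x;                   /* common O(1) step */
}
```

Over n pushes the total copying work is at most 1 + 2 + 4 + ... + n < 2n elements, so the average (amortized) cost per push is bounded by a constant, exactly the kind of guarantee Big-O analysis of a single worst-case push would miss.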
Since it is much easier to compute Big-O complexity we will focus on it, but from time to time I will also introduce amortized complexity in the analysis of algorithms where suitable. Unfortunately, it is much harder to compute average-case or amortized complexity: to compute average-case complexity one must make assumptions about the distribution of inputs, which may not match realistic inputs, and realistic inputs are not easily represented as a mathematical model. On the other hand, worst-case complexity is quite acceptable and universally accepted, though a paper by Paul Kemp shows what can go wrong with worst-case complexity. Note that worst-case complexity comes into the picture for very large inputs; thus an algorithm demonstrating better worst-case complexity is not necessarily the better algorithm for real-world programs.

Big-O is a member of a larger family of notations called Landau notation, or Bachmann-Landau notation (after Edmund Landau and Paul Bachmann). Mathematically it tells us how closely a finite series approximates a given function, especially for a truncated Taylor series or asymptotic expansion.

There is one more classification of algorithms, based on whether the data is available to the algorithm in its entirety or not. If the algorithm requires that all data be available before it can work, it is known as an offline algorithm. Algorithms which do not require the entire data to be available, and work on part of the data at a time, are known as online algorithms. Clearly, as you can fathom, online algorithms will have better performance than offline algorithms. If the ratio of the performance of an online algorithm to that of its offline counterpart is bounded, the online algorithm is called competitive. Also, a point worth noticing is that not every online algorithm has an offline counterpart. Now let us try to understand what big-O notation is and how to compute it.
Consider two functions \(f(x)\) and \(g(x)\), defined on a subset of the real numbers. In big-O notation \(f(x)\) is written in terms of \(g(x)\) as \(f(x) = O(g(x))\) if and only if there is a positive constant \(K\) such that, for all sufficiently large values of \(x\), \(|f(x)|\) is at most \(K\) multiplied by \(|g(x)|\). That is, \(f(x) = O(g(x))\) if and only if there exist a positive real number \(K\) and a real number \(x_0\) such that

\(|f(x)| \le K|g(x)| ~\text{for all}~ x \ge x_0.\)

Typically we do not state explicitly that we are concerned with the growth rate as \(x\) goes to \(\infty\); we simply write \(f(x) = O(g(x))\). The notation can also be used to describe the behavior of \(f\) near some real number \(a\) (often, \(a = 0\)): we say \(f(x) = O(g(x)) ~\text{as}~ x \rightarrow a\) if and only if there exist positive numbers \(\delta\) and \(K\) such that

\(|f(x)| \le K|g(x)| ~\text{when}~ 0 < |x - a| < \delta.\)

If \(g(x)\) is non-zero for values of \(x\) sufficiently close to \(a\), both of these definitions can be unified using the limit superior: \(f(x) = O(g(x)) ~\text{as}~ x \rightarrow a\) if and only if

\(\limsup_{x\rightarrow a}\left|\frac{f(x)}{g(x)}\right| < \infty.\)

To explain how we compute \(O(n)\), let us see an example. Consider a polynomial function with all positive coefficients, say \(f(x) = a_0x^n + a_1x^{n-1} + a_2x^{n-2} + ... + a_{n-1}x + a_n\). For \(x \ge 1\) we can very safely say

\(f(x) \le (a_0 + a_1 + ... + a_n)x^n.\)

Therefore we can say \(f(x) = O(x^n)\), with \(K = a_0 + a_1 + ... + a_n\) and \(x_0 = 1\).

1.4.1. An Alternative Definition¶

For two functions \(f(x)\) and \(g(x)\) and a constant \(K \in I\!R^+\): \(f(x) = O(g(x))\) if \(\lim_{x\rightarrow \infty} \left(\frac{g(x)}{f(x)}\right) = K\).

Given below is a plot of some of the most common functions encountered in algorithm analysis; note that the plot of \(\log(x)\) is barely visible. As you can clearly see, \(\log(x) < x < x\log(x) < x^2 < x^3 < 2^x\).

As \(O\)-notation gives an upper bound, similarly \(\Omega\)-notation gives a lower bound: \(g(n) = \Omega(f(n))\) means there exist two constants \(L\) and \(n_0\) such that \(g(n) \ge L|f(n)|\) for all \(n > n_0\). If we want to express the exact order of growth without being precise about the constant factors \(L\) and \(K\), then we use \(\Theta\)-notation.
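The polynomial bound above can also be checked numerically. In the sketch below the coefficients are purely illustrative (3, 5, 7, 2 are our own choice), and the witness constant is their sum, so that \(f(x) \le K \cdot x^3\) for every \(x \ge 1\):

```c
#include <assert.h>
#include <math.h>

/* f(x) = 3x^3 + 5x^2 + 7x + 2, with illustrative coefficients only. */
double f(double x) { return 3*x*x*x + 5*x*x + 7*x + 2; }

/* Witness constant from the text: the sum of the coefficients.
   With K = 17 and x0 = 1 we exhibit f(x) = O(x^3). */
double big_o_witness(void) { return 3 + 5 + 7 + 2; }
```

The bound is tight at x = 1 (both sides equal 17) and becomes increasingly loose as x grows, which is fine: big-O only promises an upper bound, not a good one.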
\(g(n) = \Theta(f(n)) \Leftrightarrow g(n) = O(f(n))\) and \(g(n) = \Omega(f(n))\), implying that \(\Theta\)-notation gives both upper and lower bounds. Note that the \(\Omega\) used here is Knuth's definition, not the Hardy-Littlewood definition; Knuth has commented on why he changed it. The Hardy-Littlewood version of \(\Omega\) is given below:

\(f(x)=\Omega(g(x))\;(x\rightarrow\infty)\;\Leftrightarrow\;\limsup_{x\rightarrow\infty}\left|\frac{f(x)}{g(x)}\right|>0\)

In computer science, for algorithm analysis, we are almost always concerned with big-\(O\) complexity because it gives an upper bound, i.e. it tells us how the algorithm will behave for a large set of inputs. The complexities of algorithms form various functions; I presented graphs of some common ones above for quick understanding, and by looking at them you can quickly deduce which complexity fares well and which does not. I am also presenting a table of running times for various complexities, for a computer which executes \(10^9\) instructions per second; looking at it, you can appreciate the algorithms with better running times. The function \(2^x\) rises so fast that its running time soon exceeds the number of years the sun is going to survive. As you can clearly see, \(O(\log\log x) < O(\log x) < O(x) < O(x\log x) < O(x^2) < O(x^3) < O(2^x) < O(x!)\).
Note that for an algorithm performing at a complexity of \(O(x^2)\) we may, as an approximation, also state its complexity as \(O(x^3)\). It is permissible to state a complexity using a worse bound, but never a better one; the statement merely becomes slightly less accurate. The reason is that a function with worse complexity is still an upper bound of the better function, and thus our promise of giving an upper bound holds true. If you know calculus and coordinate geometry, you will notice that the worse function has a higher rate of growth, i.e. the slope of the curve, \(\frac{dy}{dx}\) or \(\frac{df(x)}{dx}\) where \(y=f(x)\): for any sufficiently high value of \(x\), the slope, or rate of growth, will be higher for the worse-performing function. For example, for \(O(x^2)\) compared with \(O(x)\), that value of \(x\) is 1; it can be obtained by equating the two functions and solving for \(x\). The problem with big-\(O\) notation is that even though it guarantees the existence of the two constants \(K\) and \(x_0\), it tells us nothing about how large they are, and for practically sized inputs those constants can matter more than the asymptotic bound itself. Algorithms which were very important a few decades ago may now be important merely for academic reasons. In case you prefer graphs to mathematical functions, below I present all three big notations.
https://www.ashtavakra.org/data-structure-algorithms/introduction/
Not without a bit of manual work... since 0.96 introduced a change to the fs layout for the namespaces, you have to add the extra namespace folder that is missing. Assuming that all snapshots are exported under /my-backup-dir, you have to do something like:

$ mv /my-backup-dir/.archive /my-backup-dir/archive
$ mkdir /my-backup-dir/archive/default
$ mv /my-backup-dir/archive/* /my-backup-dir/archive/default

to rename the ".archive" dir and add the "default" namespace dir that will contain all the table files.

Matteo

On Tue, Jul 29, 2014 at 11:15 AM, oc tsdb <oc.tsdb@gmail.com> wrote:
> Hi,
>
> We are planning to move our cluster from hbase 0.94.14 to 0.98.2.
>
> Our query is -
> If we have backup taken(using snapshots) on hbase 0.94.14, can we restore
> the same backup on newer version of hbase 0.98.2?
>
> Thanks
> oc.tsdb
>
http://mail-archives.us.apache.org/mod_mbox/hbase-user/201407.mbox/%3CCAFb6Hi+FStV0UWuhoTtBOiMT_BcYvysP8iRidXzCfppE4E9QzA@mail.gmail.com%3E
2006 Everett Moore

In designing our cars we must, first, consider the purpose and exactly where they will be driven. Unless you are designing a street-legal, licensed vehicle, your prime purpose...

Olds continued this design into production, while Ford's Quadricycle was only used to prove that a gasoline engine and four wheels could provide a means of transportation.

As the piston travels in the cylinder, it changes linear speed while the crankshaft speed remains constant. The movement of the piston is slowest at the top and bottom of its stroke and fastest when the crank is at right angles to the cylinder (Fig. 3).

In the steering mechanism, just the opposite happens. Treating the tie rod as a constant linear motion, we see how the steering arm is affected most when at 90 degrees to the spindle. As it passes 90 degrees in either direction, its rotary motion starts slowing down in reference to the linear motion of the tie rod.

In our example, the tire OD is 20.5 in. When going straight, the tread width (with "0" toe-in) is 46 in., both front and rear. If we measure it in the turn, we see that the tread width at the rear measures 44.5 in. while the front measures 47.5 in. Look at your street car. Turn the wheels full to the left. Now look at the wheel configuration and see an example of this.

Remember, we said the kingpins were 40 in. apart. Let's say, for example, that you constructed your front axle to these dimensions. You might assume the steering arms extending from each front spindle to be at right angles (90 degrees) to the axle. This would mean that you also made your tie rod 40 in. This would result in a front axle setup where the front wheels were always parallel, resulting in their centerlines not intersecting the rear axle centerline at the same point when turned as previously done. Refer to Figure 4 on the next page for an illustration: while the tie rod moves, both spindles react the same. The front wheels, while always remaining parallel to each other, will each turn to a different turning radius, resulting in one or the other scooting sideways, and in tire wear.

What can be done to achieve the front wheel configuration we see in Figure 3? With both steering arms positioned inward, when a left turn is made as in Fig. 3, the left wheel will turn a bit sharper than the right wheel and thus maintain the same 46 in. tread width (Fig. 5). While I'm sure there are some Einstein formulas that would give us the angle we need...

From the beginning, the Ford Model T had about the simplest and most economical steering boxes made. Restorers have almost depleted the junk yards of this item that was very popular with the home builders of past years. It utilized a planetary gear arrangement, located just under the steering wheel, that provided a reduction. Still a bit "squirrely" to some, it was a far cry better than a straight, go-kart type shaft/pitman arm with no mechanical advantage.

Our design is based around a Peerless differential available from Northern Tool. The area where a sprocket would normally go has a 4" pitman arm attached. This can be altered to a shorter arm if desired. Anyway, here's a simple 2:1 steering that helps a bit. Try it, you'll like it! See the illustrations on the next page.

Fig. 1 and Fig. 2 (dimensioned drawings; make from 3/16" steel plate, secure to frame).

Figure 1, above, depicts Pete Burger's neat way of making the camber adjustable on a typical "homebuilt" front axle. The top of the kingpin is supported by a 5/8" ball-type rod end. By adjusting the nuts holding it, the camber can be fine-tuned as desired. All the homebuilts, including my own, have accepted the camber that resulted in the finished axle. With the play in the kingpin and wheel bearings, it's easy for the front end to look a bit spraddle-legged. There's a lot of differing opinion as to the need for camber and caster adjustment on these slow-moving cars.

When I built the Quadricycle I at first had "0" caster. The steering was a bit squirrely and didn't want to track forward. I adjusted approximately 3 degrees positive caster and it made a world of difference, with the car going "hands off" for quite a distance. I added a feature, shown in Figure 2, to Pete's idea: by making the top mounting hole off 5/32" to the rear of the car, we get 3 degrees positive caster. Try these ideas, you'll like them!

The dimensions were measured from actual hubs and are approximate. On the next page we have provided a list of Worksman part numbers and the current prices. The phone number goes directly to their order desk, where you can order direct with your credit card.

With 3/4 ID ball bearings, 3.5 wide hub:
  Part No. 326A (without tire & tube) ..... $49.33
  Part No. 329A (without tire & tube) ..... $52.34
  Part No. 173, 3/4 x 1-3/8 replacement flange bearing ..... $3.28

With 3/4 bore with 3/16 keyway, 3.5 hub, complete with Kevlar tire and PR tube:
  Kevlar, diamond tread, heavy duty tire
  Heavy duty puncture resistant tube

Note: Worksman Cycles has been in business for years, manufacturing commercial grade bicycles for industrial use. The wheels they manufacture are of much higher quality than the import wheels we have been using. The free-turning wheels are equipped with much better bearings and have 11 ga (.120 dia) spokes, cross-laced with adjustable nipples. Their power driven wheels come equipped with hubs featuring a 3/4 dia bore with 3/16 keyway. In addition to the 20 and 26 inch wheels shown on the above list, they also have 24 inch wheels available. Their tires are available in white-wall; however, those are not Kevlar belted.

Adapting a Worksman hub P/N 326A (20") or P/N 329A (26") to Jimmy Woods' design: the drawing above shows the details of an adapter plate (1/4 DIA, 4 holes equally spaced) to fit a sprocket to the wheel. This arrangement would mount flat sprockets to a Jimmy Woods' design drive wheel, and should adapt the Worksman wheel to all existing Woods carriages. Plain sprockets, made to be welded to a sprocket hub, can be found at Surplus Center; see our Links page for a link to their catalog. The No. 40 sprockets come with a 2" dia hole. This requires the adapter being mounted to the wheel with flat-head socket-head capscrews.

The new Dual Drive Wheel that Worksman is willing to make for our requirements is shown in Fig. 2. This has a 3/4 wide insert welded on both sides. The keyways are aligned in a welding fixture to assure the key stock will line up and go all the way through the hub. Remember, when using this hub, we must allow for the spokes protruding approximately 5/32 beyond the width of the hub. Above, in Fig. 2, we present a cut-away drawing of the new wheel: this new hub has two 3/4 wide inserts, giving us 1-1/2 inch of total keyway contact, which should be sufficient for power transmission to the rear wheels. I believe that 1-1/2 inch of keyway exposure in each of the two rear wheels should handle the average power we apply. Please note: to get the price given on page 6, they must be ordered in pairs. And for those of you who doubt this, on...

Make from Worksman wheels 78SA (26") or 1008A (20") by removing the bearing cups. Weld in a keyed insert made from a Surplus Center 1" shaft coupling, Item No. 1-1563-E, by machining the OD to match the ID of the hub (1-11/16 ref) and cutting the length to match the width of the hub (3-5/8 ref).

Fig. 5, above, is the detail drawing of the 1 inch dia modification. Fig. 6, next page, continues our discussion about modifying for a 3/4 inch dia keyway axle. In this illustration we are showing only one Surplus Center coupling; note how it fails to span the entire width of the hub. For ease of welding, we recommend 4 plug welds, equally spaced around the hub as shown. To drill these plug holes, a right-angle attachment will be necessary for your drill to reach through the spokes. These attachments are a common and inexpensive item at most good hardware stores. To use this modified wheel, a spacer or washers can be used to expand the area of the axle not covered by the welded-in coupler.

Make from Worksman wheels 326A (20") or 329A (26") by removing the bearings. Weld in a keyed insert made from a Surplus Center 3/4 shaft coupling, Item No. 1-1563-C, by machining the OD to match the ID of the hub (1-3/8 ref). Plug weld (4 places).

There are more variations possible, but we have presented the more basic ideas. Hopefully these will fulfill your needs or, at least, serve as inspiration to come up with your own custom design. If you do, please send us the details so we can share it with others.

20 x 2.125 wheel with bearing cups (see Fig. 3 for detail), without tire, tube & strip: P/N 1008A ..... $59.89
26 x 2.125 wheel with bearing cups (see Fig. 3 for detail), without tire, tube & strip: P/N 78A ..... $59.89

_______________

— More on Steering Geometry —
by Bob Kapela

We have all read various articles about steering geometry on our self-built vehicles: words like "camber", "toe-in", "caster", and lately, the "Ackerman principle". In my opinion, because of our (limited) ability to make super-precision setups, and the slow (12 to 15 mph) speeds of our vehicles, there is only one principle that we should always use when building our vehicles, and that is the "Ackerman principle", and this is why (first, it is assumed that you have constructed the front axle properly, square and tight, with reamed kingpin bushings so there is no "slop", etc.):

When you are at the point of attaching the lever arms to the spindles to make the wheels turn, it is a natural thought to position the lever arms straight back, because it "looks right". Now, when you have the tie rod attached and turn the wheels, both will turn the same number of degrees. However, when you decide to make a turn, the inside wheel must travel in a smaller circle than the outer wheel. If you were turning on snow, ice, or sand, for example, and looked back at the tracks after you turned, you would see that the tracks were not "undisturbed": one or both wheels skidded or dragged a bit through the turn. If you were on blacktop or concrete, the tires may squeal a little, or you may notice that one or both tires are being "pulled" away from the rim during the turn.

About one hundred years ago, an early automotive engineer named Ackerman (among others) found that by simply mounting the lever arms so they each point exactly to the center of the rear axle (on our cars, approximately 15 degrees), it would make a dramatic improvement in the steering. Here is how it works: due to the geometry of the Ackerman setup, when you turn the wheels to the extreme in either direction, the inside wheel will be turned up to (10) degrees more than the outside wheel. This allows the inside wheel to freely turn in a smaller circle than the outer one. This greatly reduces the strain on the steering gear, makes the turn easier for the operator, and makes the turns much smoother. This is a very simple thing to do when setting up your steering and costs nothing; it is highly recommended.

Camber, toe-in, and caster on vintage vehicles:

"Camber" is where the spindles are "cambered" in about (2) degrees, that is, the front wheels are closer together at the bottom than they are at the top. The idea of cambering is to reduce steering effort, as the center of the tire at the ground will more closely intersect an imaginary line drawn through the spindle bolt (or kingpin).

"Toe-in" is used to offset the wearing action of the camber on the tires. The tires are set up to be slightly closer together at the front than at the rear. The term "gather" is sometimes used instead of toe-in.

"Caster" is where the kingpin (spindle bolt) is set up with the bottom inclined slightly forward. A straight line drawn through this will...

— How to improve an existing Azusa go-kart steering spindle —

In the last issue I promised to address the "sloppy" spindle that a lot of us are using. We learned how to make an alteration to add the Ackerman angle to the steering arm. Now let's try to take out some of the slop in the kingpin.

The existing spindle tube is drilled at 11/16. Drill this out to .750 dia. x 3/4 deep at each end. Into this hole, press a bronze bearing, 5/8 x 3/4 x 3/4 long, at each end. This will be a snug fit with our new kingpin. This would be a good time to add a grease zerk fitting: drill and tap a 1/4-28 hole through the side of the spindle tube, about halfway between the new bushings. Now you can lubricate the kingpin.

As Bob Kapela stated, the body of most Cat 5 bolts will mike a minimum of .006 undersize. This, added to the miserable little nylon bushings, is horrible. You can replace the 5/8" kingpin bolts with a shoulder bolt of the correct length. If you're using the Northern garden cart wheels on your spindles, there's so much slop in the bearings, coupled with the undersized spindle, that there's not much we can do there. But, at least, the kingpin is tightened up.

— Everett Moore

— BRAKES — from Art Chevalier

The photo above shows Art's first design of his front wheel brakes. For his disc brakes, Art used brakes salvaged from Cessna aircraft. We're getting photos from others who are also adding front wheel brakes; they will be featured in later issues.

How we get "Standards"

Does the expression "We've always done it that way" ring any bells? The US standard railroad gauge (distance between the rails) is 4 feet 8.5 inches. That is an exceedingly odd number. Why was that gauge used? Because that's the way they built them in England, and English expatriates built the original US railroads. Why did the English build them like that? Because the first rail lines were built by the same people who built the pre-railroad tramways, and that's the gauge they used. Why did "they" use that gauge then?

...were all alike in the matter of wheel spacing. The United States standard railroad gauge of 4 feet, 8.5 inches is derived from the original specifications for an Imperial Roman war chariot. And bureaucracies live forever. So the next time you are handed a specification and wonder what horse's ass came up with it, you may be exactly right, because the Imperial Roman war chariots were made just wide enough to accommodate the back ends of two war horses. Now the twist to the story... There's an interesting extension to the story about railroad gauges and horses' behinds.
Because When we see a Space Shuttle sitting on its the people who built the tramways used the launch pad, there are two big booster rockets same jigs and tools that they used for build- attached to the sides of the main fuel tank. ing wagons, which used that wheel spacing. These are solid rocket boosters, or SRBs. The SRBs are made by Thiokolat at their factory So.....Why did the wagons have that particu- at Utah. The engineers who designed the lar odd wheel spacing? Well, if they tried to SRBs might have preferred to make them a use any other spacing, the wagon wheels bit fatter, but the SRBs had to be shipped by would break on some of the old, long distance train from the factory to the launch site. The roads in England, because that's the spacing railroad line from the factory happens to run of the wheel ruts. through a tunnel in the mountains. The SRBs had to fit through that tunnel. The tunnel is So who built those old rutted roads? Imperial slightly wider than the railroad track, and Rome built the first long distance roads in the railroad track is about as wide as two Europe (and England) for their legions. The horses' behinds. roads have been used ever since. So, a major Space Shuttle design feature of Roman war chariots formed the initial ruts, what is arguably the world's most advanced which everyone else had to match for fear of transportation system was determined over destroying their wagon wheels. Since the two thousand years ago by the width of a chariots were made for Imperial Rome, they horse's ass. Issue No. 50 Engine and Wheels Page 39 — Front Wheel Brakes — Chain drives, properly-sized and installed, maintenance engineer at Ford, I could get a are a very reliable and inexpensive way to $50,000.00 project approved for a new con- operate a power transmission system. 
Setup veyor chain installation, based on a the meas- your drive correctly, maintain it, and enjoy urements of a couple of 10 link sections of the rewards of trouble-free operation for your chain alone. That is how reliable and recog- efforts. nized this test is. To further explain this, for example, your size #40 chain has 1/2" (.500") (For effect, I will describe some extreme pitch. The chain, usually, wears faster than situations, found in industrial operations, the sprocket teeth, which remain at proper that operate mostly non-stop, for extended .500" pitch longer, unless the sprockets are periods.) very soft. As chain-wear progresses, it's true pitch distorts , measured across several links You should only have to readjust your from .500" to .505", then .510", etc..There chains once per year or season. If you find comes a point when you cannot properly that your chains need adjusting frequently, adjust the chain to make up for this wear. As this is something you cannot afford to ignore. you tighten the chain, it will start to climb up It could mean that your drive is under-engi- on the sprocket teeth. Replace long before neered and needs upsizing from size #35 or this. #41 to size #40 or larger. If you are already at size #40, and still have problems, you have to Other factors that accelerate chain wear do additional troubleshooting to see what is include: combinations of a very heavy causing the problem. Where did you get the machine or one that has more than normal chain? If it is very inexpensive chain from resistance to rolling (this would usually cause some hard to pronounce country, this could be engine overheating), a machine carrying a the problem. "Made in America" still means heavy load, pulling a trailer, extended opera- something; use high quality, name brand tion in sand or soft ground, machine is over- chain. Is the chain dry and shiny and does it powered, or engine not running smoothly. kind of "snap" around the sprockets? 
This indicates that the chain is dry and needs Again, avoid half or "offset links", remem- lubrication. The proper way to relubricate a ber that a chain is only as strong as it's weak- chain is to remove it and soak it in medium est link. weight oil overnight. When re-installed, how- ever, centrifigul force may throw some oil on Chain takeups or tensioners are meant to your driveway and the underside of the keep tension on the slack side of chains only machine.When you have the chain off, hold it beween periodical mechanical adjustments. in your hands and see if there is significant They are nice, but not a cure-all and certain- slop between each individual link. This will ly not meant to compensate for unlimited indicate the amount of wear. Lay the chain chain wear and stretch. down full length and count the number of links. Then, compare the extended length of Check your sprockets. In industrial use, the chain section with a brand new section Engineers commonly specify sprockets with with the same number of links. The extra hardened teeth. They resist wear and main- length of the old chain will soon tell you tain proper profile for much longer than plain if/when it is time to re-chain. When I was a spockets (double this life). They are readily Everyone that builds a vintage-type repli- and tight? Is the brake linkage reliable? Is ca, like ours, is proud of the accomplishment. refueling convenient? The list goes on and on. No matter if you followed purchased plans I wish we had standardized-guidelines that exactly, modified them to fit your needs or we could all follow, maybe that will come used your own ideas from start to finish; you sometime. 
should keep in mind that a "robust", good looking machine being your goal, you should There is one law of nature with which I also stay within certain guidelines, to ensure think we are flirting and that is the law of that the history of your machine is always a gravity, in our application, it is the center of pleasant one. gravity and inertia. I am not a physicist, but my imagination goes to work if I see a narrow There are a lot of building parameters for track machine, with a seat high above the which to be aware and designers that present engine, and (to fit two adults more comfort- plans should use these parameters. They ably) the seat widened beyond the vehicle's include, but are not limited to, wheel tread design. A person, alone, operating a machine (width), seat width, seat height, maximum like this, sitting close to the outside, clipping steering angles, maximum-operating speed, along at a good pace, suddenly turning brake arrangements, etc. These parameters sharply, has inertia that wants to keep going help to assure that we build a safe vehicle, forward. Just be aware of this and build but there are many more where we simply accordingly. have to use our good judgment: How good is our speed control (throttle) setup?, Does it I am building my third machine at this have a positive-return to idle? Is there a "kill" time, and am implementing improvements switch, easy to access? Is there an fire that are not on the second one, and, certainly extingisher on board? Is the steering smooth not, on the first one. I, frequently, refer to the 1.000 .600 1.625 Dia 1.000 DIA 7 1 (ref) 1/4 std keyway 16 Driving Insert Make from Surplus Center X1B Hub, Item No. 2343 A .80 Outer Pilot Ring Hub Driving Insert Section A-A After assembly of driving insert and hub, A line drill 4 - .250 Dia holes as shown. Press in 4 - 3/4 lg Roll Pins flush with inner diameter. A Hub Grease Seal Above is an assembled wheel, cut-away at They come in kits, which include 2 wheel the spokes. 
You will note the conical shaped setups plus the master cylinder assembly, disk that clamps the spokes firmly to the hub which even includes the brake pedal! with 8 - 5/16 bolts. The results in a wheel that would match most originals for strength. More about hydraulic brakes in a future E & W newsletter. 2. The entire wheel including shrinking and I realize that we have moved up and fastening the rim is approx $110.00 beyond the garden cart wheel beginner’s car- riages and are talking about a carriage that 3. The tire, tube and rim strip should run could cost over $2,000 to build. But compared under $50.00 each. to other hobbies, we’re still in the economic This gives us a total of $214.00 per wheel ballpark if you consider what some people ready to mount onto your carriage. spend on golf, hunting, fishing, boating, ham radio, photography, flying RC and real air- We did not factor in the shipping cost planes, etc. My problem is I like to do all the which will vary depending upon where you above on a Tiddly-Wink budget! live. The biggest shipping is the UPS from Witmers, located in Pennsylvania. I hope you agree that the wait for this issue has been worth it. Another expense not factored in is the machined-adapters for the driving wheels. This will depend on whether you machine your own, have a friend do them for you or go Any questions? Please email me. the expensive route at a machine shop at Everett Moore evmoore80@msn.com $50.00/ hour. The above drawing depicts a “tool” I made to surface. I, also, have used it to change front mount my tires to the rim. Since I have a tires on a riding lawnmower and mount tires 140# anvil, I simply designed it to fit the on bicycle type wheels. This handy little tool hardie hole (square). Otherwise, a flange works so well I only wish I had made one could be welded to mount to some other solid years ago! Issue No. 
50 Engine and Wheels Page 52 — Extended Hub Caps to Fit Witmer Hubs — 2.0 1.5 2.0 1.750 x 18 Thd The drawing above depicts an extended holes to accept a spanner wrench for ease of hub cap necessary to clear the rear axle. installing and removing hubcaps. Besides, they look more correct than the flat If you use a different rear axle arrange- brass hubcaps made for buggys. ment, it might not extend far enough not to These are machined from a piece of 2” dia use the regular brass hubcaps available from brass stock. For simplicity, we use two blind Witmers. — Getting Started — by Bob Kapela Do you want to build and enjoy a replicar? make a note of certain articles for future ref- Maybe you have read all of Everett's publica- erence, and keep it handy. Experts tell us tions and have followed the group's progress that we do not retain a real high percentage through the E-mail messages, etc., and of what we read, and time dilutes that even maybe you have even obtained plans, but just more. There is valuable information available haven't "broken ground". If it is any consola- in the publications, with many good articles tion to you, be assured that many others, and photos of machines, completed, and in including this writer have had initial reser- various stages of construction. Sit down vations before putting the first two pieces of beforehand and make a written plan, stating steel together. After "breaking the ice," your goals in building your replicar. State though, it starts to get more interesting and what kind of machine you want to build, how you will find that building your replicar you are going to power it, and the general lay- starts to stimulate your thought processes to out. This can guide you to a good start and the point that you can't wait to get back out you can refer to the plan throughout con- in the shop and make some more progress, struction. Most small businesses, and all everyday. large corporations have written plans of Do it right. 
Even if you have read every action and goals. If this group were to some- issue of "Engine and Wheels", do it again, day organize and form a club, we would have — "Provenance" - "Place of origin" — small group of people that build replica cars. Bob Kapela Who can foresee the future? What if this changes if the right conditions happen? We Watching "Antiques Road Show" a few could organize into a national club, become days ago, my interest picked up when I charter members of it, publish a monthly noticed a pattern in the expert appraiser's magazine. Who knows? valuation of an item if it had "provenance". My point is: include provenance as part of An example would be a Civil War pistol; by your vehicle's history. Get a nice heavy duty itself it may be valued at, say, $2000.00. With folder from one of the office supply places. provenance, like an included diary or photo of Put any pictures you took during construc- the original owner, together with the person's tion or plans you used to build the machine or military unit's history, the value placed on inside the folder. If you were in a parade(s) the item may double. and there was a picture of you and your vehi- How does this apply to us? We are still a cle in a local publication, put it in. Above all, A quick fix is to add more weight to the If your engine is driving a hydrostatic shaft. Use a double groove pulley for extra transmission, you may not experience as weight. If shaft length permits, add a chain much trouble, since the belt and driven-pul- sprocket in addition to the pulley. With a ley is adding to flywheel weight. sprocket of about 6 inches diameter, it should Gerry Williams is at the tiller while yours truly hitches a ride. I have so many grand kids in the area that want to drive Grandpa’s cars, that I had to thumb a ride! 
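The 10-link measurement test from the chain-drive article above reduces to simple arithmetic: compare a worn section's pin-to-pin length with its nominal length (number of links times pitch). Here is a small illustrative sketch of that calculation (the function name and sample figures are mine; only the .500"/.505"/.510" pitch progression comes from the article):

```python
def chain_elongation_percent(measured_length, links, pitch=0.500):
    """Percent elongation of a roller chain section.

    measured_length: pin-to-pin length in inches across `links` links
    pitch: nominal pitch -- 0.500" for the #40 chain in the article
    """
    nominal = links * pitch
    return (measured_length - nominal) / nominal * 100.0

# A 10-link section of #40 chain should measure 5.000" when new.
# The article's worn pitches of .505" and .510" work out to
# roughly 1% and 2% elongation respectively:
print(chain_elongation_percent(5.05, 10))
print(chain_elongation_percent(5.10, 10))
```

Per the article, by the time the average pitch reaches .510" (about 2% stretch) the chain is climbing the sprocket teeth, so replace well before that point.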
— Amish Buggy Brakes —

At the present time several builders are concentrating on the use of hydraulic brakes on our replica carriages. Bob Kapela and I are using the brake kit intended for use on Amish buggies. They are high quality and relatively inexpensive, with the additional feature of fitting the wood-spoke wheels mentioned earlier in this issue.

In my study of these brakes, I was surprised to see a lot of Amish buggies have the drums installed on the front axle, as opposed to the rear. Some of you who live where the Amish are prevalent, or who have seen the Harrison Ford movie, "Witness," may have, also, noticed the placement of brakes.

In a recent conversation with Eldon Witmer, at the Witmer Coach Shop, I asked about this and learned that, yes, the Old Order Amish, who use steel tires on their buggy wheels, do place the hydraulic brakes on the front wheels. The reason being that the weight transfer lessens the ground contact on the rear wheels and they simply slide as sleigh runners. However, on buggies used by the "English" and others, equipped with rubber tires, the brakes go on the rear wheels. With rubber tires, the front wheels would transfer too much braking torque to the front axle and possibly damage the fifth wheel, where the axle pivots to steer.

It is interesting where we "rocket scientists" have to go to learn what's really happening! This is something to consider for those who plan to add brakes to the front axle.

[Wiring Schematic — labels: right and left head lamps, right and left tail/stop lights, alternator, light switch, engine starter, starter solenoid, stop light switch, to mag., spare fused circuit, horn fuse and switch, battery (+/-), ignition switch, fuses]

So modified, the hub would now fit onto a go-kart type rear axle, where the outer 2 inches are machined to .750 and threaded. These axles provide the least expensive way to make rear axles. (Caution: Don't use one from Northern, as they are imported, poorly made and then painted, making them oversized. Use an Azusa axle, available from Manufacturer's Supply.)

What you're seeing here is from future plans that will incorporate an enclosed rear axle, complete with hydraulic brakes. We feel an improvement is presented on this page. Simply leave the outer race intact and, in place of the outer pilot ring previously used, machine a dummy outer bearing as depicted in the drawing above. This eliminates the tight machining required by the pilot ring. Without words, the assembly drawing below shows the dummy bearing in place.

At some stage of building a replica horse- …

Fig. 1

… their being dimmed, however, sufficient to function as tail lamps. When the brakes are applied, a SPST switch is closed. This shunts out the 4 ohm resistor and delivers the full 12 volts to the tail lamps, making them brighter. A source for this switch is a motorcycle shop that carries a lot of aftermarket products. They will, usually, have an inexpensive switch designed for motorcycle operation.

Advanced Wiring Techniques for Replica Antique Cars

[Wiring figure residue — dimensions: 0.450 ø, 0.220, 5.59, 11.43, 1.670, 42.42; schematic labels: turn signal switches and flasher, head lamps, tail lamps, relay RY 1, engine kill, alternator, starter and solenoid, horn and switch, fuses, light switch, battery, 4 ohm resistor; Figs. 1, 3 and 4]

Referring back to Fig 1, you will notice that all ground connections are depicted by a triangle symbol. In an all-metal-body car, most of these connections could be made to the body itself. However, many of our carriages …

Proper eye protection has to be our number one concern. We can still function with nine fingers, but without our eyes, it's white cane time. Eye shields and goggles are relatively inexpensive — get plenty and have them at each work station, readily available. Besides having eye protection, getting into the habit of using them must be developed. We are, usually, enthused about what we're doing, and it's easy to be concentrating on the job at hand so hard that we start mak- …

A magnetic switch, outwardly, operates the same with one exception: it has an electro-magnet that is energized when the switch is turned to ON. The switch contacts are held closed until you turn the switch off, or loss of electric power de-energizes the magnet, thus breaking the contacts. This switch will remain OFF until you purposely turn it on again. Second to a magnetic switch, use self-discipline and always unplug your saw when …

Engine model-number prefixes: 917 Roper / AYP; 143 Tecumseh; 358 Poulan / Weed Eater; 200 Tecumseh Two-Cycle.

Sears carries a large amount of repair parts and, if you have the model number, you can go online and look at assembly drawings and order needed parts. They are not always cheap, but at least they have some of them. You'll find riding lawn tractors in one of two conditions: 1) It will run, can be started …

Later, you find that you can get it running with a minimum of expense for a new fuel filter, spark plug(s) and replacing the kill wire, as the old wire had a worn spot in the insulation, allowing it to contact the tractor frame. You have a smile from ear to ear. So does the guy you bought it from. With his $20 he has stocked the fridge with his favorite brew and his wife is now happy! Well, in real life, things don't always go that good. However, the point being made is to never pay more than you can recover if you bring home a pile of junk. Now, if you're looking at a push mower …

I first learned of the flywheel/starting problem from corresponding with a builder who was using a Tecumseh engine. From other input, it seems they lead the list in cost-cutting measures to be able to compete. I realize that many of you are using Tecumseh and I hate to bad-mouth your choice of engines. In this case, I have no first-hand experience with this brand of engine and am merely reporting information that I have received from others.

Engines removed from push-type mowers, or even new engines designed as replacement engines for these mowers, all have the blade coupled to the vertical engine shaft, with an adapter and cap screw into the end of the crankshaft. The attached blade acts as an added flywheel on the engine. Remember the model airplane engine you had (or may still have)? You didn't dream of starting it without the propeller attached. The propeller acted as a flywheel. If you put this engine in a model boat, a small flywheel had to be added.

Some engine manufacturers, in order to make their engines cheaper, have lightened up the flywheel and use the presence of the rotary blade to complete the flywheel requirement. Now, when we replace the blade with a small V pulley, we are not adding much "flywheel." To compensate, always use a cast iron pulley and, if the length of the shaft permits, add something in addition to the flywheel, such as a larger cast iron pulley (unused) or even a large brake disc - anything to add "flywheel."

This is a good place to discuss brands of engines. Today, we are blessed with a huge selection of engines from which to choose. I consider Kohler the Cadillac of small engines. You'll even find hydraulic valve lifters on them. Couple this with solid-state ignition timing, and about all the servicing necessary is the normal — check the spark plug(s), air cleaner and oil. No point or valve-lifter adjustment required!

One more thing before I close this article — alternator output. On engines thus equipped, you'll find 3 amp, 10 amp and 15 or more amp alternators. When judging a used lawn tractor, look to see if it has an electric clutch to stop the cutting blade. Since this requires more current to keep the clutch engaged, it is common to find the engine, also, has a higher-output alternator. From my own, personal, feelings generated from experience and input from others, such as this recent email: … There's more that could be written on this subject. However, with these few bits of wisdom, I think I'll save more for later.

… totally collapsed and allowed the rear axle to dig into the pavement, would the carriage have remained upright, skidding sideways to a stop? I, personally, think it's time to rethink our sense of values when it comes to using the Northern garden cart type wheels. Yes, the Worksman wheels cost more, but not nearly as much as a trip to the emergency room at a hospital! I would give odds that had James used Worksman wheels, the spokes would not have pulled from the rim. Their stronger rims might not have folded up either. Would this have prevented a roll-over? Possibly — and certainly not by the rear axle digging into the pavement! Our carriages, like the originals we copy, all have excessively high centers of gravity. This, coupled with a relatively narrow tread width, gives the perfect formula for a roll-over when turning at excessive speeds.

A bicycle-type, spoked wheel is designed to take a great radial (perpendicular to axle) load. When a bicycle or motorcycle turns, it is leaned into the turn, proportional to the radius of the turn. This allows the load on the wheel to remain radial, with little side load. However, when the same wheel is placed on a 4-wheel vehicle, there is no leaning when steering, therefore placing a side load on the wheel. When the speed of the vehicle is kept reasonable, the side load will seldom exceed the strength of the wheel. However, when speed is increased, the side force on the wheel increases on a non-linear curve. This is why the leading designers in our hobby are constantly preaching against building "fast" carriages. We all agree that a maximum of 12 mph is satisfactory.

— Yes — Women May Have Influenced Today's Automotive Design ! ! Men's Shirts …
https://id.scribd.com/document/374625505/Horseless-Carriage-Builders-Handbook
In particular, Python is a slightly higher-level language than its main competitor, Perl, and is completely object-oriented, rather than having object and scoping infrastructure added as an afterthought to the original language design. Perl hackers will rant about how their language's quirkiness and flexibility make it more powerful than Python, but in point of fact, while it's true that Python will not let you stuff an elephant into a purse, only the most lofty gurus would want to anyway.

Lately I've been very happy with Ruby - it does all of what Python does and eliminates the mandatory indentation issues. Ruby is a more purely object-oriented language than Python, as everything is an object, even the primitive types. That having been said, Python still gets a place of honor in my tool belt, and that's unlikely to change in the foreseeable future.

Oh, and nobody mentioned that it's named for Monty Python, not a snake.

I originally learned Python because it was easy to learn, in particular in comparison with Perl, and because a friend encouraged me to. I continue to use Python because, for many of the things I need to do, it is simply the quickest way to get the job done. In general, folks who hate Python despise the whitespace thing, but once you get over it, you realize it makes it infinitely easier to read other people's code. There have been several suggestions to imitate braced block structures on the Python site.

I use Python for everything except when I need to do a lot of string-munging or I need to run the script/program as setuid root (for which Perl is the ultimate programming language). The Perl interpreter will allow the script to run as the root user, where the Python interpreter won't. In order to build Python scripts to run as root for every user, you have to build a shim in some other language (such as C) that does the switch to root and then calls the Python script. Also, writing Python extensions is a pain in the ass.
There is no requirement to declare variables, so it is possible to mistype the name of a variable and inadvertently create a new one, or try to refer to a non-existent one. This leads to bugs that are hard to find, especially as much checking is deferred until run time and only happens if that code is executed.

The whitespace indentation scheme works for small code segments and a few levels of indentation, but for large blocks of code and lots of nested blocks it can be hard to keep track. Of course functions should be used, but see the next issue.

There is a high execution-time overhead for calling functions. Various articles have been written to explain how to write faster Python code, but the sad fact is that often the best speed-up is to move a function body into the body of a loop. This exacerbates the previous issue and, even then, Python is not suited to computationally intensive tasks.

… by Eric S. Raymond for a somewhat simplified version of mine (Listing 1; Listing 2).

This is the genealogy of the programming language Python: [genealogy chart lost in extraction]. This genealogy is brought to you by the Programming Languages Genealogy Project. Please send comments to thbz.

This node will address only one area of comparison between programming languages: readability. Do not, however, think that this sort of comparison is superficial and not valuable. Some estimates put code maintenance at almost fifty percent of total cost in a development project. Going back to code you wrote months ago, it can often be quite difficult to remember what's going on in your program. This is especially true if the code was written by another person. While good comments will always help, the truth remains: ugly code is hard to work with.

Readability is an area in which Python excels. To help give you an idea of this, I've written several short programming exercises in both Java and Python. Now, to be fair, I'm not really comparing apples to apples here.
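Both complaints above, the silent creation of misspelled variables and the per-call overhead of functions, are easy to demonstrate. A small sketch in modern Python 3 (all names here are mine, not from the writeup):

```python
import timeit

# Complaint 1: assigning to a misspelled name silently creates a new
# variable instead of raising an "undeclared variable" error.
def add_to_total(total, amount):
    totol = total + amount   # intentional typo for "total"
    return total             # the update was silently lost

# Complaint 2: a mistyped *read* only fails at run time, and only if
# that branch actually executes.
def maybe_log(value, verbose=False):
    if verbose:
        print(mesage)        # typo for "message": hidden until verbose=True
    return value

# Complaint 3: per-call overhead. Inlining a tiny function's body into
# the loop usually beats calling it on every iteration.
def square(x):
    return x * x

def with_calls(n):
    return sum(square(i) for i in range(n))   # one call per element

def inlined(n):
    return sum(i * i for i in range(n))       # same work, no call

print(add_to_total(10, 5))   # prints 10, not 15 -- the bug hid silently
print(maybe_log(1))          # fine; maybe_log(1, True) raises NameError
print(timeit.timeit(lambda: with_calls(10_000), number=100))
print(timeit.timeit(lambda: inlined(10_000), number=100))
```

On most interpreters the inlined loop times measurably faster, though recent CPython releases have narrowed the gap considerably.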
Java is certainly the more powerful language, and one would expect it to be more complex. However, Java's syntax and formatting is similar to that of C and C++. I would eventually like to flesh this node out with examples comparing Python to Perl, but alas, my ability with Perl is not sufficient at this time. Until then, I give you the samples below:

Console Test:

Java:

    public class ConsoleTest {
        public static void main(String[] args) {
            for (int i = 0; i < 1000000; i++) {
                System.out.println(i);
            }
        }
    }

Python:

    for x in xrange(1000000):
        print x

Hash Test:

Java:

    import java.util.Hashtable;

    public class HashTest {
        public static void main(String[] args) {
            for (int i = 0; i < 1000; i++) {
                Hashtable x = new Hashtable();
                for (int j = 0; j < 1000; j++) {
                    x.put(new Integer(i), new Integer(j));
                    x.get(new Integer(i));
                }
            }
        }
    }

Python:

    for i in xrange(1000):
        x = {}
        for j in xrange(1000):
            x[j] = i
            x[j]

List Test:

Java:

    import java.util.Vector;

    public class ListTest {
        public static void main(String[] args) {
            for (int i = 0; i < 1000; i++) {
                Vector v = new Vector();
                v.addElement("a");
                v.addElement("b");
                v.addElement("c");
                v.addElement("d");
                v.addElement("e");
                v.addElement("f");
                v.addElement("g");
                for (int j = 0; j < 1000; j++) {
                    v.addElement(new Integer(j));
                    v.elementAt(j);
                }
            }
        }
    }

Python:

    for i in xrange(1000):
        v = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
        for j in xrange(1000):
            v.append(j)
            v[j]

IO Test:

Java: [listing body garbled in extraction]

Python:

    f = open('scratch', 'wb')
    for i in xrange(1000000):
        f.write(str(i))
    f.close()

[Jargon File entry text lost in extraction] --The Jargon File version 4.3.1, ed. ESR, autonoded by rescdsk.

Beauty.

Python is a term used to refer to one of the main collaborators on the BBC sketch show Monty Python's Flying Circus. The original Pythons were Terry Gilliam, Terry Jones, John Cleese, Eric Idle, Graham Chapman and Michael Palin. Often used in the context of "the n surviving Pythons", since one of them is dead and the rest will inevitably follow.
Often used in the context of "the n surviving Pythons", since one of them is dead and the rest will inevitably follow. When n=0, we will have lost the last of a unique group of comedians who changed the face of British comedy. Python can also be used as an abbreviated form of the name Monty Python's Flying Circus.

The Python node, perhaps more than any other, speaks volumes about the readership of E2: IT'S A FUCKING SNAKE, IS IT NOT?

"Trust in meeeee, trust in me..."

Native to India, Africa and Asia, the Python is a constrictor; not venomous, it literally squeezes the life from its prey before swallowing it whole. As such, it plays a valuable role in its ecosystem by policing the population of rodents, insects and other pests. Despite its fearsome reputation, the Python is endangered due to the taste of the western world for shoes and wallets made from its skin, of which it can produce a huge amount (the Python, the world's largest snake, can grow up to ten metres long). In its natural environment, the Python may live for twenty years. The Python uses its extraordinary killing technique sparingly. Interestingly, despite having strength enough to kill a large mammal, combined with the benefit of camouflage, its instinctive response when challenged is to retreat rather than risk itself in a fight. Although the Python prefers the dense rainforest as a home, its ability to crawl, climb and swim makes it adaptable to a range of environments. Pythons have been reported as a problem in large cities. Unsurprisingly, it makes a controversial pet. The female Python is unusual among reptiles in that she incubates her young as would a chicken. In some cultures, also a euphemism for "penis" in the context "purple headed power python". UPDATE: It has been pointed out (by Razhumikin) that the wording of the above comments may lead people to believe the Python to be a species of snake. This is not, in fact, the case. The term "python" refers to a family of species, or genus.
enkidu dwells on the matter further by adding: In your clarifying statement, you should probably not refer to a genus as a "family of species", given that there is also a class called "family" which is a family of genuses. But I'm doing it too, since there's also a class called class, which is a family of orders...

General Python: Contrary to common belief, these three groups (pythons, boas and anacondas) do not represent the (evolutionarily) oldest groups of snakes; that would be the Typhlopidae or "Blind Snakes". The whole group is non-venomous, killing its prey by constriction. This type of constriction does not mean that they choke their prey to death by closing off its trachea or windpipe. It means that the snake throws a loop (or a number of loops, depending on the size of the prey relative to the snake) of its body around the chest of its prey, and slowly tightens this loop, making it impossible for the prey to inhale. This of course results in the death of the prey, after which the loop wound around the prey's body is tightened once more, to break the prey's ribs in order to make it easier to swallow. The largest snake in the world (in length, not weight) is the Reticulated Python, which can grow up to about 10 metres long (longer than a double-decker bus!), although other species (such as the Anthill Python, Antaresia perthensis) stay very small (adults grow to about 60 cm). The heaviest snake in the world, by the way, is the Anaconda (Eunectes murinus), which can weigh over 250 kilos and is unable to support its own weight on dry land (this is not a member of the Pythonidae family, though). At the time of this publication, nine writeups in this node pertain to the programming language, three pertain to the genus of non-fictional snakes, one is a dictionary entry, and one pertains to the performers in Monty Python's Flying Circus.
This node is missing a critical piece of literature, namely the very origin of the word "Python!" Ovid, Hyginus, Robert Graves, Erwin Rohde, and Karl Kerenyi have all written extensively about the creature and the myths associated with him, but the earliest known mention of the dragon-like beast Python is in the Homeric Hymn to Apollo (which Homer probably did not write). According to the story, a serpentine monster named Python was terrorizing all of Greece, in pursuit of Leto, a lesser goddess of the dark night sky. Python was a hideous creature whose body was so vast that it could flatten entire acres of cities and fields. Its bite, breath, and presence were poisonous, radiating an evil miasma into the areas around it. The reason for its violence was that Hera was wrathful toward Leto for lying with Zeus, and Hera dictated that Leto would not be allowed to give birth to her twins (Artemis and Apollo) anywhere that the sun shines. Python pursued everywhere that Leto fled, and many smaller islands actually "swam away" from Leto's efforts to escape to them: the islands both feared the monster's carnage and destruction, and Hera's anger. Leto finally found herself on the shore of the island Delos, and Delos did not initially flee her or turn her away, because it considered itself far too inhospitable for her to consider it a place of refuge: its terrain is forbidding to all agriculture, it lacks any real mineral resources, and its only real claim to fame at the time was a small temple to Zeus on one of its conical hills. For this very reason, Leto heaped flattery and promises on the island. The island accepted Leto's pleas and allowed her to shelter in its crags and caves, well out of the sunlight, so that she could finally deliver her children.
Years after this event, when Phoebus, the Greek solar deity, was still young and wanted to make a proud name for himself, he took up his golden bow and arrows and set out to slay the serpent, intending to avenge the grief it gave to his mother. Python fled Apollo's pursuit, making a line of destruction all the way to the Greek town of Delphi, on the southwestern portion of Mount Parnassus in the Phocis valley. At Delphi, Python hid himself beneath or behind the temple where the Earth-goddess Gaia kept her prophetic oracle. Normally, this would be a clever way for the serpent to take refuge, both because Python himself was sacred to Gaia (in some versions, Python was considered the hero and protector of all of Mount Parnassus), and because the temple space was considered wholly inviolable, even by other gods. Apollo followed Python all the way to the innermost sanctuary of the temple, where the Delphic Oracle sat on her tripod beside a cleft in the mountainside, where volcanic fumes emerged to give her visions of prophecy from Gaia. He killed Python there in the temple, a dreadful crime against Gaia and her oracle. Apollo placed the Omphalos Stone, a massive boulder considered to be "the navel of the world," on the dead serpent's head, to make sure it was dead, and the rest of its corpse was left hanging out the front of the temple, rotting with dreadful odor in the intense sunlight. The town was given the secondary name Pytho, "rotting," as a result, and the oracle (who retained her status and job at the temple) was titled the Pythia, for the place name. Apollo re-appropriated the temple into his own service instead of Gaia's, and as a penance for the offense against Gaia, Apollo instituted the Pythian Games, over which he personally presided. Some versions of the story depict two serpents instead of one: a female named Delphyne and a male named Typhon; some depict Python in Apollo's service protecting the Omphalos instead of buried under it.
The symbolism of the story has been widely debated: the monster's miasma could be related to plagues or to toxic fumes from sulfurous vents and hot springs near and beneath the temple of the Delphic oracle. Another interpretation is that Apollo's slaying of Python represents how sunlight banishes the mists which form over swampy areas overnight, and how nocturnal frogs and swamp animals retreat into hiding during the day. The story of Saint George and the Dragon is considered to be generically the same tale type, or a related type, to the story of Python (Aarne-Thompson Index tale type 300, "The dragon slayer"). Py"thon (?), n. [NL., fr. L. Python the serpent slain near Delphi by Apollo, Gr. .] 1. Zool. Any species of very large snakes of the genus Python, and allied genera, of the family Pythonidae. They are nearly allied to the boas. Called also rock snake. ⇒ The pythons have small pelvic bones, or anal spurs, two rows of subcaudal scales, and pitted labials. They are found in Africa, Asia, and the East Indies. 2. A diviner by spirits. © Webster 1913.
> go north You are facing the north side of a white house. There is no door here, and all the windows are boarded up. A narrow path winds north through the trees. > go north Let’s start with some housekeeping details before we get into coding. I’m going to start each of these episodes – and there will be a LOT of them – with a spoiler from Zork. So: spoiler alert! If you want to solve all the puzzles in Zork, now would be a good time to do so. You’ve had over 30 years, get on it! There’s a tradition amongst Z-machine implementers to name the project after funny stuff found in the Great Underground Empire — the Frotz and Nitfol interpreters, the Quetzal save file format, the Blorb resource format all come to mind — and I will be no different. I hereby name this project “Flathead”, after Lord Dimwit Flathead, the ruler of the Great Underground Empire. We are going to start very, very small in this series. How do we twiddle a bit in OCaml? Because we’re going to need to! The Z-machine specification is chock full of text like this: If the top two bits of the opcode are 11 the form is variable; if 10, the form is short. […] In short form, bits 4 and 5 of the opcode byte give an operand type as above. If this is 11 then the operand count is 0OP; otherwise, 1OP. In either case the opcode number is given in the bottom 4 bits. I do not like bit twiddling code. The code is frequently inelegant, the mechanisms overwhelm the meaning, and it is rife with the possibility for bugs. So it was with some trepidation that I started writing bit twiddling code in OCaml. But perhaps I am even getting ahead of myself here, as most of my readers will not be familiar with OCaml. OCaml is primarily a functional language in the spirit of Lisp and Scheme: it encourages immutable data structures and passing around functions as data. It has an awesome type inference system; everything is typed and there are very few type annotations necessary. 
There are no “statements” per se; most of the code you write is expressions. The language permits both object-oriented programming (the O in OCaml) and mutable data structures, but I am going to use neither in this project. OCaml has both a “REPL” (read-eval-print-loop) interactive mode and a more traditional compile-a-bunch-of-sources-and-execute-the-result mode. I’ll be using the latter throughout, but I encourage you to play around in the interactive mode. When I embarked upon this project I had written no code in any ML variation since the early 1990s. I am very out of practice, and learning as I go here. If you are an OCaml expert reading this, I strongly encourage leaving constructive comments! I want to learn how to write high-quality code here, not code that looks like a C# developer who had no clue about functional languages wrote it. So let’s start small. I have an integer. The Z-machine refers to two-byte integers as “words”, so I’ll call my first integer “word”: let word = 0xBEEF Pretty straightforward. A declaration like this may end in two semicolons, and if you are using OCaml in its interactive mode, the two semis tell the REPL that you’re ready to have this evaluated. In non-interactive mode it is legal to include the semis but apparently this is not considered cool by experienced OCaml programmers, so I will omit them in listings in this series. This declares a variable called word, though “variable” is a bit of a misnomer in OCaml. Variables do not (typically) vary. (There are ways to make variables that vary – they are somewhat akin to “ref” parameters in C# – but I’m not going to use them in this project.) Variables are actually a name associated with a value. Later on we’ll see another way to “change” a variable in OCaml, but let’s learn to crawl before we try to walk. Note that the standard C-style convention for hex literals works as you’d expect in OCaml. Back to the problem at hand. How would we extract, say, the top four bits? 
I will follow the convention of the Z-machine specification that the least significant bit is bit 0, so we are looking for bits 15 through 12. In C-like languages we would use bit shifts and bitwise masks. OCaml is no different, though the names of the operators are a little odd:

    let () = Printf.printf "%0x\n" ((word lsr 12) land (lnot (-1 lsl 4)))

Yuck. Let's unpack that mess. The let-expression with the () to the left of the equals is a convention for "this is the main routine". The contents of that routine are a unit-returning (the OCaml jargon for void-returning) expression which ought to look surprisingly familiar to C programmers. Yes, OCaml has printf, bizarrely enough. The capitalized Printf names a module (we'll talk more about modules later) and the function is printf. The format string is just like format strings in C: percent indicates a field to be replaced, backslash-n is a line break, and so on. Argument lists follow directly after the function, and arguments are separated by spaces. No parenthesizing or commas are required or permitted. We want to give two arguments to printf: a format string and a number value, so the number expression is in parentheses; otherwise all those elements could be treated as individual operands. lsr, land, lnot and lsl are the logical shift right, and, not and shift left operators. I don't think I could live with myself if I had to write a whole lot of code that looked like this illegible mess. What I really want to say is "I want four bits, starting from bit 15 and going down, from this word." So let's write a helper method to encapsulate that logic. I'll break the computation into two parts while I'm at it: compute the mask, and then shift and apply the mask:

    let fetch_bits high length word =
      let mask = lnot (-1 lsl length) in
      (word lsr (high - length + 1)) land mask

Principle #2: Embrace small code. Abstraction encourages clarity. No computation is too small to be put into a helper function.
No expression is too simple to be given a name. Small code is more easily seen to be obviously correct. Let's again look at this in some detail. Here we see how to define a function in OCaml: just the same as you define a variable. What distinguishes a function? It has a parameter list following the name of the value. The body of the function is a single expression. Really? A single expression? It looks like two statements: compute the mask, and then evaluate the value. But notice the in at the end of the let line. This means "the variable binding I just made is local to the following expression", and the whole thing is one expression. I dearly wish this feature was in C#; there have been proposals to add it a number of times throughout the years. Now that we have a function we can use it:

    let () =
      Printf.printf "%0x\n" ((word lsr 12) land (lnot (-1 lsl 15)));
      Printf.printf "%0x\n" (fetch_bits 15 4 word)

A couple of things to notice here. First, there's a semicolon separating the two unit-returning expressions. This syntax is used in OCaml mostly for expressions that are useful for their side effects, like printing. The semicolon means basically what the comma operator means in C: evaluate the first expression, discard the result, evaluate the second expression, use the result. It's the sequential composition operator. (One can think of the semicolon as logically the sequential composition operator on statements in C#, but in OCaml it really is an operator on expressions.) We call our new function the same as any other function: state the name, and then give argument expressions separated by spaces. Note that the function we've defined here is completely typed. OCaml deduces that all three arguments must be integers because integer operators are used on each. It also deduces that the return is an int. An attempt to pass in a string would result in a compile-time error. OCaml is statically typed, but not explicitly typed.
There is a way to add explicit type annotations, but I'm not going to use it in this project. Have we gone far enough in building this abstraction for messing around with bits? No, we have not. We can do better. Next time on FAIC: We continue to obsess over individual bits. How could this possibly be improved? Or, perhaps more to the point: what can still go horribly, horribly wrong?

Anyone try to use ocaml-top on Windows? It is crashing on startup for me, so I'm stuck using the REPL for now.

I haven't tried that tool. In the next episode I'll post a link to the batch files that I'm using to compile and run the code.

ideone.com has an OCaml compiler; that is what I'm using so far.

F# uses different bitwise operators. With those replacements, the code is valid F#. (See the F# bitwise operators and F# keyword reference documentation.) "In addition, the following tokens are reserved in F# because they are keywords in the OCaml language: asr land lor lsl lsr lxor mod sig"

    let word = 0xBEEF

    let fetch_bits high length word =
        let mask = ~~~ (-1 <<< length)
        (word >>> (high - length + 1)) &&& mask

    let () =
        Printf.printf "%0x\n" ((word >>> 12) &&& (~~~ (-1 <<< 4)));
        Printf.printf "%0x\n" (fetch_bits 15 4 word)

Right. But Eric's program is still valid F#, despite all that. Simply use the file extension .ml and F# will compile in ML compatibility mode. In ML compatibility mode, those keywords are released, so they can be implemented as functions. They actually are functions in OCaml too, and are only keywords there in the sense that the compiler is permitted to assume (without checking) that they are defined in the normal fashion. Of course, you also need definitions for these pervasive functions and any other used parts of the OCaml standard library that are not implemented in FSharp.Core. Those are implemented in a library named FSharp.Compatibility.OCaml.
To use them while maintaining source compatibility with OCaml, you need to add the following conditional comment to the top of the file:

    (*F# open FSharp.Compatibility.OCaml F#*)

With just a single additional line, the use of the .ml file suffix, and a reference to a single assembly available on NuGet, Eric's code so far works fine with F#, without needing to change a bunch of operators. I expect that to continue to be true moving forwards, given the language subset Eric will be using. Also, the only reason that one line of code is needed is because the compatibility library chose to use namespaces. In theory the library could have been written with its modules in the global namespace, in which case the conditional comment would be unnecessary, and Eric's code would work completely unmodified.

Using VS2013, I did not need to use .ml file extensions, though I did need to add a --mlcompatibility option to the build, besides adding the NuGet compatibility library and referencing it as you described.

The answer: everything. Everything can go wrong.

Yes, but care to be a bit more specific?

I think you would have a much better sense as to what could happen than I would, sir.

Bounds, mostly.

I'm not very familiar with the ML family. Does this mean your code is actually going to be in Caml, or even just ML? I guess not, since modules sound pretty O.

Modules are more like namespaces or static classes than they are like objects.

Modules in ML are an unusual and powerful beast, because ML really consists of two languages: a core language and a module language. So if modules seem odd sometimes, the reason for this is because you are in module language.

More Infocom links: starts a series of articles on the early days of Infocom. Follow the links at the top or bottom of the article for more articles. You might find of particular interest. (Thanks to Jeremy Smith (no relation) for that last link.) P.S. You mentioned "Nitfol", the "speak to animals" spell from Enchanter.
This was derived from "Lofting" spelled almost backwards, the author of the Dr. Dolittle stories.

Well, there is no argument validation whatsoever, so things can go horribly wrong in that sense; length and high can both be equal to or smaller than zero, which doesn't make much sense, and length can be greater than high + 1, which also seems nonsensical.

For those who don't want to install the whole OCaml environment, but want to test small code snippets, there is also an online REPL.

I think you were going for (-1 lsl 4) instead of (-1 lsl 15) in the last code example.

You may wish to consider ocaml-bitstring for this. 🙂

Just want to second Paul's suggestion. ocaml-bitstring makes it easy to pattern-match on binary data, so bit-twiddling becomes just normal, straightforward code, with no twiddling needed 🙂
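For readers following the series without an OCaml toolchain, the same mask-and-shift trick ports directly to Python. This is a hypothetical aside, not part of Eric's Flathead project; since Python integers are arbitrary-precision, the bitwise operators behave the same way for these values:

```python
def fetch_bits(high, length, word):
    # Same trick as the OCaml version: lnot (-1 lsl length) == ~(-1 << length)
    # builds a mask of `length` one-bits in the low positions.
    mask = ~(-1 << length)
    # Shift the requested field down so its lowest bit is bit 0, then mask.
    return (word >> (high - length + 1)) & mask

# Top four bits of 0xBEEF (bits 15..12) are 0xB:
assert fetch_bits(15, 4, 0xBEEF) == 0xB
```

As in the OCaml version, there is no argument validation here, so the nonsensical inputs the commenter describes above would go unchecked.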
> > > >> In my experience, the mx_internal namespace was often used for
> > > >> purposes such as covering the weak points in design. I think that the
> > > >> problem of not documenting things could've been solved by @private

At Adobe, every single Flex public property, method and class had to be approved by the architectural review board which oversees the Flash API. Once public, it can't be removed from the API, since that could break user code. From my experience, the tendency of the review board is to keep things simple and try to maintain consistency across the classes. For example, a substantial amount of time was spent on property, method and parameter names. mx_internal was sometimes used if you thought something should be public but couldn't justify it to the review board, so you left a back-door. Sometimes mx_internal was used to cover "weak points in design". I think the older code uses mx_internal much more than the newer code, and some engineers use it more than others.

Carol
I feel like this can be easily accomplished. I have the following code:

    my_array = np.zeros(6)
    min = 0
    max = 1
    my_array = np.random.uniform(min, max, my_array.shape)
    print(my_array)

I might be wrong, but I think np.random.uniform() generates a whole new array, and then the my_array variable simply points to the new array, while the old array gets garbage collected. Since I'm positive that the new array will have the same shape as the old one, is there a way to efficiently replace the values, rather than allocating a whole new array? Furthermore, I hope I can do this efficiently, such that I don't need to use for loops. The following code was my attempt:

    my_array[:] = np.random.uniform(min, max)

However, this results in a single new random value that gets reproduced 6 times, which is not what I want.

EDIT: I want to keep the same functionality as in the code above, i.e., I want to replace all values of the array with new, random values. The idea is to do this multiple times, which is why I want to avoid having to generate a new array each time.

Answer

One can replace the values of one numpy array with the values of another numpy array using the syntax

    original_array[...] = array_of_new_values

The ... is an Ellipsis. As hpaulj comments, some numpy functions provide an out= argument, so values from the function can go directly into the existing array. np.random.uniform does not have this argument. I have provided an example below.

    import numpy as np
    arr = np.zeros((100, 100, 100))

And now we overwrite the values:

    low, high = 0, 1
    arr[...] = np.random.uniform(low, high, arr.shape)

Here are timeit results for overwriting an existing array and for creating a new variable:

    %timeit arr[...] = np.random.uniform(0, 1, arr.shape)
    # 67 ms ± 6.62 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

    %timeit arr2 = np.random.uniform(0, 1, arr.shape)
    # 85.3 ms ± 10.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
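As a footnote to the answer: while np.random.uniform has no out= parameter, the newer Generator API's random method does accept one, so an existing buffer can be refilled truly in place; other bounds are reachable with in-place scaling. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
arr = np.zeros(6)

# Generator.random supports out=, filling the existing array in place
# with uniforms on [0, 1); no new array is allocated.
rng.random(out=arr)
assert ((arr >= 0.0) & (arr < 1.0)).all()

# For bounds other than [0, 1), scale and shift in place:
low, high = 2.0, 5.0
rng.random(out=arr)
arr *= high - low  # in-place scale
arr += low         # in-place shift: now uniform on [low, high)
```

Note that out= requires the array's dtype to match the generator's output dtype (float64 by default).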
Using Jini to Build a Catastrophe-Resistant System, Part 2
by Dave Sag, 05/15/2002

In the first part of this series, I discussed the technologies underlying an application that could resist the impact of a 767. I called this system the Corporate Operating System, or COS. The need for such systems has forced the emergence of many new layers in application development, above and beyond the traditional n-tier structure. We have the Java Virtual Machine layer as a substratum, followed by a Service Container layer, in parallel with a basic set of ensemble provisioning services, such as those provided by Rio. Above that you now have a couple of application layers. For some applications you may need to write specialized services. For this, you have access to all of the code of COS (described in detail in part one), and soon (hopefully) you'll also have access to the source code for Rio, as well as Jini. You could simply extend COS' AbstractCosJSB (a super-class of Rio's own Service Beans) and fill in the blanks, or for a more complex JSB that interacts with COS' serviceFlags, extend AbstractServiceFlagJSB. This extends AbstractCosJSB and, like it, provides a method:

    public abstract boolean performServiceOperation(
            CosEntryInterface entry,
            Transaction trans,
            String serviceparam)
        throws CriticalObjectStoreException, CosApplicationException;

This is the central method that executes a "worker" method on the target object. It takes two interesting parameters: the target entry object, and a string parameter that is used to select the method on the target to execute. The service parameter is often used to morph the behavior of this method, with each value relating to a different interface method on the primitive target entry. The values of the service parameter are usually hard-coded into the definition of this method on the implementing service bean.
They are also specified in an XML configuration file in the central COS system, and are distributed along with the ServiceConfiguration object. The parity of these two sources can be verified in the doServiceConfiguration method of the interface. Each service extending this class follows the "Service Flag Pattern." This pattern connects the service to a named JavaSpace, where it listens for ServiceFlag with a property set to the name of that service bean implementation. ServiceFlag objects are messages describing the location of a target Entry in the space and a simple instruction as to what it is to do with that Entry. Services have access to the full range of features of a Rio JSB, and as such can be good citizens of a J2EE or CORBA environment, or a JSP Web application. For many applications, the default services provided by COS -- JDBC archiving (Karen), HTTPPosting (Roger), garbage collection (Otto), distribution (Max), scheduling (Robin), emailing (Pat the Postman), and a growing list of others -- will work for you out of the box, so you can simply use a default ensemble configuration. Figure 1 shows the COS admin tool having loaded the dawgTaskTracker.xml file that specifies the Ensemble application. This file is a copy of the default COS ensemble file, the only difference being the line <CosApplication name="dawgTaskTracker" apptype="Dawg">. A COS client application is simply an application that connects to a named COS ensemble. The client application may simply extend AbstractClientApp and call on its inherited connect() method to perform the actual connection. For security reasons, the COS admin tool is to be used to deploy, start, and stop applications. Your client application will simply connect to the named application if it is running. Once connected to the COS, your application provides a handy CosConnectable object with create, retrieve, update, delete, and other methods you can use to manage shared information. 
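The string service parameter described above is essentially a key into a table of operations. As a language-neutral sketch of that dispatch idea (the class and method names here are hypothetical illustrations, not actual COS code):

```python
class ArchiveService:
    """Illustrative sketch of string-keyed service dispatch; not real COS API."""

    def _archive(self, entry):
        return ("archived", entry)

    def _purge(self, entry):
        return ("purged", entry)

    # Maps each legal service parameter value to a worker method,
    # much as COS pairs hard-coded values with the XML configuration.
    _ops = {"archive": _archive, "purge": _purge}

    def perform_service_operation(self, entry, serviceparam):
        try:
            op = self._ops[serviceparam]
        except KeyError:
            # An unknown parameter is a configuration mismatch.
            raise ValueError("unknown service parameter: " + serviceparam)
        return op(self, entry)

svc = ArchiveService()
assert svc.perform_service_operation({"id": 1}, "archive")[0] == "archived"
```

Verifying that the table's keys match an external configuration, as doServiceConfiguration does, amounts to comparing the dictionary's key set against the configured values.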
Our sample application is an example of what we call a distributed, ad hoc workgroup ("Dawg") application, called TaskTracker. (Great name, huh?) TaskTracker works as follows: users log in and report a task or tasks. They allocate the tasks to their buddies. The task itself is represented by an instance of a TaskTrackerEntry that is passed between the COS' JavaSpace and your GUI client app. Let's take a quick look at the actual TaskTrackerEntry:

    public class TaskTrackerEntry extends CosEntry implements Archivable

A CosEntry is a simple extension to the JavaSpaces Entry class, which serves as a COS-specific marker. CosEntries add an application context and a creationEnsembleName, as well as creation and modification dates. This associates the CosEntry with an ensemble. When cos.create(entry) is called, the entry's setCreationEnsembleName() is called. Like JavaSpaces entries, all CosEntries must have null constructors and public properties. In addition to a null constructor, our TaskTrackerEntry provides a more useful constructor for a client application to use when actually creating a new TaskTrackerEntry:

    public TaskTrackerEntry(String projectname, String name, Integer priority,
                            String category, String blurb, String message, String user)

The basic principles of the TaskTracker are as follows:

- Projects can be broken up into discrete high-level tasks.
- For every task there is a TaskTrackerEntry to represent it.
- Any member of the system may create and accept tasks.
- Tasks may be divided into subtasks.
- Anyone may reassign a task to their buddies.
- When you accept a task you must say how long you think it will take you to finish it. You may change your mind later on.
This inverts the traditional ERP (Enterprise Resource Planning) or MS ProjectPlan-style project management system in that, although you do specify tasks in a similar way and can easily add sophisticated task dependencies and so forth, only the description of the tasks is done from the top down, and then only at first. The Dawg model encourages a bottom-up approach to task and resource allocation. It permits people to broadcast and refine what they want and to volunteer for duty.
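To underline how small the TaskTracker data model really is, here is a hypothetical Python mirror of the entry (field names follow the Java constructor above; this is an illustration, not COS code):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TaskTrackerEntry:
    # Mirrors the public properties implied by the Java constructor.
    projectname: str
    name: str
    priority: int
    category: str
    blurb: str
    message: str
    user: str
    # CosEntries carry a creation date; modelled here as a default.
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

task = TaskTrackerEntry("Dawg", "write part 3", 1, "docs",
                        "short blurb", "longer message", "dave")
assert task.priority == 1
```

The bottom-up workflow then amounts to creating, reassigning, and refining such records in shared space.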
Answered: Why does it not work on mobile devices?

Currently, we are working on a mobile app project with Sencha Touch 2.0. However, a problem occurred: the app does not work on mobile devices, although it works very well in the browser (Safari, Chrome). I need your help. Please find the relevant part of the source attached. Also, see below.

Development Environment:
1. Android 2.2
2. Sencha Touch 2.0 PR2

Call in Android:

    webView.getSettings().setJavaScriptEnabled(true);
    webView.loadUrl("");

I don't think there is any Android-specific issue here.

First, thank you very much for the answers. As you suggested, I packaged it with PhoneGap and ran it. However, the result is the same. Can you find what is wrong? It is in the attached file. Please, please.

--- Java source ---

    public class PhoneGapActivity extends DroidGap {
        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            //setContentView(R.layout.main);
            super.loadUrl("");
        }
    }
http://www.sencha.com/forum/showthread.php?155297-Why-should-not-work-on-the-mobile-devices&p=674121&mode=threaded
version bump.

Serious testing. Our class is heavily unit-tested and covers 100% of the code, including all exceptional behavior. Furthermore, we checked with Valgrind that there are no memory leaks.

The single required source file, json.hpp, is in the src directory or released here. All you need to do is add

#include "json.hpp"

// for convenience
using json = nlohmann::json;

to the files you want to use JSON objects in. That's it. Do not forget to set the necessary switches to enable C++11 (e.g., -std=c++11 for GCC and Clang).

Here are some examples to give you an idea how to use the class. Assume you want to create the JSON object

{
  "pi": 3.141,
  "happy": true,
  "name": "Niels",
  "nothing": null,
  "answer": {
    "everything": 42
  },
  "list": [1, 0, 2],
  "object": {
    "currency": "USD",
    "value": 42.99
  }
}

With the JSON class, you could write:

json j2 = {
  {"pi", 3.141},
  {"happy", true},
  {"name", "Niels"},
  {"nothing", nullptr},
  {"answer", {
    {"everything", 42}
  }},
  {"list", {1, 0, 2}},
  {"object", {
    {"currency", "USD"},
    {"value", 42.99}
  }}
};

You can create an object (deserialization) by appending _json to a string literal:

// create object from string literal
json j = "{ \"happy\": true, \"pi\": 3.141 }"_json;

// or even nicer with a raw string literal
auto j2 = R"(
  {
    "happy": true,
    "pi": 3.141
  }
)"_json;

// or explicitly
auto j3 = json::parse("{ \"happy\": true, \"pi\": 3.141 }");

You can also get a string representation (serialize):

// explicit conversion to string
std::string s = j.dump();    // {\"happy\":true,\"pi\":3.141}

// serialization with pretty printing
// pass in the amount of spaces to indent
std::cout << j.dump(4) << std::endl;
// {
//     "happy": true,
//     "pi": 3.141
// }

Any sequence container (std::array, std::vector, std::deque, std::forward_list, std::list) whose values can be used to construct JSON types can be used to create a JSON array. Likewise, any associative key-value container whose keys can construct an std::string and whose values can be used to construct JSON types (see examples above) can be used to create a JSON object; note that for multimaps, only one entry per key is used in the JSON object.

Though it's 2016 already, the support for C++11 is still a bit sparse. Currently, the following compilers are known to work:
The following compilers are currently used in continuous integration at Travis and AppVeyor:

I deeply appreciate the help of the following people:

- overloading parse() to accept an rvalue reference
- a get_ref() function to get a reference to stored values
- a has_mapped_type function
- support for int64_t and uint64_t

Thanks a lot for helping out!

Assertions can be switched off by defining NDEBUG; see the documentation of assert.

To compile and run the tests, you need to execute

$ make
$ ./json_unit "*"

===============================================================================
All tests passed (8905012 assertions in 32 test cases)

For more information, have a look at the file .travis.yml.
https://fuchsia.googlesource.com/third_party/json/+/4444ef93965e462119857dc8e7527456fd37a914
.NET Compact Framework Team

Some known issues
Check out some of the known issues in MIX drop posted here.
Author: mahathi Date: 03/24/2010

Transparency in CF
Check out the blog posts by Yash here, on transparency in CF.
Author: mahathi Date: 03/21/2010

MIX helpful links
Here are a bunch of early links to get you folks started: - Where can I find bits to play around...
Author: mahathi Date: 03/15/2010

We are "LIVE" at MIX 2010!
Today at MIX we announce the application development platform for Windows Phone 7 series, a...
Author: mahathi Date: 03/15/2010

NETCFSvcUtil.exe fix
Check out Manish's blog on NetcfSvcUtil.exe crashing on some operating systems and the uploaded fix...
Author: mahathi Date: 08/10/2009

Back to basics GC series
Check out the Back to basics GC series published a while ago by Abhinaba!!
Author: mahathi Date: 07/10/2009

WCF for mobile whitepaper
Check out Mahathi's blog about a WCF for mobile whitepaper published recently by one of the MVPs.
Author: mahathi Date: 07/08/2009

HTTPS related hotfix
Manish has posted regarding a hotfix for the HTTPS related issue discussed in a previous post. The...
Author: mahathi Date: 07/02/2009

Back to the NETCF blog
We would like to revive the NETCF team blog and post here instead of the mobile developer blog as we...
Author: mahathi Date: 07/01/2009

NETCF 3.5 namespace poster updated
Author: mahathi Date: 06/22/2009

New Mobile Developer Blog!
We, the NetCF team, have decided to combine our blog with the other teams doing mobile development...
Author: MSDN Archive Date: 05/08/2008

Power Toys for .NET Compact Framework 3.5 have been released
The Power Toys for .NET Compact Framework 3.5 have just gone live at...
Author: Garry Trinder Date: 12/13/2007

Why .NET Compact Framework fails to call some HTTPS web servers
On his blog, Andrew discusses an HTTPS bug in NetCF that causes NetCF to throw a WebException when...
Author: Andrew L Arnott Date: 11/21/2007

David writes about the Lunch Launcher and Store and Forward Messaging
Over the past couple of months, I have been serializing my experiences in writing the Lunch Launcher...
Author: MSDN Archive Date: 11/12/2007

Post on Remote Logging WCF on .NET Compact Framework
Hey everyone, I've posted a quick overview of how to use the .NETCF Remote Logging tool to help...
Author: Garry Trinder Date: 11/01/2007

.NET Compact Framework Application and Libraries List Updated
I've updated the .NET Compact Framework catalog with some new applications and libraries, and...
Author: MSDN Archive Date: 09/25/2007

Power Toys for .NET Compact Framework 3.5 CTP Released
The Power Toys for .NET Compact Framework 3.5 CTP (September 2007) has just been released as an MSDN...
Author: MSDN Archive Date: 09/12/2007

Reflections on having multiple versions of NetCF on your device
Andrew Arnott discusses which versions of the .NET Compact Framework can be side-by-side installed,...
Author: Andrew L Arnott Date: 09/07/2007

The WCF subset supported by NetCF
Andrew Arnott details the features of WCF that are and are not supported by NetCF on his blog.
Author: Andrew L Arnott Date: 09/07/2007

How to (not) write an especially precarious app on .NET (Compact Framework)
On his blog, Andrew Arnott discusses how to avoid depending on the internal details of NetCF (or the...
Author: MSDN Archive Date: 07/02/2007

.NET Compact Framework 2.0 SP2 Released
Microsoft .NET Compact Framework version 2.0 SP2 release is completed and is in the process of being...
Author: MSDN Archive Date: 03/13/2007

Application Compatibility Forecast for NETCF v2 sp2
During our test pass against v2 sp2, we tested backwards compatibility with previous releases by...
Author: MSDN Archive Date: 03/09/2007

Why your NetCF apps fail to call some web services
Here's the scenario: You are writing an NetCF app and trying to call a web service from that app....
Author: MSDN Archive Date: 02/01/2007

.Net Compact Framework Application and Library Catalog Updated
I've posted an updated .NET Compact Framework catalog to the internet. The catalog has outgrown the...
Author: MSDN Archive Date: 01/30/2007

.NET Compact Framework 3.5 included in Orcas January CTP
The .NET Compact Framework team has spent the last year planning and developing the next version of...
Author: MSDN Archive Date: 01/28/2007

Managed Code Performance on Xbox 360 for XNA: Part 2 - GC and Tools
...continuation of Part 1, it can be found here Memory and Garbage Collection One common concern for...
Author: MSDN Archive Date: 12/22/2006

Managed Code Performance on Xbox 360 for XNA: Part 1 - Intro and CPU
Introduction Now that XNA Game Studio Express 1.0 is out, it's time to start writing managed code...
Author: MSDN Archive Date: 12/22/2006

NetCF 3.5's Finalizer Log
The .NET Compact Framework has had several loggers (error, interop, loader, and networking) in...
Author: MSDN Archive Date: 12/18/2006

David talks about serializing fields as XML node attributes
There are times that you may wish to serialize one or more fields as attributes on the object's node...
Author: MSDN Archive Date: 10/05/2006

David demonstrates IL debugging .NET Compact Framework applications using MDbg
Dan Elliott recently posted about the IL OpCodes supported by the .NET Compact Framework. This got...
Author: MSDN Archive Date: 10/05/2006

Dan details .NET Compact Framework CIL OpCode support
The instruction set for a CLI compliant execution engine is described by ECMA's CLI Partition III...
Author: MSDN Archive Date: 10/05/2006

David talks about filtering TextBox control input
I was talking with Mark Prentice today and we were looking at filtering a TextBox control so that it...
Author: MSDN Archive Date: 10/05/2006

David demonstrates the MDbg X command
When debugging using command line tools, one of the most challenging tasks is getting the fully...
Author: MSDN Archive Date: 10/05/2006

Dan discusses extending .NET Compact Framework controls
One of the comments I've heard frequently since I began working on the CF GUI base class libraries...
Author: MSDN Archive Date: 10/05/2006

Platform detection III: How to detect a touch screen on Windows CE in .NET CF
Pocket PC's have touch screens. Smartphones don't. While it is straightforward to determine which of...
Author: MSDN Archive Date: 10/02/2006

Platform detection II: Is your app running on Smartphone or Pocket PC?
While both Smartphones and Pocket PCs are based on Windows Mobile, there are some very important...
Author: MSDN Archive Date: 09/22/2006

Platform detection I: How to detect that your app is running in the emulator
When you develop your Windows CE or Windows Mobile application in .NET Compact Framework, you...
Author: MSDN Archive Date: 09/15/2006

David describes an easier way to enable .NET Compact Framework diagnostic logging
Diagnostic, performance and debugging tools are very cool. The more tools available, the easier it...
Author: MSDN Archive Date: 08/31/2006

David Talks about Attaching to .NET Compact Framework Applications using MDbg
Being able to attach to a running process is a very powerful debugger feature. It is especially...
Author: MSDN Archive Date: 08/25/2006

Scott Talks About Whether Bitmaps Leak Memory
"Do bitmaps leak memory?" No, but without careful coding it could appear that they do. Read more...
Author: MSDN Archive Date: 08/23/2006

David Introduces the .NET Compact Framework v2 SP1 Error Log
With the addition of support for headless devices, it became apparent that we needed a means for...
Author: MSDN Archive Date: 08/23/2006

David Talks about Controlling Device Processes using MDbg
In the previous parts of this series, I talked about getting started with MDbg and using the Device...
Author: MSDN Archive Date: 08/23/2006

Mark Talks About Developing for Windows CE 4.2 with .NET Compact Framework v2 SP1
.NET Compact Framework v2.0 SP1 includes support for Windows CE 4.2 based devices. To date, Visual...
Author: MSDN Archive Date: 08/23/2006

David Talks About Using the Device Emulator with MDbg
By default, the Device Emulator uses DMA as its transport for device to desktop communications. The...
Author: MSDN Archive Date: 08/23/2006

David Introduces Support for Debugging .NET Compact Framework Applications using MDbg
With the release, last year, of version 2 of the .NET Framework SDK, a new command line debugger...
Author: MSDN Archive Date: 08/23/2006

.NET Compact Framework Team Member Content Links
To make it easier to find the great information that the .NET Compact Framework team is publishing...
Author: MSDN Archive Date: 08/23/2006

V2 SP1 Developer Patch Released
Developers who wish to update the build of .NET Compact Framework v2 to SP1 with extended Visual...
Author: MSDN Archive Date: 08/04/2006

v2 SP1 Application Compatibility Forecast
52 applications were tested for v2 SP1, 9 of them were built with v2. We don't expect to see many...
Author: MSDN Archive Date: 07/11/2006

July 2006 .Net Compact Framework Applications and Libraries
Here's an updated list of applications for Pocket PC and Smartphones and libraries built for the...
Author: MSDN Archive Date: 07/11/2006

.NET Compact Framework v2.0 SP1 is done and is being released.
Microsoft .NET Compact Framework version 2.0 SP1 release has been completed and is in the process of...
Author: MSDN Archive Date: 06/21/2006
https://docs.microsoft.com/en-us/archive/blogs/netcfteam/
Tutorial

An Introduction to Redux's Core Concepts

Redux is a predictable state container for JavaScript apps, and a very valuable tool for organizing application state. It's a popular library to manage state in React apps, but it can be used just as well with Angular, Vue.js or just plain old vanilla JavaScript.

One thing most people find difficult about Redux is knowing when to use it. The bigger and more complex your app gets, the more likely it's going to be that you'd benefit from using Redux. If you're starting to work on an app and you anticipate that it'll grow substantially, it can be a good idea to start out with Redux right off the bat so that as your app changes and scales you can easily implement those changes without refactoring a lot of your existing code.

In this brief introduction to Redux, we'll go over the main concepts: reducers, actions, action creators and the store. It can seem like a complex topic at first glance, but the core concepts are actually pretty straightforward.

What's a Reducer?

A reducer is a pure function that takes the previous state and an action as arguments and returns a new state. Actions are objects with a type and an optional payload:

const myReducer = (previousState, action) => {
  // use the action type and payload to create a new state based on
  // the previous state.
  return newState;
}

Reducers specify how the application's state changes in response to actions that are dispatched to the store. Since reducers are pure functions, we don't mutate the arguments given to them, perform API calls or routing transitions, or call non-pure functions like Math.random() or Date.now(). If your app has multiple pieces of state, then you can have multiple reducers. For example, each major feature inside your app can have its own reducer. Reducers are concerned only with the value of the state.

What's an Action?

Actions are plain JavaScript objects that represent payloads of information that send data from your application to your store.
Actions have a type and an optional payload. Most changes in an application that uses Redux start off with an event that is triggered by a user, either directly or indirectly: clicking on a button, selecting an item from a dropdown menu, hovering over a particular element, or an AJAX request that just returned some data. Even the initial loading of a page can be an occasion to dispatch an action. Actions are often dispatched using an action creator.

What's an Action Creator?

In Redux, an action creator is a function that returns an action object. Action creators can seem like a superfluous step, but they make things more portable and easy to test. The action object returned from an action creator is sent to all of the different reducers in the app. Depending on what the action is, reducers can choose to return a new version of their piece of state. The newly returned piece of state then gets piped into the application state, which then gets piped back into our React app, which then causes all of our components to re-render.

So let's say a user clicks on a button; we then call an action creator, which is a function that returns an action. That action has a type that describes the type of action that was just triggered. Here's an example action creator:

export function addTodo({ task }) {
  return {
    type: 'ADD_TODO',
    payload: { task, completed: false },
  }
}

// example returned value:
// {
//   type: 'ADD_TODO',
//   payload: { task: '🛒 get some milk', completed: false },
// }

And here's a simple reducer that deals with the action of type ADD_TODO:

export default function(state = initialState, action) {
  switch (action.type) {
    case 'ADD_TODO':
      const newState = [...state, action.payload];
      return newState;
    // Deal with more cases like 'TOGGLE_TODO', 'DELETE_TODO',...
    default:
      return state;
  }
}

All of the reducers process the action.
Reducers that are not interested in this specific action type just return the same state, and reducers that are interested return a new state. Now all of the components are notified of the changes to the state. Once notified, all of the components re-render with new props:

{
  currentTask: { task: '🛒 get some milk', completed: false },
  todos: [
    { task: '🛒 get some milk', completed: false },
    { task: '🎷 Practice saxophone', completed: true }
  ],
}

Combining Reducers

Redux gives us a function called combineReducers that performs two tasks:

- It generates a function that calls our reducers with the slice of state selected according to their key.
- It then combines the results into a single object once again.

What is the Store?

We keep mentioning the elusive store, but we have yet to talk about what the store actually is. In Redux, the store refers to the object that brings actions (that represent what happened) and reducers (that update the state according to those actions) together. There is only a single store in a Redux application. The store has several duties:

- Holds the whole application state.
- Allows access to the state via getState().
- Allows the state to be updated via dispatch(action).
- Registers listeners using subscribe(listener).
- Unregisters listeners via the function returned by subscribe(listener).

Basically, all we need in order to create a store are reducers. We mentioned combineReducers to combine several reducers into one. Now, to create a store, we will import createStore and pass it our root reducer:

import { createStore } from 'redux';
import todoReducer from './reducers';

const store = createStore(todoReducer);

Then, we dispatch actions in our app using the store's dispatch method like so:

store.dispatch(addTodo({ task: '📖 Read about Redux' }));
store.dispatch(addTodo({ task: '🤔 Think about meaning of life' }));
// ...

Data Flow in Redux

One of the many benefits of Redux is that all data in an application follows the same lifecycle pattern.
The logic of your app is more predictable and easier to understand, because Redux architecture follows a strict unidirectional data flow.

The 4 Main Steps of the Data Lifecycle in Redux

1. An event inside your app triggers a call to store.dispatch(actionCreator(payload)).
2. The Redux store calls the root reducer with the current state and the action.
3. The root reducer combines the output of multiple reducers into a single state tree:

const currentTask = (state = {}, action) => {
  // deal with this piece of state
  return newState;
};

const todos = (state = [], action) => {
  // deal with this piece of state
  return newState;
};

const todoApp = combineReducers({
  todos,
  currentTask,
});

When an action is emitted, todoApp calls both reducers and combines both sets of results into a single state tree:

return {
  todos: nextTodos,
  currentTask: nextCurrentTask,
};

4. The Redux store saves the complete state tree returned by the root reducer. The new state tree is now the nextState of your app.

Conclusion

That was a lot to go over in very few words, so don't feel intimidated if you're still not entirely sure how all the pieces fit together. Redux offers a very powerful pattern for managing application state, so it's only natural that it takes a little practice to get used to the concepts. To learn more check out these resources: 🎩

In future posts we'll explore more advanced topics like dealing with asynchronous events using Redux-Saga or Redux Thunk.
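To make the dispatch → reducer → new-state cycle above concrete, here is a toy, dependency-free sketch of the two core functions. This is illustrative only, not the real redux package, whose implementations handle many more cases:

```javascript
// Toy combineReducers: calls each reducer with its slice of state
// and reassembles the results into one state tree.
function combineReducers(reducers) {
  return function rootReducer(state = {}, action) {
    const next = {};
    for (const key of Object.keys(reducers)) {
      next[key] = reducers[key](state[key], action);
    }
    return next;
  };
}

// Toy createStore: holds state, runs the root reducer on dispatch,
// and notifies subscribers after every state change.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action);
      listeners.forEach((l) => l());
    },
    subscribe: (listener) => {
      listeners.push(listener);
      return () => listeners.splice(listeners.indexOf(listener), 1);
    },
  };
}

// The reducers from the article, slightly simplified.
const todos = (state = [], action) =>
  action.type === 'ADD_TODO' ? [...state, action.payload] : state;
const currentTask = (state = {}, action) =>
  action.type === 'ADD_TODO' ? action.payload : state;

const store = createStore(combineReducers({ todos, currentTask }));
store.subscribe(() => console.log('todo count:', store.getState().todos.length));
store.dispatch({ type: 'ADD_TODO', payload: { task: 'Read about Redux', completed: false } });
store.dispatch({ type: 'ADD_TODO', payload: { task: 'Practice saxophone', completed: false } });
// logs "todo count: 1" then "todo count: 2"
```

Every dispatched action flows through every slice reducer; slices that don't care simply return their state unchanged, which is exactly the unidirectional cycle described in the four steps above.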
https://www.digitalocean.com/community/tutorials/redux-redux-intro
WMIC - Take Command-line Control over WMI

This article is from the March 2002 issue of Windows & .NET Magazine. Microsoft is creating a lot of good reasons to use the command prompt in Windows XP and Windows Server 2003.

On This Page
- What Is WMIC?
- How to Run WMIC
- WMIC Command-Line Components
- Putting WMIC to Work

Aliases and the other classes that support WMIC are stored in the schema's default namespace, or role—root.

Table 1: WMIC Verbs

You use commands to control access to WMIC and the WMI namespace. Notice that the last sample command in the table uses Path and the WIN32_USERACCOUNT class rather than the Useraccount alias. Path is a WMIC command that lets you directly access one or more instances in the WMI namespace rather than reaching them through an alias. The Path command is especially useful when no alias exists for the systems management task you want to complete. You can extend WMIC with new aliases and new roles, but using the Path command is easier if you have a firm grasp on the WMI namespace. WMIC also supports the Class, Context, Quit, and Exit commands. The Class command lets you directly access a class in the WMI schema or create an instance of an existing class. The difference between the Path and Class commands is that the Path command acts on an instance and its properties (e.g., to retrieve management data), while the Class command acts on the class definition. For example, to retrieve all the properties of the WIN32_SOFTWAREELEMENT class, you can type

class WIN32_SOFTWAREELEMENT get

The output to the console is in HTML format. Later, I show you how to use the /output global switch to redirect the output to an HTML file and view the attributes from a Web browser. The Class command with the Assoc verb shows the namespace path to a class and the other classes associated with the class. You can use the Class command to delete a class and to create an instance of a class but not to create a class.
The Context command shows the current settings of the global switches. The Quit and Exit commands simply leave the WMIC command prompt and return you to the previous shell (e.g., the Telnet prompt, the XP command prompt). Command-line Help is the way to become familiar with WMIC. Table 2 shows the characters to type at a WMIC command prompt to find the specified information.

Table 2: Command-Line Help

Putting WMIC to Work

Now that you understand the components of the WMIC command line, let's look at how to run WMIC from a batch file and send output to the console and to an HTML or XML file. When you run WMIC from a batch file, the XP command prompt appears after the commands run. Running WMIC from a batch file lets you repeat common tasks without having to retype a complicated sequence of commands. For example, Listing 1 shows a command you can place in a batch file to display at the console selected processor information about two computers, SERVER1 and SERVER4. The /format switch is verb-specific rather than global because it works with only the Get and List verbs.

LISTING 1: Code to Display Results at the Console from a WMIC Batch File

wmic /node:SERVER1, SERVER4 cpu get name, caption, maxclockspeed, systemname /format:textvaluelist.xsl

WMIC batch files can use variables. Instead of specifying computer names in the batch file, you can specify variables in the format %1, %2, and so on, as Listing 2 shows. You can place this code in a batch file, then when you run it, type one or two computer names after the batch file name. Alternatively, you can create a separate text file that contains a CSV list or a carriage-return-delimited list of computer names. You call the text file with the /node global switch and the text file name with the @ symbol as a prefix. The @ symbol tells the /node switch that the following parameter is a filename, not a computer name.
LISTING 2: Code to Use Variables in a WMIC Batch File @echo off if "%1"=="" goto msg if "%2"=="" goto single wmic /node:%1, %2 cpu get name, caption, maxclockspeed, systemname /format:textvaluelist.xsl goto end :single wmic /node:%1 cpu get name, caption, maxclockspeed, systemname /format:textvaluelist.xsl goto end :msg echo you must specify at least one computer name. :end The console isn't the only place to send results. You can instruct WMIC to send output to a file in XML, HTML, or Managed Object Format (MOF) format. MOF is the native WMI file format for classes and class instances in the WMI repository on a WMI-enabled computer. Listing 3 shows code that directs the output of SERVER4 processor information to an HTML file. The /output global switch instructs WMIC to send the output to file1.htm. The /format verb-specific switch instructs WMIC to transform the native XML output into an HTML form. You can create Extensible Stylesheet Language (XSL) files to format output or use any of the XSL files stored in the \%systemroot%\system32\wbem folder of any computer with WMIC installed. For example, you can use the csv.xsl file to format the output into a CSV list of results, or the htable.xsl file to create an HTML table of results. Figure 2 shows file1.htm open in a Web browser. LISTING 3: Code to Direct WMIC Output to an HTML File wmic /node:SERVER4 /output:e:\file1.htm cpu get description, maxclockspeed, extclock, manufacturer, revision /format:hform.xsl LISTING 4: Code to Direct Class Command Output to an HTML File wmic /output:e:\se_class.htm class WIN32_SOFTWAREELEMENT get LISTING 5: Code to Generate XML Output from a WMIC Command wmic cpu get maxclockspeed /translate:basicxml /format:rawxml.xsl Earlier, I mentioned that the default output of the Class command with the Get verb is in HTML format. Therefore, to send output of this type of command to an HTML file, you specify the /output switch without a /format switch, as Listing 4 shows. 
The /record and /append global switches also let you capture information from the WMIC command line. Use the WMIC Command-line Help facility to find more information about these switches. To output information in XML format, use the /translate switch and the Basicxml keyword to convert the greater than (>) and less than (<) symbols into meaningful characters in XML. For example, Listing 5 shows how to create raw XML output. You can then import the XML data into a database or some other repository that understands the XML tags in the output. The output created from the code in Listing 5 contains the WMIC command, the command-line request, the target nodes, the context of the global switches, and the command results. For good reason, WMIC is a key piece of Microsoft's command-line initiative for XP and .NET Server. WMIC enables robust command-line systems-management access to the WMI namespace, wherever WMI is running on the network. The command-line components take a little time to master, but after you do master them, a whole world of systems management lies at your fingertips.
http://technet.microsoft.com/en-us/library/bb742610.aspx
Now don't yell at me for rehashing my question from an earlier post, but what does the #IND00 output of this program mean? I instructed it to display a long double.

"If you tell the truth, you don't have to remember anything" -Mark Twain

Unless you post the code I will not be able to give you a better answer than this, but my best guess would be that you did not properly initialize the pi variable and then added to it instead of setting it equal to something. It could also be something in the way you output it, or perhaps something else entirely, but my experience with similar output is that you forgot to initialize....
std::cout << "Please input the length of the radius: "; std::cin >> r; std::cout << endl; std::cout << "Please input the resolution of the x values\n (Smaller yeilding more accuracy, .001 to 100): "; std::cin >> inc; std::cout << endl; //This next loop loops through incrimental 'x' values, finding their 'y' couterparts for(long double loop = 0; loop <= r; loop += inc) { point.y = sqrtl( (r * r) - (loop * loop)); //Set y value using the x value, radius and centerpoint (0,0) point.x = loop; //The x value changes with every cycle of the loop by an incrament of inc. Points[PointsPosition] = point;//Pass value from the temporary variable to the one that will be used to calculate cirumfrence. if(loop > 0) //if the loop has run twice or more circ += sqrtl(lDpow((Points[PointsPosition].x - Points[PointsPosition - 1].x),2) + lDpow((Points[PointsPosition].y - Points[PointsPosition - 1].y),2)); //(above) add to the circumfrence a value equal to the distance between the two most recent points calculated. PointsPosition++; //Update the variable that keeps track of what slot we are in in the Points[] array (add 1). //This brilliant little feature below is what allows gigabytes of information to be calculated while only using about 10 bytes. if(PointsPosition == 2) //If the next slot to be written into is the THIRD slot { Points[0] = Points[1]; //these two lines basically reset the array, so we're recycling memory PointsPosition = 1; } }//the end of the loop statement //output all of out data to the user! //Remember, only 1/4 of the circumfrence of a circle was calculated, so we must multiply by four. 
std::cout << "The circumfrence of your circle (rounded): " << circ*4 << endl; std::cout << "Pi, according to your circle (c / 2r): " << std::setprecision(25) << ((circ*4) / (r*2)) << endl; point.~coord(); delete[] Points; std::cout << endl << "Press any key and enter to exit"; std::cin >> inc; return 0; } long double lDpow(long double base,int power) { long double Answer = 1.00; for(int x = 0; x < power; x++) { Answer *= base; } return Answer; } "If you tell the truth, you don't have to remember anything" -Mark Twain
https://cboard.cprogramming.com/cplusplus-programming/38024-whats-output.html
Dino Esposito
March 2000

Summary: This article provides an in-depth discussion on NTFS 2000, a new file system in Microsoft Windows 2000. (19 printed pages)

Download NTFSext.exe.

Contents
- Introduction
- Overview of NTFS 2000
- Multiple File Streams
- Fundamentals of Streams
- Streams Backup and Enumeration
- Hard Links
- Enjoy NTFS Features
- Summary

Introduction

The myth of a fully object-oriented version of Microsoft® Windows NT® has been around for a while, since 1994. Cairo—the code name of that legendary version of the OS—never materialized outside a lab in Redmond. Since Cairo's inception some of its fundamental ideas have been introduced now and again. The basic idea behind Cairo was that files and folders would become objects and collections of objects. The folder's content is not necessarily bound to the underlying file system storage mechanism and you can access and replicate those objects as independent and stand-alone entities. File and folder objects would expose a programmable API in terms of methods and properties, both standard and defined by the owner or the author. What we have today, instead, is a file system that registers files and folders in some internal structures, which are duplicated when the files and folders are moved around the disks. Files and folders have a fixed set of features that is too small for the needs of modern applications. As a partial workaround, we've been given over the last few years several techniques for adding extra information to files and folders. Shell and namespace extensions, the desktop.ini file, the FileSystemObject, and the Shell Automation Object Model are just a few examples. However, all these functionalities are just spot and local solutions. They completely miss the point of an organic reengineering of the Windows® file system.
Because backward compatibility is a serious issue, Windows is still utilizing an old-fashioned file system built on top of the file allocation table (FAT), the dawning of which dates back to Microsoft MS-DOS® version 2.0! Even with some more recent improvements, such as support for high-capacity hard disks, FAT remains a rather inadequate way of storing file and folder information. Years of real-world experience demonstrate that the most significant limitations we run into have to do with the additional information programmers need to properly manage and identify files.

Recently, I was asked to try to retrieve the real creation date for a Word 97 document. You may think it is an easy task since the creation date is an attribute you can easily retrieve through some API functions. This is only partially true. Try copying the same Word file on a different machine, or even in the same folder, and then compare the creation date of both copies. Surprisingly, they differ! While making a copy, you create a brand new file with the time stamp of when the creation occurs. Working on a copy, you lose potentially valuable information concerning when the file was originally created. Fortunately, a Word document retains such information internally in the SummaryInformation fields. So, in my case I was able to solve the problem and successfully bill the client. Had it been an Access or a text file, my effort would have failed.

With Windows NT, Microsoft introduced a new file system called NTFS. Among its most notable features are the B-tree structure, which speeds up file retrieval on large folders, file-based security, logging, enhanced file system recoverability, and a much better use of disk space than FAT or FAT32. (By the way, Windows 2000 provides full support and access to FAT32 volumes.) Since their advent with Windows NT 3.1, NTFS volumes also have another, often underestimated feature: They support multiple streams of data into a single file.
With Windows 2000, the stream support has been reinforced, and other rather handy features have been added to help you work seamlessly with files. Let's look at the major features of NTFS 2000, the version of NTFS that comes with Windows 2000. If multiple streams of data aren't an exclusive feature of NTFS 2000 volume files, there are several other features that require Windows 2000 to work. They are:

During the Windows 2000 installation, you're asked to specify whether you want your Windows 2000 volume to be converted to NTFS 2000. The use of the NTFS 2000 file system, though, is required only on machines acting as domain controllers. You can convert a FAT partition to NTFS at any time by using the command-line utility convert.exe:

CONVERT volume /FS:NTFS [/V]

The volume argument specifies the drive letter followed by a colon. It could be also a mount point, or a volume name. The /FS:NTFS option specifies that the volume must be converted to NTFS. Finally, use /V if you want the utility to run in verbose mode. When you run convert.exe it does some initialization and then asks you to reboot. The conversion will take place immediately upon next startup.

In addition to all the features listed above, a remarkable aspect of the Windows 2000 overall folder management is the full and somewhat extended support it provides for the desktop.ini files. In the remainder of this article, I'll focus primarily on streams and hard links. Table 1, however, summarizes the most important points pertaining to the other key NTFS 2000 features.

Table 1. Key Features of NTFS 2000

Under an NTFS file system each file can have multiple streams of data. It's worth pointing out that streams are not a feature of NTFS 2000, but they have been in existence since Windows NT 3.1. When you read the content of a file under a non-NTFS volume (say, a disk partition of a Windows 98 machine) you're able to access only one stream of data.
Consequently, you perceive it as the real and "unique" content for that file. Such a main stream has no name and is the only one that non-NTFS file systems can handle. But when you create a file on an NTFS volume, things might be different. Look at Figure 1 to understand the big picture. Figure 1. The structure of a multi-stream file A multi-stream file is a sort of collection of single-stream files all embedded in the same file system entry. They definitely look like a unique and atomic unit, yet comprise a number of independent sub-units you can create, delete, and modify separately. There are a number of common programming scenarios where streams are more than handy. However, if you plan to use them, bear in mind that as soon as you copy a multi-stream file to a non-NTFS storage device (such as a CD, a floppy, or a non-NTFS disk partition), all extra streams will be unrecoverable if lost. Unfortunately, such a compatibility issue makes streams much less attractive in practice. For server-side applications designed and destined to run only on NTFS volumes, streams are an excellent tool to leverage to build great and creative solutions. When you copy a multi-stream file on non-NTFS volumes, only the main stream is copied. This means you lose your extra data, because they can't come up again even if you copy the file back to an NTFS disk. For now, let's assume you're working exclusively on NTFS machines and let's see how to create named streams. In Code Sample 1 you can see a Windows Script Host (WSH), Microsoft Visual Basic® Scripting Edition (VBScript) file that demonstrates how to read and write streams from an NTFS file. To identify a named stream within a file you should follow a particular naming convention and add at the end of the file name a colon followed by the stream name. 
For example, to access a stream called VersionInfo on a file called test.txt you should use the file name:

Test.txt:VersionInfo

Use it with any Microsoft Win32® API function that manipulates files. To access the content of the VersionInfo stream, pass that name on to CreateFile() and then use ReadFile() and WriteFile() to accomplish reading and writing as usual. If you want to check whether a certain stream exists within a file, compose the file stream name as just shown and use CreateFile() to check it for existence:

HANDLE hfile = CreateFile(szFileStreamName, GENERIC_READ, 0, NULL,
                          OPEN_EXISTING, 0, 0);
if (hfile == INVALID_HANDLE_VALUE)           // CreateFile() returns
    MessageBox(hWnd, "Error", NULL, MB_OK);  // INVALID_HANDLE_VALUE, not
else                                         // NULL, when it fails
    CloseHandle(hfile);

To work with streams, you don't necessarily need to be an intrepid C++ programmer. You can take advantage of streams also in Visual Basic and even with script code, as Code Sample 1 shows. The key factor that enables this sort of transparency is that all the low-level Win32 API functions, CreateFile() in particular, support stream-based file names on NTFS partitions. If you try to open a file called Test.txt:VersionInfo on a non-NTFS partition, for example on a Windows 98 machine, you'll get a "file not found" error message. Notice that what matters is only the file system of the volume that contains the file, not the Windows platform or the disk partition type hosting the calling application. In other words, you can successfully access a certain named stream on a shared folder on an NTFS partition also from a connected Windows 98 machine. Furthermore, consider that the colon is not a valid character, even for long file names. So when CreateFile() encounters that in a file name, it knows it has a special meaning. As of Code Sample 1, you can use streams with VBScript too, because the FileSystemObject object model makes intensive use of CreateFile() to open, write, create, and test files.
In the sample code, I'm creating a text file with an empty, 0-length main stream and as many named streams as you want. Try running the demo and create a couple of streams. Let's say you call them VersionInfo and VersionInfoEx. There's nothing in the Windows shell that can lead you to suppose the presence of streams within a certain file. In Figure 2 you can see how the test.txt file looks within Windows Explorer. Figure 2. A file can be 0-length but have named streams. The Size column shows only the size of the main, unnamed stream and not even in the Properties dialog box can you get more information about streams. Although only on NTFS volumes, the Windows 2000 Properties dialog box gives you a chance to associate summary information to all files, including text files. Click the Summary tab and enter, for example, an author name, as shown in Figure 3. Incidentally, such a name can be displayed on a specific Author column due to the shell UI enhancements of Windows 2000. Read about that in the premiere issue of MSDN Magazine at. Figure 3. Associating extra information for a .txt file on an NTFS volume Hey, wait a minute. The summary information is typical data you set for Word or Excel documents but is definitely part of the document itself. How is it possible to associate it with a text file without altering the plain content? Elementary, Watson. The shell does that through streams! Once you apply those changes, try copying the file on another non-NTFS partition. The dialog box shown in Figure 4 will appear. Figure 4. Windows 2000 forewarns about possible stream data loss. It proves that the test.txt file contains a stream with document summary information. The system realizes when you're going to copy a file with extra information to a volume that doesn't support that. On non-NTFS partitions only the main unnamed stream is copied and the rest are discarded. For this reason, stream-based files are hardly to be exchanged if the target is not compliant. 
Is there a way—an API function or two—to enumerate all the streams a certain file has? Yes, there is, but it's not as easy and intuitive as it should be. The Win32 backup API functions (BackupRead, BackupWrite, and so forth) could be used to enumerate the streams within a file. However, they are a bit quirky to use and look more like a workaround than an effective and definitive solution. The idea is that when you want to back up a file or an entire folder, you need to package and store all possible information. For this reason, BackupRead() is your best friend when it comes to trying to enumerate the streams of a file. Let's focus on the function's prototype:

BOOL BackupRead(
    HANDLE hFile,
    LPBYTE lpBuffer,
    DWORD nNumberOfBytesToRead,
    LPDWORD lpNumberOfBytesRead,
    BOOL bAbort,
    BOOL bProcessSecurity,
    LPVOID *lpContext
);

For our purposes, you can ignore here aspects like context and security. The hFile argument must be obtained through a call to CreateFile(), while lpBuffer should point to a WIN32_STREAM_ID data structure:

typedef struct _WIN32_STREAM_ID {
    DWORD dwStreamId;
    DWORD dwStreamAttributes;
    LARGE_INTEGER Size;
    DWORD dwStreamNameSize;
    WCHAR cStreamName[ANYSIZE_ARRAY];
} WIN32_STREAM_ID, *LPWIN32_STREAM_ID;

The first 20 bytes of such a structure represent the header of each stream. The name of the stream begins immediately after the dwStreamNameSize field and the content of the stream follows the name. Because the traditional content of the file can be seen as a stream, although an unnamed stream, to enumerate all the streams, you just need to loop until BackupRead returns False. BackupRead, in fact, is supposed to read all the information associated with a given file or folder:

WIN32_STREAM_ID sid;
ZeroMemory(&sid, sizeof(WIN32_STREAM_ID));
DWORD dwStreamHeaderSize = (LPBYTE)&sid.cStreamName - (LPBYTE)&sid
                         + sid.dwStreamNameSize;
bContinue = BackupRead(hfile, (LPBYTE) &sid, dwStreamHeaderSize,
                       &dwRead, FALSE, FALSE, &lpContext);

The preceding snippet is the crucial code that reads in the header of the stream. If the operation is successful, you can attempt to read the actual name of the stream:

WCHAR wszStreamName[MAX_PATH];
BackupRead(hfile, (LPBYTE) wszStreamName, sid.dwStreamNameSize,
           &dwRead, FALSE, FALSE, &lpContext);

Before moving on to the next stream, first move forward the backup pointer by calling BackupSeek():

BackupSeek(hfile, sid.Size.LowPart, sid.Size.HighPart,
           &dw1, &dw2, &lpContext);

In most cases, you can treat streams as if they were regular files—for example, to delete a stream resort to DeleteFile(). If you want to refresh their content, just use ReadFile() and WriteFile(). There's no official and supported way to move or rename streams. In the final part of the article, I'll utilize this code to build an NTFS 2000-specific Windows shell extension that adds a new property page to all files with stream information. In the meantime, let's take a whirlwind tour of another NTFS feature.

Do you know about shortcuts—those little .lnk files, mostly sprinkled around your desktop, that you use to reference something else? Well, no doubt shortcuts are a useful feature, but they also have a few drawbacks. First off, if you have multiple shortcuts pointing to the same target from different folders, actually you have multiple copies of the same—fortunately, rather small—file. More importantly, the target object of a shortcut may change over time. It might be moved, deleted, or simply renamed. What about your shortcuts? Are they capable of detecting those changes and tracking them down, (auto)updating properly?
Unfortunately, they can't. The main reason for this is that shortcuts are an application-level feature. From the system's point of view, they are just user-defined files that simply require some extra work when you try to open them. Consider that being a shortcut is a privilege you might decide to assign also to other classes of files. Provided it makes sense, you can create your own class of shortcuts with an extension other than .lnk. What does the job is a registry entry named IsShortcut under the class node. Suppose you want to treat .xyz files as shortcuts. Register the file class by creating an .xyz node under HKEY_CLASSES_ROOT and make it point to another node, usually xyzfile. Then add an empty REG_SZ entry named IsShortcut to:

HKEY_CLASSES_ROOT\xyzfile

and you're done. Other operating systems, specifically Posix and OS/2, have a similar feature that acts at the system level. OS/2, in particular, calls them shadows.

A hard link is a file system-level shortcut for a given file. By creating a hard link to an existing file, you duplicate neither the file nor a file-based reference (that is, a shortcut) to it. Instead, you add information to its directory entry at the NTFS level. The physical file remains intact in its original location. Simply put, it now has two or more names that you can use to access the same content! A hard link saves you from maintaining multiple (but needed) copies of the same file, making the system responsible for managing various path names to address a single physical content. This greatly simplifies your work and saves valuable disk space. Furthermore, hard links, as system-level shortcuts, always point to the right target file—no matter if you rename or move it. Because the link is stored at the file system level, all changes apply automatically and transparently. It's worth noting that hard links must be created within the same NTFS volume. You cannot have a hard link on, say, drive C: pointing to a file on drive D:.
If it sounds more familiar, think of a hard link as an alias for a file. You could use any alias to access it and the file gets deleted only when you delete all of its aliases. (Aliases act like a reference count.) Because hard links are aliases, synchronizing the content is simply a non-issue. CreateHardLink() is the API function used to create hard links. Its prototype looks like this:

BOOL CreateHardLink(
    LPCTSTR lpFileName,
    LPCTSTR lpExistingFileName,
    LPSECURITY_ATTRIBUTES lpSecurityAttributes
);

As the companion code for an old MIND article (see "Windows 2000 for Web Developers," MIND, March 1999), I provided a COM object allowing you to create hard links from script code. Code Sample 2 shows a VBScript program that utilizes it to create hard links for a given file.

While it's easy to discover how many hard links a file has, there's no facility to enumerate them all. The API function GetFileInformationByHandle() fills out a BY_HANDLE_FILE_INFORMATION structure, whose nNumberOfLinks field informs you about that. Enumerating all the names of the linked files is a bit more difficult. Basically, you must scan the entire volume and, for each file, keep track of the unique ID it's been assigned. When you run into an existing ID you've found a hard link for that file. The file's unique ID is assigned by the system and is stored in the nFileIndexHigh and nFileIndexLow fields of BY_HANDLE_FILE_INFORMATION.

Streams are particularly useful for adding extra information to files without altering or damaging the original format. Streams, of course, occupy their natural space, but Windows Explorer appears unaware of this. Streams are invisible to Windows Explorer, so free disk space may actually be much lower than it appears. You can add extra (and invisible) information to any file, including text and executables. On the other hand, hard links are a great resource to centralize shared information.
You have just one real repository for information that can be accessed from a variety of different paths. Consider that hard links aren't an entirely new concept for the Windows NT technology. Hard links have been around since the inception of Windows NT, but not until Windows 2000 did Microsoft provide a public function to create them. Each file has at least one link to itself so that GetFileInformationByHandle always returns a number of links greater than zero. You cannot set hard links to a directory, only to files. A practical problem that streams and hard links share is their very limited support from the shell. In an effort to remedy this, I've written a shell extension to provide information about the streams and the hard links of a given file. Figure 5 illustrates its look and feel. Figure 5. The Streams tab shows information about streams and hard links. The source code for the shell extension enumerates the streams using the BackupRead() API function. The content of the selected stream can be deleted with a simple call to DeleteFile(). The Edit Streams button runs the script code in Code Sample 1 by means of which you can add and update streams. Likewise, the Create Hard Link button runs the code in Code Sample 2 to create additional links. The user interface reflects all changes only if you refresh. As a final note, bear in mind that if you delete a hard link (which means deleting a file) the total number of links isn't updated as long as the deleted files remain in the Recycle Bin. In this article, I've just scratched the surface of NTFS 2000, focusing on key features such as streams and hard links. For a wider perspective of what's new in the Windows 2000 file system, I suggest you refer to the article "A File System for the 21st Century: Previewing the Windows NT 5.0 File System," written by Jeff Richter and Luis Cabrera for Microsoft Systems Journal, November 1998. 
Interesting topics, in particular sparse streams and reparse points, have not been addressed here, but if you enjoyed this article, please let us know and we'll soon provide a follow-up.

Code Sample 1:

' CreateStream.vbs
' Demonstrates streams on NTFS volumes
' --------------------------------------------------------
Option Explicit

' Some constants
Const L_NotNTFS = "Sorry, the current volume is not NTFS."
Const L_EnterFile = "Enter a file name"
Const L_TestNTFS = "Test NTFS"
Const L_StdFile = "c:\testntfs\test.txt"
Const L_EnterStream = "Enter a stream name"
Const L_StdStream = "VersionInfo"
Const L_EnterTextStream = "Enter the text of the stream"
Const L_StdContent = "1.0"

' Makes sure the current volume is NTFS
if Not IsNTFS() then
    MsgBox L_NotNTFS
    WScript.Quit
end if

' Query for a file name
dim sFileName
sFileName = InputBox(L_EnterFile, L_TestNTFS, L_StdFile)
if sFileName = "" then WScript.Quit

' Query for the stream to be written
dim sStreamName
sStreamName = InputBox(L_EnterStream, L_TestNTFS, L_StdStream)
if sStreamName = "" then WScript.Quit

' Initializes the FS object model
dim fso, bExist
set fso = CreateObject("Scripting.FileSystemObject")

' Creates the file if it doesn't exist
dim ts
if Not fso.FileExists(sFileName) then
    set ts = fso.CreateTextFile(sFileName)
    ts.Close
end if

' Try to read the current content of the stream
dim sFileStreamName, sStreamText
sFileStreamName = sFileName & ":" & sStreamName
if Not fso.FileExists(sFileStreamName) then
    sStreamText = L_StdContent
else
    set ts = fso.OpenTextFile(sFileStreamName)
    sStreamText = ts.ReadAll()
    ts.Close
end if

' Query for the content of the stream to be written
sStreamText = InputBox(L_EnterTextStream, L_TestNTFS, sStreamText)
if sStreamText = "" then WScript.Quit

' Try to write to the stream
set ts = fso.CreateTextFile(sFileStreamName)
ts.Write sStreamText

' Close the app
set ts = Nothing
set fso = Nothing
WScript.Quit

' ////////////////////////////////////////////////////////
' // Helper functions

' IsNTFS() - Verifies whether the current volume is NTFS
' --------------------------------------------------------
function IsNTFS()
    dim fso, drv
    IsNTFS = False
    set fso = CreateObject("Scripting.FileSystemObject")
    set drv = fso.GetDrive(fso.GetDriveName(WScript.ScriptFullName))
    set fso = Nothing
    if drv.FileSystem = "NTFS" then IsNTFS = True
end function

Code Sample 2:

' Hardlinks.vbs
' Demonstrates hard links on NTFS volumes
' --------------------------------------------------------
Option Explicit

' Some constants
Const L_NoHardLinkCreated = "Unable to create hard link"
Const L_EnterTarget = "Enter the file name to hard-link to"
Const L_HardLinks = "Creating hard link"
Const L_EnterHardLink = "Name of the hard link you want to create"
Const L_CannotCreate = "Make sure that both files are on the same volume and the volume is NTFS"
Const L_NotExist = "Sorry, the file doesn't exist"
Const L_SameName = "Target file and hard link cannot have the same name"

' Determine the existing file to (hard) link to
dim sTargetFile
if WScript.Arguments.Count > 0 then
    sTargetFile = WScript.Arguments(0)
else
    sTargetFile = InputBox(L_EnterTarget, L_HardLinks, "")
    if sTargetFile = "" then WScript.Quit
end if

' Does the file exist?
dim fso
set fso = CreateObject("Scripting.FileSystemObject")
if Not fso.FileExists(sTargetFile) then
    MsgBox L_NotExist
    WScript.Quit
end if

' Main loop
while true
    QueryForHardLink sTargetFile
wend

' Close up
WScript.Quit

' /////////////////////////////////////////////////////////////
' // Helper Functions

' Create the hard link
'------------------------------------------------------------
function QueryForHardLink(sTargetFile)
    ' Extract the hard link name if specified on the command line
    dim sHardLinkName
    if WScript.Arguments.Count > 1 then
        sHardLinkName = WScript.Arguments(1)
    else
        dim buf
        buf = L_EnterHardLink & " for" & vbCrLf & sTargetFile
        sHardLinkName = InputBox(buf, L_HardLinks, sTargetFile)
        if sHardLinkName = "" then WScript.Quit
        if sHardLinkName = sTargetFile then
            MsgBox L_SameName
            exit function
        end if
    end if

    ' Verify that both files are on the same volume and the
    ' volume is NTFS
    if Not CanCreateHardLinks(sTargetFile, sHardLinkName) then
        MsgBox L_CannotCreate
        exit function
    end if

    ' Creates the hard link
    dim oHL
    set oHL = CreateObject("HardLink.Object.1")
    oHL.CreateNewHardLink sHardLinkName, sTargetFile
end function

' Verify that both files are on the same NTFS disk
'------------------------------------------------------------
function CanCreateHardLinks(sTargetFile, sHardLinkName)
    CanCreateHardLinks = false
    dim fso
    set fso = CreateObject("Scripting.FileSystemObject")

    ' same drive?
    dim d1, d2
    d1 = fso.GetDriveName(sTargetFile)
    d2 = fso.GetDriveName(sHardLinkName)
    if d1 <> d2 then exit function

    ' NTFS drive?
    CanCreateHardLinks = IsNTFS(sTargetFile)
end function

' IsNTFS() - Verifies whether the file's volume is NTFS
' --------------------------------------------------------
function IsNTFS(sFileName)
    dim fso, drv
    IsNTFS = False
    set fso = CreateObject("Scripting.FileSystemObject")
    set drv = fso.GetDrive(fso.GetDriveName(sFileName))
    set fso = Nothing
    if drv.FileSystem = "NTFS" then IsNTFS = True
end function
http://msdn.microsoft.com/en-us/library/ms810604.aspx
If you're wondering how to create a Twitter bot with Python and Raspberry Pi, this tutorial is perfect for you. Today, we will make a Raspberry Pi post temperature tweets every hour using IFTTT. Our sample project will get the readings from a DHT temperature and humidity sensor, then tweet them using the Python requests library and IFTTT. We will also schedule the python sketch to run every hour using crontab. Cron is the default task scheduler utility on Linux-based operating systems like the Raspberry Pi OS. You can read more about it here.

Connecting the Components

Connect the DHT sensor as shown below. DHT sensors support common input voltages, so you can use either a 3.3 or 5V VCC supply. Note that DHT sensors come in several variants. The data pin might be on the left-most side or the middle. Be sure to connect the pins correctly.

Setting Up IFTTT

Choose Webhooks as the service. Select Receive a web request. This trigger will give you a URL to which you can send a web request so that it activates and posts the tweet. Next, name your event. The event name should be short and should describe your applet. This way, you won't have a hard time using several applets in a single program. Create the trigger then proceed with the effect.

Click Add beside "That". Then, choose Twitter as the service you want to trigger. As you can see below, there are several actions you can do using the Twitter service. You can post a tweet, post a tweet with an image, update your profile picture, and even update your bio. For our sample project, we'll just post a simple text tweet.

Finally, create the tweet that will be posted on your Twitter feed. You can add elements—variables containing useful data such as the event name, time of occurrence, value containers, etc. After creating the tweet, IFTTT will prompt you to connect your Twitter account. Log in with your credentials and authorize IFTTT to post on your behalf. Then, finish up by reviewing the applet.
Now, we will get the trigger URL you'll want to add to your Python program. Go back to the menu bar and click My Services. Select Webhooks. Go to the webhooks home page and click Documentation on the top-right corner. This will open a page that will show you your personal key. The key makes sure no one else can activate your applet.

Code

Copy the code below to any Python editor or IDE in your Raspberry Pi.

import requests
import Adafruit_DHT

def readDHT(temperature, humidity):
    report = {}
    report["value1"] = temperature
    report["value2"] = humidity
    requests.post("", data=report)

h, t = Adafruit_DHT.read_retry(11, 2)  # DHT11 on pin 2
readDHT(t, h)

The two libraries we will use are requests and Adafruit_DHT. Requests is the go-to library for HTTP handling, while Adafruit_DHT is a library made by the well-known electronics company, Adafruit, for any DHT sensors.

The first part of the code contains a function called readDHT(). This function will take the readings from the sensor, convert them into a dictionary format, then send them to the URL we got from IFTTT. Be sure to replace the event name and personal key depending on your IFTTT applet. Also, you can change the sensor and pin in Adafruit_DHT.read_retry() depending on your available DHT sensor and connection.

The code is actually pretty straightforward. Thanks to the power of both libraries, we can do everything with less than ten lines of code.

Now, to automate the python program with cron:

1. Open crontab.

sudo crontab -e

2. Choose a text editor. In my experience, I was asked for my preferred text editor when I used the command line. I tried it again using the desktop terminal, and surprisingly, it didn't ask me. It just opened the file using nano. Using a different text editor doesn't make a functional difference, so just choose the one you're comfortable with.

3. Next, add the scheduled task.
In order to do that, we must get familiar with the cron schedule syntax. Since we want to run the program named tweet.py every hour, the entry should look like this:

0 * * * * python3 /home/pi/tweet.py

If you want the program to run at minute 30 of every hour, e.g., 12:30, 1:30, or 2:30:

30 * * * * python3 /home/pi/tweet.py

Furthermore, you can use the preset syntax @hourly, which is basically the same as the first example entry.

@hourly python3 /home/pi/tweet.py

4. Now, save and exit.

5. To view your currently scheduled tasks, enter the command below:

crontab -l

Demonstration

Finally, check your Twitter account to verify that the program is working. You can use the code included to tweet any kind of data possible. Just make sure to send the values or strings in a dictionary format with value1, value2, and value3 as keys.

Hi, I tried to view the Adafruit_DHT library you used in this project but I didn't find the correct one which contains the function Adafruit_DHT.read_retry(). May someone provide a link to the source code of this library? Thanks a lot.
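The value1/value2/value3 convention mentioned above is easy to wrap in a helper. A small sketch (the function name is mine, not part of the tutorial):

```python
def build_report(*values):
    """Pack up to three readings into the value1..value3 keys
    that IFTTT webhook triggers expect."""
    if len(values) > 3:
        raise ValueError("IFTTT webhooks accept at most three ingredient values")
    return {"value{}".format(i + 1): v for i, v in enumerate(values)}


# Example: temperature and humidity, as in readDHT() above.
print(build_report(21.5, 60))   # {'value1': 21.5, 'value2': 60}
```

The returned dictionary can be passed directly as the data argument of requests.post(), exactly like the report dictionary in the tutorial's code.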
https://www.circuitbasics.com/how-to-send-a-tweet-with-ifttt-and-raspberry-pi/
26 July 2012 14:01 [Source: ICIS news]

HOUSTON (ICIS)--ExxonMobil's second-quarter chemical earnings rose 9.7% year on year, mainly because of a gain from restructuring the company's business in Japan.

Excluding that gain, the chemical segment's underlying performance weakened because of lower margins and volumes. ExxonMobil's chemical earnings for the three months ended 30 June were $1.449bn (€1.188bn), compared with $1.321bn in the 2011 second quarter.

Overall, ExxonMobil reported a 49% year-on-year increase in second-quarter earnings to $15.91bn, mainly because of a net gain of $7.5bn associated with divestments and the restructuring.
http://www.icis.com/Articles/2012/07/26/9581400/exxonmobil-q2-chem-earnings-rises-9.7-on-revamp-of-japan.html
This article is regarding: What is Casting and Explicit Cast in Java? Last updated on: 20th March 2017.

◕ What is Casting and Explicit Cast in Java?

Let's have an example of a complete program first:

public class balls{
    public static void main(String[] args){
        int numBalls = 0;
        int redBalls = 3;
        int blueBalls = 2;
        numBalls = redBalls + blueBalls;
        System.out.println("Total number of balls is: " + numBalls); // This displays the result on screen.
    }
}

If we run this program then we will get the result as:

Total number of balls is: 5

Now we will change the variable type from int to short in the above program. The program will look like this:

public class balls{
    public static void main(String[] args){
        short numBalls = 0;
        short redBalls = 3;
        short blueBalls = 2;
        numBalls = redBalls + blueBalls;
        System.out.println("Total number of balls is: " + numBalls);
    }
}

If we try to run this program, we will find that it does not compile. Everything else is OK, so why does it fail? The answer is, the compiler is unable to compile the statement:

numBalls = redBalls + blueBalls;

The reason is: We have declared the variables as type short, which is 16 bits long. But in Java, all binary integer operations work only with type int or with type long. Type int is 32 bits long. Type long is 64 bits long. Hence the expression redBalls + blueBalls produces a 32-bit result, and the compiler is unable to store that 32-bit result in numBalls, as it is only 16 bits long. As a result, the program does not compile.

■ Solution: To make the program OK we have to change only one line as:

numBalls = (short)(redBalls + blueBalls);

Please note: We should always avoid explicit casting in our programs unless it is absolutely essential.
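The explicit cast fixes the compile error, but it can also silently truncate the result, which is why casting should be used with care. A runnable sketch of both behaviors (the class name is mine):

```java
public class CastDemo {
    // Adding two shorts produces an int, so storing the sum back
    // in a short needs an explicit (short) cast.
    static short addBalls(short red, short blue) {
        return (short) (red + blue);
    }

    public static void main(String[] args) {
        System.out.println(addBalls((short) 3, (short) 2));   // 5

        // The cast keeps only the low 16 bits, so overflow wraps around:
        short big = (short) (32767 + 1);
        System.out.println(big);                              // -32768
    }
}
```

The second print shows the danger: 32768 does not fit in a short, so the cast wraps it to the most negative short value instead of raising an error.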
http://riyabutu.com/java-articles/casting-and-explicit-casting.php
News Brief by Kip Hansen – 14 October 2020

In an article in the New York Times, "Why 'Biodegradable' Isn't What You Think", the NY Times journalist John Schwartz makes several very good points about new products – new packaging options – being labelled "biodegradable" and "compostable". Primarily, as many of you already know, the labels "biodegradable" and "compostable" are for the most part simply marketing ploys — clever tactics used by marketers to drive more purchases of a specific product. These labels don't really mean that when you dispose of the empty biodegradable or compostable packaging it will magically blend back into the natural environment, broken down into its basic elements. It doesn't mean that the biodegradable paper bottle or the compostable carryout fast-food container will actually biodegrade or compost.

Corn Plastic

One example is the new "corn-based plastic" – PLA, or polylactide. Bottles made of this plastic look similar enough to PET plastic bottles [polyethylene terephthalate] (like those that soft drinks and bottled water come in) to make separation in single-stream recycling impossible. In reality, PLA is just another form of polyester – with a different source material. The two materials, though similar, cannot be mixed in the recycling process, as they contaminate one another.

Schwartz tells us: "The labels on PLA products often describe them as compostable. But that doesn't mean you can just throw the stuff into your backyard compost pile, if you have one. To properly degrade, they have to be sent to commercial compost facilities." PLA bottles would have to be collected at point-of-sale and point-of-use locations, say an amusement park that only sells water and soda in PLA bottles, so they can be sent to a specialized composting service.
Outside such uses this means, of course, that if PLA soda or water bottles come into widespread use they cannot easily be recycled, as they will be mixed and confused with the current PET bottles and may prevent those from being recycled.

Paper

Paper is mostly wood or other plant fibers and is both recyclable and compostable. That is, if it is only paper. Not so the new paper-based packaging, especially paper boxes and bottles intended to hold liquids such as wine and juices. These containers are made of layers of paper and plastics and thus become neither compostable nor recyclable. They are then suitable only to be landfilled or burned in trash-to-power plants.

Fiber

You may have been served part of a meal at a Chipotle restaurant in a bowl made of pressed fibers. "Some fast-casual restaurants use bowls designed and marketed to be compostable. They are made from bagasse, a fiber produced as a byproduct from sugar cane mills," Mr. Schwartz tells us. Luckily, fiber-based bowls can be both recycled and composted – but recycled only if they haven't actually been used and contaminated with food, in which case they must be sent to a composting service or a landfill. In the landfill, they will eventually break down and mix with all the other "stuff" buried there. [This paragraph updated 0800 14 Oct 2020 – kh]

PHA – Bacteria-Created Plastic

PHA, or polyhydroxyalkanoates, are, you guessed it, "polyesters produced in nature by numerous microorganisms, including through bacterial fermentation of sugars or lipids." Their primary virtue is that they are not made from petroleum. Like many promising products, "producing the material economically, however, has been a technical challenge." A company in Singapore has manufactured plastic straws from the material, and an American company hopes to produce bottles from it.

John Schwartz, the NY Times journalist, says this: "PHA, or polyhydroxyalkanoate, has been the next big thing in biodegradability for years.
This bioplastic, which can be produced by bacteria, has promising properties: Research suggests it can break down in conventional landfills. In ocean water, it will degrade within a few years, a fraction of the 450 years that it takes standard plastic."

Hurrah! If they can find a way to make this new plastic affordably, research suggests that items made from it can break down in regular landfills. "In ocean water, it will degrade within a few years, a fraction of the 450 years that it takes standard plastic."

Oops—where did that last bit come from? The "450 years that it takes standard plastic" to break down is a falsehood prominently promoted by NOAA. Now, it is a serious matter to call information from NOAA a falsehood. But this is the case. How do we know? NOAA tells us so on this page, an interview transcript about the Great Pacific Garbage Patch.

DIANNA PARKER:.

TROY KITCH: I noticed that you said garbage patch 'areas.' So the Great Pacific Garbage Patch is only one area in the ocean where marine debris concentrates?

DIANNA PARKER: There.

TROY KITCH: A peppery soup? Could you explain that again?

DIANNA PARKER: Well, imagine tiny, tiny micro plastics just swirling around, mixing in the water column from waves and wind, that's always moving and changing with the currents. These are tiny plastics that you might not even see if you sailed through the middle of the garbage patch, they're so small and mixed throughout the water column.

Here's a photo of the Great Pacific Garbage Patch from The Ocean Cleanup project:

The Guardian claims that "480bn plastic drinking bottles were sold in 2016… most plastic bottles produced end up in landfill or in the ocean." See all those floating millions of plastic bottles? No? Why not?
Even when one goes to the areas of the oceans that concentrate floating pelagic plastic, the great oceanic gyres, and searches diligently, with nets and sub-millimetric sieves, the millions of plastic bottles, claimed to take 450 years to break down, are not found. They don't even find very many bits of plastic large enough to pick up with your fingers. Here's what they found sieving the Great Garbage Patches of the world's oceans:

Almost everything found was between 4 and 0.75 millimeters. Almost nothing over 10 mm – about 0.4 inches. And under 0.075 mm (which is very small), the number of "bits" drops off rapidly, approaching zero.

If PET soda and water bottles really took 450 years to break down, at least some of the plastic bottles that have escaped into the wild and been washed into the ocean ought to be still floating around out there. Why are they not found? Firstly, PET has a specific gravity of about 1.3. That means it does not float once it is no longer a hollow container (like, say, a bottle with a bit of liquid in the bottom). When still hollow, they float. They are not found floating in the ocean because they rapidly break down from the effects of the sun and the waves, break apart, break apart again, and again, and then the bits sink. As the NOAA spokesperson, Dianna Parker, explains: "Plastics never really go away. They just break down over and over and over again until they become smaller and smaller from sunlight and other environmental factors [like] waves, big storms, those kind of things."

NOAA says they break down into smaller and smaller pieces. That part is true, but the idea that they "never really go away" is false. What happens to those little bits? Microbes eat those little pieces.

# # # # #

That Said: Kindergarten rules apply at all stages and areas of life: Pick up after yourself — clean up your own messes: Put your trash in the trash bin.

# # # # #

Author's Comment: The whole recycling movement in the USA is a terrible mess.
Almost everything ends up in the landfill. New plastics won't solve the perceived problem. Landfilled plastics are not really a problem anyway, certainly not any more of a problem than landfilled glass. Not a problem, it is just a waste. NOAA's continuing anti-plastic campaign – some details of which are gross misinformation — is a shame.

# # # # #

125 thoughts on "Where Do the Plastic Bottles Go?"

Has no one appreciated the fact that the geological processes of our planet have been recycling everything for the last 4.6 billion years? Now some of the rates may be slower than you like, and the occasional comet dump makes a really nasty mess (sorry, dinosaurs), but somehow mother earth seems to cope. Just give her a helping hand and high-temperature burn as much trash as you can. Got to somehow get those CO2 levels back to Cretaceous normal for the plants' sake. Could get some electrical power out of it, too.

Yep. It's not like plastics are some giant mystery to the earth's natural recycling processes. It's been breaking down the components of plastics for billions of years. There is nothing magical about plastic. It will get broken down and repurposed just like everything else on the face of the earth.

Plastics are made out of hydrocarbons; all life on earth is carbon-based, and that carbon is in the form of hydrocarbons. Plastic is what life on earth is made of. Only a fool would not believe it would end up being food for something.

Plastic bottle recycling is at a very high level in Germany, aided by a deposit scheme… 'Almost ninety-nine percent of mandatory PET deposit bottles are collected for recycling in Germany according to the latest study, Aufkommen und Wiederverwertung von PET-Getränkeverpackungen in Deutschland (PET beverage packaging volume and recycling in Germany) published in 2016 by the German Society for Packaging Market Research (GVM); 93.5% of disposable and reusable bottles collected are recycled – and up to 98% for disposable deposit bottles.
“The disposal bottle deposit in Germany has secured these high quotas,” according to Schmidt. This has proven to be a successful strategy in the fifteen years after its introduction. Recycling takes priority with PET – 34% of the recycled material is processed into new PET bottles according to the GVM study. Other users include the film industry (27%), textile fibre manufacturers (23%) and other applications such as tape and cleaning-agent container production (16%). Eighty percent is recycled within Germany, and the rest is mostly exported to destinations near Germany's borders. PET material exports to China have seen a steady decrease, so restrictions on plastic waste exports from Germany to China only apply to a limited extent in the German PET industry.'

(Where is your source for this?)

SUNMOD: Did you have a point? To your life, not just this post.

Call me skeptical, Griff…. China banned the importation of most waste and recycling material from any country in 2017. So now go back to where you got all that quoted junk and tell us if it is still valid, aka look at the date on whatever Guardian article you took it from 🙂

"griff October 14, 2020 at 2:29 am …aided by a deposit scheme…"

You go find out how much that costs taxpayers. You will be surprised. It's also slow and irritating if it's anything like Austria's. You have to put each bottle into a machine, and it takes several seconds, and the bar code has to be intact. For someone who drinks as much beer and bottled water as me (no mains water), it'd take an hour each week at least.

In Oz we now have the same system, but you deposit at a central location where I am. It's fast, but even so I don't bother with the beer bottles any more because they are too heavy to carry to the station from the car park. I also don't want to waste water washing them when I pay 3c per litre in the dry season. Water bottles I do once a month or so because I'm stubborn and hate to see the government make more money out of me.
I was putting it all in the recycling bin before that anyway, but they decided to punish me regardless. They just do it to show they can. Later they will do it for something important and you'll give in because you've been conditioned.

We have a deposit-based scheme for ready-to-drink containers here in Alberta. Yes, most of it ends up at recycling collection centers (we call them Bottle Depots), but how much ends up actually being recycled, and how much does it cost taxpayers?

I haven't kept any links, but I've read that plastic recycling only works through one, maybe two, cycles before the basic ingredients are too deteriorated to be useful again. I don't know just what that means, but the reports state that it is a major impediment to greater recycling.

As for "fiber based bowls can be both recycled and composted – but only if they haven't actually been used and contaminated with food": that makes no sense for composting. Anything that can be used by an animal body – food – can be used by the fungi and bacteria in a compost pile.

I've noticed that snails and slugs will eat paper and cardboard. Not sure of its nutritional value for them.

Many, many years ago I temporarily 'adopted' a red joey in outback Western Australia: its favorite snack was tissue paper.

I think it is. My dad had the great idea to build a wine cellar with breeze blocks (cinder blocks that have hollows). He put the blocks sideways for the walls and just put the wine bottles in the hollows. Unfortunately, the snails ate every single label. We had 'pot luck' wine whenever we visited!

Andy ==> Thank you — I seem to have mistyped that paragraph — and have fixed it now. Fiber bowls can be recycled only if they have not been used. They can be composted — but the restaurant would have to collect them onsite and send them en masse to a commercial composting service.

Never trust the Chinese Communists…never.

What is your point, griff?
Is it that the Chinese stopped dumping PET bottles into the ocean due to Germany no longer paying them to haul it away? And that's why nothing can be found? Of course Germany was the only source, and they must have gotten that recycling system up and running 450 years ago, around 1570, since all the bottles have degraded by now? Are your bosses paying you by the word?

Australia also has recycling depots near many supermarkets. SA's plastic recycling firm shut down because it couldn't afford the electricity in South Australia. Only about 9.4% of the plastic is actually recycled, and the local plastics recycling sector is now smaller than it was in 2005. One thing we can agree on, we should be re-using much more!

Fred250; single-use plastic bags (every government hates these) are about the most re-used product ever invented. Cheers.

So true! And the most collected. 🙂

I have never had so much plastic around the house than since they got rid of the single-use bags. I kept the single-use bags and used them for multiple uses. Now I have lots and lots of multiple-use bags, which weigh much more and take more space. I use them for similar things and for about the same number of uses. I think the multiple-use bags must be 4 or 5 times the weight of plastic at least.

Griff, plastic recycling in Germany has two components: cleaning and separation at source (homes) and, after collection, the various destinations of the waste streams. Here in Waterloo we have a very good system of separation at source and collection, sometimes using different (diesel-powered) vehicles. The waste streams at the dump site are: garden compostables to make top dressing (which is professionally done and free for collection at the dumpsite); glass; concrete, ceramic wash basins (etc.) and bricks; wood; electronics including TV screens and power supplies etc.; metal; cardboard; drywall; household waste; and finally, "construction".
Hazardous items are collected at a separate building: liquid paint, oil, things containing mercury etc. In Germany there are rules for cleaning the plastic before it is put out for collection, so people do it. Looking into what happens after that, an article I read maybe three years ago said about 85% of it is burned in incinerators, often all of it. This re-direction from "recycling" to "burning" is hidden from the public that does all the work of cleaning and separating, because it is a PR exercise more than a recycling exercise.

GWPF addressed the back-slapping claims made about European 'recycling' in this paper: the short version is that waste is recorded as being 'recycled' once it arrives at a collection depot. After that, since the waste is of such low value, it is cheaper to ship it off to be dumped, and in turn, it is cheaper for ship owners to dump much of it over the side. Even if the waste arrives in an Asian country for 'recycling', much of it leaks into the environment. The best treatment is to incinerate in waste-to-energy/heat plants, as is common in Sweden, but, typically of ineptocracies, the EU is hostile to the idea that works most effectively (in this case on gullible warming pretenses), so the myth of a circular economy keeps being encouraged.

Clear plastic PET bottles cannot be recycled into a new clear plastic PET bottle, as the PET can be water-clear only once, so your 34% claim is dubious.

Back that up for us, Griff, your source is someone who knows nothing about plastics…

Sorry Michael; as a Chemical Engineer who works for one of the largest producers of polyester plastics, I can tell you PET can be recycled multiple times. Heat it back above its crystalline melting point and cool it rapidly below the temperature of its peak rate of crystallization, or strain-crystallize it, and it will be clear.

I love this place.
Yeah, and this dubious video is deceptive too:

The laws in the US of A require that recycled plastics not be used for food containers, so using PET for bottles again is a non-starter. This is the same problem that recycled paper has – there is no market. In the old days, glass bottles had a deposit and most were collected and turned in. Due to sanitation concerns and the cost of cleaning the bottles for re-use, this is no longer economically viable. If recycling paid, it would pay. Recycling metals pays, since I get paid for scrap steel, aluminum, and copper. If I have to pay to recycle, the recycling process actually consumes more energy and resources than just making the product in the first place. That said, I am perfectly happy to recycle toxic materials like my old ni-cad batteries, paint, fluorescent light bulbs, etc., and willing to pay to have these properly and safely disposed of. That is part of the cost of having these modern miracles. However, pushing recycling when it actually harms the environment more to do so is silly.

You know, I'm old enough to remember glass bottles of milk …and of them being delivered to the house… and I STILL believe milk tastes better from a glass jug than from plastic or cartons. I'd love to see a re-assessment of the "economics" of reusing glass bottles vs creating and using (and disposing of) plastic and "paper" cartons for so much stuff. I can't imagine that with modern technology, maintaining proper sanitation would be so very uneconomical. Systems are still available for purchase (perhaps not of the correct scale?), and services are available as well.

We still use glass containers for our milk, bought in the local dairy and returned for cleaning and reuse. I agree on the taste from glass, and we are lucky to be able to buy raw milk, which tastes better still.

Agreed on all. In the UK farmers can sell raw milk onsite, but not advertise it. We used to get gold top raw most of the time. Mmmmmmm!

Sorry, Loren C.
Wilson, you are wrong. The US FDA has issued 235 No Objection Letters allowing use of specific post-consumer plastics in food packaging. The FDA specifies for which food group and filling condition the Letters are issued. A number of the letters include all food groups and the filling conditions for which the virgin plastic is allowed. 177 of the letters apply to PET. Today we have some bottled water in 100% post-consumer recycled-content bottles, Ice River is one, and some with lesser amounts, such as Nestle products. There is a robust market for post-consumer PET bottles and HDPE bottles. Used PET is used again to make new bottles, for textile fibers, for film and sheet, and for strapping. Used HDPE from bottles is used for pipe and to make new bottles. There is definitely a market. The Materials Recovery Facilities, MRFs, have operational costs of about $90 to about $120/ton. PET bottle bales and HDPE bottle bales sell for more. Mixed paper, cardboard, and glass sell for less than the cost and make up the majority of the tons processed. Numerous life-cycle studies have shown less energy is required to make new packaging from recycled plastic than to make new packaging from virgin plastic precursors. Recycling plastic saves energy. Recycling, properly done, conserves the environment and does not harm the environment.

griff ==> when quoting a source, it is customary to include a link so that others can refer to the original source material. Thanks

Griff's source is Petnology, the people who promote the stuff. Oh dear!

The various aspects of plastic chemistry aside, who is throwing garbage away/down/out? There is a cultural aspect to all of this pollution that suggests the problem of plastic debris in the oceans is only another manifestation of garbage all over the place. In my fairly extensive travels I've been around poor people who never contaminated, and around rich people that threw whatever out the window of their car.
How to change the cultural aspect of littering? No idea. I have no doubt it is a big problem, however. Along come marketing geniuses who add "biodegradable" or "re-cyclable" to a product, and the culture doesn't change, but some persons buy it as a virtue-signal. Problem not solved.

6 or 7 large rivers in Asia and Africa provide most of the oceanic plastic.

"In my fairly extensive travels I've been around poor people who never contaminated, and around rich people that threw whatever out the window of their car." The opposite is truer in my experience.

They are mum about the source of the plastics in the ocean patches. Starbucks ridiculously withdrew use of plastic straws over this bs, although it's pretty safe to say none of their straws made the journey. It virtually all comes from poorly placed garbage, principally in countries like the Philippines. It is washed down streams and into the ocean by typhoons and other heavy rain storms. I suspect that it is even part of a plan. All countries seem to be short of space for landfill, and this solves a costly problem for island nations particularly.

Gary P ==> Island nations, particularly small ones, are short of landfill space. Other countries, not so much. USA — you could hide all the trash for the whole world for a century and it would not be noticeable. In my one county in Upstate New York, there are enough unused old quarries to take 50 years of NY City trash. The landfill problem usually isn't space, it is that nambies "don't like them". Human cities have required middens for all of human history — the modern version is the landfill. In the 1950s, each rural farm had its own "tip"…. now they have collective tips called landfills.

Everything, all material and all energy, is renewable and recycled; some on short human timescales, some on longer geological time scales, and some on even longer time scales.

I see rank culturism above.
I moved from filthy garbage dump roadsides to a community culture where roadside trash is a rarity, and we regularly patrol our property for tourist trash. A mono-culture community with a moat and ferryman's tariff.

Microbial decomposition in landfill is anaerobic. Methane gas is one of the main by-products and, if the alarmists are to be believed, methane is 21 to 25 times more potent than carbon dioxide as a greenhouse gas. Non-biodegradable items are more environmentally friendly from a GHG perspective.

Except that in the U.S., landfill gas is captured and either flared or, more and more commonly, considered a valuable resource and either cleaned up and piped out as natural gas or burned in an electric generating unit. A number of years ago, there was a flurry of competition for contracts for landfill gas. At the time, one investor in environmental technologies told me it was one of the very few "green" investments that actually made a profit. The anti-landfill enviros conveniently forget that just about anything organic eventually biologically degrades in a landfill to either harmless material or usable gas, especially if the landfill is designed properly.

Pflashgordon, the highest topography in south Florida is heaped-up "landfills", with plastic coverings and methane collection pipes coming out of them. The methane is utilized to power county vehicles. This is fairly close to quite reasonable conduct.

Sarasota County and Waste Management do this.

This thing about recycling is that all those plastic bottles must be cleaned and the cap put back on so they don't get contaminated. Otherwise, they get discarded.

The real solution is to make large 1500-foot mini-mountains and then make them into water-slide parks. Trash gone, income for the city/town, fun for the people.
As a result, waste in landfills really didn’t decompose much; the only water available was what came with the waste, or what minimal liquid could be added to achieve a high degree of waste compaction. Some of that supported initial bacterial growth and degradation of the waste, and some went out as leachate to be hauled off and processed. After a couple of years, at most, gas production stalled. Studies were made of landfill waste decomposition and researchers routinely found newspapers from decades past that were still legible, as well as food and other wastes that had not decomposed. It’s only been in the past decade or so that EPA and its state affiliates have allowed efforts to actively produce gas, which means a need to add water to support bacterial action. I see it as a great improvement in landfill management, after decades of locking up billions of yards/tons of garbage in the ground. Methane is not a problem. Bacteria eat it out of the atmosphere within 2 years. CO2 is there to stay for 300 years. CO2 is not a problem. CO2 is plant food and we need more of it not less. Is that bad Alex? Alex, That’s completely wrong. CO2 does NOT stay in the atmosphere for 300 years. The atmosphere contains about 730 Gigatons of C (carbon, not carbon dioxide). Every year about 230 Gigatons of C are released to the atmosphere from both natural and anthropogenic sources but the atmospheric load of C rises by only ~3 GT. Why? Why isn’t it going up at 230 GT per year? It’s because C is absorbed by the oceans and plant life at almost the rate that it’s emitted. Of that 230 GT, only 6 to 7 GT is emitted by fossil fuel burning and other anthropogenic sources, the rest is emitted naturally. Anthropogenic C is now being absorbed by the oceans and plant life at 1/2 the rate that it’s emitted – about 3 to 4 GT of the human emissions per year is being absorbed. 
The atmosphere is far from natural equilibrium at the current 410+ ppm atmospheric CO2 concentration, so the absorption rate is very high. If we stopped emitting C, the oceans and plant life would continue to reduce the C mass in the atmosphere by that same 3 to 4 GT/year. Every single year of no C production would undo ~2 years of emissions. You wouldn't have to wait 300 years for the CO2 concentration to drop; you would see a dramatic decrease in atmospheric concentration in only 30 years – enough of a decrease to get about ½ way back to pre-industrial levels.

You're off by a factor of 10 in the 1/2 life of CO2.

Could someone please explain the way we can trace "anthropogenic CO2", that is, what makes it so different that we know it's absorbed at what seems to be a different rate than naturally occurring C/CO2?

As far as I know, certain anthropogenic sources of CO2 have a different isotope than "natural" CO2.

Unfortunately, it is not that simple. CO2 does not have a single "lifetime" in the atmosphere, but many. Some 30% of CO2 exchanges within 4 years (single-molecule residence time); some 50% has a lifetime of 300 years; the other 20% has lifetimes measured in kiloyears. The reason is, the CO2 is described by a higher-order differential equation and everything depends on the initial conditions.

Total crap bafflegab. "CO2 is described by a higher order differential equation". Wow, must be calculus. You must be a genius or something.

Apparently the plastic garbage patch in the Pacific is the size of Texas. Can be seen from space.

Funny, yet no-one has seen that patch from space.

Patrick ==> Or even while out at sea.

WW1+2 have sent countless souls, vessels, aircraft and much else, ranging from plain toxic to extremely toxic, to the bottom of the seas. And? Nothing. A big fat nothing catastrophically bad has ensued. Well, some bad stuff did nearly happen.
Near the proposed bridge linking Scotland to N. Ireland it was revealed that a major WWII munitions dump in the sea would be a bad idea to build upon: Beaufort's Dyke.

Those Guardian WWII bombs, regularly dug up on building sites in Germany, are not from the deep sea though. Not sure if the Royal Navy has dared sample the Beaufort…

bonbon ==> Thanks, interesting. But I doubt that the munitions were in plastic bottles.

Bonbon, now I understand the reports of giant mutant green octopuses crawling ashore at night to feed on buried unexploded ordnance. Maybe the Guardian should also investigate?

Here in the Philippines the purpose of typhoons is to wash the plastic into the ocean.

Sky King ==> In the poorer parts of the Caribbean as well. Similar thing happens in CA when the rainy season begins.

I once made an 'industrial size' wild-bird feeder using a large clear PET bottle. It lasted less than 3 weeks, totally crumbled under the intense sun one gets near Hadrian's Wall in Cumbria UK ;-D

As a working peasant I priced some (recycled) plastic fence rails (as per decking lumber). A single piece, 10 ft long and 1″ by 4″, cost £35 (thirty-five). Its wood equivalent (12 ft long) was priced at £3 (three).

I (only) once tried some fence stakes made of recycled plastic. They are *THE* most hideously dangerous things ever devised – they 'twang' when you attempt to hammer them in and, if using a mechanical tractor-mounted post-knocker, are perfectly capable of breaking fingers, arms, legs, faces and bodies. Yet what are they made of? Will they simply fall apart as fast as my attempted bird-feeder did?

I used to collect rainwater for my cows, they loved it. I used 2nd-hand plastic kerosene storage tanks (as per oil-fired home heating). I once got a 3500-litre tank off a heating engineer. He was gainfully employed, via Government edict, replacing oil-fired systems with 'more efficient' gas-fired heating systems – either piped gas or propane.
He had a mountain of old plastic tanks when I met him. He was cutting them up with a hand-held angle-grinder and putting the bits into a skip (next stop = landfill). (Wearing no PPE at all. Sigh.) I bought a tank off him for £25, about what the skip cost him = £180 with capacity for 7 or 8 sliced-up tanks. We are governed by clowns – viz – the orrible scary ones, NOT the haha funny ones.

"I used to collect rainwater for my cows, they loved it" The soft rainwater or the chewy plastic?

Peta ==> And you are far too north to get any real tropical sun…. plastic bottles exposed to the sun soon break apart. PVC fittings also.

Saw one thrown away in the AZ desert; it was half gone, and by now I assume it is all gone, since that was ten years back. Plastic bags in my shed that have never seen the sun turn to powder just in the heat; the same goes for plastic buckets, they just shatter after a time in the heat. Rechargeable batteries for power tools don't last a summer in the heat.

Late reply, and not being in England I cannot tell you specifically what the lifespan of said "plastic lumber" is there. I can tell you the life expectancy in Canada, though, based on our products that are roughly similar. That lifespan is about 20-25 years, or roughly the same as pressure-treated lumber. The latter actually being truly biodegradable in a relatively short time span.

"plastic bottle recycling" We do that here too; kids scour the place for bottles too, as it's free money for them. But let's be honest, those bottles are dirty and it takes more energy to clean them and recycle them instead of just making new ones. Also, they cannot use recycled bottles to make new bottles even if they wanted to; the plastic is degraded, and they use the recycled plastic to make lower-grade plastic products which are NOT recycled, as you get only 1 recycle for plastic.

Some plastic furniture, garden edging etc. is made from recycled plastic. There are uses, probably not enough though.
Mark, published Life Cycle Assessment reports teach that "Using recycled plastic reduced total energy consumption by 79% for PET, 88% for HDPE, and 88% for PP". Sorry, you are wrong. It does not take more energy to clean used bottles for recycling than to 'just make new ones'. As for "cannot use recycled bottles to make new bottles", the 2018 United States National Postconsumer Plastic Bottle Recycling Report tells us that 37.4% of recycled HDPE bottles went into the next bottle. About 1/3 of the recycled PET was used to make new bottles, both food-grade and non-food grade. And those PET and HDPE bottles with recycled content are recycled again and into new bottles.

What is not biodegradable? I spent a number of years sailing around New Guinea & the Solomon Islands among others. There were some huge bases in these islands during WWII. As an ex Oz navy flyer I was interested in exploring some of these. One was Green Island, an atoll between Bougainville & New Ireland, a major base confining the Japanese at Rabaul after it had been bypassed. At its peak there were 19,999 servicemen on this small atoll, servicing a fighter & 2 bomber air strips, a huge Catalina sea plane base & a Black Cats base, & a base with up to 6 squadrons of patrol boats. At its peak it required 10,000 gallons of high-octane petrol a day to supply the planes & boats. In 1976 I could find no part of the fuel tank farm anywhere, no sign of the patrol boat facility, & only some cement hard stand of the sea plane base, with 60 ft high trees growing through it. The only remains of the 3 air strips was a 2000 ft part of the fighter strip, now used as the airport for the light planes servicing the area. I visited many such places, & so little remains just 30 years after the war. It brought home to me that it is man's constructions that are vulnerable & fragile. The last thing you can call nature is fragile. It will quickly consume anything not protected from it.
You know, I never found a garbage patch or dump in any of these places.

Yes, any homeowner can testify to the fragility of man's constructions…

Biodegradable plastic is a VERY good thing. Unfortunately, expensive.

I do like the German way. Any plastic bottle costs you 25 cents. You get these 25 cents back when you (or anybody who finds the bottle) bring it to a (really, any!) recycling point. After this simple rule was implemented, you do not find a single bottle thrown away!

In the US, before plastic, soda, beer et al. came in glass bottles that included a small deposit which was refunded when the bottle was returned to the retailer, and most bottles were returned, cleaned and refilled. These were eventually replaced with "no deposit, no return" bottles, which is the standard today. A couple of decades ago, some states required vendors (again) to add a deposit (five or ten cents, from memory) to each bottle sold and refund it when bottles were returned. I believe that the scheme has been discontinued in all states.

Still doing it in Taxifornia.

We've been doing it in Michigan since 1976: 10 cent deposit on cans, glass bottles, and plastic bottles (think soda and beer). Returns are done at the store where you purchased from. Machines laser-scan the UPC labels to determine if it's a legit return (bottles from Ohio won't work). They spit out a receipt to take to the cashier when you are done. Pretty easy system here in Michigan.

Speed: Read your next beer bottle label (or if you don't drink, look at one in the grocery); still lots of bottles labeled for deposits. I'm visiting family in New York State — up in the pretty portions between the Finger Lakes and Lake Ontario, where it's not been totally polluted and corrupted by the yo-yo politicians in Albany (it's just been plundered to support the current and past governors' largess to "downstate"). My label says: CT-HI-IA-MA-ME-NY-VT 5 cents, MI-OR 10 cents. (That's a glass bottle; the plastic one had far fewer states listed.)
And that's a bottle from our neighbors to the north, Canada.

Interesting. Back in the 1976 era I visited Michigan frequently and heard all the bottle-return complaints. My most striking memories are of large trailers full of empties in grocery store parking lots and management complaints that they had to sort all the bottles to get them back to the right vendors. I guess automation driven by UPC labels solves most of that. My father, rather than returning bottles, put them out with the garbage — the garbage men (not part of their job) returned them for the deposit.

EMM, easy money matters. Germany, my department. Non-returned or refused (bought in Aldi, refused by LIDL, p.ex.) bottles are a nice side-gain opportunity. Idem the bags. We got them for free once; now the small, skimpier, non-reusable one costs 5 cents. The roll of 1,000 bags bought for less than 2 bucks will bring in 50 bucks of easy cash.

Can't we make railroad ties from dirty old plastic?

Interesting. I would have thought they would not last long. Time will tell. Can't imagine they'll be less expensive. I wonder how they compare to concrete ties.

I still see it as another solution to an earlier "solution-turned-problem." We're seeing the same thing again with plastic grocery bags: anyone else remember the "paper or plastic" debate? Grocers most likely will just see it as a way to diversify revenues by charging a nickel for this, 10 cents for that, $150 for the "reusable" bag (that's impossible to clean if something leaks in it).

I have a Windex spray bottle labeled "100% Ocean Bound Plastic." I can imagine how many hours were spent working on the wording for that. I guess it means that the plastic that was used to make the bottle would have been dumped into an ocean if not for Windex (or its supplier) rescuing it from the waste stream and converting it into a new bottle.

Speed ==> Marketing ploy, anyone?
Low Density Polyethylene and Polypropylene (.94 and .9 specific gravities respectively) will float. Being aliphatic polymers, they generally have good UV resistance and do not biodegrade. That being said, they would still be caught up in the plastic fishing nets.

Buckeyebob ==> Read my earlier essays on plastics. Floating plastic breaks down at sea — into smaller and smaller bits — and then gets eaten by microbes.

Milk/juice/wine cartons are recyclable! They are put in paper mills that separate paper from the plastic and aluminum. Paper is then recycled as usual. Plastic and aluminum may or may not be further processed.

Every time paper is recycled it ends up with shorter fibers; there's a limit to how much you can recycle paper of any kind. Each time, the recycled fibers are mixed with new fibers. So much for the label "100% recycled paper"; that's a lie.

You are referring to the recovery rate. A maximum of 80% recovery if starting with virgin softwood fiber, lower if it has a hardwood fiber content. The "100% recycled fiber" label implies but does not state that the recycled fiber is post-consumer… often it is rejected mill fiber, i.e. low-grade waste.

No, sorry. The plastic cannot be separated from the fiber sufficiently for the fiber to be reused. Recycling wood fiber (paper) is a very specialized operation and requires relatively clean inputs. When it began, even some inks were enough to contaminate the processes.

Recycler ==> Can you supply a link?

Here you are: Scroll down to "How Does Carton Recycling Work?". It might be just eco propaganda though – I am not an expert.

A good article but deceptive. The true fact was hinted at when it references the poly-al being used for energy, i.e. burned. No paper mill would dare use the separated fiber for paper, as any plastic missed would do massive damage to the rollers on the paper machine. The only use of recycled cartons I have heard of was shredding for animal bedding, where it is mixed with other materials like sawmill shavings.
I also found this video: While it may be somewhat deceptive too, I believe that milk cartons are at least partially recyclable.

Recycler ==> "…many municipal recycling programs do not accept plastic/paper hybrid cartons, including juice containers and ice cream cartons." "Manufacturers of cartons have joined forces in the Carton Council to increase access to carton recycling across the United States." They require special recycling processes… and all that depends on separation.

Something odd happened on the way to Brexit, or rather from it. London commissioned a Polish firm (yes, from an EU country) to produce the new shiny proudly British passports, from polycarbonate. Now all those made-to-last passports will be dumped on the beaches by the remoaners, causing untold post-Brexit environmental damage.

Plastics – derived from fossil fuels. Fossil fuels – natural, organic and biodegradable.

The dominant source of ocean plastic is Asia and Africa. Here in Canada something like 8% of plastics is recycled; the rest goes to the dump. Utter waste of time separating unless we just burn it all. Or transmute it into gold, or Cheerios.

As Kip has noted here and elsewhere, plastic debris smaller than 1 mm is quickly consumed by ocean microbes. Isn't the logical solution to grind the plastic bottles down to 1 mm bits and fertilize the ocean with it?

Paul Johnson ==> If you have plastic bottles in quantity, they are best recycled into plastic building materials, such as planks for outdoor decks, park benches, etc.

Kip – Ideally so, but rivers and harbors in Asia seem to show that's not economically viable. "Grind and release" may not be free, but it's a simple, scalable, permanent solution that doesn't require infrastructure for transport, sorting, processing, marketing and delivery.

Paul Johnson ==> The idea of just grinding it up and putting it into the wild is not sound environmentally or economically.
If you have enough to grind up, melt it into useful items which are usually made of a less economical material, like wood.

Waste to energy (burning garbage) probably makes the best sense, including a coarse preliminary sort gathering up metals and glass, which can be repurposed. Glass is just sand, so recycling that probably doesn't make sense, other than washing and reusing. Just gather it up, crush it and use it for road fill when constructing a new highway. All the metals have a value, so recycling them makes sense. The rest can be burnt, and if hot enough, can be safely combusted right in a city, such as at the Burnaby waste-to-energy plant right in the city limits. Even some of the biomass plants can safely burn creosote railway ties, as when the temperature is high enough, all the various chemicals in creosote are safely combusted. That has been tested at Environmental Appeal Boards, and the science on high-temperature combustion is clear: it is safe. Keep the garbage out of our rivers, ditches and oceans. That is a no-brainer, although bad habits are hard to break.

The guy living 2 doors down from me is an electrician. Every couple of months he stacks old fridges, washing machines etc. out on the kerbside. They are then picked up by the metal recyclers, who strip them down to be added to newly made metal. Great recycling. 🙂

Earthling2 ==> I am a fan of trash-to-energy plants — as long as they have proper high temperatures and stack scrubbers. Pre-sorting is good — but often impractical.

So I was with a group of people attending a week-long function in Hong Kong a couple of weeks back. We had a little time going in and a couple of days at the tail end to do some sightseeing. The group had heard of a Buddhist shrine on the island of Hung Shing Ye. The group took a ferry from Hong Kong to the island, and there was an interesting sight as we approached. From about 1 mile out of the ferry dock the ferry was entering into a floating garbage heap.
By the time we docked at the terminal the ferry was pushing through three to four feet of floating plastic and garbage of every description. The amount of plastic bottles, waste Styrofoam and plastics of every type was mind-boggling. The total disregard for the environment by these people is staggering. So tell me again how banning single-use plastics and drinking straws here in North America is going to stop the great garbage patch in the Pacific Ocean.

Boris ==> The sight you describe is common in harbors in Asia, Africa and the Caribbean. This is street trash swept into the rivers and harbors by storms. Some nations clean up these messes — some let them sink.

New York City used to have barges dump NYC trash out at sea. The barges were supposed to drive out to deep water, tens of miles east. Only the barge captains figured it was the same if they went just out of sight over the horizon, 13 nautical miles, and dumped the trash there in much shallower water, drank beer, putted around for a while, then returned to port. An especially advantageous plan during the Atlantic's frequent foul weather. Then NYC discovered from divers that their trash was creeping back inshore; what wasn't floating back with the tide, that is. All of a sudden, NYC sought and found 'other' places to send their trash: some in landfills and some to trash-burning facilities.

NOAA's "How Long Items Remain in the Environment" picture shows aluminium cans remaining 200 years. In a seawater environment cans would not last five years.

Peter Fraser ==> Anyone who scubas or snorkels knows that NOAA's Marine Debris poster is nonsense. Not only do things erode, corrode, and break down in the sea, anything that is solid(-ish) becomes a home to some sea life. If you want to attract fish in the tropics, put something, anything, in the water — floating or resting on the bottom.
Yes, they break down into small plastic fragments which get eaten/absorbed by marine life (if that marine life hasn't strangled on plastic or filled its gut with little bits of plastic and died).

Since there aren't hordes of dead marine life washing ashore all over the world, or even anywhere, due to plastic gut, I'm guessing this is a non-problem.

griff ==> Marine life mistakenly eating bits of "non-food" material is quite common. Some plastic bits are eaten by fish, shrimp, etc., then excreted (along with the poop). Most, but not all, animals are the basic "tube" plan — and almost all can excrete anything they can fit down their gullets. Owls are an example of the opposite — they "spit back up" "owl pellets" consisting of the bones and some fur and some feathers of their prey. These pellets are useful for biologists who study the diet of these owls. Plastics do not belong in the oceans – they should be properly handled and disposed of.

We haven't had aluminium cans for 200 years.

…"Tell me lies / Tell me sweet little lies"…

I do a lot of gardening, in pots. And I raise orchids. Plastic pots, that is; plastic degrades quickly in sunlight. There is a reason commercial growers use black, brown and dark green pots: the colorant absorbs ultraviolet light and slows the plastic degradation. So instead of the plastic getting brittle within a year or two, it might take five to ten years. Personally, I like clear plastic pots for my orchids so the roots also get light. Many orchids are epiphytes, and exposed roots are normal. Meaning that I have to purchase clear plastic pots with specially formulated UV protection in the plastic, or they only last one year at best. Right now, those clear plastic pots purchased within five years are getting brittle and I have to replace them. I sometimes use clear soda bottles.
Drill (or melt) some holes for drainage, and clear 2L bottles make good plant starters for trees. Shame they don't last long, but they will make a year. The 16 and 20 ounce clear bottles are good for starting tomatoes, peppers, whatever. I did prefer the bottles with plastic support bottoms, but the bottles without that extra plastic bottom work almost as well. Leave the plastic too long in the sun and one can literally crumble the plastic into dust.

I once read about some researchers digging into a landfill to see how well things decomposed. They wrote several paragraphs about a somewhat fresh-looking hotdog they found. The newspaper wrote a whole article mostly about that hotdog.

I'll finish by pointing out that our local landfill installed vents and collectors. The landfill's electrical needs are supplied by using collected landfill methane.

Good article, Kip!

I had a friend who was a green nut. He got all excited about some plant-derived epoxy he purchased and promises of plant-derived plastic. He didn't take it well when I pointed out that all epoxies and plastics are derived from plants and animals. No! No! No! he claimed. Well, he had already purchased the plant-derived epoxy at five times the cost of regular epoxy and had it at hand. I suggested that he smell the epoxy. It smelled like epoxy. I then asked if the epoxy had any health warnings, which it had in plenty. I then told him about the first billiard-ball replacement material for ivory: celluloid, or cellulose nitrate, a compound which happens to be cellulose based, i.e. nitrated wood or cotton fiber. Since he didn't get the point, I pointed out that converting plant products into resins and plastics commercially is not the green enterprise he was fantasizing about. Same types of factories, same types of products, same smells and health dangers.

ATheoK ==> Ah… there you go again… using reality to put a wet blanket over fantasy! Some people LIKE fantasy better.
These guys in Namibia post videos every day of the seal rescues they do. They have cut fishing lines off of 500 seals so far. Not much weight in the plastic lines that injure the seals; it's their shape that is the downfall of the seal. They swim through the loops and get caught.

This article attracted an ad pitch for a reusable replacement for a cotton swab. I can't wait for my physician to switch and fight the scourge of disposable swabs. Yuck! It's bad enough wondering how "clean" the tools are at the dentist's office or the barber's. If the doctor comes at you with a used swab, why bother with hand washing or gloves?

Re: Compostable plastics. In the UK there are quite a number of plastic items that claim to be compostable at home. Many say they are made from potato starch. Examples include: bags for loose fruit & veg at the supermarket, clear windows in cardboard food boxes, bags for loose tea, mailing wrappers for magazines. My understanding is that the manufacturers are supposed to have carried out tests to make sure that these will compost down within 12 months in a home setting, as opposed to a commercial setting, which will run at higher temperatures. University College (part of London University) is carrying out a citizen science experiment to see if these items really do compost in a home setting within 12 months. More details here:

Epilogue: Lots of Plastic Panic out there. There are some valid concerns, but nothing that adds up to the need to panic. The real problem is the failure of nations and localities to pick up and properly process the trash. There's a new paper that seems to say that diatoms and other microscopic animal and plant life are living on those tiny tiny pieces of plastic and causing it to sink… I'll try to figure out what they are on about and write it up for you all to read here.

# # # # #
https://wattsupwiththat.com/2020/10/14/where-do-the-plastic-bottles-go/
It appears that combining this command and the unit test runner, we can indeed do code coverage (Andrew also told me about a way to map "back" routine lines to original lines in the source code). So, according to the information above and this page, a sequence of possible commands would be (provided we are in the correct namespace):

- set ^UnitTestRoot = "/path/to/test/root"
- zbreak /TRACE:ALL:someOtherFile
- do ##class(UnitTest.Manager).RunTest("someDirUnderRoot")
- [and the commands to disable tracing]

The problem though is that the trace format refers to .int code... However, Andrew told me that it was possible to "map back" to the "real" COS code. The question is, how?

I didn't know about ^SYS.MONLBL. As to why unit testing: it's because unit testing is used along with code coverage. You run unit tests, and you check which parts of your production code are taken/not taken. Timings are indeed irrelevant. The plan is to run unit tests with coverage activated, scan both the test results and the monitoring results, and collect the results.
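The mechanism described above (run the tests under a line tracer, record which lines fire, then compare the hits against the source) can be prototyped in any language with a tracing hook. Here is a minimal sketch in Python; the helper names are mine and purely illustrative, not part of any InterSystems or %UnitTest API:

```python
import sys

def collect_coverage(func):
    """Run func under a line tracer; return {(filename, lineno)} of executed lines."""
    hits = set()

    def tracer(frame, event, arg):
        if event == "line":
            hits.add((frame.f_code.co_filename, frame.f_lineno))
        return tracer

    sys.settrace(tracer)         # trace every frame created from here on
    try:
        func()
    finally:
        sys.settrace(None)       # the "disable tracing" step
    return hits

# A stand-in for "production code" with one branch the test never takes.
def production_code(flag):
    if flag:
        return "taken"
    return "not taken"

covered = collect_coverage(lambda: production_code(True))
target = production_code.__code__.co_filename
hit_lines = sorted(ln for fname, ln in covered if fname == target)
print(hit_lines)
```

The untaken `return "not taken"` line never appears in the hit set, which is exactly the taken/not-taken report wanted here. The same bookkeeping issue remains with ^SYS.MONLBL output: each hit refers to a line of generated .int code, so a line-offset map back to the original source is still needed.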
https://community.intersystems.com/post/followup-unconference-session-about-unit-test-and-code-coverage-it-may-be-doable-after-all-i
Sorry I cannot help any further; I've never used embedding myself. I hope someone else will step in.

For the archives: I believe you're just getting lucky if Py_Finalize() doesn't crash the process.

----- Original Message ----
From: Igor Karpov <ikarpov at gmail.com>
To: Ralf W. Grosse-Kunstleve <rwgk at yahoo.com>
Cc: Development of Python/C++ integration <cplusplus-sig at python.org>
Sent: Thursday, March 5, 2009 1:46:26 PM
Subject: Re: [C++-sig] boost::python embedding error - runtime - Mac OS X - 1.38

Well, the version with Py_Finalize() runs on the same machine with 1.34.1, so I don't think that's the problem. However, paring down the code to:

#include <Python.h>
#include <boost/python.hpp>

using namespace boost::python;

int main(int argc, char** argv)
{
    {
        // Using boost::python
        Py_Initialize();
        object main_module = import("__main__");
        object main_namespace = main_module.attr("__dict__");
        try {
            object ignored = exec("print 'Hello World, from boost::python!'\n",
                                  main_namespace);
        } catch (error_already_set const& e) {
            PyErr_Print();
            return 1;
        }
    }
    return 0;
}

will re-create the same Bus error at runtime...

--Igor.

On Thu, Mar 5, 2009 at 3:26 PM, Ralf W. Grosse-Kunstleve <rwgk at yahoo.com> wrote:
>
>> Py_Finalize();
>
> Boost.Python doesn't support Py_Finalize(). Could you try again without?
> It is a long-standing known issue, e.g.
>
https://mail.python.org/pipermail/cplusplus-sig/2009-March/014305.html
On Mon, Apr 20, 2009 at 8:45 PM, Ville Skyttä <ville skytta iki fi> wrote:
> What rpmdevtools does for *Emacs users is that it makes opening a new
> $foo.spec automatically use the corresponding rpmdevtools spec template for
> $foo as emitted by rpmdev-newspec, and adjusts a few more or less cosmetic
> variables. If there's a way to do something similar with vim (which currently
> appears to be using always the same template shipped in vim-common regardless
> of $foo) or other editors, patches are welcome in Bugzilla or
> fedorahosted.org/rpmdevtools

Would probably want to be a patch against vim rather than rpmdevtools since
that's where the rest of the spec stuff is at the minute. Something like this
maybe?

Index: vimrc
===================================================================
RCS file: /cvs/pkgs/rpms/vim/devel/vimrc,v
retrieving revision 1.20
diff -u -r1.20 vimrc
--- vimrc	3 Jun 2008 14:34:32 -0000	1.20
+++ vimrc	21 Apr 2009 14:32:53 -0000
@@ -25,7 +25,11 @@
 " don't write swapfile on most commonly used directories for NFS mounts or USB sticks
 autocmd BufNewFile,BufReadPre /media/*,/mnt/* set directory=~/tmp,/var/tmp,/tmp
 " start with spec file template
-autocmd BufNewFile *.spec 0r /usr/share/vim/vimfiles/template.spec
+if executable("rpmdev-newspec")
+  autocmd BufNewFile *.spec exe "rpmdev-newspec -o - ".bufname("%")
+else
+  autocmd BufNewFile *.spec 0r /usr/share/vim/vimfiles/template.spec
+endif
 augroup END
 endif

-- Iain.
http://www.redhat.com/archives/fedora-devel-list/2009-April/msg01495.html
NFS/RDMA

Our OSU intern Lei Chai has produced a screencast for the OpenSolaris work she completed for NFS/RDMA; check it out here.

Technorati Tags: nfs, opensolaris, NFS/RDMA
Posted by macrbg [NFS] ( March 05, 2008 04:48 PM ) Permalink

Draft-21 Layout and devices picture.

The current draft for NFSv4.1 (21) is out, and one of the places you can fetch it from is here. I've updated "The Picture" .. for sparse files based layouts..

Technorati Tags: nfs, nfsv4, opensolaris, pNFS, solaris
Posted by macrbg [NFS] ( March 01, 2008 02:19 PM ) Permalink

NFSv4.1 Draft-14 SPARSE files layout.

Here is an updated files layout picture that is consistent with draft-14 of NFSv4 Minor version 1.

Technorati Tags: nfs, nfsv4, opensolaris, pNFS, solaris
Posted by macrbg [NFS] ( October 22, 2007 02:15 PM ) Permalink

NFSv4.1 Draft-13 (corrected picture)

While looking at some of the changes for draft-13 of NFSv4 Minor version 1, I put together a couple of diagrams that represent the pNFS layouts and devices.. All you pNFS fans.. enjoy.. Here is the Layout to Device relationship, as we would use it in LAYOUTGET; and here, Devices returned in GETDEVICELIST.

Technorati Tags: nfs, nfsv4, opensolaris, pNFS, solaris
Posted by macrbg [NFS] ( October 19, 2007 10:13 AM ) Permalink

baking pNFS

It was a nice sunny time (mostly) in Ann Arbor for the citi pNFS focused 14th bake-a-thon.

Posted by macrbg [NFS] ( October 14, 2007 06:54 PM ) Permalink

Omni Group.

I'm a bit of a machead, and so when I take notes I use omni outline, when I plan a project I use omni plan, and for diagrams I use omni (nope, not draw) graffle ;) -- Now why haven't they done an omni blog app?

Technorati Tags: OSX, puppy
Posted by macrbg [NFS] ( September 16, 2007 07:55 AM ) Permalink

mds_default_stripe.

For those adventurous types the following source code patch will allow you to alter the stripe width.
--- /nfs41-proto/usr/src/uts/common/fs/nfs/nfs4_state.c
+++ /issue_102/usr/src/uts/common/fs/nfs/nfs4_state.c
@@ -23,7 +23,7 @@
  * Use is subject to license terms.
  */

-#pragma ident	"@(#)nfs4_state.c	1.52	07/07/03 SMI"
+#pragma ident	"%Z%%M%	%I%	%E% SMI"

 #include <sys/systm.h>
 #include <sys/kmem.h>
@@ -4267,6 +4267,8 @@
 	}
 }

+int mds_default_stripe = 32;
+
 mds_layout_t *
 mds_gen_default_layout()
 {
@@ -4283,7 +4285,7 @@
 		return (NULL);

 	args.lo_arg.loid = 1;
-	args.lo_arg.lo_stripe_unit = 32 * 1024;
+	args.lo_arg.lo_stripe_unit = mds_default_stripe * 1024;

 	rw_enter(&mds_layout_lock, RW_WRITER);

Setting the value in /etc/system to alter:

set nfssrv:mds_default_stripe = 64

-- happy pnfs-ing..

Posted by macrbg [NFS] ( July 09, 2007 05:07 PM ) Permalink

pNFS Prototype code.

One thing that may not be very apparent with the prototype code is that the MDS automatically generates a default layout based on the data-servers that have 'reported in' with it. The stripe count is the number of known data-servers, with a default value of 32k for stripe width. This information is lost each time the MDS is rebooted.

Technorati Tags: nfsv4, OpenSolaris, pNFS, Solaris
Posted by macrbg [NFS] ( July 09, 2007 07:39 AM ) Permalink

is it tomorrow already ?

So dear reader, it is indeed tomorrow. I'm thrilled to announce that I (that is, me, Robert) will give away a brand new, never used iPod shuffle bought with my own $'s right out of the Austin Domain Apple store. What's the catch? -- well, you have to be able to panic the OpenSolaris pNFS MDS server, caused by an issue that we have not already disclosed.. and you have to provide a suggested fix.. So, that's easy eh? ... go forth reader and start a-crashing.. -- Oh, send me email once you have done it.... robert <dot> gordon <at> Sun <dot> com Oh, one more thing.. my team mates can not participate..

Technorati Tags: ipod, nfs, nfsv4, OpenSolaris, pNFS, Solaris, ZFS
Posted by macrbg [NFS] ( June 27, 2007 01:41 PM ) Permalink

pNFS.

pNFS wants YOU, yes YOU!
-- It longs for your attention and loving keystrokes.. Seriously.. go to the download page, download, install it.. use it... You want an incentive? ... really? Hmmm okay.. come back here tomorrow and I'll tell you what it is..

Robert.

Technorati Tags: ipod, nfs, nfsv4, OpenSolaris, pNFS, Solaris
Posted by macrbg [NFS] ( June 26, 2007 07:53 AM ) Permalink

Storage.. A Solaris NFS/ZFS Appliance

Storage at home is always a problem. I mean, you'll never know if that nut, bolt and whatchamacallit will come in handy one day.... The same goes for the digital age: that file, that picture... you may just need it one day, or perhaps you'd just like a place to back things up to.. I kept buying single 'portable' hard drives as I overflowed my various configurations.. (I have 5 Macs, and an iWill zmaxdp.) It was time to have a more viable solution.. So I set out to build something that would hold about 8 disks, have a decent ethernet connection, something that could be headless.. After various configurations of motherboards and cases, Jeff Smith hit me with the clue bat: "hey, just use an Ultra 40!". So here is the recipe:-

I ordered an Ultra 40 with the additional hard-drive backplane kit (X4213A), the other CPU (X4191A-Z), and an extra 2 Gig of memory (X5287A-Z). Now, where will I find the all-important SPUD? Nope, it's not a potato; it's the bracket that will allow a disk to be mounted in the Ultra 40 disk bays. It seems that part number 540-3024 works! and you can find them here. How about some disks? We like Seagate, and so sprinkle some of these. Now, I said headless; well, we need one of these. Next all you gotta do is assemble, configure ZFS/NFS and start using it..
Technorati Tags: nfs, nfsv4, OpenSolaris, ZFS, Solaris
Posted by macrbg [NFS] ( May 08, 2007 12:00 AM ) Permalink

Mount has error "Resource temporarily unavailable"

I was just messing about with snv_23 and FC4, and I noticed that if I forgot to add the no_root_squash option, the Solaris client would pause for a short while at the mount(1M) command and then error out with 'resource temporarily unavailable' -- one more thing to watch out for..

Technorati Tags: nfs, OpenSolaris, Solaris
Posted by macrbg [NFS] ( October 25, 2005 07:38 AM ) Permalink | Comments[0]

Solaris NFSv4 client mount from a Linux Server:

Since there had been some confusion over a Solaris client failing to mount from a Linux-based server (there may be a couple of issues you may bump into), I thought I'd share some hints and things to look at..

First, of course, your mount may fail because the item you wish to mount has not been exported from the server; using showmount -e <server> will return a list of exports for that server. If the item is not listed then you should contact the server sysadmin for help (or export it yourself :) using the exportfs command, with possible updates to the /etc/exports file).

You may see a permission denied message, and if you are using snv_22 to snv_24 bits that could be due to the client using a non-reserved port. A couple of things can be done to overcome that problem;

- add the insecure keyword to the exports option line on the server (allowing requests from non-reserved ports)

/foop *(rw,nohide,insecure,no_root_squash,sync)

- revert the default behavior for the client to use reserved ports

You may also see a problem with no such file or directory (ENOENT). The Linux NFS server implements the NFSv4 pseudo filesystem (pFS) as a separate namespace to that of the server, and hence NFSv2/v3 (since in v2/v3 we expose the server's native namespace). The root of the pFS namespace is designated in the exports file via the fsid=0 option.
So for example let's say you have some data that you'd like to share in the following directories: /export/proj, /export/archive, /export/www, /export/temp. An example /etc/exports file might look like this:

/export/proj *.dev.dot.com(rw,insecure,sync)
/export/archive *(ro,insecure,sync)
/export/www @www(rw,insecure,sync)
/export/temp *(rw,insecure,sync)

Mounting using NFSv3, our mount command would look like this:

# mount -o vers=3 linux_server:/export/proj /proj

This will work fine for v2/v3 mounts but fail for NFSv4. The pFS is customizable by the sysadmin, allowing the flexibility to present a different namespace; for our example we may choose to specify that the root for the pFS be /export. Adding the following line to /etc/exports will allow us to do that:

/export *(rw,fsid=0,insecure,no_root_squash,sync)

In doing this we are now presenting the pFS namespace to NFSv4 clients as:

/proj
/archive
/www
/temp

and as such we will need to alter the mount command on the NFSv4 client:

# mount -o vers=4 linux_server:/proj /proj

See Also: Using NFSv4, Linux NFS Client
_____________________________
Technorati Tags: OpenSolaris Solaris nfs
Posted by macrbg [NFS] ( October 20, 2005 12:51 PM ) Permalink | Comments[6]

Default for reserved ports switched back!

In Solaris Nevada build 26, I've switched the default port for the NFS client to use reserved (aka privileged) ports, effectively disabling 6185950 until 6319735 is putback. See Also: The NFS-Discuss thread here

6319735 Fix the reserved port tune-able
6185950 nfs mount problem when nfs_portmon = 1
6331812 Change KRPC default port usage to reserved ports.

Using NFSv4
____________________________________________________________________
Technorati Tags: OpenSolaris Solaris nfs
Posted by macrbg [NFS] ( October 14, 2005 02:32 PM ) Permalink | Comments[0]

FreeBSD and Linux Servers and reserved ports.

It was pointed out that a Linux NFS server may export a filesystem using the insecure option.
For FreeBSD it looks like there is a sysctl(8), vfs.nfsrv.nfs_privport, that dictates the behavior; alternatively the -n option can be added to mountd(8).

Posted by macrbg [NFS] ( September 25, 2005 09:14 PM ) Permalink | Comments[0]
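The v3-versus-v4 path difference described earlier trips people up often enough that a tiny illustration helps. This is only a sketch (the server name and the /export pFS root are the hypothetical values from the example above, not anything you'd run against a real server): it shows how the NFSv4 mount path drops the fsid=0 root prefix.

```shell
# Sketch: build the mount command for v3 (server's native namespace)
# versus v4 (pseudo-filesystem namespace). Assumes the pFS root is
# /export, i.e. the "/export *(rw,fsid=0,...)" exports line from above.
nfs_mount_cmd() {
  vers=$1; server=$2; path=$3; mntpt=$4
  if [ "$vers" = "4" ]; then
    path="${path#/export}"   # v4 paths are relative to the pFS root
  fi
  echo "mount -o vers=$vers $server:$path $mntpt"
}

nfs_mount_cmd 3 linux_server /export/proj /proj
nfs_mount_cmd 4 linux_server /export/proj /proj
```

Running it prints the two commands from the post: the v3 form keeps the full server path, the v4 form starts at the pFS root.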
I wrote some code for testing the impact of try-catch, but I'm seeing some surprising results.

static void Main(string[] args)
{
    Thread.CurrentThread.Priority = ThreadPriority.Highest;
    Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.RealTime;

    long start = 0, stop = 0, elapsed = 0;
    double avg = 0.0;
    long temp = Fibo(1);

    for (int i = 1; i < 100000000; i++)
    {
        start = Stopwatch.GetTimestamp();
        temp = Fibo(100);
        stop = Stopwatch.GetTimestamp();
        elapsed = stop - start;
        avg = avg + ((double)elapsed - avg) / i;
    }

    Console.WriteLine("Elapsed: " + avg);
    Console.ReadKey();
}

static long Fibo(int n)
{
    long n1 = 0, n2 = 1, fibo = 0;
    n++;

    for (int i = 1; i < n; i++)
    {
        n1 = n2;
        n2 = fibo;
        fibo = n1 + n2;
    }

    return fibo;
}

On my computer, this consistently prints out a value around 0.96. When I wrap the for loop inside Fibo() with a try-catch block like this:

static long Fibo(int n)
{
    long n1 = 0, n2 = 1, fibo = 0;
    n++;

    try
    {
        for (int i = 1; i < n; i++)
        {
            n1 = n2;
            n2 = fibo;
            fibo = n1 + n2;
        }
    }
    catch {}

    return fibo;
}

Now it consistently prints out 0.69... -- it actually runs faster! But why?

Note: I compiled this using the Release configuration and directly ran the EXE file (outside Visual Studio).

EDIT: Jon Skeet's excellent analysis shows that try-catch is somehow causing the x86 CLR to use the CPU registers in a more favorable way in this specific case (and I think we're yet to understand why). I confirmed Jon's finding that the x64 CLR doesn't have this difference, and that it was faster than the x86 CLR. I also tested using int types inside the Fibo method instead of long types, and then the x86 CLR was equally as fast as the x64 CLR.

Well, the way you're timing things looks pretty nasty to me.
It would be much more sensible to just time the whole loop:

var stopwatch = Stopwatch.StartNew();
for (int i = 1; i < 100000000; i++)
{
    Fibo(100);
}
stopwatch.Stop();
Console.WriteLine("Elapsed time: {0}", stopwatch.Elapsed);

That way you're not at the mercy of tiny timings, floating point arithmetic and accumulated error. Having made that change, see whether the "non-catch" version is still slower than the "catch" version.

EDIT: Okay, I've tried it myself - and I'm seeing the same result. Very odd. I wondered whether the try/catch was disabling some bad inlining, but using [MethodImpl(MethodImplOptions.NoInlining)] instead didn't help... Basically you'll need to look at the optimized JITted code under cordbg, I suspect...

EDIT: A few more bits of information: putting the try/catch around just the n++; line still improves performance, but not by as much as putting it around the whole block; and if an exception is actually caught (an ArgumentException in my tests) it's still fast. Weird...

EDIT: Okay, we have disassembly... This is using the C# 2 compiler and .NET 2 (32-bit) CLR, disassembling with mdbg (as I don't have cordbg on my machine). I still see the same performance effects, even under the debugger. The fast version uses a try block around everything between the variable declarations and the return statement, with just a catch{} handler. Obviously the slow version is the same except without the try/catch. The calling code (i.e. Main) is the same in both cases, and has the same assembly representation (so it's not an inlining issue).
Disassembled code for fast version: [0000] push ebp [0001] mov ebp,esp [0003] push edi [0004] push esi [0005] push ebx [0006] sub esp,1Ch [0009] xor eax,eax [000b] mov dword ptr [ebp-20h],eax [000e] mov dword ptr [ebp-1Ch],eax [0011] mov dword ptr [ebp-18h],eax [0014] mov dword ptr [ebp-14h],eax [0017] xor eax,eax [0019] mov dword ptr [ebp-18h],eax *[001c] mov esi,1 [0021] xor edi,edi [0023] mov dword ptr [ebp-28h],1 [002a] mov dword ptr [ebp-24h],0 [0031] inc ecx [0032] mov ebx,2 [0037] cmp ecx,2 [003a] jle 00000024 [003c] mov eax,esi [003e] mov edx,edi [0040] mov esi,dword ptr [ebp-28h] [0043] mov edi,dword ptr [ebp-24h] [0046] add eax,dword ptr [ebp-28h] [0049] adc edx,dword ptr [ebp-24h] [004c] mov dword ptr [ebp-28h],eax [004f] mov dword ptr [ebp-24h],edx [0052] inc ebx [0053] cmp ebx,ecx [0055] jl FFFFFFE7 [0057] jmp 00000007 [0059] call 64571ACB [005e] mov eax,dword ptr [ebp-28h] [0061] mov edx,dword ptr [ebp-24h] [0064] lea esp,[ebp-0Ch] [0067] pop ebx [0068] pop esi [0069] pop edi [006a] pop ebp [006b] ret Disassembled code for slow version: [0000] push ebp [0001] mov ebp,esp [0003] push esi [0004] sub esp,18h *[0007] mov dword ptr [ebp-14h],1 [000e] mov dword ptr [ebp-10h],0 [0015] mov dword ptr [ebp-1Ch],1 [001c] mov dword ptr [ebp-18h],0 [0023] inc ecx [0024] mov esi,2 [0029] cmp ecx,2 [002c] jle 00000031 [002e] mov eax,dword ptr [ebp-14h] [0031] mov edx,dword ptr [ebp-10h] [0034] mov dword ptr [ebp-0Ch],eax [0037] mov dword ptr [ebp-8],edx [003a] mov eax,dword ptr [ebp-1Ch] [003d] mov edx,dword ptr [ebp-18h] [0040] mov dword ptr [ebp-14h],eax [0043] mov dword ptr [ebp-10h],edx [0046] mov eax,dword ptr [ebp-0Ch] [0049] mov edx,dword ptr [ebp-8] [004c] add eax,dword ptr [ebp-1Ch] [004f] adc edx,dword ptr [ebp-18h] [0052] mov dword ptr [ebp-1Ch],eax [0055] mov dword ptr [ebp-18h],edx [0058] inc esi [0059] cmp esi,ecx [005b] jl FFFFFFD3 [005d] mov eax,dword ptr [ebp-1Ch] [0060] mov edx,dword ptr [ebp-18h] [0063] lea esp,[ebp-4] [0066] pop esi [0067] pop 
ebp [0068] ret

In each case the * shows where the debugger entered in a simple "step-into".

EDIT: Okay, I've now looked through the code and I think I can see how each version works... and I believe the slower version is slower because it uses fewer registers and more stack space. For small values of n that's possibly faster - but when the loop takes up the bulk of the time, it's slower. Possibly the try/catch block forces more registers to be saved and restored, so the JIT uses those for the loop as well... which happens to improve the performance overall. It's not clear whether it's a reasonable decision for the JIT to not use as many registers in the "normal" code.

I want to do something like:

myObject myObj = GetmyObj(); // Create and fill a new object
myObject newObj = myObj.Clone();

And then make changes to the new object that are not reflected in the original object. I don't often need this functionality, so when it's been necessary, I've resorted to creating a new object and then copying each property individually, but it always leaves me with the feeling that there is a better or more elegant way of handling the situation. How can I clone or deep copy an object so that the cloned object can be modified without any changes being reflected in the original object?

Whilst the standard practice is to implement the ICloneable interface (described here, so I won't regurgitate), here's a nice deep clone object copier I found on The Code Project a while ago and incorporated it in our stuff. As mentioned elsewhere, it does require your objects to be serializable. If you prefer to use the new extension methods of C# 3.0, change the method to have the following signature:

public static T Clone<T>(this T source)
{
    //...
}

Now the method call simply becomes objectBeingCloned.Clone();.
EDIT: Just tried this on my x64 machine. The x64 CLR is much faster (about 3-4 times faster) than the x86 CLR on this code, and under x64 the try/catch block doesn't make a noticeable difference.

I have been running StyleCop over some C# code and it keeps reporting that my using statements should be inside the namespace. Is there a technical reason for putting the using statements inside instead of outside the namespace?

What is the best tool for creating an Excel spreadsheet with C#? Ideally, I would like open source so I don't have to add any third party dependencies to my code, and I would like to avoid using Excel directly to create the file (using OLE Automation). The .CSV file solution is easy, and is the current way I am handling this, but I would like to control the output formats.

EDIT: I am still looking at these to see the best alternative for my solution. Interop will work, but it requires Excel to be on the machine you are using. Also the OLEDB method is intriguing, but may not yield much more than what I can achieve with CSV files. I will look more into the 2003 XML format, but that also puts an Excel 2003 or newer requirement on the file. I am currently looking at a port of the PEAR (PHP library) Excel Writer that will allow some pretty good XLS data and formatting, and it is in the Excel_97 compatible format that all modern versions of Excel support. The PEAR Excel Writer is here: PEAR - Excel Writer.

What is the difference between Decimal, Float and Double in C#? When would someone use one of these? Decimal is appropriate for exact quantities such as money; float and double are appropriate for measured quantities, artefacts of nature which can't really be measured exactly anyway.
07 May 2010 19:18 [Source: ICIS news] HOUSTON (ICIS news)--Second-half results for LyondellBasell will likely decline from first-half levels as once-surging olefins prices put early profitability at unsustainable levels, the company said on Friday. Lyondell’s earnings before interest, tax, depreciation and amortisation and restructuring costs (EBITDAR) in the first quarter ended 31 March were $640m (€506m), putting it on a pace ahead of overall 2009 earnings of $2.26bn. Of that $640m, $274m came from the Americas olefins and polyolefins chain – even as the polyethylene (PE) segment posted a loss from the margin squeeze brought on by rising ethylene costs, CEO Jim Gallogly said. “We got a bump ending the first quarter and into the early second quarter with olefins prices blowing up because of operating issues,” Gallogly said. “That’s been a pleasant surprise.” Also, new capacity in Asia and the As such, the second half of the year will likely be weaker, particularly in the olefins and polyolefins chain. However, that could be countered by a In PE, the company beat expectations, the result of a successful Going forward, however, lower olefins prices could lead Lyondell to look to polypropylene (PP) and PE exports to play a more important role in its portfolio, it said. “As monomer prices moderate, there is usually some shifting of margins from olefins to polyolefins,” Gallogly said. The company also cited increased Outside of the Of the $152m earned in Europe and In refining, the company posted a $4m profit for the first-quarter as a whole, behind improved performance at its LyondellBasell, which last Friday emerged from nearly 16 months under US bankruptcy protection, said it had been given a second life through the support of the financial community and would execute a more specific and targeted plan than it did after the merger.
Also, Lyondell exited bankruptcy as an economic upswing appeared to be at its beginning, rather than at a near peak, as when the Lyondell and Basell merger closed, it said. Lyondell said it hoped to be listed on the New York Stock Exchange (NYSE) by the third quarter and that administrative costs should be lower in coming quarters as a result of exiting bankruptcy. ($1 = €0.79) For more on LyondellBasell visit ICIS company intelligence For more on ethylene
Maintenance

This backport is maintained on BitBucket by Łukasz Langa, the current vanilla configparser maintainer for CPython. configparser2 is derived from Łukasz Langa's configparser mercurial repo. The only difference is that its name does not conflict with the default Python 3 configparser. A quick pip install configparser2 should do the job.

The ancient ConfigParser module available in the standard library of Python 2.x has seen a major update in Python 3.2. This is a backport of those changes so that they can be used directly in Python 2.6 - 3.5.

To use the configparser2 backport instead of the built-in version on both Python 2 and Python 3, simply import it explicitly as a backport:

from backports import configparser2

If you'd like to use the backport on Python 2 and the built-in version on Python 3, use that invocation instead:

import configparser2

For detailed documentation consult the vanilla version at. configparser2 is almost completely compatible with its older brother. This backport is intended to keep 100% compatibility with the vanilla release in Python 3.2+. To help you get a version you want and expect, a versioning scheme is used. For example, 3.5.2 is the third backport release of configparser2.

This section is technical and should bother you only if you are wondering how this backport is produced. If the implementation details of this backport are not important for you, feel free to ignore the following content. configparser2 is converted using python-future and free time. Because a fully automatic conversion was not doable, I took the following branching approach: The process works like.
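To make the import dance concrete, here is a minimal sketch of reading a config through whichever module is available. It tries the backport first and falls back to the built-in Python 3 module (which exposes the same API), so it runs either way; the section and option names are made up for illustration.

```python
# Prefer the backport if it is installed; otherwise fall back to the
# built-in Python 3 module, which exposes the same API.
try:
    from backports import configparser2 as configparser
except ImportError:
    import configparser

sample = u"""
[server]
host = example.com
port = 8080
"""

parser = configparser.ConfigParser()
parser.read_string(sample)   # read_string() is part of the 3.2+ API

host = parser.get("server", "host")
port = parser.getint("server", "port")
print(host, port)  # example.com 8080
```

The same try/except pattern works for any backported stdlib module, which is exactly why the package avoids shadowing the built-in name.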
Domain Driven Design with Web API revisited Part 9: the overall database context September 3, 2015 2 Comments Introduction In the previous post we looked at the repository pattern and abstract repositories in general. We discussed the role of repositories and saw how they can be useful in hiding the implementation details of the concrete data store solution. Repositories are great at limiting direct access to the data store from other layers of the application. They force external callers to go through the domain and its logic when performing data access related operations. In this post we’ll start implementing the repository for our aggregate root, i.e. the Timetable object. There’s a lot of ground to cover so the topic will span two posts. We won’t create the database yet. That is the subject of another post that follows upon completing the concrete repository layer. The following several posts will be quite technical in nature as opposed to what we’ve seen so far. EntityFramework code first As mentioned previously we’ll go for MS SQL Server with EF as the object relational mapper as the concrete data store. It is the first choice for many .NET developers so this sample will appeal to most of them I hope. There are 3 approaches to working with EF in your project: - Database-first: you can point EF to a database and it will create a model for you using T4 templates. You can fine-tune the model in a design surface - Model-first: you have no database and use a design surface to draw your model. You can instruct the designer to create the database for you with all the tables - Code-first: write your model in code as classes and instruct EF to generate the necessary database and tables In our case we’ll go with the code-first approach. We don’t have a database so the first option doesn’t apply. Code-first is quite flexible at how you map your proper domain objects with tables in the database. However, here comes a little warning. I’m not an EF expert. 
The way I proceed to implement the data store may not be optimal. If you have a thorough knowledge of EF then you may find the solution presented here cumbersome and sub-optimal. Feel free to provide your tips on how to improve it in the comments section below. On the other hand this is not a series on EF. We only need a somewhat realistic data store so that we can persist the domain objects. There’s actually an introductory series on code-first on this blog, though it doesn’t go through any complex cases. We’ll reuse most ideas presented there. You can find the series starting here. The overall database context Open the WebSuite demo we’ve been working on. We’ll simulate the case where the database might be used by other bounded contexts. E.g. an administrative application may need to access other properties of our domain objects that are not relevant to load testing. Hence we’ll first create a generic EF database context. A load test specific context will follow after that. This will prepare the way to include other contexts later on. You can of course opt for a single overall database context in your own solution but breaking up the full EF context into smaller parts has a good purpose in a DDD project. FYI: I got this idea from Julie Lerman’s excellent post available here. So, let’s build our complete EF context first with its own data model. Add a new C# class library to the solution. Call it WebSuiteDDD.Repository.EF. Remove Class1. We need to install EF through NuGet. Add the following NuGet package to the repository layer: The most recent stable package is EF 6.1.3 at the time of writing this post. The SQL database will have its own data model reflected by its tables. In code-first we have the advantage of having no database yet. We’re therefore free to create our tables at will. In fact it’s wise to call the tables and columns the same as they are called in the domain layer in order to preserve the ubiquitous language. 
Add a new folder called DataModel to the repository layer. We’ll add copies of our entities and value objects from the domain layer. However, we’ll introduce several additions here and there to demonstrate that the full database model may be different from the actual domain model. Add all of the below objects to the DataModel folder: Customer: public class Customer { public Guid Id { get; set; } public string Name { get; set; } public string Address { get; set; } public string MainContact { get; set; } } Description: public class Description { public string ShortDescription { get; set; } public string LongDescription { get; set; } } LoadtestType: public class LoadtestType { public Guid Id { get; set; } public Description Description { get; set; } } Location: public class Location { public string City { get; set; } public string Country { get; set; } public double Longitude { get; set; } public double Latitude { get; set; } } Agent: public class Agent { public Guid Id { get; set; } public Location Location { get; set; } } Engineer: public class Engineer { public Guid Id { get; set; } public string Name { get; set; } public string Title { get; set; } public int YearJoinedCompany { get; set; } } LoadtestParameters: public class LoadtestParameters { public DateTime StartDateUtc { get; set; } public int UserCount { get; set; } public int DurationSec { get; set; } } Project: public class Project { public Guid Id { get; set; } public Description Description { get; set; } public DateTime DateInsertedUtc { get; set; } } Scenario: public class Scenario { public Guid Id { get; set; } public string UriOne { get; set; } public string UriTwo { get; set; } public string UriThree { get; set; } } Loadtest: public class Loadtest { public Guid Id { get; set; } public Guid AgentId { get; set; } public Guid CustomerId { get; set; } public Guid? 
EngineerId { get; set; } public Guid LoadtestTypeId { get; set; } public Guid ProjectId { get; set; } public Guid ScenarioId { get; set; } public LoadtestParameters Parameters { get; set; } } OK, good, we have the database object models ready. As the last step in this post we need to declare the database tables, or data sets. A data set is represented by the DbSet object in the System.Data.Entity namespace. Insert the following class to the Repository layer: public class WebSuiteContext : DbContext { public WebSuiteContext() : base("WebSuiteContext") {} public DbSet<Agent> Agents { get; set; } public DbSet<Customer> Customers { get; set; } public DbSet<Engineer> Engineers { get; set; } public DbSet<Loadtest> Loadtests { get; set; } public DbSet<LoadtestType> LoadtestTypes { get; set; } public DbSet<Project> Projects { get; set; } public DbSet<Scenario> Scenarios { get; set; } } We derive from DbContext which will probably look familiar to you if you’ve worked with EF before. We named our context connection string “WebSuiteContext”. We’ll see how the database is generated later on. Also notice that we don’t create data sets for every single object in the data model. E.g. it’s futile to create a separate Description table. We’ll see later how such embedded objects are translated into tables by the EF migration tool. In the next post we’ll create the load test specific database context. View the list of posts on Architecture and Patterns here. Andras, thanks for your post about this topic. Do you github the source code for this revised version? Alex, I’ll upload the source code to Github towards the end of the series. There are 18 posts in total. //Andras
In this tutorial we will see how to interface NodeMCU with a 16x2 LCD without using I2C communication. Here we will interface the 16x2 LCD using the shift register SN74HC595. We can also interface it without using any shift register; we will see both kinds of interfacing in this tutorial. The main difference between the two is the number of pins used on the NodeMCU.

Materials Required:
- NodeMCU ESP12E
- SN74HC595 Shift Register IC
- 16x2 LCD Module
- Potentiometers
- Male-Female wires
- Breadboard

Shift Register:

In digital systems, a shift register is a combination of flip-flops which are cascaded in series and share the same clock. In this cascaded package, the data-out of one flip-flop acts as the data-in for the next flip-flop, which results in a circuit that shifts the bit array stored in it by one position. The IC which we are going to use is the SN74HC595N. It is a simple 8-bit serial-in, parallel-out shift register IC. In simple words, this IC allows additional inputs or outputs to be added to a microcontroller by converting data between parallel and serial formats. Our microcontroller uses 3 pins of this IC to send data serially. The 8-bit output appears on the 8 output pins after 8 bits of information have been clocked in through the input pin. Learn more about shift registers here. The PIN diagram and PIN functions of IC SN74HC595N are given below:

You can find interfacing of the 74HC595N with Arduino and with Raspberry Pi here.
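Before wiring anything up, it can help to see the serial-in/parallel-out idea in software. The following is purely illustrative (not part of the tutorial's Arduino code): a tiny Python model of a 74HC595-style register, clocking one bit in per clock pulse, MSB first, which is the same idea Arduino's shiftOut() relies on.

```python
# Illustrative model of an 8-bit serial-in/parallel-out shift register.
class ShiftRegister595:
    def __init__(self):
        self.stage = 0   # internal shift stage (clock pin shifts bits in here)
        self.latch = 0   # output register Q0..Q7 (latch pin copies stage here)

    def clock_in(self, bit):
        """Rising edge on the clock pin (pin 11): shift one data bit in."""
        self.stage = ((self.stage << 1) | (1 if bit else 0)) & 0xFF

    def latch_outputs(self):
        """Rising edge on the latch pin (pin 12): expose the byte on Q0..Q7."""
        self.latch = self.stage

sr = ShiftRegister595()
value = 0b10110001          # byte to send, MSB first
for i in range(7, -1, -1):
    sr.clock_in((value >> i) & 1)
sr.latch_outputs()
print(sr.latch)             # prints 177, i.e. 0b10110001 on the output pins
```

Note the two-stage design: the outputs only change when the latch pin is pulsed, which is why the LCD never sees half-shifted data.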
Connect the pins according to the circuit diagram given below:

We will use 4 data pins plus the RS and EN pins of the LCD, connected as:

d7 pin of LCD == D8 pin of NodeMCU
d6 pin of LCD == D7 pin of NodeMCU
d5 pin of LCD == D6 pin of NodeMCU
d4 pin of LCD == D5 pin of NodeMCU
RS pin of LCD == D2 pin of NodeMCU
En pin of LCD == D3 pin of NodeMCU

You can use any GPIOs for these connections. Now, upload the code using the Arduino IDE as explained earlier. The code is the same as for an Arduino board and can be found in the LiquidCrystal example. The program is simple and easily understandable; if you want to learn more about it, check our LCD interfacing with Arduino program.

CODE:

#include <LiquidCrystal.h>
const int RS = D2, EN = D3, d4 = D5, d5 = D6, d6 = D7, d7 = D8;
LiquidCrystal lcd(RS, EN, d4, d5, d6, d7);

As we saw, we already used 6 pins of the NodeMCU. There are not many pins available on this little board to begin with, and we are left with few pins to interface other sensors. So, to overcome this problem we will use the shift register IC, which will minimize the number of pins used on the NodeMCU.

Interfacing the LCD with ESP12 using Shift Register SN74HC595N:

There are 8 output and 3 input pins available on the shift register IC. We will use 6 output pins to connect to the LCD and 3 input pins to the NodeMCU. The connections of the LCD to the IC are:

D7 pin of LCD == pin 1 of IC
D6 pin of LCD == pin 2 of IC
D5 pin of LCD == pin 3 of IC
D4 pin of LCD == pin 4 of IC
RS pin of LCD == pin 7 of IC
En pin of LCD == pin 15 of IC

Connections of the NodeMCU to the IC:

D6 pin of NodeMCU == pin 14 of IC, DATA pin of IC
D7 pin of NodeMCU == pin 12 of IC, LATCH pin of IC
D8 pin of NodeMCU == pin 11 of IC, CLOCK pin of IC

Connect pin 16 and pin 10 of the IC to Vcc. Connect pin 8 and pin 13 of the IC to GND. Build the circuit carefully according to the diagram below:

Now our hardware is ready to program. We need the "LiquidCrystal595" library, which can be downloaded from this link by following the steps below:

1. Go to the Sketch menu of the Arduino IDE.
2.
Click on Include Library.
3. Now, click on Add .ZIP Library. Choose the zip file you have downloaded from the given link, and it's done.

Now upload the code given below and you will see the message printed on the LCD.

CODE:

#include <LiquidCrystal595.h> // include the library

LiquidCrystal595 lcd(D6,D7,D8); // data_pin, latch_pin, clock_pin

void setup() {
    lcd.begin(16,2); // 16 characters, 2 rows
    lcd.clear();
    lcd.setCursor(0,0);
    lcd.print("lcd with nodemcu");
}

void loop() {
    lcd.setCursor(0,1);
    lcd.print("Success");
}

The code is simple: we just pass the data pin, latch pin and clock pin as arguments to LiquidCrystal595 lcd(); and the rest of the code is the same as before. In this way, you have saved 3 pins of the NodeMCU by using the shift register.

Also, check 16x2 LCD interfacing with other microcontrollers:
1. Opening & Introductions Welcome to the 2nd Joint PASC/TOG/WG15 Revision Group meeting. Andrew Josey called the meeting to order at 9.20am Tuesday March 2, at Sun's facility in Menlo Park, CA. 1.1 Introductions 1.1.1 Attendance list Note: Andrew Gollan, Josee Auber from XNET attended only part-time on Day 1 to discuss networking issues. Andrew Roach attended part time on Day 3. Peter Anvin attended on Day 3 only. 1.2 Approve Agenda Andrew outlined the overall purpose of the meeting, giving some of the choices that we need to address. Nick Stoughton agreed to act as secretary (reluctantly). The position of secretary will be discussed later on the agenda. 1.3 Scope Andrew presented AUSTIN/10, an overview of the project's goals and scope for those who are new. There is a mailing list (austin-group@opengroup.org) , which currently has 64 members. See for further details. 1.4 Minutes of Previous meeting (AUSTIN/7) The previous minutes were approved as distributed. 1.4.1 Matters Arising - Action Item Review A9809-01 AJ to prepare web-site and mailing list for common materials, and publicize both to the group. CLOSED, web-site is A9809-02 CR to check if there are projects using 1003.2d (Batch) within DoD. CLOSED - .2d is in use. A9809-03 AJ to produce some formal strawman requirements for further discussion CLOSED, see AUSTIN/5 A9809-04 AJ to produce a long version of the scope to attach to the PAR CLOSED - see AUSTIN/9 A9809-05 AJ to email AUSTIN/3 and AUSTIN/4 to the people who will be attending Thursday's teleconference CLOSED A9809-06 KS to prepare a liaison statement to ISO-C for our approval ahead of their October 1998 meeting CLOSED, SEE AUSTIN/8 A9809-07 AJ to go away and examine scalability and come back with concrete proposals. CLOSED, see A9809-08 A9809-08 AJ to compile a problem statement in conjunction with the proposals in A9809-07 for the scalability and extensibility issues. 
CLOSED, see email austin-59 A9809-09 MB to add columns to AUSTIN/6 for what .13, FIPS, & SUS require to be able to control. OPEN A9809-10 AJ to request a PDF version of .13 to add to the group reference CD CLOSED, on the reference CD. A9809-11 TOG to talk to IEEE re IPR issues and report back by next meeting CLOSED, see agenda item on status reports. 2. Procedures Several attempts were made to contact Keld Simonsen in order to obtain his input on the procedures, but his number was busy. (+45 3322 6543). Roger Martin used his pager to send email to Keld to get him to call in. Roger Martin presented AUSTIN/14. This document has been produced by the JPC, The most contentious phrase was felt to be that regarding copyright. The proposal is to drop this phrase, and have any document copyrighted by whichever organization is appropriate. Keld finally reached at 10:42. The only example of a joint copyright is with regard to the NP&L guide; Bob Pritchard and Lowell Johnson are investigating this. AGREED that this should not be an impediment to immediate progress. What about balloting? If each organization conducts its own ballot process in parallel with the JWG developing the draft, who is responsible for ballot resolution? How does ballot resolution work? Nick stated that if the ballot comments received were technically valid comments, then it was the JWG's problem to reword the draft accordingly. Roger agreed, and added that procedural ballots are not our responsibility, and should be addressed by the individual groups. 3. Identification of ORs and Chair Election PASC - Don Cragun. WG15 - Nick Stoughton. TOG - Jim Zepeda. Andrew Josey resigned as chair, and was promptly re-elected (meeting was jointly chaired by the ORs as vice-chairs for 5 seconds). 4. Status Reports 4.1 IEEE AJ has met once, in November, with Denise Pribula, Mary Shepherd, and Judy Gorman at the IEEE. This was a briefing on what the Austin Group is, what are the ramifications, etc. 
The key to success is to establish a business relationship between IEEE and TOG. IEEE need to derive revenue from sale of the standards. Since TOG also sells, and gives away for free, there must be an agreement on how to prevent loss of revenue to IEEE. The negotiations to date propose that there are different packagings from IEEE and TOG: a UNIX standard from TOG, a POSIX standard from IEEE. If we want an IEEE standard it must go through an IEEE process. A PAR has been submitted to this end. IEEE do not believe that there is any need for ISO copyright. The documents must follow IEEE style rules. A draft memo of understanding from the IEEE is due very soon. This is likely to need substantial modification before this group can agree to it. 4.2 PASC .1a ballot closed yesterday. .2b is ready to go into ballot very soon. .1d is close. .1j is the most likely to fail of the realtime standards. .1h is unlikely to make the deadline. A draft of .1n has now been produced. ACTION AJ to obtain a copy of .1n (corrigenda) for reference CD. The PAR (or PARs) will be presented. Finnbarr pointed out that the 4 projects are locked together so closely that we want to regard it as a single entity, hence one PAR. If the project is split into 4, then there should be wording to the effect that "this standard may not be approved unless the others are". Also that it is a single group that is responsible for the production of these 4 documents. ACTION NS and AJ to prepare the other three PARs in case they are required. There is a strong opinion that there should be only one ballot group. Sign up for one, sign up for all. 4.3 JPC Procedures draft discussed above. 5. Old Business 5.1 Networking Scope Networking interfaces. It was previously agreed that XNS and 1003.1g would only be considered in scope if 1003.1g had passed RevCom by 12/31/99. The most recent recirculation achieved 79%. It is going to RevCom in June. There may be some procedural objections, again. 
Should we consider networking interfaces anyway? A new version of XNS is being released, which will include IPv6 interfaces. This is expected this year. AGREED network interfaces should be in scope. How do we deal with things that are in .1g as mandatory (XTI and raw sockets for example), but are felt by the WG as needing to be optional (or omitted altogether)? The "loophole" we could use is that if .1g had been published before .13 slice and dice, then .13 would have made these things optional, so we should make this extension anyway. The XNET group was encouraged to mark XTI interfaces as obsolescent/legacy in this new version, thus removing these interfaces from our scope. AGREED: Raw Sockets, XTI and advanced API to IPv6 should be optional (at best). Thread awareness for networking interfaces should be considered. However, this does not need to be in the scope explicitly, since this would fall under the general harmonization principles of the entire document. ACTION AJ to ensure that all networking bodies (e.g. XoTGNET and PASC-Distributed Systems) realize that they are explicitly invited to these meetings. 5.2 Inclusion of C Standard Interfaces How do we handle the duplicate interface definitions between ISO C and the Single UNIX Specification (see email seq 44, 46, 47); for a view of the scope of the issue look at the fseek() specification in ISO C, POSIX and SUS. Keld was again called to obtain his input, but the number was again busy. Issues are in: 1. New interfaces from C9x, not in XSH or POSIX 2. Interfaces derived from SVID, now in XSH but not in POSIX, which have been adopted by C 3. Interfaces in POSIX that have extensions over C 4. Interfaces in POSIX that reference 9899. Most believed it useful for a programmer to have a single complete reference place. This standard (the revised POSIX std) should describe how the C functions behave in a POSIX environment, which may often include extensions to the C requirements. 
Where POSIX or XSH make additional requirements, there is general agreement that such a function should be documented, at least as well as it is now in POSIX. Should it be completely documented (as in XSH) or extensions only (as in POSIX)?

Keld reached. C9x is going out this month for final DIS. It is expected to make IS this year (so will be called c99).

ISSUE: how to handle these overlapping functions. Proposal: document all functions but clearly identify which words give way to C standard.

ACTION: AJ to provide KS with some examples of how a function (e.g. fseek) might look in time for the next WG14 meeting in London, WB 21 June.

ACTION: Keld to supply a list of overlapping functions between XSH and ISO-C that may have problems.

6. New Business

6.1 What to do if POSIX.2b Fails
We should back the efforts being made within PASC to complete this work before the deadline, and address the issue of what to do if it fails only when we are sure that it will.

6.2 What to do if Other POSIX Projects Fail
We have a commitment to address any of these projects should they fail. As soon as it becomes clear that a project has not got the time remaining to make the deadline, the PASC WG should make a representation to us on whether or not to take this work on.

ACTION: AJ (as PASC study group chair) to pass this invitation to PASC at their next meeting.

6.3 Cross Book References
AGREED: We should not revise any of the four parts without considering the whole.

6.4 Name Space Reservation
Can we reserve namespace for anticipated work? E.g. Distributed realtime? Use of the posix_ prefix should be sufficient. The current .1a draft has tried to end the reservation of namespaces for all time, but may not have done the job adequately. Whatever happens, we will need to address this area again in the revision process.

7. Breakout Groups
Two small groups; System Interfaces (Nick, Frank, Lee), Commands and Utils (Don, Finnbarr, Jim).
Substantial issues around style; how do we mark optional parts; how do we mark extensions from ISO-C; how do we present functions/utilities (e.g. standard TOG style, some derivation of POSIX style, etc). DWC would like to retain the TOG style of shading. The name section should call out the unit(s) of functionality that require this function/utility. Any optional behavior is then shaded, with a two character code describing the option. Rationale is interesting!

Subgroup consent list (items 1-7 are based on a draft document proposed by Andrew Josey overnight, later to become Austin/16):
1.>).
2. There should be a new unit of functionality _POSIX_PII
3. The proposed names POSIX_CHAR_HANDLING, POSIX_MATHS, POSIX_JUMPS, POSIX_GENERAL, POSIX_STRING_HANDLING, and POSIX_DATE_ should be replaced by a single unit of functionality, _POSIX_C_LANG_SUPPORT
4. The units of functionality should all be option names (i.e. start with _)
5. POSIX_ASYNC_IO, POSIX_CHOWN_RESTRICTED, POSIX_NO_TRUNC, POSIX_PRIO_IO, and POSIX_SYNC_IO are not options, but file system properties, and require no shading.
6. Add option groups for POSIX_RAW_SOCKETS, POSIX_XTI, POSIX_IPV6, [[and POSIX_IPV6_EXT??]].
7. Margin markers are AIO (async IO), FSC (Fsync), MF (Mapped Files), ML (Memlock), MLR (Memlock Range), MPR (Memory Protection), MSG (Message Passing), PIO (Pri IO), PS (Priority Scheduling), RTS (Realtime Signals), SEM (Semaphores), SHM (Shared memory), SIO (Sync IO), TMR (Timers), THR (Threads), TSF (Thread Safe Functions), TSA (Thread Attr Stack Addr), TSS (Thread Attr StackSize), TSH (Thread Process Shared), TPI (Thread PRIO Inherit), TPS (Thread Priority Sched), TPP (Thread PRIO Protect), XTI (XTI), RS (Raw Sockets), IP6 (Ipv6), IPE (Ipv6 Advanced API)
8. On-line version, the margin characters are hyperlinks back to definition.
9. The change history must include the differences from the 1996 version of POSIX, and not just from previous X/Open specs.
10..
11. EX markings by default disappear (i.e. the shading and margin markings go, but the text remains), and the change history is updated to state that "This new requirement derives from alignment with the Single Unix Specification".
12. FIPS markings, in general, should revert to the POSIX wording.
13. .1a/.2b merge will not happen until those specs are finalized.
14. SEE ALSO section should reference the rationale where appropriate

ACTION: AJ to check SUD to see if there are good examples that could be pulled in to the "Examples" sections.

8. Discussion and approval of structure plans for each volume.
NOTE: Lowell Johnson (via telephone) suggested that IEEE will require SGML for electronic publishing. There was a general feeling that this was the editor's choice.

ISSUE if SGML is used as a starting point, this may mean that the SUD rather than SUS is used as a base doc, which is less familiar to most people in the WG. We should try and end up with SGML, even if this is generated from troff.

AGREED: We should extend our scope to allow us to correct UN, PI, OF, OP marked sections of XCU, and these sections should become either mandatory or removed (or possibly moved into an option/feature group).

The Application Usage, Examples, Future Directions, and Rationale sections of an interface description are not normative (they are informative), and should have POSIX style rationale bars to show this.

AGREED XSH Draft 1 should contain section 1 from POSIX.1, and similarly for .2 and XCU.

9. Consideration of what can be merged into D1
AGREED that POSIX_VERSION will not be changed at draft 1 (use XSH value), but will be set to a new value at publication time.

Must do a pass through for D1 looking for material in POSIX not in SUS - introductory material in particular. Usually this will go into the page on the relevant header file (e.g. signal concepts into signal.h).

9.1 Scalability Issues
What to do about large files in pax/tar/cpio? POSIX does not specify tar or cpio, but supports those formats through pax.
The pax utility will support a new, extensible, format (in .2b). The ar utility is worse, but the format of the archive file itself is not specified. It is not intended that this be changed.

AGREED we will adopt the new pax format from .2b when that document is complete. We will not update the tar or cpio formats. The current standards do not specify the format of ar archives, and we will not add a specification of the ar format.

9.2 Document Review Procedures
Either use aardvark or use the TOG web based paragraph comment procedure.

AGREED we will use PDF or PostScript documents, comments submitted via aardvark, submitted either via email or a web-based form.

ACTION AJ to produce aardvark generator form for web based submission.

9.3 Additional UNIX Units of Functionality
Andrew discussed a proposal for AUSTIN/18 (a document breaking down the XSH/XCU additions over POSIX, with the aim of creating a set of options). At least
- System V Signals
- System V IPC (or maybe Sys V SHMEM, SV MSG, SV SEM)
- VFS Filesystem
- STREAMS
- User accounting
- User context
- Pseudo TTY
- Historical date & time
- Additional process waiting
should be added as options or removed. Others should be added as options, possibly with a suggestion (editorial) that this becomes a mandatory function. The group did a pass thru the whole document and noted the following to produce AUSTIN/18.

System V IPC: three new options POSIX_SYSV_SHM, POSIX_SYSV_SEM, POSIX_SYSV_MSG. All three options include ftok() and ipcs utility.
Symlinks all become mandatory, except lchown(), with editorial note that they are coming from .1a
POSIX_TIMERS2 option, recommendation that it become a part of POSIX_TIMER option.
Select() moves from this list to join poll in _POSIX_POLL and _POSIX_SELECT options.
Priority scheduling and resource limits; new options POSIX_RLIMITS [getrlimit, getrusage, and setrlimit] and POSIX_SCHEDULING [getpriority and setpriority].
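The option names being carved out above eventually surface to applications as feature-test macros in <unistd.h>. As a rough illustration (mine, not from the minutes — the exact names and values were still being settled at this meeting), a program can probe an option at compile time like this:

```c
#include <assert.h>
#include <unistd.h>   /* defines the _POSIX_* option macros on POSIX systems */

/* Return 1 if the implementation advertises the threads option at
   compile time, else 0.  An option macro may also be defined as -1
   (option unsupported) or 0 (ask at runtime via sysconf()). */
static int threads_option_advertised(void)
{
#if defined(_POSIX_THREADS) && _POSIX_THREADS > 0
    return 1;
#else
    return 0;
#endif
}
```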
[Lowell joined by telephone to discuss procedural things; ACTION AJ to update Lowell on current negotiations with IEEE]

Additional directory: new option POSIX_SEEKDIR (seekdir, telldir).
Setjmp; POSIX_SETJMP (_setjmp and _longjmp) recommendation that these become mandatory
Syslog; POSIX_LOGGING_BASIC (as from SRASS, or POSIX_SYSLOG).
User and Group access: POSIX_USERDB recommended mandatory
Historical pattern matching (basename/dirname). Temporary option POSIX_PATHNAME with editorial note that these are recommended to become mandatory.
BSD String ops; POSIX_STRINGS ffs, strcasecmp and strncasecmp are generally useful. The remainder are obsolescent (bcmp, bcopy, bzero, index and rusage).
NDBM: POSIX_NDBM
User Accounting? POSIX_UTMPX (ttyslot is legacy and should go)
User Context? POSIX_CONTEXT
VFS Access? POSIX_VFS
Additional Math library; POSIX_MATHLIB significant portion is in c99 and needs checking and marking appropriately
Sys V Signals? Remove legacy (sigstack). Temporary option POSIX_SYSV_SIGNALS with editorial note to make them mandatory later.
Additional System Interfaces; remove legacy functions (chroot, cuserid, getpass, swab); new temporary option POSIX_STD_FUNCS, recommend all become mandatory except gethostid, and getwd, add fchmod. Mknod should also be here, but optional.
Additional stat interfaces; all rolled into POSIX_STD_FUNCS
Additional wait interfaces; POSIX_WAITID with just waitid in it.
Additional termio interfaces; POSIX_TCGETSID, will probably become mandatory
Additional Search Functions; POSIX_SEARCH option.
Additional Standard Library interfaces; initstate, setstate, random and srandom all go to POSIX_RANDOM. Ecvt, fcvt, gcvt all go to POSIX_MATHLIB. The remainder (a64l, l64a, realpath and getsubopt) all go to POSIX_STD_FUNCS, and should become mandatory. Valloc - legacy, so gone
Tempfile; POSIX_TEMPFILE
String; move strdup and memccpy to POSIX_STD_FUNCS, and recommend mandatory.
Pseudo TTY; new option POSIX_PTY
Vector IO; POSIX_VECTOR_IO, recommend mandatory.
Stderr; new option POSIX_FMTMSG.
Ftime; option POSIX_DATETIME, recommend legacy.
Ctype: move to POSIX_STD_FUNCS recommend mandatory.
Hash table: move these to POSIX_SEARCH
Random numbers: new option POSIX_RANDOM.
Environment: POSIX_PUTENV recommend legacy when 1a merge is done
Ulimit: POSIX_RLIMITS, possibly legacy
File Tree walking; POSIX_FTW.
Dynamic libraries: POSIX_DYNAMIC_LINK option.
Threads extensions: POSIX_THREADS_EXT, with recommendation to become part of POSIX_THREADS.
Large File Support; POSIX_LFS, recommend mandatory.
Data size neutrality; mark as POSIX_INTEGRAL_TYPES, and note that this will be overtaken by c99.
Misc functions: move to POSIX_STD_FUNCS, recommend mandatory.
I18n: most in MSE already; statement up front that notes that any requirement of c99 is a requirement for the new standard. New option POSIX_I18N for strfmon and strptime recommend mandatory.
Message catalogue functions; POSIX_MSG_CATALOG.
Character set conversions: POSIX_ICONV, suggest mandatory.

AGREED any place that there is existing application usage suggesting a better alternative should become legacy.

Encryption functions: POSIX_CRYPT, must be optional by US law.

XCU options: POSIX_SYSV_SHM, POSIX_SYSV_SEM and POSIX_SYSV_MSG; any/all of these options control ipcs and ipcrm. POSIX_LINK for link and unlink. POSIX_SCCS option for all sccs commands. POSIX_DEVTOOLS for cflow and cxref.

AGREED any other area of XCU marked as EX, this should become mandatory.

9.4 Report of the Study Group
ACTION: AJ to request a special plenary of SSWG on Wednesday evening at Charlotte in order to bring in all the other .1 amendments. Formal report required, but it only needs to say "Met, considered, and prepared a PAR". PAR will be promoted by DWC to the PMC. Arguing for a single PAR, but there will be three cloned PARs in Don's back pocket in case.

9.5 Extensibility
How do we extend this standard in the future?
How does an independent standard (e.g. 1003.21) relate to this work? Do we need to do anything to allow for this (e.g. namespace reservation)?

AGREED there will be no additional namespace reservation beyond that brought in by .1a. The group considered extensibility and recommends that other standards define their own extensions using their own Feature Test macro (or macros) and following the namespace rules. The conformance section for the standard should call out that the relevant parts of the referenced base document are required.

9.6 Secretary
Should the group have a single secretary or a rotating one? No, Nick should be stuck with it.

10. Closing

10.1 Review of Action Items
A9809-09 MB to add columns to AUSTIN/6 for what .13, FIPS, & SUS require to be able to control.
A9903-01 AJ to obtain a copy of .1n (corrigenda) for reference CD.
A9903-02 NS and AJ to prepare the other three PARs for the PASC-PMC in case they are required.
A9903-03 AJ to ensure that all networking bodies (e.g. XoTGNET and PASC-Distributed Systems) realize that they are explicitly invited to Austin Group meetings.
A9903-04 AJ to provide KS with some examples of how a function (e.g. fseek) might look in time for the next WG14 meeting in London, WB 21 June.
A9903-05 Keld Simonsen to supply a list of overlapping functions between XSH and ISO-C that may have problems if specified in the new Standard.
A9903-06 AJ (as PASC study group chair) to ensure that all projects that are within the scope (or possible scope; e.g. projects that may fail to reach the deadline) realize that they should be involved in the Austin Group work, and are appropriately invited to attend.
A9903-07 AJ to check SUD to see if there are good examples that could be pulled in to the "Examples" sections.
A9903-08 AJ to add Peter Anvin to the Austin group reflector
A9903-09 FP to obtain a draft of 1003.1n for editorial information (not draft 1 inclusion) ASAP
A9903-10 AJ to update Lowell on current negotiations with IEEE
A9903-11 AJ to request a special plenary of SSWG on Wednesday evening at Charlotte
A9903-12 AJ to alter scope to correct UN, PI, OF, OP marked sections of XCU, and OH sections of XSH.
A9903-13 AJ to circulate documents from this meeting ASAP
A9903-14 AJ to provide DWC with updated PAR (PARs) in time for the PMC meeting.
A9903-15 AJ to discuss with Lowell meeting fees for the Montreal meeting

10.2 Document Register

10.3 Next Meeting(s)
The next meeting will be in Montreal, July 20-22. 0900-1800 each day. December 6-10 for draft 2 review. Location for December to be decided, but Copenhagen, Reading and Cupertino are all possibilities (at least tentative invitations exist to all three).
http://www.opengroup.org/austin/docs/austin_21.html
Opened 4 years ago
Closed 4 years ago

#5478 closed bug (fixed)
GHC panics when asked to derive Show for ByteArray#

Description

The following Haskell

    {-# LANGUAGE MagicHash #-}
    import GHC.Exts

    data Foo = Foo0 ByteArray# deriving Show

    main = return ()

caused GHC to panic with

    [1 of 1] Compiling Main             ( ticket_5478.hs, ticket_5478.o )
    ghc: panic! (the 'impossible' happened)
      (GHC version 7.2.1 for x86_64-unknown-linux):
            Error in deriving:
        Can't derive Show for primitive type ghc-prim:GHC.Prim.ByteArray#{(w) tc 3f}

    Please report this as a GHC bug:

Change History (3)

comment:1 Changed 4 years ago by hvr
- Summary changed from GHC panics - Can't derive Show for primitive type ByteArray# to GHC panics when asked to derive Show for ByteArray#

comment:2 Changed 4 years ago by simonpj@…
commit 99a6412c9ff5964bd957da79bd3b7d27c4f41228

comment:3 Changed 4 years ago by simonpj
- Resolution set to fixed
- Status changed from new to closed
- Test Case set to deriving/should_fail/T5478
Thanks for finding this bug.

Note: See TracTickets for help on using tickets.
https://ghc.haskell.org/trac/ghc/ticket/5478
Jono's Pythonista Scripts (Fast Image Sharing, Intuitive File Downloading and more)

Hey everyone, ever since I bought Pythonista I've been getting more and more into learning Python, generally by creating scripts to fix any little problems or inconveniences I come across in my day-to-day digital life. So far I've made scripts which enable pretty much one-tap image sharing across most messaging services on iOS, automatic file creation and downloading, and conversion of any files to text, which I find useful for reading files which Pythonista doesn't support opening, but which are readable in .txt format. All of these can be found over at my GitHub repo here. I would be very thankful for any constructive criticism as I'm basically using all this as a learning experience to better my programming skills. Thank you!

The script I find most useful is the Script downloader. I tried to port it for use in Editorial, but it says that the url.read() attribute on line 82 isn't valid. I guess that function isn't in Editorial. Edit: It shows the same error in Pythonista. By the way, I am trying to get the .zip file from the link you posted.

- SpotlightKid
The File2Txt.py file could be written much shorter using shutil.copy() and os.path.splitext(). In Python it pays to know your standard library well.

@TutorialDoctor: Thanks for the bug report, I just uploaded the fix to both the uipack file and the normal .py. It was an error stemming from some experimentation I did with the ui.in_background() function that I forgot to remove before uploading. It's tested and working again, please let me know if you have any other issues with it :)

@SpotlightKid: I'll definitely look into those and attempt a re-write. Thank you for the advice!

Everything works in Pythonista now, and it appears to work in Editorial, but now I see no directory in Editorial. Is the path the file is stored in relative to the location of the ScriptDownloader.py file?
I wonder if you could make it to where a directory is created and the file is placed there, irrespective of the location of the python script. Do you have Editorial? If so, I could send you a link to the workflow.

@TutorialDoctor: The path is indeed relative to the location of Script_Downloader.py. I'm not sure how to ignore the working directory of the Downloader script as I'd have to know the root directory to do that, but I've updated Script_Downloader to now support paths. In the filename box, just add your path like so: Master.zip in directory temp/zip-files would be "temp/zip-files/master.zip", and the script will download the file to that directory. If the directory does not exist, it will make it and any parent directories as necessary. Again, this is probably going to be relative to the working directory of the script and I'm not too sure how to work around that. One hacky fix is to just have Script_Downloader in the root directory so it can work with any path, but that's all I can really think of. I also made a few housekeeping changes to the code which are listed on GitHub, but another main change is the inclusion of _ in the name instead of ' ' so it can be imported more easily in the console. Sadly I do not have Editorial so I can't test this stuff myself.

    def parse_name(url):
        return url.rpartition('/')[2] or url

    def parse_extension(name):
        return os.path.splitext(name)[1].lstrip('.')

Also submitted as pull requests on your repo.

EDIT: fixed as recommended.

@ccc Should the parameter under the parse_extension function be 'name'? I get an error that says the global 'url' is not defined.
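SpotlightKid's stdlib suggestion earlier in the thread boils down to a couple of calls. A hypothetical condensed File2Txt (the function name and exact behavior are my guess at what the script does — copy a file alongside itself with a .txt extension):

```python
import os.path
import shutil

def file_to_txt(path):
    """Copy `path` to a sibling file with a .txt extension and return
    the new path, e.g. 'notes.md' -> 'notes.txt'."""
    root, _ext = os.path.splitext(path)   # split off the old extension
    target = root + '.txt'
    shutil.copy(path, target)             # copies contents and permission bits
    return target
```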
https://forum.omz-software.com/topic/1297/jono-s-pythonista-scripts-fast-image-sharing-intuitive-file-downloading-and-more/7
Created on 2011-10-19 18:59 by Cameron.Hayne, last changed 2014-06-21 09:08 by neologix. This issue is now closed.

If the docstring for a method has example code that uses 'self', the 'self' will not appear in the HTML generated by pydoc.writedoc.

Example:

    #---------------------------------
    def getAnswer(self):
        """
        Return the answer.
        Example of use: answer = self.getAnswer()
        """
        return 42
    #---------------------------------

The generated HTML will have:

    getAnswer(self)
    Return the answer.
    Example of use: answer = getAnswer()

where the final "getAnswer" is an HTML link.

--------------------------------------------

I believe the problem arises on line 553 of the Python 2.7 version of pydoc.py which is as follows:

    results.append(self.namelink(name, methods, funcs, classes))

The appended text is the same whether or not the method call in the docstring was prefaced with 'self' or not. The 'self' has been eaten up by the regex and is in the 'selfdot' variable which is ignored by the above line.
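The shape of the bug can be sketched with a toy regex substitution (this is an illustration, not pydoc's actual source — the real markup step in pydoc.HTMLDoc captures the "self." prefix in a group and then fails to re-emit it):

```python
import re

# Simplified stand-in for pydoc's markup pattern: optionally capture a
# leading "self." before a method call.
pattern = re.compile(r'\b(self\.)?(\w+)\(\)')

def buggy_markup(text):
    # Drops group 1 ("self.") entirely -- mirrors the reported bug.
    return pattern.sub(lambda m: m.group(2) + '()', text)

def fixed_markup(text):
    # Re-emits the captured "self." prefix in front of the linked name.
    return pattern.sub(lambda m: (m.group(1) or '') + m.group(2) + '()', text)
```

In the real code the linked name would be wrapped in an HTML anchor; the point is only that the captured prefix has to be put back into the output.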
Víctor, can you address my comment on rietveld? The patch still applies cleanly, so I've just updated the comment. test passes, make patchcheck passes. New changeset 7aa72075d440 by Benjamin Peterson in branch '3.4': don't remove self from example code in the HTML output (closes #13223) New changeset e89c39125892 by Benjamin Peterson in branch '2.7': don't remove self from example code in the HTML output (closes #13223) New changeset cddb17c4975e by Benjamin Peterson in branch 'default': merge 3.4 (#13223) I guess the tests --without-doc-strings are broken: > I guess the tests --without-doc-strings are broken: > Attached patch fixes these failures. Berker, I've committed your patch, thanks!
https://bugs.python.org/issue13223
Why Visual Studio 2017? Let us try it – Part Two

Table of contents
- Introduction
- Download Source Code
- Background
- Let's start then
- What is there for JavaScript developer in VS2017
- References
- See also
- Conclusion
- Your turn. What do you think?

Introduction
In this article we are going to see some features of the brand new Visual Studio 2017. This is the second article of the Visual Studio 2017 series. Please be noted that this is not the complete series of new functionalities of Visual Studio 2017; here I am going to share only a few things to get you started with the new Visual Studio 2017. I hope you will like this. Now let's begin.

Download Source Code

Background
You can always find the first part of this series here. If you never use Visual Studio, you can find some articles and code snippets related to it here.

Let's start then
Let's say you have an MVC application with a model Customer as follows.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;

    namespace VS2017Features.Models
    {
        public class Customer
        {
            public string CustId { get; set; }
            public string CustName { get; set; }
            public string CustCode { get; set; }
        }
    }

And a controller as follows.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;
    using VS2017Features.Models;

    namespace VS2017Features.Controllers
    {
        public class HomeController : Controller
        {
            // GET: Home
            public ActionResult Index()
            {
                List<Customer> lstCust = new List<Customer>();
                Customer cst = new Customer()
                {
                    CustId = "1",
                    CustName = "Royal Enfield",
                    CustCode = "CST01"
                };
                lstCust.Add(cst);
                return View(lstCust);
            }
        }
    }

As of now, I am not going to explain the code, as it is pretty much clear and easy. Now we are going to see the preceding topics.
- Run execution to here feature
- The new exception handler
- Redesigned Attach to process box
- Reattach to process
- What is there for JavaScript developer in VS2017

Run execution to here feature
Let's say we have a breakpoint in our code as preceding. Could you notice that there is a small green icon in the image, on which you get the text "Run execution to here" while hovering it? This is a new feature in VS2017: if you click on the icon near any line of code, the execution will hit at that point. That's pretty much simple, right? Now you don't need to put any unwanted breakpoints for checking out the execution line by line while debugging. Now what if you are using this feature in a loop? Let's modify our controller code as preceding.

    public class HomeController : Controller
    {
        // GET: Home
        public ActionResult Index()
        {
            List<Customer> lstCust = BuildCustomer();
            return View(lstCust);
        }

        private static List<Customer> BuildCustomer()
        {
            List<Customer> lstCust = new List<Customer>();
            for (int i = 0; i < 5; i++)
            {
                Customer cst = new Customer()
                {
                    CustId = i.ToString(),
                    CustName = $"CustName{i}",
                    CustCode = $"CST{i}"
                };
                lstCust.Add(cst);
            }
            return lstCust;
        }
    }

And view as preceding.

    @model IEnumerable<VS2017Features.Models.Customer>
    @{
        ViewBag.Title = "Index";
    }
    <h2>Index</h2>
    <style>
        th, td, tr {
            border: 1px solid #ccc;
            padding: 10px;
        }
    </style>
    <table>
        <thead>
            <tr>
                <th>Cust Id</th>
                <th>Cust Name</th>
                <th>Cust Code</th>
            </tr>
        </thead>
        <tbody>
            @foreach (var item in Model)
            {
                <tr>
                    <td>@item.CustId</td>
                    <td>@item.CustName</td>
                    <td>@item.CustCode</td>
                </tr>
            }
        </tbody>
    </table>

Now let's run our application. Put a breakpoint in the loop; when you run, you can see that the i value is getting incremented, which means the iterations are happening. Once the iterations are over, the execution comes out. Please be noted that you can always come out of this loop by changing the execution point.
The new exception popup
The new exception box is really handy: it can be resized, and as an additional feature the inner exceptions are added to the popup itself. No need to search for the inner exceptions now. To create an exception, we can call the preceding function.

    private void MakeDivideByZeroException()
    {
        throw new DivideByZeroException();
    }

Once you call the above function, you can get an error as follows. If you click on View Details, a Quick Watch popup with the error information will be opened. Under the exception settings, you can always set whether you need to throw this type of exception only for a particular DLL or for every DLL.

Redesigned Attach to process box
We all know how we can attach to a process in Visual Studio. But have you ever thought that if we had a search box to search for a process and attach to it, it would make the task easier? No worries, in VS2017 you get that. To see the redesigned Attach to process window, please click on Debug -> Attach to process. Now we have attached our process which was running in the Microsoft Edge browser. Wait, the game is not over yet. Please click on the Debug option; you can see an option called Reattach to process. What is that? This option gives you the advantage of reattaching the recent process that you already attached. That's cool, right? If you have one or more instances of the same process, it will ask you to select whichever you need.

What is there for JavaScript developer in VS2017
VS2017 has lots of improvements for the JavaScript language. Let's find out a few of them.

- Advanced JavaScript IntelliSense
- Added ECMAScript 6
- Introduction of JSDoc
- New Rename options in JavaScript
- Find all references of functions or classes

Advanced JavaScript IntelliSense
VS2017 has advanced IntelliSense for JavaScript. We don't need to remember the parameters for the built-in functions of JavaScript. For example, if you type jQuery.ajax() you can see the parameters of the function as follows.
The best thing is, it even shows exactly the type of the parameter. Yeah, JavaScript types are dynamic, so no one was creating proper documentation for JavaScript functions. Now we have an option. If you right click on the ajax() function and click on Go To Definition, you can see the definition in a .ts file (TypeScript) as follows.

    /**
     * Perform an asynchronous HTTP (Ajax) request.
     *
     * @param settings A set of key/value pairs that configure the Ajax request. All settings are optional. A default can be set for any option with $.ajaxSetup().
     * @see {@link}
     */
    ajax(settings: JQueryAjaxSettings): JQueryXHR;
    /**
     * Perform an asynchronous HTTP (Ajax) request.
     *
     * @param url A string containing the URL to which the request is sent.
     * @param settings A set of key/value pairs that configure the Ajax request. All settings are optional. A default can be set for any option with $.ajaxSetup().
     * @see {@link}
     */
    ajax(url: string, settings?: JQueryAjaxSettings): JQueryXHR;

In VS2017, all JavaScript related documentation is handled by these TypeScript files.

Added ECMAScript 6
In VS2017, the new ECMAScript 6 features are added, which are more intuitive and OOP-style. So we can always use those in our application. Shall we start now?

    class Operations {
        constructor(x, y) {
            this.x = x;
            this.y = y;
        }
        add() {
            return (this.x + this.y);
        }
    }

    var myObj = new Operations(1, 2);
    console.log(myObj.add());

If you run the above code, you can see an output as follows. Now how can you rewrite the above code to a lower ECMAScript version? I leave it to you. You can always read more about ECMAScript 6 here.

JSDoc
Documenting a JavaScript function was a tough task, but not anymore. In VS2017, this is made simple. Let's check it out. By pressing /** you can easily document your JavaScript functions and classes. Let us rewrite the class and function as preceding.
    /**
     * This class performs arithmetic operations
     */
    class Operations {
        /**
         * Operations class constructor
         * @param {any} x
         * @param {any} y
         */
        constructor(x, y) {
            this.x = x;
            this.y = y;
        }
        /**
         * Add function
         */
        add() {
            return (this.x + this.y);
        }
    }

    var myObj = new Operations(1, 2);
    console.log(myObj.add());

You can even set the type of the parameter. If you change the type in the JSDoc, the same will reflect when you create instances or call the functions.

New Rename options in JavaScript
You can now right click on any function name or class name and easily rename it in all references.

Find all references of functions or classes
You can now find the references of your functions or classes easily; just right click and click on Find all references.

That's all for today. I will come with another set of features of Visual Studio 2017 very soon. Happy coding!

References

See also
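For the "rewrite to a lower ECMAScript version" exercise left to the reader in the ECMAScript 6 section, one possible ES5 sketch (my answer, not the article's) replaces the class with a constructor function and puts the method on the prototype:

```javascript
// ES5 equivalent of the ES6 `class Operations` example:
// a constructor function plus a method on the prototype.
function Operations(x, y) {
    this.x = x;
    this.y = y;
}

Operations.prototype.add = function () {
    return this.x + this.y;
};

var myObj = new Operations(1, 2);
console.log(myObj.add()); // 3
```

ES6 classes are largely syntactic sugar over exactly this pattern, which is why the two versions behave the same at runtime.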
https://sibeeshpassion.com/why-visual-studio-2017-let-us-try-it-part-two/
Idea. Read several files line by line, concatenate them, and process the list of lines from all the files.

Implementation. This can be implemented this way:

    import qualified Data.ByteString.Char8 as B

    readFiles :: [FilePath] -> IO B.ByteString
    readFiles = fmap B.concat . mapM B.readFile

    ...

    main = do
        files <- getArgs
        allLines <- readFiles files

Problem. This works unbearably slowly. Notably, the real and user times are several orders of magnitude higher than the system time (measured using UNIX time), so I suppose the problem is that too much time is spent in IO. I didn't manage to find a simple and effective way to solve this problem in Haskell. For instance, processing two files (30,000 lines and 1.2 MB each) takes

    20.98 real 18.52 user 0.25 sys

This is the output when running with +RTS -s:

       157,972,000 bytes allocated in the heap
         6,153,848 bytes copied during GC
         5,716,824 bytes maximum residency (4 sample(s))
         1,740,768 bytes maximum slop
                10 MB total memory in use (0 MB lost due to fragmentation)

                                 Tot time (elapsed)  Avg pause  Max pause
    Gen  0   295 colls, 0 par    0.01s    0.01s      0.0000s    0.0006s
    Gen  1     4 colls, 0 par    0.00s    0.00s      0.0010s    0.0019s

    INIT    time    0.00s  (  0.01s elapsed)
    MUT     time   16.09s  ( 16.38s elapsed)
    GC      time    0.01s  (  0.02s elapsed)
    EXIT    time    0.00s  (  0.00s elapsed)
    Total   time   16.11s  ( 16.41s elapsed)

    %GC     time    0.1%   (0.1% elapsed)

    Alloc rate    9,815,312 bytes per MUT second

    Productivity  99.9% of total user, 98.1% of total elapsed

    16.41 real 16.10 user 0.12 sys

Why is concatenating files with the code above so slow? How should I write the readFiles function in Haskell to make it faster?

You should show us exactly what your processing steps are. This program is very performant even when run on multiple input files of the kind you are using (1.2 MB, 30k lines each):

    import Control.Monad
    import Data.List
    import System.Environment
    import qualified Data.ByteString.Char8 as B

    readFiles :: [FilePath] -> IO B.ByteString
    readFiles = fmap B.concat . mapM B.readFile

    main = do
        files <- getArgs
        allLines <- readFiles files
        print $ foldl' (\s _ -> s+1) 0 (B.words allLines)

Here is how I created the input file:

    import Control.Monad

    main = do
        forM_ [1..30000] $ \i -> do
            putStrLn $ unwords ["line", show i, "this is a test of the emergency"]

Run times:

    time ./program input                -- 27 milliseconds
    time ./program input input          -- 49 milliseconds
    time ./program input input input    -- 69 milliseconds
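The strict read-then-concatenate strategy in the Haskell readFiles translates directly to other languages. As a point of comparison, here is a minimal Python sketch of the same idea; the helper name read_files is mine, not from the thread:

```python
import os
import tempfile

def read_files(paths):
    # Read each file fully and concatenate the raw bytes,
    # mirroring the Haskell readFiles (fmap B.concat . mapM B.readFile).
    chunks = []
    for path in paths:
        with open(path, "rb") as f:
            chunks.append(f.read())
    return b"".join(chunks)

# Demo on two small temporary files
with tempfile.TemporaryDirectory() as d:
    names = []
    for i in range(2):
        name = os.path.join(d, "in%d.txt" % i)
        with open(name, "w") as f:
            f.write("line one\nline two\n")
        names.append(name)
    data = read_files(names)
    print(len(data.split()))  # 8 words across both files
```

As in the Haskell version, the whole content ends up in memory at once; for files of a few megabytes that is exactly what you want, since per-line IO would be far slower.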
http://m.dlxedu.com/m/askdetail/3/d682477e8163e241531a04671e6d5dd3.html
    module mymod
      implicit none
      type my_t
        integer :: a = 3
      end type my_t
    contains
      subroutine by_value(i, m) bind(c, name='BY_VAL')
        use iso_c_binding
        integer(c_int), intent(in), value :: i
        type(my_t), intent(out) :: m
        m%a = i
      end subroutine by_value
    end module mymod

    program test
      use mymod
      implicit none
      integer :: i
      type(my_t) :: m
      i = 5
      call by_val(i, m)
      print *, m%a
      call by_value(i, m)
      print *, m%a
    end program test

    > ifort tc.f90
    Intel(R) Visual Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 16.0.2.180 Build 20160204
    Microsoft (R) Incremental Linker Version 11.00.60610.1
    -out:tc.exe
    -subsystem:console
    tc.obj

    > tc
    1060107240
    5

The above code shows the inconsistency. If I remove the output argument 'm' from the calls, then both output values are '5', as I'd expect. I get the same result with Version 14.0.1.139 Build 20131008.

I do not think there is an inconsistency here: the routine BY_VAL has no explicit interface, so the compiler assumes the FORTRAN 77 rules. The fact that there is an alternative, by_value, makes no difference. The compiler has to apply the old rules, so both arguments are passed by reference.

Thanks, I understand that. The corollary is that to have one routine that is callable from both C and Fortran (and uses the 'value' attribute), the routine name and the bind(c) name need to be different, so that the Fortran calls use the non-bind(c) name and therefore get the correct interface.

Not quite - if you specify bind(c) and do not give a name, a C routine with exactly that name is simply hidden and the Fortran interface is used instead (if you may express it like that).

Have just tried it, but it causes linkage problems.
A typical example is

    module x
    contains
      subroutine get_panel_geometry(ind, pgeom) bind(c)
        use iso_c_binding
        implicit none
        integer(c_int), intent(in), value :: ind
        type(SrLRKPanelGeoData), intent(out) :: pgeom
      end subroutine get_panel_geometry
    end module x

Without an explicit name the DLL build fails with an unresolved external symbol GET_PANEL_GEOMETRY - another whole world of pain!

The error in the first case was that you called the external name rather than the Fortran name. If you change anything about the routine from the Fortran defaults, you need to have the explicit interface visible to the caller. There's no problem having a routine callable from both Fortran and C if you do it consistently. In post 5 it would seem that you're not USEing the module when you call the routine. This is needed not only for the name but for pass-by-value. Why are you deliberately trying to hide the interface from the caller?

I'm not meaning to mislead, I'm just trying to extract the salient features from a large code base. When the procedure 'get_panel_geometry' is called in the Fortran code there is a 'use x', so that it gets the right interface - except that it doesn't. If I understand Arjen correctly, this is because the bind(c, name='GET_PANEL_GEOMETRY') decoration causes the compiler to use the FORTRAN 77 rules, so that Fortran code calling 'get_panel_geometry' effectively has the interface (provided by 'use x') masked. The result is that at runtime the value of 'ind' is incorrect when the routine is called from Fortran but correct when it is called from C++. If I change the Fortran name to, for example, get_panel_geometry_f, leave the bind(c) as in the above paragraph, and change the Fortran calls to get_panel_geometry_f, then the code behaves as expected. If I try just using bind(c), then the build fails because GET_PANEL_GEOMETRY (which is in the .def file) can no longer be found.

Simon Geard wrote: I'm not meaning to mislead, ..
Not sure what exactly the issue is: why not have the Fortran calling procedures use the Fortran name (by_value in your original post) while exposing the interface as Steve pointed out (i.e., "use"ing the module that has the procedure interface on the Fortran calling side), and have the C functions use the name specified in the binding (BY_VAL per your original post)? What doesn't make sense in your original post is the 'call by_val(i,m)' statement in your Fortran main program. By the way, if you're going to use some C interoperability features, then why not go the full distance? That is, apply bind(C) on all interoperating entities; for the code in the original post, it would mean the bind(C) attribute on the derived type (my_t) as well. For simple types with integer components you may feel this adds little, but it will help greatly in the case of more complicated data structures; plus bind(C) should help with data alignment consistency with the companion C processor as well.

When you use NAME= with an uppercase name, what you do is give the routine the same global name the compiler would normally give it on Windows (and by "the compiler" I mean Intel Fortran here - other compilers might have different defaults). Fortran 77 has nothing to do with it. But because you've changed the interface to the routine, that change isn't visible when you call without the interface, so you get wrong results. It's really the same issue as any of the other cases where Fortran requires an explicit interface.

Steve Lionel (Intel) wrote: When you use NAME= with an uppercase name, ...

Oh, is that what the confusion is for the OP? The instructions in the second paragraph should clear things up:

    program test
      use mymod
      ! Fortran names are case insensitive, so by_val vs. BY_VAL
      ! makes no difference
      implicit none
      integer :: i
      type(my_t) :: m
      i = 5
      call by_val(i, m)
      print *, "m%a = ", m%a
      stop
    end program test

    m%a = 5
    Press any key to continue . . .
    #include <stdio.h>

    typedef struct {
        int a;
    } c_t;

    extern void BY_VAL(int, c_t *);

    int main()
    {
        int i;
        c_t m;

        i = 5;
        BY_VAL(i, &m);
        printf("i = %d, m.a = %d", i, m.a);
        return 0;
    }

Upon execution,

    i = 5, m.a = 5

I would recommend against using NAME= with an uppercase name - that makes it too easy to use the routine without the interface.

All types that are interoperable in our code do have the bind(c) attribute. The reason I raised this was that I was trying to use an existing procedure and it failed. The existing code snippet:

    module x
    contains
      subroutine get_panel_geometry(ind, pgeom) bind(c, name='GET_PANEL_GEOMETRY')
        use iso_c_binding
        use pdbtype
        implicit none
        integer(c_int), intent(in), value :: ind
        type(SrLRKPanelGeoData), intent(out) :: pgeom
      end subroutine get_panel_geometry
    end module x

Then there is the corresponding C prototype:

    void GET_PANEL_GEOMETRY(int, SrLRKPanelGeoData*);

and some calls, e.g.

    SrLRKPanelGeoData lgeom;
    GET_PANEL_GEOMETRY(modelIdx, &lgeom);

Calls to this procedure made from the C++ code work as expected. Now I want to call the same code from Fortran, so I add a 'use x' to the calling code and some calls, e.g.

    use x
    ...
    call get_panel_geometry(panelID, pgeom)

but when I run it, it doesn't work - as illustrated in post #1. Thus I can't use the same name for both the C and Fortran code, and a solution is to change the Fortran name to be different from the C name. If it is possible to use the same name for both C and Fortran calls, I'd like to know how, because none of the replies in this thread so far have explained that to me. Hmm. Unfortunately our house style is to make all C++-callable Fortran have uppercase names so that it is more obvious in the C++ code that we're calling into the Fortran subsystem (our C++ procedures are never all uppercase).

What you show should work, as long as the interface is visible. What you did in post 1 was call a different name.

Simon Geard wrote: ..
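As an aside (not from the thread): the whole interop contract here rests on the Fortran derived type and the C struct sharing one memory layout. One quick, compiler-free way to sanity-check such a layout is Python's stdlib ctypes; the class below mirrors the c_t / my_t pair from this thread, and the name CT is just illustrative:

```python
import ctypes

class CT(ctypes.Structure):
    # Mirror of the C side:  typedef struct { int a; } c_t;
    # which in turn matches the interoperable Fortran type my_t
    # (a single integer(c_int) component).
    _fields_ = [("a", ctypes.c_int)]

m = CT(a=5)
print(ctypes.sizeof(CT), m.a)  # size of one c_int, and the value 5
```

If the struct had more components, checking ctypes.sizeof and per-field offsets against the C side is a cheap way to catch padding and alignment mismatches before debugging them at runtime.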
but when I run it, it doesn't work - as illustrated in post #1... If it is possible to use the same name for both C and Fortran calls, I'd like to know how, because none of the replies in this thread so far have explained that to me.

Re: "none of the replies in this thread so far have explained that to me" - did you see Message #10? What you're indicating in Message #12 is not the same as in the original post; in Message #12 you say the Fortran name and the bind(C, name=..) name are the same, but that is not how it is in the original post. Show a reproducible example that is exactly consistent with your scenario. You can take the code below (a minor variant of Message #10; here the C binding name is not upper case). Note it has no issues even though it uses "the same name for both C and Fortran calls", so modify it as needed to reproduce the issue you face and post it on this thread:

    program test
      use mymod
      implicit none
      integer :: i
      type(my_t) :: m
      i = 5
      call by_val(i, m)
      print *, "From Fortran: i = ", i, ", m%a = ", m%a
      stop
    end program test

Upon execution with Intel Fortran,

    From Fortran: i = 5 , m%a = 5
    Press any key to continue . . .

    #include <stdio.h>

    typedef struct {
        int a;
    } c_t;

    extern void by_val(int, c_t *);

    int main()
    {
        int i;
        c_t m;

        i = 5;
        by_val(i, &m);
        printf("From C: i = %d, m.a = %d\n", i, m.a);
        return 0;
    }

Upon execution,

    From C: i = 5, m.a = 5
    Press any key to continue . . .

In my original post I just tried to illustrate the problem I'm having, which I stated quite clearly in post #12 - using a different name is the only way I've found so far of making it work. The actual code base is extensive and I couldn't send it to Premier Support even if I wanted to. When I posted #12, your post #10 had not appeared, so tomorrow I'll try running it. I did get some weird picture about the site being lost on the internet, so there seems to have been a latency issue.
On the face of it your example should fail in exactly the same way as our code does, since the structure and use look identical to what I originally tried; but if it does work, it should provide a reference for working out why our code doesn't, so TIA. It may well be that the way we build our DLLs is a complication (post #7), but the fact is that what I described in post #12 is what is happening, and I have to find either a workaround (post #1) or a fix (probably derived from your example).

Upload a ZIP of a complete example. My guess is there's something you've not shown us that is important.

I know you're sending real code to Steve, but let me try and make this even more complicated (sometimes, I can't help myself). In the very original program, the one in post 1, the interface is declared to be this:

    subroutine by_value(i, m) bind(c, name='BY_VAL')

The compiler always knows this routine by the name "by_value". So, when you created the call to by_value, it knew that you meant *this* one, and knew the calling convention was to pass integer arguments by value. The name "BY_VAL" is the name given to the linker, and is not recognized (if you will) by the compiler as being *this* routine. Therefore, when you created the call to BY_VAL, the compiler thought it was some random external Fortran routine, used the rules that apply to calling any external Fortran routine (such as passing arguments by reference), and told the linker it was looking for a routine named "BY_VAL". At link time, these two things hooked up, and that's why you saw the results you saw in post #1. I also want to second Steve's statement that it should work to have the same name for both C and Fortran, so there is hope. --Lorri

As promised, this morning I added the code provided by FortranFan, ran the program, and it worked. So I removed his/her code, changed my code back to the way it was when it wasn't working, and hey presto, it's working now.
The only thing I can think is that it was some sort of build issue, but none of the code exhibiting the problem is new, so I don't really understand that either. Now it looks as if I've been chasing shadows, but they were very real yesterday when the program was crashing. Thanks for your help. BTW, I had some real problems accessing this topic yesterday and today; screenshots attached. The site has had several periods of flakiness for me in the last couple of weeks.
https://community.intel.com/t5/Intel-Fortran-Compiler/Using-the-value-attribute/td-p/1097251
binoyp 12-09-2020

Hi All, Is there any way to make use of non-English words like "español" as fulltext in the QueryBuilder API?

ChitraMadan

Hi @binoyp, Full-text search will be able to find "español" in the content irrespective of language. Please see the OOTB Groovy script below, which is able to find all the references to "español" in the content:

    def predicates = ["path": "/content", "fulltext": "español"]

    def query = createQuery(predicates)
    query.hitsPerPage = 10

    def result = query.result
    println "${result.totalMatches} hits, execution time = ${result.executionTime}s\n--"
    result.hits.each { hit -> println "hit path = ${hit.node.path}" }

Please check if UTF-8 is enabled in the Apache Sling Request Parameter Handling configuration.
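The UTF-8 configuration point matters because a query term like "español" travels to the server percent-encoded; if the server decodes request parameters with the wrong charset, the fulltext predicate never matches the stored content. A quick Python illustration of the round trip (independent of AEM):

```python
from urllib.parse import quote, unquote

term = "español"
encoded = quote(term)            # UTF-8 percent-encoding, urllib's default
print(encoded)                   # espa%C3%B1ol
assert unquote(encoded) == term  # decoding as UTF-8 restores the word

# Decoding the same bytes as Latin-1 instead garbles the term,
# which is why the request-parameter charset must be UTF-8.
garbled = unquote(encoded, encoding="latin-1")
print(garbled == term)           # False
```

The same two bytes (0xC3 0xB1) are one character under UTF-8 but two under Latin-1, so a charset mismatch silently changes the search term rather than raising an error.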
https://experienceleaguecommunities.adobe.com/t5/adobe-experience-manager/query-regarding-full-text-search/qaq-p/378783/comment-id/80189
We will train a model for matching these question pairs. Let's start by importing the relevant libraries, as follows:

    import sys
    import os
    import pandas as pd
    import numpy as np
    import string
    import tensorflow as tf

Following is a function that takes a pandas series of text as input. The series is converted to a list; each item in the list is converted into a string, made lower case, and stripped of surrounding empty spaces; and the entire list is converted into a NumPy array, to be passed back:

    def read_x(x):
        x = np.array([list(str(line).lower().strip()) for line in x.tolist()])
        return x

Next up is a function that takes a pandas series as input, converts it to a list, and returns it as a NumPy array:

    def read_y(y):
        return np.array(y.tolist())
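To see what the per-line transformation inside read_x produces, here is the same logic in plain Python (no pandas or NumPy needed), applied to two toy questions; the helper name to_char_lists is mine:

```python
def to_char_lists(lines):
    # Same per-line steps as read_x: stringify, lowercase,
    # strip surrounding whitespace, then split into characters.
    return [list(str(line).lower().strip()) for line in lines]

rows = to_char_lists(["Hi ", " OK"])
print(rows)  # [['h', 'i'], ['o', 'k']]
```

Each question becomes a list of characters, which is what makes this a character-level representation of the text rather than a word-level one.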
https://www.oreilly.com/library/view/hands-on-natural-language/9781789139495/05d12c31-dd1a-494d-ac7d-19f056e763aa.xhtml
Part 6: Working With DPDK

Table of contents

- Introduction
- Set up DPDK Devices
- DPDK Worker Threads
- Start Worker Threads
- Management And Statistics Collection
- Running the example

Introduction

One of the key advantages of PcapPlusPlus over similar libraries is its extensive support for the Data Plane Development Kit (DPDK). PcapPlusPlus provides an extensive wrapper for DPDK which encapsulates most of its important and commonly used APIs in an easy-to-use C++ way. This tutorial will go through the fundamentals of using PcapPlusPlus with DPDK. It will demonstrate how to build a L2 forwarding (bridge) application that receives packets on one interface and sends them on another interface. It may sound simple, but DPDK makes it possible to do that at wire speed (!). This example will demonstrate some of the key APIs and concepts in the PcapPlusPlus wrapper for DPDK. This specific example was chosen because it corresponds to a similar example in the DPDK documentation called L2 Forwarding, which many DPDK users are probably familiar with, and that may help in better understanding the code and the idea behind it. Before starting this tutorial it is highly recommended to have a basic understanding of what DPDK is (you can find a lot of reading material on the DPDK website) and also read the page describing PcapPlusPlus support for DPDK. For further information about the APIs and classes please refer to the API documentation.

Before diving into the code, let's see how our L2 forwarding (bridge) application will be built: we will use 2 DPDK-controlled NICs, one on each side of the network. We'll also have 2 worker threads. The first thread will receive packets on NIC #1 and will send them on NIC #2, and the other thread will receive packets on NIC #2 and will send them on NIC #1. Now that we have this basic understanding, let's go ahead and build this application!
Set up DPDK Devices

The first thing any application that uses DPDK should do is initialize DPDK and set up the DPDK interfaces (devices). This initialization involves a couple of steps and we'll go through all of them.

The first step is done before running the application. PcapPlusPlus contains a shell script called setup-dpdk.sh which initializes Huge Pages (which are required for DPDK's memory allocation) and the DPDK kernel driver which removes kernel control from selected NICs and hands it over to DPDK. You can read more about it in the PcapPlusPlus support for DPDK page.

The second step is done in the application's code and is a general DPDK initialization phase. It is also described in the PcapPlusPlus support for DPDK page and contains steps like initializing DPDK internal structures and memory pools, initializing the packet memory pool, and more.

Let's start by writing a general main() method and initializing DPDK:

    #include <vector>
    #include <unistd.h>
    #include <sstream>
    #include "SystemUtils.h"
    #include "DpdkDeviceList.h"
    #include "TablePrinter.h"
    #include "WorkerThread.h"

    #define MBUF_POOL_SIZE 16*1024-1
    #define DEVICE_ID_1 0
    #define DEVICE_ID_2 1

    int main(int argc, char* argv[])
    {
        // Initialize DPDK
        pcpp::CoreMask coreMaskToUse = pcpp::getCoreMaskForAllMachineCores();
        pcpp::DpdkDeviceList::initDpdk(coreMaskToUse, MBUF_POOL_SIZE);

        ....
        ....
    }

There are a couple of steps here:

- Decide on the mbuf pool size. mbufs are DPDK structures for holding packet data. Each mbuf holds the data of one packet, or a portion (segment) of it. On application start-up DPDK allocates memory for creating a pool of mbufs that will be used by the application throughout its runtime. That way DPDK avoids the overhead of creating mbufs and allocating memory during the application's run. Let's decide that the size of this mbuf pool is 16383 mbufs. It is recommended to set a size that is a power of 2 minus 1 (in our case: 16383 = 2^14 - 1)
- Decide which CPU cores will take part in running the application. DPDK leverages multi-core architecture to parallelize packet processing. In our case we initialize DPDK with all cores available on the machine
- Invoke pcpp::DpdkDeviceList::initDpdk() which runs the initialization

Now let's find the DPDK interfaces (devices) we'll use to send and receive packets. The class DpdkDevice encapsulates a DPDK interface. The singleton DpdkDeviceList contains all DPDK devices that are available for us to use:

    // Find DPDK devices
    pcpp::DpdkDevice* device1 = pcpp::DpdkDeviceList::getInstance().getDeviceByPort(DEVICE_ID_1);
    if (device1 == NULL)
    {
        printf("Cannot find device1 with port '%d'\n", DEVICE_ID_1);
        return 1;
    }

    pcpp::DpdkDevice* device2 = pcpp::DpdkDeviceList::getInstance().getDeviceByPort(DEVICE_ID_2);
    if (device2 == NULL)
    {
        printf("Cannot find device2 with port '%d'\n", DEVICE_ID_2);
        return 1;
    }

As you can see, we're using the DpdkDeviceList singleton to get the 2 DPDK devices. The port numbers DEVICE_ID_1 (of value 0) and DEVICE_ID_2 (of value 1) are determined by DPDK and we should know them in advance. The next step is to open the 2 devices so we can start receiving and sending packets through them. Let's see the code:

    // Open DPDK devices
    if (!device1->openMultiQueues(1, 1))
    {
        printf("Couldn't open device1 #%d, PMD '%s'\n", device1->getDeviceId(), device1->getPMDName().c_str());
        return 1;
    }

    if (!device2->openMultiQueues(1, 1))
    {
        printf("Couldn't open device2 #%d, PMD '%s'\n", device2->getDeviceId(), device2->getPMDName().c_str());
        return 1;
    }

As you can see, we're using a method called openMultiQueues(). This method opens the device with a provided number of RX and TX queues. The number of supported RX and TX queues varies between NICs. You can get the number of supported queues by using the following methods: DpdkDevice::getTotalNumOfRxQueues() and DpdkDevice::getTotalNumOfTxQueues().
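The "power of 2 minus 1" recommendation for MBUF_POOL_SIZE is easy to check programmatically: a number of the form 2^n - 1 has all of its low bits set, so n & (n + 1) is zero. A small Python sketch of this check (mine, not from the tutorial):

```python
def is_pow2_minus_1(n):
    # 16383 = 2**14 - 1 = 0b11111111111111, so adding 1 clears every set bit
    return n > 0 and (n & (n + 1)) == 0

MBUF_POOL_SIZE = 16 * 1024 - 1   # the value used in the tutorial
print(MBUF_POOL_SIZE, is_pow2_minus_1(MBUF_POOL_SIZE))  # 16383 True
```

A one-line check like this is handy if the pool size ever becomes configurable, since an off-by-one (say, 16384) silently wastes the ring's alignment benefit.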
The reason for opening more than 1 RX/TX queue is to parallelize packet processing over multiple cores, where each core is responsible for 1 or more RX/TX queues. On RX, the NIC is responsible for load-balancing RX packets to the different queues based on a provided hash function. Doing this inside the NIC makes it much faster and offloads processing from the CPU cores. This load-balancing mechanism is called Receive Side Scaling (RSS) and is also wrapped by PcapPlusPlus; for more details please see the RSS configuration in DpdkDevice::DpdkDeviceConfiguration. In our case we choose the simple case of 1 RX queue and 1 TX queue for each device. That means we'll use 1 thread for each direction.

DPDK Worker Threads

Now that we have finished the DPDK setup and initialization, let's move on to the actual work of capturing and sending packets. The way we are going to do that is using DPDK worker threads. We will create 2 worker threads: one for sending packets from device1 to device2, and the other for sending packets from device2 to device1. Each worker thread will run on a separate CPU core and will execute an endless loop that will receive packets from one device and send them to the other. Worker threads in PcapPlusPlus are instances of a class that inherits DpdkWorkerThread.
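To make the RSS idea concrete, here is a toy Python sketch of hash-based RX queue selection. Real NICs use a configurable hash (commonly a Toeplitz hash) over packet header fields; this sketch is not DPDK's actual algorithm, it only illustrates the property that matters: every packet of a given flow lands deterministically on the same RX queue.

```python
import hashlib

def rx_queue_for(flow, num_queues):
    # flow is a 5-tuple: (src ip, dst ip, src port, dst port, protocol).
    # Hash it and reduce modulo the queue count - a stand-in for the
    # NIC's RSS hash function.
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_queues

flow = ("10.0.0.1", "10.0.0.2", 12345, 80, "tcp")
q1 = rx_queue_for(flow, 4)
q2 = rx_queue_for(flow, 4)
print(q1 == q2)  # True: same flow, same queue
```

Because the mapping is a pure function of the header fields, all packets of one TCP connection stay on one queue (and hence one core), so per-flow state never needs cross-core locking.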
Let's write the header file of this class and see what it looks like:

    #pragma once

    #include "DpdkDevice.h"
    #include "DpdkDeviceList.h"

    class L2FwdWorkerThread : public pcpp::DpdkWorkerThread
    {
    private:
        pcpp::DpdkDevice* m_RxDevice;
        pcpp::DpdkDevice* m_TxDevice;
        bool m_Stop;
        uint32_t m_CoreId;

    public:
        // c'tor
        L2FwdWorkerThread(pcpp::DpdkDevice* rxDevice, pcpp::DpdkDevice* txDevice);

        // d'tor (does nothing)
        ~L2FwdWorkerThread() { }

        // implement abstract methods

        // start running the worker thread
        bool run(uint32_t coreId);

        // ask the worker thread to stop
        void stop();

        // get worker thread core ID
        uint32_t getCoreId() const;
    };

DpdkWorkerThread is an abstract class that requires inherited classes to implement 3 methods:

- run() - start the worker. This method is called when the thread gets invoked and is expected to run throughout the life of the thread. Typically this method will contain an endless loop that runs the logic of the application
- stop() - stop the execution of the worker
- getCoreId() - return the core ID the worker is running on

In addition to implementing these methods we also have a constructor and an empty destructor. We also save pointers to the RX and TX devices that the worker will read packets from and send packets to.
Now let's see the implementation of this class's methods:

    #include "WorkerThread.h"

    L2FwdWorkerThread::L2FwdWorkerThread(pcpp::DpdkDevice* rxDevice, pcpp::DpdkDevice* txDevice) :
        m_RxDevice(rxDevice), m_TxDevice(txDevice), m_Stop(true), m_CoreId(MAX_NUM_OF_CORES+1)
    {
    }

    bool L2FwdWorkerThread::run(uint32_t coreId)
    {
        // Register coreId for this worker
        m_CoreId = coreId;
        m_Stop = false;

        // initialize a mbuf packet array of size 64
        pcpp::MBufRawPacket* mbufArr[64] = {};

        // endless loop, until asking the thread to stop
        while (!m_Stop)
        {
            // receive packets from RX device
            uint16_t numOfPackets = m_RxDevice->receivePackets(mbufArr, 64, 0);

            if (numOfPackets > 0)
            {
                // send received packets on the TX device
                m_TxDevice->sendPackets(mbufArr, numOfPackets, 0);
            }
        }

        return true;
    }

    void L2FwdWorkerThread::stop()
    {
        m_Stop = true;
    }

    uint32_t L2FwdWorkerThread::getCoreId() const
    {
        return m_CoreId;
    }

The constructor is quite straightforward and initializes the private members. Please notice that the initial value for the core ID is the maximum supported number of cores + 1, i.e. a deliberately invalid value until run() assigns a real one. The stop() and getCoreId() methods are also quite trivial and self-explanatory.

Now let's take a look at the run() method, which contains the L2 forwarding logic. It consists of an endless loop that is interrupted through a flag set by the stop() method (which indicates the thread should stop its execution). Before starting the loop it creates an array of 64 MBufRawPacket pointers which will be used to store the received packets. The loop itself is very simple: it receives packets from the RX device using m_RxDevice->receivePackets(mbufArr, 64, 0). The packets are stored in the MBufRawPacket array. Then it immediately sends those packets to the TX device using m_TxDevice->sendPackets(mbufArr, numOfPackets, 0). You may be asking who takes care of freeing the packet array and mbufs in each iteration of the loop. Well, this is done automatically by sendPackets(), so we don't have to take care of it ourselves.
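The run/stop contract above (an endless loop polling a stop flag that another thread flips) is a general pattern, not DPDK-specific. Here is a minimal Python sketch of the same shape, with a queue standing in for the RX device and a list for the TX device; all names are mine:

```python
import queue
import threading
import time

class ForwardingWorker:
    # Poll an input queue and forward items to an output list until
    # stop() flips the flag - same shape as L2FwdWorkerThread::run.
    def __init__(self, rx):
        self.rx = rx
        self.tx = []
        self._stop = False

    def run(self):
        while not self._stop:
            try:
                self.tx.append(self.rx.get(timeout=0.05))
            except queue.Empty:
                pass  # nothing received this iteration, keep polling

    def stop(self):
        self._stop = True

rx = queue.Queue()
worker = ForwardingWorker(rx)
t = threading.Thread(target=worker.run)
t.start()
for pkt in (b"p1", b"p2", b"p3"):
    rx.put(pkt)
time.sleep(0.2)   # let the worker drain the queue
worker.stop()
t.join()
print(worker.tx)  # [b'p1', b'p2', b'p3']
```

Note one difference from the DPDK version: the sketch blocks briefly on the queue, whereas a DPDK worker busy-polls and burns its whole core on purpose, trading CPU for latency.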
This basically summarizes the implementation of the worker thread. In the current application we'll set up 2 worker threads: one for receiving packets from DEVICE_ID_1 and sending them over DEVICE_ID_2, and another for receiving packets from DEVICE_ID_2 and sending them over DEVICE_ID_1.

Start Worker Threads

Now that we have the worker thread code ready, let's wire everything up and start the application. First, let's create the worker thread instances:

    // Create worker threads
    std::vector<pcpp::DpdkWorkerThread*> workers;
    workers.push_back(new L2FwdWorkerThread(device1, device2));
    workers.push_back(new L2FwdWorkerThread(device2, device1));

As you can see, we give the first worker thread device1 as the RX device and device2 as the TX device, and vice versa for the second worker thread. We store pointers to these two instances in a vector. The next step is to assign cores for these two worker threads to run on. DPDK enforces running each worker on a separate core to maximize performance. We will create a core mask that contains core 1 and core 2. Let's see what this code looks like:

    // Create core mask - use core 1 and 2 for the two threads
    int workersCoreMask = 0;
    for (int i = 1; i <= 2; i++)
    {
        workersCoreMask = workersCoreMask | (1 << i);
    }

As you can see, we basically create the value 0x6 (or 0b110), where we set only the bits that correspond to the cores we want to use (1 and 2).

Now let's start the worker threads:

    // Start capture in async mode
    if (!pcpp::DpdkDeviceList::getInstance().startDpdkWorkerThreads(workersCoreMask, workers))
    {
        printf("Couldn't start worker threads");
        return 1;
    }

Management And Statistics Collection

Now we're at a point where the 2 worker threads are running their endless loops, which receive packets on one interface and send them on the other. Practically we're done and the bridge should be working now.
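The core-mask construction is plain bit manipulation. The same loop in Python shows the resulting value (0x6) and how to read the member cores back out of a mask; the helper names are mine:

```python
def build_core_mask(cores):
    # Set one bit per core ID, as in the C++ loop above
    mask = 0
    for c in cores:
        mask |= 1 << c
    return mask

def cores_in_mask(mask):
    # Recover the core IDs encoded in a mask
    return [c for c in range(mask.bit_length()) if mask & (1 << c)]

mask = build_core_mask([1, 2])
print(hex(mask), bin(mask), cores_in_mask(mask))  # 0x6 0b110 [1, 2]
```

Bit 0 (core 0) is deliberately left clear here, which is what reserves core 0 for the management loop described in the next section of the tutorial.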
But to make the program more complete, let's also add a graceful shutdown and user-friendly prints to view the RX/TX statistics during the application's run. For the graceful shutdown we'll use a utility class in PcapPlusPlus called ApplicationEventHandler which encapsulates user-driven events that may occur during application run, such as a process kill (ctrl+c). For using this class we'll need to add one line at the beginning of our main() method which registers a callback we'd like to be called when ctrl+c is pressed:

    int main(int argc, char* argv[])
    {
        // Register the on app close event handler
        pcpp::ApplicationEventHandler::getInstance().onApplicationInterrupted(onApplicationInterrupted, NULL);

        // Initialize DPDK
        pcpp::CoreMask coreMaskToUse = pcpp::getCoreMaskForAllMachineCores();
        pcpp::DpdkDeviceList::initDpdk(coreMaskToUse, MBUF_POOL_SIZE);

        .....
        .....

Now let's implement this onApplicationInterrupted callback. It has a very simple logic which sets a global flag:

    // Keep running flag
    bool keepRunning = true;

    void onApplicationInterrupted(void* cookie)
    {
        keepRunning = false;
        printf("\nShutting down...\n");
    }

Now that we have this flag, we can set up an endless loop that will run on the main thread and will keep printing statistics until ctrl+c is pressed. Please notice this is not the loop the worker threads are running; this is a different loop that runs on the management core (core 0 in our case). Let's dwell on this point a little bit more to better understand how DPDK works: the worker threads are running on cores 1 and 2 and their endless loop consumes 100% of their capacity. This guarantees achieving the best possible performance. However, it's a good practice (although not required) to allocate at least one more CPU core for management, meaning tasks that are not in the application's fast path, such as statistics collection, providing a user interface (CLI or other), health monitoring, etc.
Usually this management core will be core 0, but you can set up any other core. This management core is also the one running the main() method. Now let's go back to our application: once we have started the worker threads on cores 1 and 2, we would like the management core to continuously gather statistics and print them to the user. The way to do that is to set up an endless loop inside the main() method that will collect and print the stats, and will be interrupted when the user presses ctrl+c (setting the keepRunning flag). Let's see the implementation:

    #define COLLECT_STATS_EVERY_SEC 2

    uint64_t counter = 0;
    int statsCounter = 1;

    // Keep running while flag is on
    while (keepRunning)
    {
        // Sleep for 1 second
        sleep(1);

        // Print stats every COLLECT_STATS_EVERY_SEC seconds
        if (counter % COLLECT_STATS_EVERY_SEC == 0)
        {
            // Clear screen and move to top left
            const char clr[] = { 27, '[', '2', 'J', '\0' };
            const char topLeft[] = { 27, '[', '1', ';', '1', 'H', '\0' };
            printf("%s%s", clr, topLeft);

            printf("\n\nStats #%d\n", statsCounter++);
            printf("==========\n\n");

            // Print stats of traffic going from Device1 to Device2
            printf("\nDevice1->Device2 stats:\n\n");
            printStats(device1, device2);

            // Print stats of traffic going from Device2 to Device1
            printf("\nDevice2->Device1 stats:\n\n");
            printStats(device2, device1);
        }

        counter++;
    }

As you can see, the while loop collects statistics, prints them, and then sleeps for 1 second.
Now let's see how to gather network statistics:

    void printStats(pcpp::DpdkDevice* rxDevice, pcpp::DpdkDevice* txDevice)
    {
        pcpp::DpdkDevice::DpdkDeviceStats rxStats;
        pcpp::DpdkDevice::DpdkDeviceStats txStats;
        rxDevice->getStatistics(rxStats);
        txDevice->getStatistics(txStats);

        std::vector<std::string> columnNames;
        columnNames.push_back(" ");
        columnNames.push_back("Total Packets");
        columnNames.push_back("Packets/sec");
        columnNames.push_back("Bytes");
        columnNames.push_back("Bits/sec");

        std::vector<int> columnLengths;
        columnLengths.push_back(10);
        columnLengths.push_back(15);
        columnLengths.push_back(15);
        columnLengths.push_back(15);
        columnLengths.push_back(15);

        pcpp::TablePrinter printer(columnNames, columnLengths);

        std::stringstream totalRx;
        totalRx << "rx" << "|" << rxStats.aggregatedRxStats.packets << "|"
                << rxStats.aggregatedRxStats.packetsPerSec << "|"
                << rxStats.aggregatedRxStats.bytes << "|"
                << rxStats.aggregatedRxStats.bytesPerSec*8;
        printer.printRow(totalRx.str(), '|');

        std::stringstream totalTx;
        totalTx << "tx" << "|" << txStats.aggregatedTxStats.packets << "|"
                << txStats.aggregatedTxStats.packetsPerSec << "|"
                << txStats.aggregatedTxStats.bytes << "|"
                << txStats.aggregatedTxStats.bytesPerSec*8;
        printer.printRow(totalTx.str(), '|');
    }

DpdkDevice exposes the getStatistics() method for stats collection. Various counters are collected, such as the number of packets, amount of data, packets per second, bytes per second, etc. You can view them separately per RX/TX queue, or aggregated per device. It's important to understand that these numbers are only relevant for the timestamp at which they are collected, and therefore this timestamp is also included in the data. You can read more about this in the class documentation. If we go back to the code above, you can see we're collecting stats for the 2 devices. From one we take RX stats and from the other we take TX stats.
We are using a utility class in PcapPlusPlus called TablePrinter to print the numbers nicely in a table format. For the sake of simplicity we are taking only the aggregated RX and TX stats, but of course we could also take and print RX/TX stats per queue. We are almost done. One last thing to do is to run the necessary clean-ups once the user presses ctrl+c. The only relevant clean-up is to stop the worker threads; let's see the code:

    // Stop worker threads
    pcpp::DpdkDeviceList::getInstance().stopDpdkWorkerThreads();

    // Exit app with normal exit code
    return 0;

That's it, we're all set! Now let's run the program and see the output:

    Stats #5
    ==========

    Device1->Device2 stats:

    --------------------------------------------------------------------------------------
    |            | Total Packets   | Packets/sec     | Bytes           | Bits/sec        |
    --------------------------------------------------------------------------------------
    | rx         | 2850754         | 134607          | 4307240599     | 1627406832      |
    | tx         | 2851371         | 132058          | 4296728841     | 1592137536      |
    --------------------------------------------------------------------------------------

    Device2->Device1 stats:

    --------------------------------------------------------------------------------------
    |            | Total Packets   | Packets/sec     | Bytes           | Bits/sec        |
    --------------------------------------------------------------------------------------
    | rx         | 160880          | 3273            | 11261910       | 1833416         |
    | tx         | 161001          | 4533            | 10627168       | 2393688         |
    --------------------------------------------------------------------------------------

This output is printed every 2 seconds and shows, for each direction: the total number of packets received and sent so far, the total number of bytes received and sent so far, packets per second, and bps (bits per second).

Running the example

All the code that was covered in this tutorial can be found here.
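TablePrinter itself is part of PcapPlusPlus, but the fixed-width formatting it performs is easy to sketch. Here is a hypothetical Python equivalent that pads each '|'-separated value to its column width; the function name and exact spacing are mine, not the library's:

```python
def print_row(values, widths):
    # Left-align each cell and pad it to its column width,
    # producing one row of a pipe-delimited table.
    cells = [str(v).ljust(w) for v, w in zip(values, widths)]
    return "| " + " | ".join(cells) + " |"

widths = [10, 15, 15, 15, 15]
row = print_row(["rx", 2850754, 134607, 4307240599, 1627406832], widths)
print(row)
```

Fixing the column widths up front is what keeps consecutive refreshes of the stats screen aligned even as the counters grow by digits.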
In order to compile and run the code, please first download and compile the PcapPlusPlus source code, or download a pre-compiled version from the latest PcapPlusPlus release. When building from source, please make sure to configure the build for DPDK, as explained here. The only platform relevant for this tutorial is Linux, as DPDK is not supported on other platforms. After building PcapPlusPlus and the tutorial, and before running the tutorial, please run the setup-dpdk.sh script to set up the necessary runtime parameters for DPDK. More details on this script can be found here. Please note this tutorial needs a special environment to run in, as it requires at least 2 devices connected only through a third device running this application. If you need help setting up this environment and you have VirtualBox, you can use this great tutorial, which will walk you through it. The compiled executable will be inside the tutorial directory ([PcapPlusPlus Folder]/Examples/Tutorials/Tutorial-DpdkL2Fwd).
https://pcapplusplus.github.io/docs/tutorials/dpdk
Introduction

One hundred years ago there were about 100 000 domesticated elephants in Thailand, almost all of them employed in the logging industry. In 1965, the Department of Livestock Development (DLD) reported a figure of 11 192. This number had decreased to 3 381 in 1985 and to 2 257 in 1998. Thus, the population appears to be decreasing at a rate of about 3 percent per year. In 1989, a government logging ban to preserve the existing forestland - which amounts to only about 25 percent of the country - caused 70 percent of domesticated elephants to become unemployed. Many elephants have been forced to stray into big cities in order to earn a living for themselves and their mahouts, thus posing a danger to the general public. Most of these elephants receive insufficient food and water and are sometimes seriously injured in traffic accidents. Developing ecotourism sites in the various regions of the country might offer more suitable employment opportunities for the elephants and their mahouts, but an appropriate and comprehensive ecotourism development plan should first be formulated.

Existing elephant-related tourist activities and working conditions

Because of the logging ban, and as a result of increasing interest in ecotourism, elephants and their mahouts can be found working in the tourism industry in all regions of the country, usually in elephant camps. Tables 1 to 5 present the results of the first nationwide field survey of elephant-related ecotourism sites in Thailand.

Problems associated with using elephants in ecotourism

The biggest problems for elephant owners are providing the elephants with sufficient food each day, meeting the high cost of the large amount needed, and removing the dung. Elephants can digest only about 40 percent of what they eat, so providing 200 kg of food per day leaves on the order of 120 kg of undigested material to remove as dung.
There are also land-use conflicts that pit elephant owners against other members of the community, including government agencies.

Northern Thailand: There are 14 elephant camps with 536 elephants in four provinces. As almost all of the feeding areas and trekking routes are in forest reserve lands, there are conflicts between the camp owners and the Forestry Department. The use of these lands has to be certified by the Royal Forest Department. This is a slow process, and meanwhile the tourism business is growing rapidly, so the conflicts are ongoing and very serious.

Table 1. Northern Thailand

Notes

The Maesa elephant camp is the biggest and probably the best organized one in Thailand. In April, the Karen elephant owners, who are known as natural elephant experts, go back home with their elephants to participate in the "Mud Mir Chang" or "Elephant Homecoming Celebration". Sometimes, they do not return to their elephant camps afterwards. The rate for hiring one elephant is 7 000-8 000 baht per month in the high tourist season and 3 500-4 000 in the low season. The monthly salary of a mahout (except at the TECC Lampang) is about 1 500 baht, with accommodation, food and medical care. The average charge is 270-350 baht/hour.

Table 2. Central Thailand

Notes

The Ayutthaya Elephant Camp, established in 1997, holds an additional 45 elephants in camps in Kanchanaburi, Phuket, and Chaiyaphum provinces. The Rose Garden, a country resort established in 1965, started the Thai Village Cultural Show using elephants as early as 1969. They have recorded three generations of elephants. The elephant show at the Crocodile Farm started about 35 years ago.

Table 3. Eastern Thailand

Notes

In the eastern part of Thailand, almost all of the tourists who visit the elephant camps come from East Asian countries such as Korea, Taiwan, and China (except Paniat Chang, which has a great number of European tourists). They are generally interested in short rides.
Most camps do not have feeding areas large enough for elephants; only a few have wide feeding areas. Only the Paniat Chang camp uses northern mahouts, because the owner's wife is from northern Thailand; the mahouts in the other camps come from Surin. The rate for short rides is on average 300 baht/hour; in popular tourist areas it increases to 3 000 baht/hour. The monthly rate for hiring an elephant is about 7 000-8 000 baht.

Table 4. Western Thailand

Notes

All of them are located at good tourist sites and have feeding areas. Average charge: 230-300 baht/hour. Elephant hire rate: 8 000 baht/month. Medical care is provided by the local vet and Kasetsart University, and there are also frequent visits from MEC.

Table 5. Southern Thailand

Notes

Southern Thailand is the highest-income tourist region for elephants, but a lack of feeding areas is its weak point. Almost all of the elephants and mahouts come from the northeast region of Thailand. Mahouts get a bonus of 1 baht for every minute of riding and are given a room to stay in and an allotment of rice. Working hours are 07.00-9.00 hours. The rate for hiring an elephant is 9 000-12 000 baht/month. Elephant care mostly comes from local vets. Short rides are 10, 15 or 30 minutes, and trekking is 60 minutes or more.

Central Thailand: The problem is somewhat different from that in the north, but it also involves conflicts of interest. There is an ongoing conflict between the owner of the Ayutthaya Elephant Camp, which has 35 elephants, and the Fine Arts Department, as the camp is in the middle of the World Heritage site of Ayutthaya, the ancient capital of the country. Another conflict is related to water pollution: although the camp's owner has made a serious attempt to clear the elephants' dung, numerous nearby waterways and reservoirs have been adversely affected.

Eastern Thailand: There are six elephant camps in Chonburi province with 129 elephants.
The number of tourists visiting these elephant camps is increasing, but competition is very fierce. The only way to attract tourists is to reduce the riding fee to a reasonable rate, which then makes it necessary for an elephant to work five to six hours a day to reach the desired income of 400 baht a day. These conditions have forced some mahouts and elephants to leave the camps and go to the big cities.

Western Thailand: There are six elephant camps in Kanchanaburi with a total of 116 elephants. This region is an appropriate site for ecotourism: it is an excellent tourist destination with an adequate feeding area, and tourism is growing in the province. However, there still are some conflicts between the camps' owners and the Forestry Offices or the Local Administrative Organization.

Southern Thailand: There are ten elephant camps in Phuket province with 174 elephants. Although the camp owners have a high, regular income, there are too many elephants considering the number of tourism sites and feeding areas. A good solution would be to limit the number of elephants on the island.

Future perspective

The domesticated elephants in Thailand can be categorized into three groups: unemployed elephants; tourism elephants; and street-wandering elephants. The largest group is the unemployed elephants, estimated to number between 1 200 and 1 400. The second largest group is the tourism elephants, numbering over 1 000 animals, and the smallest group consists of wandering elephants, amounting to about 100 in various cities. There are also an undetermined number of domesticated elephants being used in illegal logging operations. To solve the overall problem, we should pay attention to all of these groups, as elephants move between them; but the tourism industry should be the main source of permanent jobs for domesticated elephants in Thailand.
A model multi-component, multi-site tourism-related project currently being planned by the FIO is described in the following paragraphs.

Thai elephants' New World Project

Concept: The Thai elephants' New World Project is designed to provide a suitable natural habitat for elephants, and to provide them with excellent health care and a good quality of life. Essentially, the project consists of the construction of elephant conservation centers and associated facilities. The project will fully conform to all relevant Thai laws.

Thai Elephant Conservation Centers (TECC): The centers will comprise two types of area: a forest area for growing the elephants' food, and the Elephant Conservation Area. The Elephant Conservation Area needs to be fertile so that elephants can live there naturally. The land has to be improved to provide them with water and food sources, and measures to prevent elephants from disturbing and destroying cultivated areas and surrounding communities need to be decided. The Thai Elephant Conservation Centers will comprise:

1) Provincial Center

The provincial center will coordinate the task of helping the elephants in each province and will provide elephant-related information and spread knowledge about elephants throughout the region. The provincial centers will co-operate not only with each other, but also with other foundations, to help straying elephants, out-of-work elephants, unwanted and donated elephants, handicapped elephants, etc. The provincial center will send all these types of elephants to the Elephant Preparation Center to be classified.
2) Elephant Preparation Center

The center's duty is to take initial care of the elephants' health, and to classify them into the following categories before sending them to the Conservation Center: suitable for returning to their natural habitat; bulls or cows suitable for breeding; old or handicapped elephants; elephants with a record of killing people; and elephants with suitable temperaments for participating in shows and the like.

3) Curing Center

This center will have highly trained staff and modern equipment and will treat those elephants in need of serious medical care. The responsibilities and duties of the curing center are: curing and nursing both inbound and outbound elephants; developing elephant health; taking care of elephants that have been cured but cannot work any more or cannot go back to live in the forest alone (handicapped elephants); training and providing knowledge to the owners or mahouts, the Conservation Center's staff, the private sector, and the general public; and co-operating with the government sector and other sectors involved in controlling rampaging elephants and elephants in musth, and notifying communities about any dangers.

4) Elephant and Mahout Training School

Elephants and mahouts will be given training certificates certifying that they have been trained to acceptable standards. The training school will have the following duties: to train and increase the knowledge and skills of existing mahouts; to train new mahouts; to train those elephants of suitable age for work; to classify the mature elephants and provide them with suitable work to do; to determine the criteria for, and issue, the certificates certifying the quality of elephants and mahouts; and to design the school curriculum.

5) Elephant Research and Development Center

This center will conduct research concerning the elephants' food, health and illnesses.
It will help to strengthen elephant breeding programmes and act as a resource center, exchanging technical knowledge and elephant news both inside and outside the country.

6) Elephant Museum

The elephant museum will comprise exhibition halls divided into permanent, temporary, and open-air exhibition areas, a lecture room or auditorium, a data center and a library. Its purpose is to strengthen Thai elephant conservation among the Thai people. It will collect data and spread knowledge and basic understanding about the biology and nature of Thai elephants to the youth, students and the general public. Besides supporting tourism, it will create new jobs, spreading income to the locals employed in the elephant museum.

7) Nature Study Center

The activities in the center will comprise youth camps, overnight camps, white nature camps (against drug addiction), nature conservation camps with study activities, trekking to admire nature, the promotion of local cultures, and the development of souvenirs on an elephant motif. It will act as a center for exchanging knowledge in the international arena. The purpose of the center is to promote nature study without altering the ecology of any area and to educate the youth and tourists to behave properly in the forest. The center will provide learning materials, such as nature study manuals.

8) Tourism Development and Service Center

The center will provide tourists with knowledge, understanding and comfort when traveling to various tourism sites. Besides, it will be the point where tourists can rest or call for help in case of difficulties. It will also develop and maintain the natural resources, and this will result in an increase in the number of tourists staying overnight in the Thai Elephant Conservation Centers, adding more income for the mahouts and local communities.
The number of elephants in FIO's Thai Elephant New World Project

The elephants managed by this project will be allowed to live as naturally as possible, at an expected density of one elephant per 50 rai (8 ha). The total number of elephants in each center is calculated as shown in Table 6.

Development and improvement of the area: The project management should:

1) Maintain and promote the outstanding natural characteristics of each project area and only permit activities that are in harmony with these.

2) Determine the carrying capacity of the area and ensure that the number of people and animals using the area does not exceed this.

3) Provide appropriate facilities and ensure that they harmonise with the natural surroundings.

Table 6. Expected numbers of elephants in the project centers and camps

Note: 6.25 rai is equal to 1 ha.

Zoning scheme: The area used is classified into three zones:

1) Public Zone: This zone is for project buildings and supports the visitors to the center and the staff of the Thai Elephant Conservation Center. The area can be used to its full potential. The activities in this zone are the Tourist Information Center, the Elephant Exhibition Center and Art and Culture in the Elephant Village, the Elephant Museum, training and research services, etc.

2) Semi-Public Zone: This is a restricted zone that will have some buildings and landscape improvements to support the centers' staff. Part of it can be utilized by visitors. This area is used more sparsely than the Public Zone. The Elephant Preparation Center, Elephant Curing Center, Elephant and Mahout Training School, the Research and Development Center, etc., will be located here.

3) Reserved Zone: This is part of the original forest that will be planted with supplementary crops, especially elephant food. It will also be the location of the centers' water resources. The zone will be used for feeding both tethered and free elephants and will also support trekking for the tourists.
Basic infrastructure: Basic infrastructure will consist of: transportation routes; a drainage system; water sources; a municipal water system; a waste collection and disposal system; a water collection and treatment system; and a power and electrical system.

Marketing and personnel development: When the Elephant Conservation Centers are ready to open, tourism marketing activities will be carried out so as to attract visitors and generate income for the elephants and centers. Elephant trekking will be offered, as well as elephant riding, bird watching, nature study activities, bicycling and others. Besides having the tourism routes between the centers and resorts both in and outside the provinces, the communities around the centers will benefit from the increase in tourism. Some of the profit will be used to give the youth a chance to be trained and for marketing scholarships. As for the locals, they will be supported to develop handicrafts such as cloth weaving and making souvenirs related to elephants, and to cultivate mixed crops, especially crops for elephant food or quick-growing plants such as mulberry, together with rice farming. The straw and elephant food will be sold to the Thai Elephant Conservation Centers.

Preliminary environmental impact assessment: The development of the Thai Elephant Conservation Center will involve some transformation of the natural environment, and there may be some unintended adverse environmental impacts, including impacts on local communities. For the construction and implementation phases the following measures are proposed:

1) Construction phase

a) Locate all buildings on a plain or in an area where there is little slope.

b) Locate the buildings some distance from the natural water sources and institute measures to prevent soil sediment from the construction area flowing into the water sources.

c) Start construction in the dry season.

d) Give preference to the locals when hiring workers.
2) Implementation phase

a) Provide tourists/visitors with a sufficient number of litter boxes in all areas of the Thai Elephant Conservation Center. Collect and dispose of refuse daily.

b) Provide water treatment for the staff and tourist service areas. The wastewater from the Elephant Health and Nursing Center and the Research Center should be treated before being discharged into natural water sources.

c) Make the elephants drink before bathing, as they will excrete immediately after drinking. To conserve water, elephants should only be bathed twice a day. A new pond will be constructed away from the natural water sources especially for the enjoyment of the elephants.

d) Improve the water quality both in the reservoir and the pond by planting only those plants that fish eat, and regularly drain the water.

e) Take very strong measures to stop the elephants trespassing into nearby plantations. In case of trespassing, suitable compensation should be paid to the landowner.

f) Separate the rampaging elephants and the ones in musth; they need to be under control at all times. Clear notices in English and Thai should inform the tourists of the potential danger.

Preliminary cost estimation: The construction of the Thai Elephant Centers in five areas will cost a total of 1 056 053 000 baht. This comprises 1 005 765 000 baht for construction and growing elephant food, and 50 288 000 baht for survey and design work.

Economic feasibility analysis: To develop each Thai Elephant Conservation Center, a huge investment will be needed, but when the project is implemented it will greatly benefit the economy.

1) The Thai Elephant Conservation Center, Lampang

Using a discount rate of 12 percent, the net present value (NPV) is 139.86 million baht, the benefit-cost ratio (B/C ratio) is 1.38 and the economic internal rate of return (EIRR) is 16.85 percent. Thus the project is economically feasible.
Even in the worst-case scenario, in which costs increase by 10 percent while total benefits fall by 10 percent, the project is still suitable for investment and has a great chance of being successful.

2) The Thai Elephant Conservation Center, Surin

Using a discount rate of 12 percent, the net present value (NPV) is 60.42 million baht, the benefit-cost ratio (B/C ratio) is 1.24 and the economic internal rate of return (EIRR) is 15.62 percent. Thus the project is economically feasible. Even in the worst-case scenario (costs up 10 percent, benefits down 10 percent), the project remains suitable for investment and has a great chance of being successful.

3) The Thai Elephant Conservation Center, Krabi

Using a discount rate of 12 percent, the net present value (NPV) is 64.66 million baht, the benefit-cost ratio (B/C ratio) is 1.52 and the economic internal rate of return (EIRR) is 19.04 percent. Thus the project is economically feasible. Even in the worst-case scenario (costs up 10 percent, benefits down 10 percent), the project remains suitable for investment and has a great chance of being successful.

4) The Elephant Camp at Jed Kod Forest Plantation, Sar.

5) The Elephant Camp at Thong Pha Phum Forest Plantation, Kanchan.

Administration of the Project: Legally, the New World for Thai Elephants Foundation will have the status of a juridical person and can raise funds or receive donations for project implementation. Project administration will be the main duty of the National Elephant Conservation Institute (NECI), a semi-autonomous body under the Ministry of Agriculture and Co-operatives approved by the Cabinet. At present, this institute is under the New World for Thai Elephants Foundation, operated by the Director-General under the control of the institute committee.
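The feasibility figures quoted for the three centers (NPV at a 12 percent discount rate, the B/C ratio, and the EIRR) rest on standard discounting arithmetic, which can be sketched as follows. The cash-flow vectors here are illustrative placeholders, not the study's actual data:

```cpp
#include <cmath>
#include <vector>

// Present value of a stream of cash flows, where flows[t] is the
// cash flow in year t and rate is the discount rate (0.12 in the study).
double presentValue(const std::vector<double>& flows, double rate)
{
    double pv = 0.0;
    for (std::size_t t = 0; t < flows.size(); ++t)
        pv += flows[t] / std::pow(1.0 + rate, static_cast<double>(t));
    return pv;
}

// NPV: discounted benefits minus discounted costs.
double netPresentValue(const std::vector<double>& benefits,
                       const std::vector<double>& costs, double rate)
{
    return presentValue(benefits, rate) - presentValue(costs, rate);
}

// B/C ratio: discounted benefits divided by discounted costs.
double benefitCostRatio(const std::vector<double>& benefits,
                        const std::vector<double>& costs, double rate)
{
    return presentValue(benefits, rate) / presentValue(costs, rate);
}
```

The EIRR is then the discount rate at which the NPV falls to zero; a project is judged feasible when NPV > 0, B/C > 1 and the EIRR exceeds the discount rate, exactly the pattern reported for the three centers above.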
The organization chart comprises four Deputy Director-Generals, who are responsible for programme execution and management, technical subjects, fund raising and special activities, and the management of regional centers.

Recommendations

1. Change the legal status of the domesticated elephant from a transport animal (as defined by the Beast of Burden Act) to an animal "Reflecting the Unique Identity of Thailand". This could help to guarantee the quality of life of elephants in terms of prevention of cruelty and the standard of care.

2. Establish permanent and appropriate jobs for elephants and mahouts:

2.1. Use five to ten elephants and mahouts in every national park (there are more than 150 parks in Thailand) for patrolling, transport and tourist services.

2.2. Set up new elephant-related ecotourism sites in: FIO forest plantations; provincial public areas; and regional Thai Elephant Conservation Centers.

3. Establish quality standards for elephants and mahouts working in the tourism industry by:

3.1. Creating an elephant and mahout training school in each region, making the TECC a center for training and for certifying both elephants' and mahouts' qualifications.

3.2. Setting up an identification card system for elephants and mahouts in each field of work.

4. Negotiate with tourism organizations and fix appropriate income and working hours for elephants and mahouts.

5. The government should support elephant conservation activities. In particular, elephants that are less capable of working or are disabled should be helped by government support for the establishment of a nursing center in each regional TECC.

6. Promote and protect the traditional mahout ways of life, especially those of the Swe people in Surin province and other major elephant men. This will benefit elephant-related ecotourism.

7.
Set up a specific institution for elephant medical care, research and development, and establish proper living standards for the elephants at the veterinarian school.

[Photo] A domesticated elephant carrying agricultural products, Nam Bak district, Luang Prabang province, Lao PDR (December 1999)
http://www.fao.org/docrep/005/ad031e/ad031e0i.htm
Simple MVVM in WPF

WPF has exploded into the dominant framework for Windows desktop applications. Couple that with the new Windows 8-style store apps and XAML, and it looks like it has a very rosy future indeed. The problem is, WPF is hard, very hard. You have Prism, MVVM Light, MVVM Cross, Catel, and dozens of other frameworks that all claim to be the best way to do MVVM in a WPF application. If you're still relatively wet behind the ears with WPF, and still much prefer the simplicity of sticking with Windows Forms, then like me you may be finding that all this choice just makes things way too complicated.

There Has to Be an Easier Way

The good news is, there is, but before we come to that, you need to understand why all these frameworks are needed in the first place. The MVVM model that WPF employs is not all that straightforward, especially when you compare it to things like KnockoutJS, Angular, and many others in the HTML world. For starters, before your data objects will begin telling their parent application about what's going on, you need to add something called property change notifications to them. This generally means that you need to build a base class, and then derive all your models from that base class. Your base class would typically have the stubs in place to implement these notifications, so that the parent app and its XAML can see the changes to data in your objects. Secondly, you need to use different collection types to those you may be more used to as a Win-Forms developer. For example, many of you might be used to using 'List<T>' for lists of objects. In WPF, you nearly always have to use an 'ObservableCollection<T>', which you wouldn't ever know until someone pointed it out to you. Why?
A 'List<T>' will work quite happily with no obvious errors, other than the fact that you'll see no list changes in the UI when you add or remove data from your list, and adding all the property notifications in the world won't help. Once you get your head around all the object changes, you then have the binding syntax to deal with in the XAML itself, and there are about 100 different ways there to do the same thing with the same bit of data. The net result is that all these frameworks have sprung up to 'make it easier' to deal with. This new ease generally comes at a cost, though. You have the learning curve that the framework itself exposes, and each framework tends to implement MVVM in what it believes to be a correct use of the pattern. This 'correct use' does not always align with how many developers understand, or might even implement, the pattern themselves. If you're coming from Win-Forms and are used to how that works, the gap between the stepping stones is even larger, because many will already have a good set of pre-conceived notions as to exactly what desktop application development entails, and I can pretty much guarantee that mentality goes something like:

- Load Data
- Manipulate Data
- Push Data into a Data Context
- Let the Component draw it how it needs to

In contrast, the WPF version is a lot longer, and involves way more fluff to get the data displayed in the form. Now that you have a firm idea of where I'm heading with this, I'm going to introduce you to "Fody".

"Fody"? Isn't That a Small Bird of Some Description?

Yes it is, but it's also the name of a rather nifty .NET toolkit designed not just for WPF, but for .NET in general. Fody consists of a very simple, transparent kernel (that you typically don't even have to touch or do anything with) and a number of 'plugins', which in most cases you don't have to do anything with either.
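To make the boilerplate concrete, the hand-rolled notification base class described earlier usually looks something like the sketch below. This is the common INotifyPropertyChanged pattern, not code from this article's project:

```csharp
using System.ComponentModel;

// A typical hand-written base class: every model must derive from it and
// call OnPropertyChanged in every single property setter.
public abstract class ObservableBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

// A hypothetical model showing the per-property ceremony this forces on you.
public class Person : ObservableBase
{
    private string _name;
    public string Name
    {
        get { return _name; }
        set { _name = value; OnPropertyChanged("Name"); }
    }
}
```

It is exactly this repetitive per-property plumbing that Fody's PropertyChanged plugin injects for you at compile time.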
Fody, as described at its GitHub page, is "An Extensible tool for weaving .net assemblies".

In a nutshell, Fody's exact purpose for existence is to inject 'stuff' automatically into your code that you may need, but without you having to do the injecting. It handles the link between your project and MSBuild, it makes sure all the dependencies are met, and it takes care of various other things that you simply don't need to deal with. The net result is that you get value-added goodness without the overheads.

Okay, So Fody's Great, but What's That Got to Do with WPF?

The easiest way to answer that question is to build a simple WPF application. Fire up Visual Studio, head to New Project, and create a new WPF Application project. Name it however you wish; I'll be calling mine 'EasyWpfWithFody'. Once your application is up and running, the first thing we're going to do is design the UI. This won't be anything particularly pretty. I'm a developer by trade and I left my box of crayons behind when I left pre-school; my sense of design is not a particularly good one, so instead of describing the UI step by step, I'm just going to give you a screen shot and the XAML that produces it.

Figure 1: Product of the following XAML code

The XAML code to produce this screen is as follows: (Warning; there's a lot of it!!)

<Window x:
  <Window.Resources>
    <Style TargetType="{x:Type ListBox}">
      <Setter Property="HorizontalContentAlignment" Value="Stretch"/>
      <Setter Property="ItemTemplate">
        <Setter.Value>
          <DataTemplate>
            <Border BorderBrush="Silver" BorderThickness="1" CornerRadius="0" Margin="3">
              <Grid>
                .
                <TextBlock Grid.
                <TextBlock Grid.
                <TextBlock Grid.
                <TextBlock Grid.
              </Grid>
            </Border>
          </DataTemplate>
        </Setter.Value>
      </Setter>
      <Setter Property="Template">
        <Setter.Value>
          <ControlTemplate TargetType="ListBox">
            <Grid Background="{TemplateBinding Background}">
              <Grid.RowDefinitions>
                <RowDefinition Height="Auto"/>
                <RowDefinition Height="*"/>
              </Grid.RowDefinitions>
              <Grid Grid.
>
                <Grid.Resources>
                  <Style TargetType="TextBlock">
                    <Setter Property="FontSize" Value="15"/>
                  </Style>
                </Grid.Resources>
                <TextBlock Grid.
                <TextBlock Grid.
                <TextBlock Grid.
                <TextBlock Grid.
                <TextBlock Grid.
              </Grid>
              <ScrollViewer Grid.
                <ItemsPresenter x:
              </ScrollViewer>
            </Grid>
          </ControlTemplate>
        </Setter.Value>
      </Setter>
      <Setter Property="ItemContainerStyle">
        <Setter.Value>
          <Style TargetType="ListBoxItem">
            <Setter Property="Margin" Value="0"/>
            <Setter Property="Padding" Value="0"/>
          </Style>
        </Setter.Value>
      </Setter>
    </Style>
  </Window.Resources>
  <Grid>
    <Grid.RowDefinitions>
      <RowDefinition Height="*" />
      <RowDefinition Height="50" />
      <RowDefinition Height="100" />
    </Grid.RowDefinitions>
    <ListBox Grid.
    <Grid Grid.
      <Grid.ColumnDefinitions>
        <ColumnDefinition Width="0.6*" />
        <ColumnDefinition Width="0.4*" />
        <ColumnDefinition Width="0.8*" />
        <ColumnDefinition Width="0.4*" />
        <ColumnDefinition Width="0.8*" />
        <ColumnDefinition Width="0.6*" />
      </Grid.ColumnDefinitions>
      <TextBlock Grid.
      <TextBlock Grid.
      <TextBlock Grid.
      <TextBlock Grid.
      <TextBlock Grid.
      <TextBlock Grid.
    </Grid>
    <Grid Grid.
      <Grid.RowDefinitions>
        <RowDefinition Height="*" />
        <RowDefinition Height="*" />
      </Grid.RowDefinitions>
      .
      <TextBlock Grid.
      <TextBlock Grid.
      <TextBlock Grid.
      <TextBox Grid.
      <TextBox Grid.
      <TextBox Grid.
      <TextBox Grid.
      <Button x:
    </Grid>
  </Grid>
</Window>

If you look through the various 'Text', 'Content', and other properties in the XAML, you'll see many '{Binding xxxx}' entries that bind the data values between our UI and the code we'll write in a moment. The Jobs list, for example, has its items bound to 'Jobs', which, as you'll see in a moment, is an 'ObservableCollection<JobEntry>' exposed from the window's code behind. Most of the bindings used in the UI are purposely done in the simplest way possible, so that a developer coming over from Win-Forms can see the relationship to how this might have been done there.
Once we have the UI added, it's time to turn our attention to the code behind, and here's where it gets strange. Many developers used to Win-Forms will be used to having a LOT of code in the code behind. In WPF, the situation is drastically reversed. As you can see above, the code to draw the UI is not by any means short; the code behind, however, is... well, see for yourself:

using System.Windows;
using EasyWpfWithFody.Models;

namespace EasyWpfWithFody
{
    public partial class MainWindow : Window
    {
        public JobSheet CurrentJobSheet { get; private set; }

        public MainWindow()
        {
            CurrentJobSheet = new JobSheet();
            InitializeComponent();
        }

        private void BtnAddNewClick(object sender, RoutedEventArgs e)
        {
            CurrentJobSheet.Jobs.Add(CurrentJobSheet.JobEntryToAdd);
        }
    }
}

Yes, you're reading that correctly; that's ALL there is. It may take some believing; you could even remove that button handler if you wanted, but doing things the button-click way feels comfortable (at least to me, anyway). One important thing to point out is the order of initialising the data. You'll see that I create my local data object BEFORE I let .NET initialize the various components on the UI. If you get this the wrong way around, the bindings simply won't work, even if you have all the property notification stuff correctly wired in. You can see that my local property 'CurrentJobSheet' is of type 'JobSheet'; the UI is connected to it via the following line in the XAML:

DataContext="{Binding CurrentJobSheet, RelativeSource={RelativeSource Mode=Self}}"

You can see it right away, at the very top, in the 'Window' definition. From this point on, any bindings that simply name a variable or property are expected by XAML to resolve inside this object.
The class definition for a 'JobSheet' looks like this:

using System;
using System.Collections.ObjectModel;
using System.Linq;
using PropertyChanged;

namespace EasyWpfWithFody.Models
{
    public class JobSheet
    {
        public Double TaxRatePercentage { get; set; }
        public ObservableCollection<JobEntry> Jobs { get; private set; }

        public Double TotalCostBeforeTax
        {
            get { return Jobs.Sum(x => x.TotalCost); }
        }

        public Double TotalCostAfterTax
        {
            get
            {
                var temp = TotalCostBeforeTax / 100;
                return TotalCostBeforeTax + (TaxRatePercentage * temp);
            }
        }

        public JobEntry JobEntryToAdd { get; set; }

        public JobSheet()
        {
            Jobs = new ObservableCollection<JobEntry>();
            TaxRatePercentage = 20; // Default for UK is 20%
            SetUpDemoData();
        }

        private void SetUpDemoData()
        {
            Jobs.Add(new JobEntry { JobTitle = "Fix Roof", ClientName = "Joe Shmoe", CostPerHour = 30, NumberOfHours = 2 });
            Jobs.Add(new JobEntry { JobTitle = "Unblock Sink", ClientName = "Fred Flintstone", CostPerHour = 20, NumberOfHours = 4 });
            Jobs.Add(new JobEntry { JobTitle = "Mow Lawns", ClientName = "Mr Mansion Owner", CostPerHour = 40, NumberOfHours = 6 });
            Jobs.Add(new JobEntry { JobTitle = "Sweep Path", ClientName = "Jane Doe", CostPerHour = 10, NumberOfHours = 1 });
            JobEntryToAdd = new JobEntry();
        }
    }
}

There's not a great deal to it, and most of its bulk consists of setting up the initial dummy data. The other class you'll need to create is the actual job entry itself; it looks like this:

using System;

namespace EasyWpfWithFody.Models
{
    public class JobEntry
    {
        public string JobTitle { get; set; }
        public string ClientName { get; set; }
        public Double CostPerHour { get; set; }
        public int NumberOfHours { get; set; }

        public Double TotalCost
        {
            get { return CostPerHour * NumberOfHours; }
        }
    }
}

Once you get to this point with the two models, the code behind, and the UI, you should actually be able to hit F5 and get something up and running. You'll notice, however, that adding new entries to the job sheet doesn't update the display, and if anything is entered into any of the text boxes on the form, the respective elements in the code behind won't get populated as expected. This is exactly the problem that trips up most devs coming from Win-Forms to WPF.
They go ahead and build up a UI in a not too dissimilar way to how they might do it in a Win-Forms application. Then, they inevitably hit a brick wall and don't understand why the updates don't work. This is where you need to start adding your property notifications and subclassing things so you don't have to repeat your code; it's also at this point where Fody comes into play. Jump into your NuGet package manager. You can use the console if you want, but I tend to use the GUI because it's easier to do partial name searches. Pop 'fody property' into the search box, and you should see the following:

Figure 2: The search results

As you can see, I've already selected and clicked install on 'Fody' and 'PropertyChanged.Fody' because these are the only two we need for this example. There are, however, many others, for all sorts of different purposes. I mentioned previously that we could remove the button handling from the code behind. Well, you would use 'Commander.Fody' to help you with that one. Feel free to explore; there are some great tools in the kit. For the rest of this article, we'll be looking only at 'PropertyChanged'. At this point you might be thinking, "Okay, great, we have the code, we have the UI, but now I have to learn this Fody thing, so I'm still not skipping the learning curve step." Well, yes, you can dig in and learn all about Fody and everything it can do. OR you can take a quick peek in the object browser and note that all you really need to know is that it implements an 'Aspect' or, as some may know, an 'Attribute'. We've not got space here for a full discussion on AOP (Aspect Oriented Programming), but those of you who've used tools such as 'PostSharp' or are very familiar with ASP.NET MVC will have used this model very often. To add the Fody property notification stuff to our data classes, we simply have to make ONE change. Let's take the 'JobEntry' class as an example.
Before adding Fody:

using System;

namespace EasyWpfWithFody.Models
{
    public class JobEntry
    {
        public string JobTitle { get; set; }
        public string ClientName { get; set; }
        public Double CostPerHour { get; set; }
        public int NumberOfHours { get; set; }

        public Double TotalCost
        {
            get { return CostPerHour * NumberOfHours; }
        }
    }
}

After adding Fody:

using System;
using PropertyChanged;

namespace EasyWpfWithFody.Models
{
    [ImplementPropertyChanged]
    public class JobEntry
    {
        public string JobTitle { get; set; }
        public string ClientName { get; set; }
        public Double CostPerHour { get; set; }
        public int NumberOfHours { get; set; }

        public Double TotalCost
        {
            get { return CostPerHour * NumberOfHours; }
        }
    }
}

Spot the difference? The addition is that line just before the class definition that looks like an array declaration. That's the aspect that Fody makes available, and the one that at compile time will ensure any additional code required to make your objects implement 'INotifyPropertyChanged' is injected into them. Go ahead; add the same line just before the class definition on your 'JobSheet' class, then run and test the app. You should see that when you now add a new entry, everything updates correctly. Now, wasn't that easy?

If you've been introduced to desktop programming under Windows using WPF and are one of the newer breed of developers out there, much of this article will be pretty meaningless to you. If, like me, however, you've been used to Win-Forms, and possibly even further back than that to the original Win32 GDI model, this will hopefully make a lot more sense. Old dev or new dev, however, Fody in my mind shows just how simple this CAN BE. I like Fody simply because I don't for one second believe that we have to have all the complexity that all these large frameworks have, which in many cases is simply complexity for the sake of complexity. A developer's job is to find the best, quickest, and most stable solution to a problem, and at this moment in time Fody ticks all those boxes. I'll put a copy of the project from this article on my github page at: for those who want to clone it, and develop WPF in a far simpler way than is the norm.
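As a footnote, what that one attribute saves you from writing can be sketched in a few lines. The snippet below is a Python analogy (not Fody, not C#; the class and listener names are invented for illustration): it intercepts every attribute assignment and notifies subscribers, which is the same job the injected 'INotifyPropertyChanged' plumbing does for the bindings.

```python
class Notifying:
    """Analogy to what Fody injects: every attribute write raises a
    'property changed' notification to registered listeners."""

    def __init__(self):
        # Bypass our own __setattr__ while creating the listener list.
        object.__setattr__(self, "_listeners", [])

    def subscribe(self, callback):
        self._listeners.append(callback)

    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
        for callback in self._listeners:
            callback(name, value)  # the "PropertyChanged" moment


class JobEntry(Notifying):
    pass


changes = []
entry = JobEntry()
entry.subscribe(lambda name, value: changes.append((name, value)))
entry.cost_per_hour = 30    # triggers a notification
entry.number_of_hours = 2   # triggers another
```

In WPF the bindings are the subscribers; Fody's point is that you never write this boilerplate yourself.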
If you have a particular subject that you'd like to see covered, or an issue that's been chewing away at you, please do come and find me lurking around the interwebs. My Twitter handle is @shawty_ds and I can usually be pinged quite easily using that. Let me know your thoughts and ideas either there, or in the comments below this article. If I can accommodate your suggestions, I most certainly will.

How to use with vb.net
Posted by Leon B on 05/04/2016 06:05am
This is very useful. I've tried to use it with vb.net but didn't manage to get the ball rolling. It always tells me that the [ImplementPropertyChanged] is invalid.

question
Posted by mahmoud on 04/20/2016 03:52am
how to implement PropertyChanged? the above code gives me a fault in PropertyChanged

Good Article
Posted by Vandana on 05/16/2015 02:32pm
Very helpful ....

Total fields?
Posted by fred on 12/29/2014 09:06am
hello, thanks for this. What is [ImplementPropertyChanged] useful for in the JobSheet class? I've tried many ways to update the TotalCostBeforeTax and TotalCostAfterTax values but no way. Am I missing something? Thanks for your answers.

Nice Article
Posted by Nagarajan on 12/04/2014 01:01am
Thanks for the article, Peter. It has thrown up a new avenue for WPF developers, especially since Aspect Oriented Programming has been used in a way that can definitely reduce coding effort and minimize errors. I would definitely dig further into Fody and check it out. I could not find the code for the sample project in the link specified by you in github.

RE: Nice Article
Posted by Peter Shaw on 12/23/2014 07:52am
oops my bad. i honestly thought i'd uploaded it. it's there now though, you can find it at shawty
http://www.codeguru.com/columns/dotnet/simple-mvvm-in-wpf.html
Provided by: libgrib-api-tools_1.28.0-2_amd64

NAME
       grib_ls

DESCRIPTION
       List content of grib files printing values of some keys. It does not
       fail when a key is not found.

USAGE
       grib_ls [options] grib_file grib_file ...

OPTIONS
       -p key[:{s/d/l}],key[:{s/d/l}],...
              Declaration of keys to print. For each key a string (key:s) or
              a double (key:d) or a long (key:l) type can be requested.
              Default type is string.

       -F format
              C style format for floating point values.

       -P key[:{s/d/l}],key[:{s/d/l}],...
              As -p, adding the declared keys to the default list.

       -j     json output

       -B "order by" directive
              Order by. The output will be ordered according to the "order
              by" directive. Order by example: "step asc, centre desc" (step
              ascending and centre descending).

       -s key[:{s/d/l}]=value,key[:{s/d/l}]=value,...
              Key/values to set. For each key a string (key:s) or a double
              (key:d) or a long (key:l) type can be defined. By default the
              native type is set.

       -i index
              Data value corresponding to the given index is printed.

       -n namespace
              All the keys belonging to namespace are printed.

       -m     Mars keys are printed.

       -V     Version.

       -W width
              Minimum width of each column in output. Default is 10.

       -M     Multi-field support off. Turn off support for multiple fields
              in single grib message.

       -g     Copy GTS header.

       -T T | B | A
              Message type. T->GTS, B->BUFR, A->Any (Experimental). The
              input file is interpreted according to the message type.

       -7     Does not fail when the message has wrong length.

       -X offset
              Input file offset in bytes. Processing of the input file will
              start from "offset".

       -x     Fast parsing option; only headers are loaded.

AUTHOR
       This manpage has been autogenerated by Enrico Zini
       <enrico@debian.org> from the command line help of grib_ls.
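To make the option syntax above concrete, here is a small sketch (Python; the helper name and the sample file name are invented for illustration) that assembles a grib_ls command line from a few of the documented flags:

```python
def build_grib_ls_argv(grib_file, print_keys=None, namespace=None,
                       mars_keys=False, json_output=False):
    """Assemble an argv list for grib_ls from some documented options:
    -p (keys to print, with optional :s/:d/:l type suffix),
    -n (namespace), -m (Mars keys), -j (json output)."""
    argv = ["grib_ls"]
    if print_keys:
        argv += ["-p", ",".join(print_keys)]
    if namespace:
        argv += ["-n", namespace]
    if mars_keys:
        argv.append("-m")
    if json_output:
        argv.append("-j")
    argv.append(grib_file)
    return argv

# Example: print two keys, one forced to long type, as json output.
cmd = build_grib_ls_argv("sample.grib",
                         print_keys=["shortName", "level:l"],
                         json_output=True)
```

The resulting list could be passed to subprocess.run() on a machine where grib_ls is installed.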
http://manpages.ubuntu.com/manpages/eoan/man1/grib_ls.1.html
# Lossless ElasticSearch data migration

![](https://habrastorage.org/r/w780q1/webt/1e/oc/om/1eocom45kcqgz65q-wneyoqopdg.jpeg)

Academic data warehouse design recommends keeping everything in a normalized form, with links between entities. Rolling changes forward under relational semantics then provides a reliable repository with transaction support. Atomicity, Consistency, Isolation, Durability — that's all. In other words, the storage is explicitly built for safe data updates. But it is not optimal for searching, especially for broad queries across many tables and fields. We need indices, a lot of indices! Volumes expand, and writing slows down. SQL LIKE cannot be indexed, and JOIN with GROUP BY sends us off to meditate over the query planner.

The increasing load on a single machine forces it to expand, either vertically into the ceiling or horizontally, by purchasing more nodes. Resiliency requirements cause data to be spread across multiple nodes. And the requirement for immediate recovery after a failure, without a denial of service, forces us to set up a cluster of machines so that at any time any of them can handle both writes and reads. That is, each must already be a master, or become one automatically and immediately.

The problem of fast search was solved by installing a second storage system optimized for indexing. Full-text search, faceted search, stemming ~~and blackjack~~. The second store accepts records from the first one as input, analyzes them, and builds an index. Thus, the data storage cluster was supplemented with another cluster solely for search, with a similar master configuration to match the overall *SLA*. Everything is good, business is happy, admins sleep at night… until the master-master cluster grows beyond three machines.

Elastic
-------

The *NoSQL* movement has significantly expanded the scaling horizon for both small and big data.
NoSQL cluster nodes are able to distribute data among themselves so that the failure of one or more of them does not lead to a denial of service for the entire cluster. The price of this high availability of distributed data is the impossibility of ensuring their complete consistency on write at every point in time. Instead, NoSQL promotes *eventual consistency*: it is assumed that the data will spread across the cluster nodes at some point and become consistent eventually. Thus, the relational model was supplemented with a non-relational one, spawning many database engines that solve the problems of the *CAP* triangle with varying success. Developers got modern tools into their hands to build their own perfect *persistence* layer — for every taste, budget, and load profile.

ElasticSearch is a NoSQL cluster with a RESTful JSON API on the Lucene engine, open source, written in Java, that can not only build a search index but also store the original document. This trick helps to rethink the role of a separate database management system for storing the originals, or even abandon it entirely. The end of the intro.

Mapping
-------

Mapping in ElasticSearch is something like a schema (table structure, in SQL terms), which tells you exactly how to index incoming documents (records, in SQL terms). Mapping can be static, dynamic, or absent. Static mapping does not allow the schema to change. Dynamic mapping allows you to add new fields. If no mapping is specified, ElasticSearch will create one automatically upon receiving the first document for writing. It analyzes the structure of the fields, makes some assumptions about the types of data in them, applies the default settings, and writes the mapping down. At first glance, this schema-less behavior seems very convenient. But in fact, it is more suitable for experiments than for production, where it brings surprises. So, the data is indexed, and this is a one-directional process.
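To make the mapping discussion concrete, here is a sketch of declaring a schema explicitly up front (Python; the index name and fields are invented, and the exact request shape varies between ElasticSearch versions — `"dynamic": "strict"` rejects fields the mapping does not declare, so the first incoming document cannot silently define the schema):

```python
import json

# Body for PUT /products — an explicit mapping with dynamic additions
# rejected, declared before any document is written.
create_index_body = {
    "mappings": {
        "dynamic": "strict",
        "properties": {
            "title":    {"type": "text"},
            "price":    {"type": "double"},
            "in_stock": {"type": "boolean"},
        },
    }
}

# This payload would be sent as the body of a PUT request to the index URL.
payload = json.dumps(create_index_body)
```

With a running cluster, the payload would go over HTTP; here it only illustrates declaring types instead of letting ElasticSearch guess them.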
Once created, the mapping cannot be changed dynamically like ALTER TABLE in SQL. An SQL table stores the original document, to which you can attach a search index. In ElasticSearch it is the other way around: ElasticSearch is a search index, to which you can attach the original document. That is why the index schema is static. Theoretically, you could either create a field in the mapping or delete one. But in practice, ElasticSearch only allows you to add fields; an attempt to delete a field leads to nothing.

Alias
-----

An alias is an optional name for an ElasticSearch index. There can be many aliases for a single index, or one alias for many indices. Then the indices are logically combined and look the same from the outside. An alias is very convenient for services that communicate with the index throughout its lifetime. For example, the alias products can hide *products\_v2* or *products\_v25* behind it, without the need to change the name in the service. An alias is handy for data migration, when the data has already been transferred from the old schema to the new one and you need to switch the application to work with the new index. Switching an alias from index to index is an atomic operation: it is performed in one step, without data loss.

Reindex API
-----------

The data schema, the mapping, tends to change from time to time. New fields are added, unnecessary fields are deleted. If ElasticSearch plays the role of a single repository, then you need a tool to change the mapping on the fly. For this, there is a special command to transfer data from one index to another, the so-called *\_reindex API*. It works on the server side against the created (or empty) mapping of the recipient index, indexing quickly in batches of 1000 documents at a time. Reindexing can do a simple type conversion of a field. For example, *long* to *text* and back to *long*, or *boolean* to *text* and back to *boolean*. But it cannot turn *-9.99* into a boolean, ~~this is not PHP~~.
On the other hand, type conversion is an insecure thing. A service written in a dynamically typed language may forgive such a sin. But if the reindex cannot convert the type, the whole document will not be saved. In general, data migration should take place in 3 stages: add a new field, release a service that uses it, remove the old field.

A field is added like this. Take the schema of the source index, insert the new property, and create an empty index with the result. Then start the reindexing:

```
{
  "source": {
    "index": "test"
  },
  "dest": {
    "index": "test_clone"
  }
}
```

A field is removed like this. Take the schema of the source index, remove the field, and create an empty index. Then start the reindexing with the list of fields to be copied:

```
{
  "source": {
    "index": "test",
    "_source": ["field1", "field3"]
  },
  "dest": {
    "index": "test_clone"
  }
}
```

For convenience, both cases were combined into the cloning function in Kaizen, a desktop client for ElasticSearch. Cloning can recognize the mapping of the recipient index. The example below shows how a partial clone is made from an index with three collections (types, in ElasticSearch terms): *act*, *line*, *scene*. The clone contains *line* with two fields, static mapping is enabled, and the *speech\_number* field changes from *text* to *long*.

![](https://habrastorage.org/webt/rt/ju/dq/rtjudqsh1s-tevki7azjyxctsuy.gif)

Migration
---------

The reindex API has one unpleasant feature: it does not track changes in the source index. If something changes after reindexing has started, the change is not reflected in the recipient index. To solve this problem, the ElasticSearch FollowUp Plugin was developed; it adds logging commands. The plugin can follow an index, returning the actions performed on its documents in chronological order, in JSON format. The index, type, document ID and the operation on it — INDEX or DELETE — are logged. The FollowUp Plugin is published on GitHub and compiled for almost all versions of ElasticSearch.
So, for lossless data migration, you will need FollowUp installed on the node on which the reindexing will be launched. It is assumed that the index alias is already in place and all applications work through it. The plugin must be turned on before reindexing. When reindexing is complete, the plugin is turned off, and the alias is transferred to the new index. Then the recorded actions are replayed on the recipient index, catching up with its state. Despite the high speed of reindexing, two types of collisions may occur during playback:

* there is no longer a document with such an *\_id* in the new index. This means that the document was deleted after the alias was switched to the new index.
* the new index has a document with the same *\_id*, but with a version number higher than in the source index. This means that the document was updated after the alias was switched to the new index.

In these cases, the action should not be replayed on the recipient index. All remaining changes are replayed. Happy coding!
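As a closing illustration, the two collision rules can be sketched as a replay filter (Python; the entry format loosely follows the FollowUp log described above, and the in-memory version map is a stand-in for the real recipient index):

```python
def should_replay(entry, recipient_versions):
    """Decide whether a logged action may be replayed on the recipient index.

    entry: {"_id": ..., "operation": "INDEX" or "DELETE", "version": n}
    recipient_versions: {_id: current version of the document in the
                         recipient index}

    Implements the two collision rules from the text: skip if the document
    was deleted after the alias switch (missing _id), or updated after the
    switch (recipient version higher than the logged one).
    """
    doc_id = entry["_id"]
    if doc_id not in recipient_versions:
        return False  # deleted after the alias moved: do not resurrect it
    if recipient_versions[doc_id] > entry["version"]:
        return False  # updated after the alias moved: keep the newer copy
    return True

recipient = {"a": 3, "b": 7}
log = [
    {"_id": "a", "operation": "INDEX", "version": 3},   # replayed
    {"_id": "b", "operation": "INDEX", "version": 5},   # recipient is newer
    {"_id": "c", "operation": "DELETE", "version": 1},  # already gone
]
replayed = [e["_id"] for e in log if should_replay(e, recipient)]
```

Only the first entry survives the filter; the other two are exactly the collision cases listed above.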
https://habr.com/ru/post/513610/
Virtual Methods

Virtual methods are methods of a base class that can be overridden by a method in a derived class to provide a different implementation of that method. For example, suppose Class B inherits from Class A, which defines method A. Method A is then available from Class B, but it is the exact method inherited from Class A. What if you want the inherited method to behave differently? Virtual methods solve this problem. Consider the following example.

using System;

namespace VirtualMethodsDemo
{
    class Parent
    {
        public virtual void ShowMessage()
        {
            Console.WriteLine("Message from Parent.");
        }
    }

    class Child : Parent
    {
        public override void ShowMessage()
        {
            Console.WriteLine("Message from Child.");
        }
    }

    class Program
    {
        public static void Main()
        {
            Parent myParent = new Parent();
            Child myChild = new Child();

            myParent.ShowMessage();
            myChild.ShowMessage();
        }
    }
}

Example 1 – Virtual Methods Example

Message from Parent.
Message from Child.

A virtual method can be defined by placing the virtual keyword in the declaration of the method. The virtual keyword indicates that the method can be overridden or, in other words, can have a different implementation. The class that inherits the Parent class contains the method that overrides the virtual method. You use the override keyword to indicate that a method should override a virtual method from the base class. You can use the base keyword to call the original virtual method inside the overriding method.
using System;

namespace VirtualMethodsDemo2
{
    class Parent
    {
        private string name = "Parent";

        public virtual void ShowMessage()
        {
            Console.WriteLine("Message from Parent.");
        }
    }

    class Child : Parent
    {
        public override void ShowMessage()
        {
            base.ShowMessage();
            Console.WriteLine("Message from Child.");
        }
    }

    class Program
    {
        public static void Main()
        {
            Parent myParent = new Parent();
            Child myChild = new Child();

            myParent.ShowMessage();
            myChild.ShowMessage();
        }
    }
}

Example 2 – Using the base Keyword

Message from Parent.
Message from Parent.
Message from Child.

If you only use the base method and no other code inside the overriding method, then it has a similar effect to not defining the overriding method at all. You cannot override a non-virtual method or a static method. The overriding method must also have the same access specifier as the virtual method. You can create another class that inherits the Child class and override its ShowMessage() again with a different implementation. If you want the overriding method to be final, that is, it cannot be overridden by the other classes that will derive from the class it belongs to, you can use the sealed keyword.

public sealed override void ShowMessage()

Now if another class inherits from the Child class, it cannot override ShowMessage() anymore. Let's have another example: we will override the ToString() method of System.Object. Every class in C#, including the ones you create, inherits from System.Object, as you will see in the next lesson.
using System;

namespace VirtualMethodsDemo3
{
    class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }

        public override string ToString()
        {
            return FirstName + " " + LastName;
        }
    }

    class Program
    {
        public static void Main()
        {
            Person person1 = new Person();
            person1.FirstName = "John";
            person1.LastName = "Smith";

            Console.WriteLine(person1.ToString());
        }
    }
}

Example 3 – Overriding the ToString() Method

John Smith

Since we overrode the ToString() method of the System.Object class, we have customized the output to print the full name rather than the type of the object, which is the default.
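For comparison, the same idea — overriding the default string conversion inherited from the root object type — looks like this in Python (an analogy only, not part of the C# tutorial's code):

```python
class Person:
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

    def __str__(self):
        # Overrides object.__str__, much as ToString() overrides
        # System.Object.ToString() in the C# example above.
        return self.first_name + " " + self.last_name

person1 = Person("John", "Smith")
output = str(person1)  # full name instead of the default object repr
```

In Python all methods are dynamically dispatched, so no virtual/override keywords are needed; the mechanism the C# keywords opt into is simply the default.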
https://compitionpoint.com/virtual-methods/
The actual gcc version is gcc version 4.1.2 20061115 (prerelease) (Debian 4.1.1-21).

When compiled and run with this gcc version, using the command line

    gcc -O xxx.c && a.out

the attached program outputs -1, whereas the correct output is 0. If I use gcc 3.3.6 or leave away the -O flag, the program produces correct output.

Subject: Re: New: -(x>y) generates wrong code

I cannot create an attachment in Bugzilla, so I'll just append the test program here:

    #include <stdio.h>
    #include <limits.h>

    long foo(long x, long y);

    int main()
    {
        printf("%d\n", foo(INT_MIN, INT_MAX));
        return 0;
    }

    long foo(long x, long y)
    {
        return -(x > y);
    }

It works for me on x86_64 and i686 with 4.0.0, 4.1.0 and 4.1.2. So this looks like a target issue.

I can reproduce the problem on a Linkstation Pro with an ARM926EJ CPU. I compiled GCC SVN revision 123155 from the gcc-4_2-branch on it.

Creating wrong assembler code is at least a major bug, even when using the optimizing switch (which many programs do), so please change severity to "major".

Confirmed. This is a bug in the negscc pattern in arm.md. It's only been there since 1994!

Subject: Bug 31152
Author: rearnsha
Date: Sat Jun 23 18:07:04 2007
New Revision: 125973
URL:

Log:
PR target/31152
* arm.md (negscc): Match the correct operand for optimized LT0 test. Remove optimization for GT.
* gcc.c-torture/execute/20070623-1.c: New.

Added:
trunk/gcc/testsuite/gcc.c-torture/execute/20070623-1.c

Modified:
trunk/gcc/ChangeLog
trunk/gcc/config/arm/arm.md
trunk/gcc/testsuite/ChangeLog

Fixed on trunk.

Are you not going to apply this to 4.1 and 4.2?

Richard, I think this patch should also be added to the 4.1 and 4.2 branches.
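For readers unsure what the correct behaviour is: in C, a relational expression such as x > y evaluates to 0 or 1, so -(x > y) must be 0 or -1, never anything else. The sketch below (Python, which happens to share this arithmetic once the boolean is negated) checks the expectation for the values in the bug's test program:

```python
# 32-bit int limits, matching INT_MIN / INT_MAX from <limits.h>.
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def foo(x, y):
    # Mirrors the C function: (x > y) is 0 or 1, so negating it
    # yields 0 or -1.
    return -(x > y)

result_bug_case = foo(INT_MIN, INT_MAX)   # the report's input: must be 0
result_true_case = foo(INT_MAX, INT_MIN)  # x > y holds: must be -1
```

The miscompiled ARM code produced -1 for the first case, which is exactly the wrong branch of this two-value range.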
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=31152
This section provides general guidelines and recommendations for designing the final NIS+ namespace for your site.

When designing the namespace, do not worry about limitations imposed by the transition from NIS. You can modify your NIS+ domain later, once you know what your final NIS+ configuration will look like. Select the model of information administration, such as the domain structure, that your site will use. Without a clear idea of how information at your site will be created, stored, used, and administered, it is difficult to make the design decisions suggested in this section. You could end up with a design that is more expensive to operate than necessary. You also run the risk of designing a namespace that does not suit your needs. Changing the namespace design after it has been set up is costly. Redesigning would require rearranging information, reestablishing security, and recreating administration policies.

When designing the structure of an NIS+ namespace, consider the following factors, which are discussed in the following sections:

The main benefit of an NIS+ domain hierarchy is that it allows the namespace to be divided into more easily managed components. Each component can have its own security, information management, and administration policies. It is advisable to have a hierarchy if the number of clients you have exceeds 500, if you want to set up different security policies for a set of users, or if you have geographically distributed sites.

Unless there is a need for a domain hierarchy, not having one simplifies your transition to NIS+. With a hierarchy, a subdomain's servers live in the domain above the subdomain they serve, so such a server would not get its netgroups information from the subdomain, but would instead retrieve it from the parent domain; this can be confusing. Another example of when a hierarchy could cause problems would be where the NIS+ server was also used by users to log in remotely and to execute certain commands that they could not execute from their own workstations.
If you have only a single root domain, you will not have these problems because NIS+ root servers live in the domain that they serve. When all users are in the same NIS domain, they are directly visible to each other without using fully qualified names. Creating an NIS+ hierarchy, however, puts users in separate domains, which means that the users in one domain are not directly visible to users in another domain unless you use fully qualified names or paths. For example, if there are two subdomains, sales.wiz.com and factory.wiz.com, created out of the earlier wiz.com domain, then for user juan in the sales.wiz.com domain to be able to send mail to user myoko in factory.wiz.com, he would have to specify her name as myoko@hostname.factory.wiz.com (or myoko@hostname.factory) instead of just myoko, as was sufficient when they were in the same domain. Remote logins also require fully qualified names between domains. You could use the table path to set up connections between tables in one domain and another domain, but to do so would negate the advantages of having a domain hierarchy. You would also be reducing the reliability of the NIS+ service because now clients would have to depend upon the availability of not only their own home domains, but also of other domains to which their tables are pathed. Using table paths may also slow request-response time. If you are unfamiliar with domain hierarchies, see Chapter 4, NIS+ Namespace and Structure before designing your domain hierarchy. It describes NIS+ domain structure, information storage, and security. Once you are familiar with the components of a domain hierarchy, make a diagram of how you expect the hierarchy to look when you are finished. The diagram will be a useful reference when you are in the midst of the setup procedure. At a minimum, you need to consider the following issues: One of the major benefits of NIS+ is its capability of dividing the namespace into smaller, manageable parts. 
You could create a hierarchy of organizations, such as those of the hypothetical corporation, Wizard, Inc., shown in the following figure.

Figure 3-1. Organizing a Hierarchy by Logical Location. An example of organizing a hierarchy by logical location. It shows the wiz.com. domain and its subdomains called sales.wiz.com. and factory.wiz.com.; sales.wiz.com. has subdomains called big.sales.wiz.com. and small.sales.wiz.com.

You could also organize the hierarchy by buildings instead of organizations, as shown in the following figure.

Figure 3-2. Organizing a Hierarchy by Physical Location. An example of organizing a hierarchy by physical location. It shows the wiz.com. domain with a subdomain for each building (b1.wiz.com. through b6.wiz.com.).

The scheme you select depends primarily on how you prefer to administer the namespace and how clients tend to use the namespace. For example, if clients of factory.wiz.com are distributed throughout the buildings of Wizard, Inc., you should not organize the namespace by building. Since the clients would constantly need access to other domains, you would need to add their credentials to the other domains and you would increase traffic flow through the root master server. A better scheme would be to arrange clients by organization. On the other hand, building-sized domains are immune to the reorganizations that require organization-based domains to be restructured.

You are not limited by the physical layout of the network, since the NIS+ namespace is logical rather than physical. In the domain hierarchies illustrated earlier, under the organizational domain plan, all clients would belong to the big.sales.wiz.com, small.sales.wiz.com, and factory.wiz.com domains, and only clients used for administration would belong to the wiz.com and sales.wiz.com domains. Or you could place the clients of general-purpose departments in higher-level domains.
For example, if the domain is organized by building, you could put the clients of the Facilities Department in the wiz.com domain. It is not recommended that you do so, however, because the root domain should be kept free of unnecessary complexity. You may still prefer to create larger domains rather than a hierarchy, because one large domain requires less administration than multiple smaller domains do. Planning NIS+ Security Measures provides more information about security levels.

Any one domain should have no more than 10 replicas because of the increased network traffic and server load that occur when information updates are propagated to the replicas. Determining the number of replicas a domain requires depends on other factors as well, such as:

You should create a minimum of two servers (one master and one replica) for every domain and at least one replica for every physical location. You do not need a replica for every subnet. NIS clients do not have access to servers that are not on the same subnet. The only exceptions are NIS clients, which can use ypinit to specify a list of NIS servers. The netmask number in these cases would have to be set appropriately.

If the domain hierarchy that you design spans a wide area network (WAN) link, it is recommended that you replicate the domain on either side of the WAN link, with a master server on one side and a replica on the other. This could possibly enable clients on the other side of the link to continue with NIS+ service even if the WAN link were temporarily disabled. Putting servers on either side of a WAN, however, changes the structure of a namespace that is organized by group function rather than by physical layout, since the replica might physically reside within the geographic perimeter of a different domain.

Geographically dispersed organizations may determine that organizing their domain hierarchy by functional groups would cause a domain to span more than one time zone.
It is strongly recommended that you do not have domains that span multiple time zones. If you do need to configure a domain across time zones, be aware that a replica's time will be taken from the master server, so database updates are synchronized properly using Greenwich mean time (GMT). This may cause problems if the replica machine is used for other services that are time critical. To make domains across time zones work, the replica's time zone has to be locally set to the master server's time zone when you are installing NIS+. Once the replica is running, some time-critical programs may run properly and some may not, depending on whether these programs use universal or local time.

When choosing domain names, first choose names that are descriptive. For example, Sales is considerably more descriptive than BW23A. Second, choose short names. To make your administrative work easier, avoid long names such as EmployeeAdministrationServices.WizardCorporation.

A domain name is formed from left to right, starting with the local domain and ending with the root domain, as shown in the following figure. Unlike NIS, an NIS+ domain name is not case sensitive.

Figure 3-3. Syntax of Domain and Subdomain Names. This illustration shows the name sales.wiz.com. 'sales' is a subdomain in wiz.com. 'wiz.com.' is the root domain.

The root domain must always have at least two labels and must end in a dot. The second label can be an Internet domain name, such as com. Also consider the implications of particular names for electronic mail domains, both within the company and over the Internet. Depending on the migration strategy chosen, names that were unique before could no longer be unique. Therefore, the e-mail addresses of clients who are not in the root domain may change. As a general rule, client e-mail addresses can change when domain names change or when new levels are added to the hierarchy. NIS+ provides several sendmail enhancements to make the task easier. In addition, NIS+ provides a sendmailvars table.
The sendmail program first looks at the sendmailvars table (see the following table), then examines the local sendmail.cf file. Note: You may also need to establish MX (mail exchanger) records for your mail domains. An NIS+ domain is not a single object; it is a reference to a collection of objects. Therefore, a server that supports a domain is not actually associated with the domain but with the domain's directories. A domain consists of the following directories: Any workstation that is installed with the appropriate NIS+ file sets can be an NIS+ server as long as it has available file system space. The software for both NIS+ servers and clients is included in the AIX 5.1 product. Therefore, any workstation that has a current operating system release installed can become a server or a client, or both. When you select the servers that will support the NIS+ namespace, consider the following factors, discussed in the following sections: When you select servers, you must differentiate between the requirements imposed by the NIS+ service and those imposed by the traffic load of your namespace. The NIS+ service requires you to assign at least one server, the master, to each NIS+ domain. (An NIS+ server is capable of supporting more than one domain, but do so only in small namespaces or testing situations.) How many other servers a domain requires is determined by the traffic load, the network configuration, and whether NIS clients are present. Your anticipated traffic loads determine the total number of servers used to support the namespace, how much storage and processing speed each requires, and whether a domain needs replicas to ensure its availability. Unless you find you must rebalance traffic loads, it is a good idea to assign one master server to each domain in the hierarchy. If certain domains must always be available, add two or more replicas to them. Two replicas allow requests to still be answered even if one of the replicas is damaged.
Requests may not be answered in a timely manner if a master has only one replica and that replica is being repaired or updated. A domain with only one replica loses 50 percent of its load capacity when the replica is down. Always add at least one replica to a domain. In small to medium domains, configurations with two to four replicas are normal. In organizations with many distributed sites, each site often needs its own subdomain. NIS+ master servers require fewer replicas than NIS servers did, since NIS+ does not depend on broadcasts on the local subnet. Putting replicas on both sides of a weak network link (such as WAN links) is recommended. If the link breaks and the networks are decoupled, both sides of the network can still obtain service. Do not put more than 10 replicas on one domain. If you can, put one on each subnet; otherwise, distribute the servers as best you can and optimize for the best performance. You do not need NIS+ servers on every subnet, unless they support NIS clients. In such cases, you may want to install NIS+ servers on multihomed machines. Try to keep fewer than 1000 clients in a domain. NIS+ clients present a higher load on servers than NIS clients do. A large number of clients served by only a few servers may impact network performance. How much disk space you need depends on four factors: the space required by the base operating system, the space used by /var/nis, the space used by /var/yp (if you run NIS concurrently), and swap space for the NIS+ server processes. BOS software requires at least 32 MB of disk space. You must also consider the disk space consumed by other software the server may use. For more details on the BOS installation and requirements, see the AIX Version 4.3 Installation Guide. Although NIS+ is part of the operating system distribution, it is not automatically installed in the base installation. NIS+ directories, groups, tables, and client information are stored in /var/nis. The /var/nis directory uses about 5 KB of disk space per client. For example purposes only, if a namespace has 1000 clients, /var/nis requires about 5 MB of disk space.
However, because transaction logs (also kept in /var/nis) can grow large, you may want additional space per client--an additional 10-15 MB is recommended. In other words, for 1000 clients, allocate 15 to 20 MB for /var/nis. You can reduce this if you checkpoint transaction logs regularly. If you plan to use NIS+ concurrently with NIS, allocate space equal to the amount you are allocating to /var/nis for /var/yp to hold the NIS maps that you transfer from NIS. You also need swap space equal to three times or more of the size of the NIS+ server process--in addition to the server's normal swap-space requirements. The size of the rpc.nisd process is shown by the ps -efl command. Most of this space is used during callback operations or when directories are checkpointed (with nisping -C) or replicated, because during such procedures, an entire NIS+ server process is forked. NIS+ tables differ from NIS maps in many ways, but keep two of those differences in mind when designing your namespace: Review the standard NIS+ tables to make sure they suit the needs of your site. They are listed in the following table. Note that NIS+ uses slightly different names for the automounter tables. The following table lists the correspondences between NIS maps and NIS+ tables. You do not have to keep related tables synchronized (see NIS+ Tables and Information). Key-value tables have two columns, with the first column being the key and the second column being the value. Therefore, when you update any information, such as host information, you need only update it in one place, such as the hosts table. You need not worry about keeping that information consistent across related maps. The dots were changed to underscores in NIS+ because NIS+ uses dots to separate directories. Dots in a table name will cause NIS+ to mistranslate names. For the same reason, machine names cannot contain any dots. For example, a machine named sales.alpha is not allowed. Tables can be interconnected through paths or links, although paths and links should not normally be used.
Such a path would have two main benefits; however, it also has drawbacks. When you specify a table in a path, the table name must be fully qualified (for example, ending in .wiz.com.); otherwise, NIS+ returns an error. To find out what a table's search path is, use the niscat -o command (you must have read access to the table). NIS+ cannot distinguish between a human user and a workstation when requests are made; therefore, duplicate user and machine names will have to be changed. Identical user and machine names are a problem even when the machine with the duplicate name does not belong to the user with the same name. The following examples illustrate duplicate name combinations that are not valid with NIS+.
So, you might have heard that Beta2 has shipped. I wanted to give a brief overview of the changes that have been made to the designer since Beta1. So, without further ado, here are my top 10 new designer features in Beta 2.

We made a decision in Beta2 to do a bit of an overhaul on the designer's general theme and feel in order to make it feel more like part of VS. The UI we had in Beta1 was still the first pass at the designer surface that we showed last year in the first CTP at PDC. Here are some screenshots of that new UI. Our UX PM, Cathy, has started a blog, and I expect she will have some really interesting discussions about the design decisions that we made. We've rounded out the set of icons as well. We've also created focus around the header, and surface information within that header (like validation errors).

One of the biggest pieces of feedback that we got from the Beta1 release was the need to see what's going on in my workflow. The model we had in Beta1 was optimized around a few levels of composition, and required the user to double-click to drill into the activity. This model works well to focus on an activity, but comes at the expense of the broader view. We've changed that in Beta2 to allow the workflow items to expand in place. I am jazzed that we did this, and there is no good way to put it in a picture. Open up the designer and start playing around, and I think you'll like what you see. The other nice part of this is that I can still choose to drill in if I want to have that focus. This should also support fairly deep nesting of activities, meaning you can visualize large, complex workflows on the canvas. Additionally, you can choose to expand or collapse individual nodes, or go to the "Expand All" command in order to expand out the tree. One thing to note is that we don't allow expansion inside of a flowchart because we don't yet have a mechanism that lets us rearrange all of the items around that newly expanded activity.
If you have an assembly on disk somewhere that contains some activities, you can now add it into the toolbox through the Choose Items dialog. When you drop these new items on the canvas, the appropriate reference should be wired up for you.

One thing that we heard from folks is that they like the VB expressions (well, they like the expressions; they aren't super excited that we only have one language choice at this point), but they really didn't like fully qualifying the type names. To help out with this, we've introduced the Imports designer, which lets you pick out the namespaces you want to import; these will be used to make it easier to resolve types. Before: Go to the Imports designer down at the bottom (next to Arguments and Variables) and add System.Xml. This will now bring System.Xml into the expression text box:

People seem to like the flowchart model, but one request we heard was that it was tough to see what all the lines meant, especially coming out of the branching constructs. I'll show a few other interesting flowchart things here. First, hovering over a switch/decision will show the expression. Clicking on the small triangle will "pin" this expression so that I can see it while working on the flowchart. Now, hook up some of the lines: We also heard that while the labels True and False are descriptive for the decision, they may not capture the intent. So, you can click on the decision, open the property grid, and set these to some custom value. Switch will display the switch values as well.

Towards the end of Beta1, we slipped a little feature in called IActivityTemplateFactory as an internal API to generate a configured set of activities (like what happens when I drop the messaging activity pairs). We found a number of folks really liked this idea, so we have made this public. This lets you add these factories to the toolbox to drop a templatized, configured set of activities onto the canvas.
A real simple one would look like this [it drops a Pick with two branches, like what happens today when you drop a Pick]:

    public sealed class PickWithTwoBranchesFactory : IActivityTemplateFactory
    {
        public Activity Create(DependencyObject target)
        {
            return new Pick
            {
                Branches =
                {
                    new PickBranch
                    {
                        DisplayName = "Branch1"
                    },
                    new PickBranch
                    {
                        DisplayName = "Branch2"
                    }
                }
            };
        }
    }

We've made some OM changes (there have been some I'll blog about later). One important one is that we needed to move out of the *.Design namespaces, primarily because *.Design is meant to specifically imply VS designer specific stuff, and our designer stuff is not VS specific. So, we've moved from .Design to .Presentation as our namespace and assembly names of choice. There has also been some reduction in the number of namespaces, and some types have moved out of the root namespace (System.Activities.Presentation) into more specialized namespaces (System.Activities.Presentation.Converters). We've also reduced the number of assemblies from 3 to 2, and here is the way to think about them. Within System.Activities.Presentation.dll, here are the public namespaces:

We had not originally planned for a designer for WriteLine; we had felt that the default designer was good enough for this. We heard some feedback, and one of the testers decided to put one together. This makes it easy to configure the WriteLine on the canvas:

This is a feature you may not see, or ever need to think about, but it's one that I think is pretty neat, and it helps us increase performance for large workflows. We've built up some infrastructure inside the designer that supports virtualization of the UI surface. The way to think about this is lazy loading the visuals of the designer until they are actually needed.
While this isn't that useful for a small workflow, as a workflow starts to span multiple screens, it makes sense to only load the visual elements that are required for the part of the workflow that is actually in view. You may see artifacts of this if you open a large workflow, as you see some of the workflow "draw on the fly." The advantages of doing this are that bigger workflows will open faster, with fewer visual elements on the screen. The information used by virtualization is coupled with the viewstate of the workflow in order to cache the size of the elements and reduce the need to do complex computations about size, position, and overlap.

Now, on hover we will display the information about the arguments and properties of the selected item, and additionally, we will pick up the System.ComponentModel.DescriptionAttribute data as well. The following code:

    public sealed class MySimpleActivity : CodeActivity
    {
        [Description("This is a basic argument which will be used to compute the end value")]
        public InArgument<string> Text { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            ...

will show up like this in the property grid:

Rather than go with the "big flat list" approach, we've made the organization of items in the toolbox a little more granular. This also shows some of the updated icons for the activities :-)

I'm really excited that beta2 is out the door, and I can't wait to start hearing about folks using it. I'll start blogging a little more, and there are a few folks on my team that will start talking about some other interesting designer topics.

    [ServiceContract]
    public interface ILongRunningWork
    {
        [OperationContract]
        string TakeAWhile(int i);

        [OperationContract(IsOneWay = true)]
        void TakeAWhileAndTellMeLaterDone(string s);
    }

And now for the implementation of these:

    public class Service1 : ILongRunningWork
Late last night, the PDC team posted an additional "bunch" of sessions, including one I'm particularly interested in. Tags: Advanced, WF. They also posted a number of other interesting Oslo sessions that folks might be interested in; these cover a wide range of the things that our group is doing. "Oslo" builds on Windows Workflow (WF) and Windows Communication Foundation (WCF) to provide a feature-rich middle-tier execution and deployment environment. Learn about the architecture of "Oslo" and the features that simplify deployment, management, and troubleshooting of workflows and services. It'll be a good time; can't wait to see ya there!

I made a quick post a few months back where I tried to talk about the way the designer works and lets us design types, as well as simply configure instances of types. There were a couple of key points that I wanted to make in that post: A few folks have noticed (and sent me mail) that things look a little different in Beta2. While we are going to have a more thorough "here's everything that changed" doc, I want to go ahead and update at least some of the things that I've been talking about here. In reality, very little is new; we've primarily moved stuff around. One thing that you may remember is that the DesignTimeXamlReader was not public in beta1, and if you are looking around, you may not find it. We have made this functionality public, however; see the "what's changed" bit. We took a long look at things and realized we had a bunch of XAML stuff all over the place. We felt it would be a good idea to try to consolidate that into one place in WF, so System.Activities.XamlIntegration.ActivityXamlServices becomes your one-stop shop for most things Activity and XAML related. Let's take a quick look and see what's in there: Load is used to take some XAML and return an Activity which you can then use to execute.
If Load encounters a XAML stream for <Activity x:Class, it will subsequently generate a DynamicActivity. This functions basically the same way WorkflowXamlServices.Load() did in beta1. You also see CreateBuilderReader and CreateBuilderWriter, which are used to surface the DesignTimeXaml capabilities that we used in beta1. These will return an instance of a XamlReader/Writer that handles the transformation between the metatype and the <Activity x:Class XAML. The metatype has changed names from ActivitySchemaType to ActivityBuilder. The table below should help summarize the uses and changes between beta1 and beta2. In this area, I don't expect any changes between what you see now and what you will see in RTM. To explore this, use CreateBuilderReader() and XamlServices.Load() on a workflow that you've built in the designer and poke around a bit to see what's going on. Here is some sample code that walks through this:

    ActivityBuilder ab1 = new ActivityBuilder();
    ab1.Name = "helloWorld.Foo";
    ab1.Properties.Add(new DynamicActivityProperty { Name = "input1", Type = typeof(InArgument<string>) });
    ab1.Properties.Add(new DynamicActivityProperty { Name = "input2", Type = typeof(InArgument<string>) });
    ab1.Properties.Add(new DynamicActivityProperty { Name = "output", Type = typeof(OutArgument<string>) });
    ab1.Implementation = new Sequence
    {
        Activities =
        {
            new WriteLine { Text = "Getting Started " },
            new Delay { Duration = TimeSpan.FromSeconds(4) },
            new WriteLine { Text = new VisualBasicValue<string> { ExpressionText = "input1 + input2" } },
            new Assign<string>
            {
                To = new VisualBasicReference<string> { ExpressionText = "output" },
                Value = new VisualBasicValue<string> { ExpressionText = "input1 + input2 + \"that's it folks\"" }
            }
        }
    };
    StringBuilder sb = new StringBuilder();
    StringWriter tw = new StringWriter(sb);
    XamlWriter xw = ActivityXamlServices.CreateBuilderWriter(
        new XamlXmlWriter(tw, new XamlSchemaContext()));
    XamlServices.Save(xw, ab1);
    string serializedAB = sb.ToString();

    DynamicActivity da2 = ActivityXamlServices.Load(new StringReader(serializedAB)) as DynamicActivity;
    var result = WorkflowInvoker.Invoke(da2, new Dictionary<string, object> { { "input1", "hello" }, { "input2", "world" } });
    Console.WriteLine("result text is {0}", result["output"]);

    ActivityBuilder ab = XamlServices.Load(
        ActivityXamlServices.CreateBuilderReader(
            new XamlXmlReader(new StringReader(serializedAB)))) as ActivityBuilder;

    Console.WriteLine("there are {0} arguments in the activity builder", ab.Properties.Count);
    Console.WriteLine("Press enter to exit");
    Console.ReadLine();

Good luck, and happy metatyping!

Hot off the presses (and the download center) come the WF4 Beta 2 samples here. The team has invested a lot of time into these samples, and they provide a good way to get up to speed on the way a particular feature or group of features work together. Note, there are 2300 files to be unzipped, so hopefully there is a sample in here for everyone. At a high level, we work down the directory structure from technology, sample type, and then some functional grouping of samples. Within the "Sample Type" we have a few different categories we use.

This sample demonstrates how to write a workflow simulator that displays the running workflow graphically. The application executes a simple flowchart workflow (defined in Workflow.xaml) and re-hosts the workflow designer to display the currently executing workflow. As the workflow is executed, the currently executing activity is shown with a yellow outline and debug arrow. In addition, tracking records generated by the workflow are also displayed in the application window. For more information about workflow tracking, see Workflow Tracking and Tracing. For more information about re-hosting the workflow designer, see Rehosting the Workflow Designer.
The workflow simulator works by keeping two dictionaries. One contains a mapping between the currently executing activity object and the XAML line number in which the activity is instantiated. The other contains a mapping between the activity instance ID and the activity object. When tracking records are emitted using a custom tracking profile, the application determines the instance ID of the currently executing activity and maps it back to the XAML file that instantiated it. The re-hosted workflow designer is then instructed to highlight the activity on the designer surface, using the same method as the workflow debugger: drawing a yellow border around the activity and displaying a yellow arrow along the left side of the designer. I will be blogging more on some of the interesting (to me) individual samples. What do you think? Are there samples you'd like to see? How are you using these, and is there anything we can do to make these more useful?
updated copyright years

\ image dump 15nov94py

\ Copyright (C) 1995,1997,2003,2006,2007

: delete-prefix ( c-addr1 u1 c-addr2 u2 -- c-addr3 u3 )
    \ if c-addr2 u2 is a prefix of c-addr1 u1, delete it
    2over 2over string-prefix? if
        nip /string
    else
        2drop
    endif ;

: update-image-included-files ( -- )
    included-files 2@ { addr cnt }
    image-included-files 2@ { old-addr old-cnt }
    align here { new-addr }
    cnt 2* cells allot
    new-addr cnt image-included-files 2!
    old-addr new-addr old-cnt 2* cells move
    cnt old-cnt
    U+DO
        addr i 2* cells + 2@
        s" GFORTHDESTDIR" getenv delete-prefix save-mem-dict
        new-addr i 2* cells + 2!
    LOOP
    maxalign ;

: dump-fi ( addr u -- )
    w/o bin create-file throw >r
    update-image-included-files
    update-image-order
    here forthstart - forthstart 2 cells + !
    forthstart
    begin \ search for start of file ("#! " at a multiple of 8)
        8 -
        dup 3 s" #! " str=
    until ( imagestart )
    here over - r@ write-file throw
    r> close-file throw ;

: savesystem ( "name" -- ) \ gforth
    name dump-fi ;
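For readers who don't read Forth, the stack comment on delete-prefix says it takes two strings and removes the second from the front of the first when it matches; the source above uses it to strip a GFORTHDESTDIR prefix from recorded file names. A rough Python sketch of the same behavior (the function name is mine, not part of Gforth):

```python
def delete_prefix(s: str, prefix: str) -> str:
    """If `prefix` is a prefix of `s`, delete it; otherwise return `s` unchanged.

    Mirrors the Forth word: ( c-addr1 u1 c-addr2 u2 -- c-addr3 u3 )
    """
    if s.startswith(prefix):
        return s[len(prefix):]
    return s

# Stripping a staging-directory prefix from an installed path:
print(delete_prefix("/tmp/dest/usr/lib/gforth", "/tmp/dest"))  # /usr/lib/gforth
print(delete_prefix("/usr/lib/gforth", "/tmp/dest"))           # unchanged
```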
Episode 648 Julie Lerman on Entity Framework Core 5
Julie Lerman describes some of the new features included in EF Core 5.

GCast 98: Using the Azure Storage Explorer
The Azure Storage Explorer provides a simple way to access objects in an Azure Storage Account. This video walks you through how to install and use this tool.

GCast 96: Using the MinIO Java Client SDK
Learn how to use the Java Client SDK to upload and download files to/from a MinIO server. Code:

GCast 95: Learn how to use MinIO to manage blobs in an Azure Storage Account

GCast 94: Creating a MinIO Server
Learn how to create a MinIO server, organize it into buckets; then read and write files to the server.

GCast 77: Connecting Azure Synapse to External Data
Azure Data Warehouse has been re-branded as Azure Synapse. Learn how to add data from an external system to an Azure Synapse database.

GCast 76: Creating an Azure Synapse database
Azure Data Warehouse has been renamed to Azure Synapse. This video walks you through the creation of a Synapse database.

Episode 592 Jes Schultz on Data Engineering
Jes Schultz discusses the roles and responsibilities of a Data Engineer.

The "New data factory" blade displays.

GCast 54: Azure Storage Replication
Learn about the data replication options in Azure Storage and how to set the option appropriate for your needs.

From the menu, select Storage | Storage Account, as shown in Fig. 2. The "Create Storage Account" dialog with the "Basic" tab selected displays. The important field on this tab is "Hierarchical namespace". Select the "Enabled" radio button at this field. Click the [Review + Create] button to advance to the "Review + Create" tab. The "File Systems" blade displays, as shown in Fig. 7. Data Lake data is partitioned into file systems, so you must create at least one file system. Click the [+ File System] button and enter a name for the file system you wish to create, as shown in Fig. 8.
Click the [OK] button to add this file system and close the dialog. The newly-created file system displays.

GCast 37: Managing Blobs with the Azure Storage Explorer
The Azure Storage Explorer is a free resource to manage Azure Storage Accounts. This video shows how to manage Azure blobs with this tool.

A blank canvas displays, along with a side menu and a list of fields in your data. If you click on "temp" under "Value", you will notice that it shows the Sum of the temperatures, which is not very useful information. You can select something more useful, like "Average", "Minimum", or "Maximum" temperature from this menu, as shown in Fig. 13. If you don't like a bar chart, you can also change the type of visualization by selecting something different from the "VISUALIZATIONS" blade, as shown in Fig. 14. This quick overview shows some of the features available in Microsoft Power BI.

An Azure storage account gives you the ability to create, store, and manage tables, queues, blobs, and files. Available Azure storage services are: A blob is any unstructured or semi-structured object that you want to store. Examples include videos, images, and text files. Blob storage is flexible for storing an object without the system having to know much about the object. Azure Files allow you to store files to Azure and access them as a file share using the standard SMB protocol. Files are built on top of Azure Blob Storage. Azure Table Storage provides a simple NoSQL database for your application. Data in an Azure Table is stored as rows. Each row contains a key and one or more properties. You do not need to pre-define these properties beforehand, and you can define different properties for different rows in the same table. A queue is a popular mechanism for designing asynchronous applications. One application can drop a message onto a queue and another application can pick up that message later and process it.
This decoupling allows for scalability, flexibility, and faster response times. To create a new Azure Storage Account, navigate to the Azure Portal. Click the [Create a resource] button; then select Storage | Storage account - blob, file, table, queue from the menu, as shown in Fig. 1. The New Storage Account blade displays, as shown in Fig. 2. At the "name" field, enter a unique name for your storage account. You will be able to access this account through a REST API by sending requests to where accountname is the name you enter for your account. At the "Deployment model" radio button, select "Resource manager". At the "Account kind" dropdown, select "Storage (general purpose)". At the "Location" dropdown, select a location in which to create your account. To minimize latency, you should create an account either near you or near the users and applications that will access the data in this account. At the "Resource Group" field, click the "Create new" link to display the "Create Resource Group" dialog, as shown in Fig. 3. Enter a unique name for your new resource group and click the [OK] button to close the dialog and return to the previous blade. Review your Storage Account settings and click the [Create] button to create your new storage account. After a few seconds, you will be able to access the properties of your Storage Account, manage the account, and connect to the account. The "Overview" tab (Fig. 4) displays information about the account, along with links to create and manage Blobs, Files, Tables, and Queues. Click the "Blobs" link in the "Services" section of the "Overview" tab to create and manage blobs and containers, as shown in Fig. 5. Click the "Files" link in the "Services" section of the "Overview" tab to create and manage files, as shown in Fig. 6. Click the "Tables" link in the "Services" section of the "Overview" tab to create and manage tables, as shown in Fig. 7.
Click the "Queues" link in the "Services" section of the "Overview" tab to create and manage queues, as shown in Fig. 8.

Episode 496 Oren Eini on RavenDB
Episode 469 Anne Bougie on Azure Storage Options
Episode 449 Matt Groves on Couchbase
Episode 445 Jes Borland on SQL Server 2016
Episode 399 Matt Winkler on Azure Data Lake

G-Cast 2
In this video, you will see how to use the portal to quickly create a table linked to an Azure Mobile Service and a Windows Universal App client that connects to that mobile service.
Monitor CloudWatch metrics for your Auto Scaling groups and instances Metrics are the fundamental concept in Amazon CloudWatch. A metric represents a time-ordered set of data points that are published to CloudWatch. Think of a metric as a variable to monitor, and the data points as representing the values of that variable over time. You can use these metrics to verify that your system is performing as expected. Amazon EC2 Auto Scaling metrics that collect information about Auto Scaling groups are in the AWS/AutoScaling namespace. Amazon EC2 instance metrics that collect CPU and other usage data from Auto Scaling instances are in the AWS/EC2 namespace. The Amazon EC2 Auto Scaling console displays a series of graphs for the group metrics and the aggregated instance metrics for the group. Depending on your needs, you might prefer to access data for your Auto Scaling groups and instances from Amazon CloudWatch instead of the Amazon EC2 Auto Scaling console. For more information, see the Amazon CloudWatch User Guide. Contents Available metrics and dimensions This section lists the different types of metrics in the AWS/AutoScaling namespace. For information about the available metrics in the AWS/EC2 namespace, see List the available CloudWatch metrics for your instances in the Amazon EC2 User Guide for Linux Instances. See Configure monitoring for Auto Scaling instances to learn how to enable detailed monitoring for metrics in the AWS/EC2 namespace or collect memory metrics from the EC2 instances in your Auto Scaling groups. Amazon EC2 Auto Scaling publishes the following metrics in the AWS/AutoScaling namespace. Amazon EC2 Auto Scaling sends sampled data to CloudWatch every minute on a best-effort basis. In rare cases when CloudWatch experiences a service disruption, data isn't backfilled to fill gaps in group metric history. Contents Auto Scaling group metrics When group metrics are enabled, Amazon EC2 Auto Scaling sends the following metrics to CloudWatch. 
The metrics are available at one-minute granularity at no additional charge, but you must enable them. With these metrics, you get nearly continuous visibility into the history of your Auto Scaling group, such as changes in the size of the group over time. In addition to the metrics in the previous table, Amazon EC2 Auto Scaling also reports group metrics as an aggregate count of the number of capacity units that each instance represents. If instance weighting is not applied, then the following metrics are populated, but are equal to the metrics that are defined in the previous table. For more information about using weights, see Configure instance weighting for Amazon EC2 Auto Scaling and Create an Auto Scaling group using attribute-based instance type selection. Amazon EC2 Auto Scaling also reports the following metrics for Auto Scaling groups that have a warm pool. For more information, see Warm pools for Amazon EC2 Auto Scaling. Dimensions for Auto Scaling group metrics You can use the following dimensions to refine the metrics listed in the previous tables. Predictive scaling metrics and dimensions The AWS/AutoScaling namespace includes the following metrics for predictive scaling. Metrics are available with a resolution of one hour. You can evaluate forecast accuracy by comparing forecasted values with actual values. For more information about evaluating forecast accuracy using these metrics, see Monitor predictive scaling metrics with CloudWatch. The PairIndex dimension returns information associated with the index of the load-scaling metric pair as assigned by Amazon EC2 Auto Scaling. Currently, the only valid value is 0. Enable Auto Scaling group metrics (console) When you enable Auto Scaling group metrics, your Auto Scaling group sends sampled data to CloudWatch every minute. There is no charge for enabling these metrics. To enable group metrics Open the Amazon EC2 console at , and choose Auto Scaling Groups from the navigation pane. 
Select the check box next to your Auto Scaling group. A split pane opens up in the bottom of the page.

On the Monitoring tab, select the Auto Scaling group metrics collection, Enable check box located at the top of the page under Auto Scaling.

To disable group metrics

Open the Amazon EC2 console at , and choose Auto Scaling Groups from the navigation pane.

Select your Auto Scaling group.

On the Monitoring tab, clear the Auto Scaling group metrics collection, Enable check box.

Enable Auto Scaling group metrics (AWS CLI)

To enable group metrics, use the enable-metrics-collection command. For example, the following command enables all Auto Scaling group metrics:

aws autoscaling enable-metrics-collection --auto-scaling-group-name my-asg --granularity "1Minute"

To disable group metrics, use the disable-metrics-collection command. For example, the following command disables all Auto Scaling group metrics:

aws autoscaling disable-metrics-collection --auto-scaling-group-name my-asg
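The same calls can be scripted outside the console and CLI. Below is a rough sketch in Python written against the boto3-style Auto Scaling client interface (`enable_metrics_collection` / `disable_metrics_collection`); the helper takes any client object exposing those methods, so the wrapper name and structure here are illustrative assumptions, not a tested AWS recipe.

```python
def set_group_metrics_collection(client, group_name, enable=True):
    """Enable or disable collection of all group-level metrics.

    `client` is assumed to expose boto3-style
    enable_metrics_collection / disable_metrics_collection methods.
    Omitting the Metrics parameter applies the call to all group metrics.
    """
    if enable:
        # Group metrics are only published at one-minute granularity,
        # so "1Minute" is the only valid Granularity value.
        client.enable_metrics_collection(
            AutoScalingGroupName=group_name,
            Granularity="1Minute",
        )
    else:
        client.disable_metrics_collection(AutoScalingGroupName=group_name)
```

With a real client this would be called as, for example, `set_group_metrics_collection(boto3.client("autoscaling"), "my-asg")`.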
Finding number of pairs with a certain XOR value

Reading time: 25 minutes | Coding time: 5 minutes

The key to solving this problem is that A xor B = K implies A xor K = B.

For example: array = {3, 6, 8, 10, 15, 50}, K = 5
output = 2
Explanation: (3^6) = 5 and (10^15) = 5

xor (^): The xor operation, or exclusive-or operation, gives 0 as output when the inputs are the same and 1 as output when the inputs are different. In other words, if a and b differ, the output bit is 1; otherwise it is 0. For example: 2^3 = 1. Convert 2 and 3 to their binary equivalents (10 and 11) and apply the xor operation bit by bit according to the truth table.

Brute force O(N^2)

We can solve this by iterating over each element, computing its xor with every later element, and counting how often the result equals the required value. This process takes O(N^2) time.

Pseudo code:

take an array a[]
input the elements
start the nested for loop:
for(i=0;i<=size-1;i++)
{
    for(j=i+1;j<=size-1;j++)
    {
        // check whether this pair has the required xor
        if ((a[j]^a[i]) == x)
            c++;  // count the number of suitable pairs
    }
}

Code implementation:

#include<bits/stdc++.h>
using namespace std;

int pair_calc(int a[], int n, int x)
{
    int c = 0;
    for (int i = 0; i < n; i++)
    {
        for (int j = i + 1; j < n; j++)
        {
            // parentheses are required here: == binds tighter than ^
            if ((a[j] ^ a[i]) == x)
                c++;
        }
    }
    return c;
}

int main()
{
    int a[100];
    int n;
    cin >> n;
    cout << "enter the array" << endl;
    for (int i = 0; i < n; i++)
    {
        cin >> a[i];
    }
    int x;
    cin >> x;
    cout << "The result is = " << pair_calc(a, n, x);
    return 0;
}

Complexity: Time complexity: O(N^2)

Hashing solution O(N)

If a[i]^a[j] = x then it is always true that a[i]^x = a[j]. In this solution we use an unordered map to store the elements a[i] of the array, and for each element we check whether x^a[i] already exists in the map. If it does, we have found matching pairs and increase the output accordingly.
If no match is found, we simply insert the value as usual.

Pseudo code:

set the counter to 0
create an unordered_map "s"
for each a[i] in a[]:
    1. if x ^ a[i] is in "s", add its stored count to the counter
    2. insert a[i] into the map "s"
return the counter

Code implementation:

#include<bits/stdc++.h>
using namespace std;

int pair_calc(int a[], int n, int x)
{
    unordered_map<int,int> s;
    int c = 0;
    for (int i = 0; i < n; i++)
    {
        // if an element equal to a[i]^x has been seen before, then its
        // XOR with a[i] is exactly x, so count those earlier occurrences
        if (s.find(a[i] ^ x) != s.end())
            c += s[a[i] ^ x];
        s[a[i]]++;
    }
    return c;
}

int main()
{
    int a[100];
    int n;
    cin >> n;
    cout << "enter the array" << endl;
    for (int i = 0; i < n; i++)
    {
        cin >> a[i];
    }
    int x;
    cin >> x;
    cout << "The result is = " << pair_calc(a, n, x);
    return 0;
}

Example:

Input : arr[] = {25, 43, 10, 15, 78, 26}, x = 5
Output : 1
Explanation : (10 ^ 15) = 5

In the above example, consider the array arr[]. For each element we first look it up in the map; at first each element is simply inserted because the map is empty. Up to the element 10, we only insert elements into the map. When we encounter 15, we look up 15^5 and find the match 10. Each map lookup takes O(1), so for n elements the total complexity is O(n).

Complexity: O(N).

Using a map also gets rid of the problem of duplicate elements, since each value's count is stored, so this gives us a clean method to solve the problem. The problem could also be solved with the help of sets, but then duplication of elements would become a real issue.
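The hashing approach is language-agnostic. As a quick sketch (not from the original article), here is the same idea in Python, using a plain dict as the frequency map:

```python
def count_pairs_with_xor(arr, x):
    """Count pairs (i, j) with i < j and arr[i] ^ arr[j] == x.

    Uses the identity a ^ b == x  =>  a ^ x == b: for each element,
    look up how many previously seen elements equal arr[i] ^ x.
    """
    seen = {}   # value -> number of times seen so far
    count = 0
    for value in arr:
        # every earlier occurrence of value ^ x forms a valid pair
        count += seen.get(value ^ x, 0)
        seen[value] = seen.get(value, 0) + 1
    return count
```

For example, `count_pairs_with_xor([3, 6, 8, 10, 15, 50], 5)` returns 2, matching the worked example above, and duplicates are handled for free because counts, not just presence, are stored.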
Wikibooks:Request for enabling special:import

From Wikibooks, open books for an open world

What Special:Import does[edit]

This tool is used for transwiki, available to administrators. I have used it on wikiversity to import pages from wikibooks and meta to wikiversity. In contrast to the copy-and-paste transwikis that are the current norm, imports bring over the entire page history with all changes.

Advantages[edit]

- Import ensures that page edit histories are brought over, in better compliance with the GFDL than the current system of copying the page history, because one can search the history to distinguish between major and minor edits, etc.
- Import would put the transwiki-from-elsewhere process firmly in the hands of people on the wikibooks side, rather than the random dropping of materials that happens now. Hopefully the wikipedians/wikiversitans/etc. can be convinced to make this a policy on their sides as well.
- Using import rather than manual transwiki might help remove the "stigma" of materials being "exiled" into wikibooks (a feeling I myself suffered from for quite some time as a wikipedian).

Disadvantages[edit]

- Naturally, this means more work for administrators.
- Using edit counts for voting eligibility will go out the window, since any edits made on a wikipedia page before the import will be attributed to that username on wikibooks.

Votes (wikibookians)[edit]

(Support or Oppose with comments. If undecided, use the talk page.)

- Support --SB_Johnny | talk (I'm the one making the proposal, so of course I support it :).)
- Support If material is moving to here from there, then it makes sense to facilitate that movement. --Whiteknight (talk) (projects) 15:55, 14 September 2006 (UTC)
- Support good idea, makes users requesting imports able to make use of new material quicker with edit history intact. --darklama 23:54, 8 October 2006 (UTC)
- Support The disadvantages don't seem too worrisome.
Cleanup after sloppy copy-pastes will mostly fall on admins, as they do much of that type of work anyway. Measuring voting eligibility will not be completely out the window, but sometimes it may be difficult to see if a user's edits are wikibookian edits. --Swift 14:58, 9 October 2006 (UTC)
- Support. This is an invaluable and foolproof tool. If this goes through I think cut-n-paste transwikiing should be banned altogether. GarrettTalk 23:28, 12 October 2006 (UTC)
- Support. Does anyone know how many pages might be expected to be imported to Wikibooks in a year? --JWSurf 00:38, 13 October 2006 (UTC)
- support - Makes preserving the history, a requirement of the GFDL, easier. Easier is better. Gentgeen (I'm also a wikipedian) 03:24, 13 October 2006 (UTC)
- Support Better tools are good for everybody. Kellen T 09:50, 13 October 2006 (UTC)
- Very Strong Support -- More like why isn't this turned on by default? As far as the number of pages imported to Wikibooks in a year, it is on the order of about 500-1000 pages (at least for last year), or about 3-5 pages per day on average unless admins are getting lax. Much of that is cruft that an admin might not import directly, but much of it is content marked for deletion on Wikipedia and other Wikimedia projects. This has been an invaluable tool on Wikiversity, and I'm very, very, very (to the point of ad nauseam) glad that I have the option to use this tool to move content from Wikibooks to Wikiversity. Performing similar content transwiki without this tool (for example, the Wikimania content) was a total pain in the rear. It has its shortcomings and bugs, but it is very much worth it. --Rob Horning 20:50, 20 October 2006 (UTC)

Votes (wikipedians, see talk)[edit]

- Facilitates moves of materials to where they belong. This is good for the Wikipedia project, and so I support it. --Improv 16:16, 5 October 2006 (UTC)

Additional request[edit]

Special:Import is now enabled.
A few more votes for the following (wikimedia only requires a few) would further facilitate the process:

Part 2: make "Transwiki:" a full namespace

By making transwiki a namespace, we will be able to put the imports there directly, rather than importing into the main namespace. This has 2 main advantages:

- 1. No clutter in the main namespace.
- 2. This would allow for stable redirects for use on the wikipedia side (makes the templating easier). The Transwiki:Pagename could then just have a redirect to the page's actual location. (Note that wikipedians frequently complain that the pages they thought were moved here don't seem to be here... in most cases they actually are here, but wikipedians are generally not accustomed to our way of naming things (as books and chapters), and the fact that a search on wikibooks isn't going to move them straight to an article.)

Votes[edit]

- Support --SB_Johnny | talk 19:44, 16 October 2006 (UTC)
- Support. And after this you are going to tell us that you absolutely must have something else that requires a vote, I suppose? I'm just kidding with you, this makes good sense. --Whiteknight (talk) (projects) 19:50, 16 October 2006 (UTC)
- LOL. No, not for a while. If we like the way it's working for the forking purposes, we might want to enable import from wikisource for annotated books, but let's take it one step at a time :). I had meant to ask for this at the same time (actually, I meant to ask guillom to ask for it at the same time), but managed to drop off IRC before he mentioned that bugzilla would want a show of support for this too. --SB_Johnny | talk 19:55, 16 October 2006 (UTC)
- Support --darklama 20:15, 16 October 2006 (UTC)
- Support --Derbeth talk 21:07, 16 October 2006 (UTC)
- Support --Swift 21:18, 16 October 2006 (UTC)
- Support --Panic 21:21, 16 October 2006 (UTC)
- Support --Kempm 21:29, 16 October 2006 (UTC)
- Support I hope this becomes a standard feature of MediaWiki.
--JWSurf 02:31, 17 October 2006 (UTC)
- Support, obviously. GarrettTalk 06:11, 17 October 2006 (UTC)
- Support Dude. Kellen T 08:05, 17 October 2006 (UTC)
- Support I think this is an outstanding idea. Of course, the "Transwiki" namespace does need some current cleanup for a couple of fools that have nothing better to do with their time. I would also agree that this ought to be a standard feature of MediaWiki. --Rob Horning 20:53, 20 October 2006 (UTC)
- Requested in bugzilla:7613. guillom 11:02, 17 October 2006 (UTC)
Category: PHP

We are sincerely grateful to all of our users and contributors. We have been constantly collecting your feedback and ideas, and continually watching the evolution of PHP, AWS, and the Guzzle library. Earlier this year, we felt we could make significant improvements to the SDK, but only if we could break a few things. Since receiving a unanimously positive response to our blog post about updating to the latest version of Guzzle a few months ago, we've been working hard on V3, and we're ready to share it with you.

What's new?

The new version of the SDK provides a number of important benefits to AWS customers. It is smaller and faster, with improved performance for both serial and concurrent requests. It has several new features based on its use of the new Guzzle 5 library (which also includes the new features from Guzzle 4). The SDK will also, starting from V3, follow the official SemVer spec, so you can have complete confidence when setting version constraints in your projects' composer.json files. Let's take a quick look at some of the new features.

Asynchronous requests

With V3, you can perform asynchronous operations, which allow you to more easily send requests concurrently. To achieve this, the SDK returns future result objects when you specify the @future parameter, which block only when they are accessed. For managing more robust asynchronous workflows, you can retrieve a promise from the future result, to perform logic once the result becomes available or an exception is thrown.

<?php
// Upload a file to your bucket in Amazon S3.
// Use '@future' to make the operation complete asynchronously.
$result = $s3Client->putObject([
    'Bucket'  => 'your-bucket',
    'Key'     => 'docs/file.pdf',
    'Body'    => fopen('/path/to/file.pdf', 'r'),
    '@future' => true,
]);

After creating a result using the @future attribute, you now have a future result object.
You can use the data stored in the future in a blocking (or synchronous) manner by just using the result as normal (i.e., like a PHP array).

// Wait until the response has been received before accessing its data.
echo $result['ObjectURL'];

If you want to allow your requests to complete asynchronously, then you should use the promise API of the future result object. To retrieve the promise, you must use the then() method of the future result, and provide a callback to be completed when the promise is fulfilled. Promises allow you to more easily compose pipelines when dealing with asynchronous results. For example, we could use promises to save the Amazon S3 object's URL to an item in an Amazon DynamoDB table, once the upload is complete.

// Note: $result is the result of the preceding example's PutObject operation.
$result->then(
    function ($s3Result) use ($ddbClient) {
        $ddbResult = $ddbClient->putItem([
            'TableName' => 'your-table',
            'Item' => [
                'topic' => ['S' => 'docs'],
                'time'  => ['N' => (string) time()],
                'url'   => ['S' => $s3Result['ObjectURL']],
            ],
            '@future' => true,
        ]);
        // Don't break promise chains; return a value. In this case, we are returning
        // another promise, so the PutItem operation can complete asynchronously too.
        return $ddbResult->promise();
    }
)->then(
    function ($result) {
        echo "SUCCESS!\n";
        return $result;
    },
    function ($error) {
        echo "FAILED. " . $error->getMessage() . "\n";
        // Forward the rejection by re-throwing it.
        throw $error;
    }
);

The SDK uses the React/Promise library to provide the promise functionality, allowing for additional features such as joining and mapping promises.

JMESPath querying of results

The result object also has a new search() method that allows you to query the result data using JMESPath, a query language for JSON (or PHP arrays, in our case).
<?php
$result = $ec2Client->describeInstances();
print_r($result->search('Reservations[].Instances[].InstanceId'));

Example output:

Array
(
    [0] => i-xxxxxxxx
    [1] => i-yyyyyyyy
    [2] => i-zzzzzzzz
)

Swappable and custom HTTP adapters

In V3, cURL is no longer required, but is still used by the default HTTP adapter. However, you can use other HTTP adapters, like the one shipped with Guzzle that uses PHP's HTTP stream wrapper. You can also write custom adapters, which opens up the possibility of creating an adapter that integrates with a non-blocking event loop like ReactPHP.

Paginators

Paginators are a new feature in V3 that come as an addition to Iterators from V2. Paginators are similar to Iterators, except that they yield Result objects instead of items within a result. This is nice, because it handles the tokens/markers for you, getting multiple pages of results, but gives you the flexibility to extract whatever data you want.
"n"; }); $s3->putObject([ 'Bucket' => $bucket, 'Key' => 'docs/file.pdf', 'Body' => fopen('/path/to/file.pdf', 'r'), ]); Example output: Uploaded 0 of 5299866 Uploaded 16384 of 5299866 Uploaded 32768 of 5299866 ... Uploaded 5275648 of 5299866 Uploaded 5292032 of 5299866 Uploaded 5299866 of 5299866 New client options For V3, we changed some of the options you provide when instantiating a client, but we added a few new options that may help you work with services more easily. - "debug" – Set to trueto print out debug information as requests are being made. You’ll see how the Command and Request objects are affected during each event, and an adapter-specific wire log of the request. - "retries" – Set the maximum number of retries the client will perform on failed and throttled requests. The default has always been 3, but now it is easy to configure. These options can be set when instantiating client. <?php $s3 = (new AwsSdk)->getS3([ // Exist in Version 2 and 3 'profile' => 'my-credential-profile', 'region' => 'us-east-1', 'version' => 'latest', // New in Version 3 'debug' => true, 'retries' => 5, ]); What has changed? To make all of these improvements for V3, we needed to make some backward-incompatible changes. However, the changes from Version 2 to Version 3 are much fewer than the changes from Version 1 to Version 2. In fact, much of the way you use the SDK will remain the same. For example, the following code for writing an item to an Amazon DynamoDB table looks exactly the same in both V2 and V3 of the SDK. $result = $dynamoDbClient->putItem([ 'TableName' => 'Contacts', 'Item' => [ 'FirstName' => ['S' => 'Jeremy'], 'LastName' => ['S' => 'Lindblom'], 'Birthday' => ['M' => [ 'Month' => ['N' => '11'], 'Date' => ['N' => '24'], ], ], ]); There are two important changes though that you should be aware of upfront: - V3 requires PHP 5.5 or higher and requires the use of Guzzle 5. 
- You must now specify the API version (via the "version" client option) when you instantiate a client. This is important, because it allows you to lock-in to the API versions of the services you are using. This helps us and you maintain backward compatibility between future SDK releases, because you will be in charge of API versions you are using. Your code will never be impacted by new service API versions until you update your version setting. If this is not a concern for you, you can default to the latest API version by setting 'version'to 'latest'(this is essentially the default behavior of V2). What next? We hope you are excited for Version 3 of the SDK! We look forward to your feedback as we continue to work towards a stable release. Please reach out to us in the comments, on GitHub, or via Twitter (@awsforphp). We plan to publish more blog posts in the near future to explain some of the new features in more detail. We have already published the API docs for V3, but we’ll be working on improving all the documentation for V3, including creating detailed migration and user guides. We’ll also be speaking about V3 in our session at AWS re:Invent. We will continue updating and making regular releases for V2 on the "master" branch of the SDK’s GitHub repository. Our work on V3 will happen on a separate "v3" branch until we are ready for a stable release. Version 3 can be installed via Composer using version 3.0.0-beta.1, or you can download the aws.phar or aws.zip on GitHub.. Release: AWS SDK for PHP – Version 2.6.12 We would like to announce the release of version 2.6.12 of the AWS SDK for PHP. This release adds support for new regions to the Kinesis client and new features to the AWS Support and AWS IAM clients. Install the SDK - Install via Composer/Packagist (e.g., "aws/aws-sdk-php": "~2.6.12") - Download the aws.phar - Download the aws.zip Release: AWS SDK for PHP – Version 2.6.11 We would like to announce the release of version 2.6.11 of the AWS SDK for PHP. 
- Added support for Amazon Cognito Identity - Added support for Amazon Cognito Sync - Added support for Amazon CloudWatch Logs - Added support for editing existing health checks and associating health checks with tags to the Amazon Route 53 client - Added the ModifySubnetAttribute operation to the Amazon EC2 client Install the SDK - Install via Composer/Packagist (e.g., "aws/aws-sdk-php": "~2.6.11") - aws.phar - aws.zip Release: AWS SDK for PHP – Version 2.6.10 We would like to announce the release of version 2.6.10 of the AWS SDK for PHP. This release adds support for new regions to the AWS CloudTrail and Amazon Kinesis clients. Install the SDK - Install via Composer/Packagist (e.g., "aws/aws-sdk-php": "~2.6.10") - Download the aws.phar - Download the aws.zip Release: AWS SDK for PHP – Version 2.6.9 We would like to announce the release of version 2.6.9 of the AWS SDK for PHP. This release adds support for uploading document batches and submitting search and suggestion requests to an Amazon CloudSearch domain using the new CloudSearch Domain client. It also adds support for configuring delivery notifications to the Amazon SES client, and updates the Amazon CloudFront client to work with the latest API version. - Added support for the CloudSearchDomain client, which allows you to search and upload documents to your CloudSearch domains. - Added support for delivery notifications to the Amazon SES client. - Updated the CloudFront client to support the 2014-05-31 API. - Merged PR #316 as a better solution for issue #309. Install the SDK - Install via Composer/Packagist (e.g., "aws/aws-sdk-php": "~2.6.9") - Download the aws.phar - Download the aws.zip Guzzle 4 and the AWS SDK Since Guzzle 4 was released in March (and even before then), we’ve received several requests for us to update the AWS SDK for PHP to use Guzzle 4. Earlier this month, we tweeted about it too and received some pretty positive feedback about the idea. 
We wanted to take some time to talk about what upgrading Guzzle would mean for the SDK and solicit your feedback. The SDK relies heavily on Guzzle If you didn’t already know, the AWS SDK for PHP relies quite heavily on version 3 of Guzzle. The AWS service clients extend from the Guzzle service clients, and we have formatted the entire set of AWS APIs into Guzzle "service descriptions". Roughly 80 percent of what the SDK does is done with Guzzle. We say all this because we want you to understand that updating the SDK to use Guzzle 4 is potentially a big change. What does Guzzle 4 offer? We’ve had several requests for Guzzle 4 support, and we agree that it would be great. But what exactly does Guzzle 4 offer — besides it being the new "hotness" — that makes it worth the effort? We could mention a few things about the code itself: it’s cleaner, it’s better designed, and it has simpler and smaller interfaces. While those are certainly good things, they’re not strong enough reasons to change the SDK. However, Guzzle 4 also includes some notable improvements and new features, including: - It’s up to 30 percent faster and consumes less memory than Guzzle 3 when sending requests serially. - It no longer requires cURL, but still uses cURL by default, if available. - It supports swappable HTTP adapters, which enables you to provide custom adapters. For example, this opens up the possibility for a non-blocking, asynchronous adapter using ReactPHP. - It has improved cURL support, including faster and easier handling of parallel requests using a rolling queue approach instead of batching. These updates would provide great benefits to SDK users, and would allow even more flexible and efficient communications with AWS services. Guzzle 4 has already been adopted by Drupal, Laravel, Goutte, and other projects. I expect it to be adopted by even more during the rest of this year, and as some of the supplementary Guzzle packages reach stable releases. 
We definitely want users of the AWS SDK for PHP to be able to use the SDK alongside these other packages without causing conflicts or bloat. Consequences of updating to Guzzle 4 Because the AWS SDK relies so heavily on Guzzle, the changes to Guzzle will require changes to the SDK. In Guzzle 4, many things have changed. Classes have been renamed or removed, including classes that are used by the current SDK and SDK users. A few notable examples include the removal of the GuzzleBatch and GuzzleIterator namespaces, and how GuzzleHttpEntityBody has been changed and moved to GuzzleHttpStreamStream. The event system of Guzzle 4 has also changed significantly. Guzzle has moved away from the Symfony Event Dispatcher, and is now using its own event system, which is pretty nice. This affects any event listeners and subscribers you may have written for Guzzle 3 or the SDK, because they will need a little tweaking to work in Guzzle 4. Another big change in Guzzle 4 is that it requires PHP 5.4 (or higher). Using Guzzle 4 would mean that the SDK would also require PHP 5.4+. Most of the changes in Guzzle 4 wouldn’t directly affect SDK users, but there are a few, like the ones just mentioned, that might. Because of this, if the SDK adopted Guzzle 4, it would require a new major version of the SDK: a Version 3. What are your thoughts? We think that updating the SDK to use Guzzle 4 is the best thing for the SDK and SDK users. Now that you know the benefits and the consequences, we want to hear from you. Do you have any questions or concerns? What other feedback or ideas do you have? Please join our discussion on GitHub or leave a comment below. most about this conference was how excited everyone was to be there. The Laravel community is very energetic, and they are growing. I definitely felt that energy, and I believe it helped make the event a good experience for all of the attendees. I was honored to be able to speak to the attendees about Amazon Web Services. 
My talk was titled AWS for Artisans, and I focused on "The Cloud", AWS in general, and the AWS SDK for PHP. To tie everything together, I walked through the creation of a simple, but scalable, Laravel application, where pictures of funny faces are uploaded and displayed. I showed how the SDK was used and how AWS Elastic Beanstalk and other AWS services fit into the architecture. Here are some of my favorite moments/comments from the presentation: - - - And here are the resources from the presentation: There were already many existing AWS customers at Laracon, and it was nice to be able to talk to them, answer their questions, and hear their feedback and ideas. I also enjoyed talking to the developers that had yet to try AWS. Use the AWS credits I gave you to do something awesome! :-) Thank you to everyone that I had conversations with.
27 June 2008 16:21 [Source: ICIS news]

By Aaron Rodrigues

MUMBAI (ICIS news)--India's Rashtriya Chemicals and Fertilizers (RCF) along with the Department of Fertilizers (DoF) plan to revive mothballed plants due to rising demand and shortages of fertilizers, a source from the company said on Friday.

"We have been asked to look into and review two plants in the country," a senior official from RCF told ICIS news.

Urea production in India was about 20m tonnes in 2007, with imports of around 5m-6m tonnes making up for the shortfall in supply.

The state-owned firm plans to revive Hindustan Fertilizer Corporation's plants.

"We are in the first stage of our feasibility study, which will be ready in one month's time," the source said. Each of the plants is expected to produce 100,000 tonnes/year of urea, he added.

The revival of each of the plants would cost roughly Indian rupees (Rs) 45bn ($1.1bn), the source said. The cabinet's approval committee would take its decision after six months, after gaining support from both houses of the government, he said.

"As soon as we get the government approval, which is expected to be completed in 2009, we would begin construction and be completed by the beginning or in the middle of 2012," he added.

The source said that RCF and DoF have been assured by the Petroleum Ministry of natural gas supplies within three-to-four years for manufacturing urea.

He added that there could be a possible joint venture between RCF and DoF, where RCF would source the technology and DoF would provide funds for the revival of the facilities.

($1 = Rs42
I am very new to Python... If I were to give a list, my function should return the number of times 5 appears, times 50. For example, if I were to call fivePoints([1,3,5,5]) it should return 100, since the number 5 appears twice (2*50). Is creating an empty list necessary? Do I use the count function? This is what I have but I'm probably way off.

def fivePoints(aList):
    for i in aList:
        i.count(5*50)
    return aList

This is one option:

x = [1, 2, 5, 5]

def fivePoints(aList):
    y = [i for i in aList if i == 5]
    z = len(y) * 50
    return z

fivePoints(x)  # returns 100
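Since the question already mentions the count function, the temporary list isn't necessary: list.count does the whole job, so the function collapses to one line. A sketch equivalent to the answer above:

```python
def fivePoints(aList):
    # aList.count(5) returns how many elements equal 5;
    # each occurrence is worth 50 points.
    return aList.count(5) * 50
```

For example, `fivePoints([1, 3, 5, 5])` returns 100, and an empty list returns 0.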
A Java collection of value pairs (tuples)

What I'm looking for is a type of collection where each element in the collection is a pair of values. Each value in the pair can have its own type (like the String and Integer example above), which is defined at declaration time. The collection will maintain its given order and will not treat one of the values as a unique key (as in a map). Essentially I want to be able to define an ARRAY of type <String,Integer> or any other 2 types. I realize that I can make a class with nothing but the 2 variables in it, but that seems overly verbose. I also realize that I could use a 2D array, but because of the different types I need to use, I'd have to make them arrays of OBJECT, and then I'd have to cast all the time. I only need to store pairs in the collection, so I only need two values per entry. Does something like this exist without going the class route?

The Pair class is one of those "gimme" generics examples that is easy enough to write on your own.

public class Pair<L,R> {
    private final L left;
    private final R right;

    public Pair(L left, R right) {
        this.left = left;
        this.right = right;
    }

    public L getLeft() { return left; }
    public R getRight() { return right; }

    @Override
    public int hashCode() {
        return left.hashCode() ^ right.hashCode();
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Pair)) return false;
        Pair pairo = (Pair) o;
        return this.left.equals(pairo.getLeft()) &&
               this.right.equals(pairo.getRight());
    }
}
of ...READ MORE Assuming TreeMap is not good for you ...READ MORE this problem is solved using streams and ...READ MORE You can refer the following code: public class ...READ MORE for (Map.Entry<String, String> item : params.entrySet()) { ...READ MORE In Java 8 you can do it ...READ MORE List<String> list = new ArrayList<String>(); String[] array = ...READ MORE int[] a = {1,2,3,4,5}; int[] b = Arrays.copyOf(a, ...READ MORE This is the recursive way of finding a ...READ MORE OR
https://www.edureka.co/community/8375/a-java-collection-of-value-pairs-tuples?show=8378
Coffeehouse Thread (11 posts)

Forum Read Only: This forum has been made read only by the site admins. No new threads or comments can be added.

Restricting access to a file using web.config in FlexWiki

Back to Forum: Coffeehouse

Conversation locked: This conversation has been locked by the site admins. No new comments can be made.

I tried adding this to the web.config file in FlexWiki:

    <location path="WikiBases/MyWiki/WikiPage.wiki">
        <system.web>
            <authorization>
                <deny roles="AGroup" />
            </authorization>
        </system.web>
    </location>

I want to restrict AGroup from viewing the page with WikiPage as its title, but failed. Anyone have any idea on how to restrict specific FlexWiki pages for any user or group? Thanks!!

Question: where is FlexWiki from? I know you have a ton of questions about it... is there a reason you keep posting them here?

Because I saw some old threads discussing FlexWiki here, I thought I could get some answers here. FlexWiki is an open source wiki; it's something like Wikipedia. Can go here to check it out.

I don't use IIS, I've never written a web.config file in my life and I haven't tested the following, but here is my guess.

Nope, doesn't work, but thanks anyway!!!

You are putting it in the root of the server, correct? Not in some sub-directory?

Yep, I put it in the root directory!

I think the location tag doesn't support non-aspx files.

I don't think the ASP.NET authorization system ever sees the .wiki part. A typical FlexWiki URL would be, for example, ... The way I think it works is that IIS's URL parser follows the whole path until it reaches the 'default.aspx' in the middle. It sees that default.aspx is a file, not a directory, so it looks up the aspx extension in the metabase and sees that it's associated with ASP.NET. ASP.NET then determines if the user has access to default.aspx. If so, it runs the page.

The remainder of the path, in this case /Channel9.ProductFeedback, is available to FlexWiki's default.aspx in the HttpRequest object's PathInfo property. FlexWiki uses that, and the NamespaceMap.xml file, to determine which file contains the raw wiki data. Here it would be ProductFeedback.wiki in the WikiBase folder for the Channel9 namespace; archives are in ProductFeedback(timestamp-user).awiki files. Basically, you need some form of authorization mechanism within FlexWiki itself, and at the moment, I don't think there is one. Feel free to suggest it on the SourceForge tracker.

Hey, thanks. Actually I tried to map non-ASP.NET files through the aspnet_isapi process so that they can be authenticated; this is done using IIS (see the link below for more info). I tried it on many files like .jpg and .html, but this doesn't seem to work on .wiki files... sigh... There's no authenticating mechanism for FlexWiki? This is going to be tough... heh

This article may provide some help/insight.

Hey thanks... I think this will help a lot. Trying to read it now.
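The path-walking behaviour described in the thread (everything after default.aspx becoming PathInfo) can be sketched in a few lines. This is a toy illustration of the idea, not IIS's actual logic:

```python
def split_path_info(path, script_name="default.aspx"):
    """Split a URL path at the script file, the way the thread
    describes IIS doing it: the script gets run, and the remainder
    is exposed as PathInfo. A simplified model, not real IIS code."""
    parts = path.split("/")
    idx = parts.index(script_name)
    script = "/".join(parts[:idx + 1])
    path_info = "/" + "/".join(parts[idx + 1:]) if idx + 1 < len(parts) else ""
    return script, path_info

print(split_path_info("/wiki/default.aspx/Channel9.ProductFeedback"))
# ('/wiki/default.aspx', '/Channel9.ProductFeedback')
```

This also makes clear why the `<location path="...WikiPage.wiki">` rule never fires: from ASP.NET's point of view, the requested resource is default.aspx, and the .wiki part is just trailing path data.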
https://channel9.msdn.com/Forums/Coffeehouse/159170-Restricting-access-to-a-file-using-webconfig-in-FlexWiki
D. CGI Lite

Contents:
- Multipart Forms

CGI Lite is a Perl 5 library that will decode both URL-encoded and multipart form data produced by the file upload feature present in Netscape 2.0. This module does not have all of the features of the CGI::* modules, but is lightweight and slightly easier to use. Here is a simple example that outputs all the form data:

    #!/usr/local/bin/perl5
    use CGI_Lite;

    $cgi = new CGI_Lite ();
    $cgi->parse_form_data ();

    print "Content-type: text/plain", "\n\n";
    $cgi->print_form_data ();

    exit (0);

The parse_form_data method parses the form data and stores it in an internal associative array, which can be printed out by calling the print_form_data method. Or, you can place the form data in a variable of your choice:

    #!/usr/local/bin/perl5
    use CGI_Lite;

    $cgi = new CGI_Lite ();
    %data = $cgi->parse_form_data ();

    print "Content-type: text/plain", "\n\n";
    foreach $key (keys %data) {
        print $key, " = ", $data{$key}, "\n";
    }

    exit (0);

D.1 Multipart Forms

The file upload feature of Netscape 2.0 allows you to do just that: send files as part of a form through the network. Here is how to create a multipart form:

    <HTML>
    <HEAD><TITLE>CGI Lite Test</TITLE></HEAD>
    <BODY>
    <H1>CGI Lite Test</H1>
    <HR>
    <FORM ACTION="/cgi-bin/upload.pl" ENCTYPE="multipart/form-data" METHOD="POST">
    What is your name? <INPUT TYPE="text" NAME="username">
    <P>
    Select a <B>TEXT</B> file to send: <INPUT TYPE="file" NAME="input_file">
    <P>
    <INPUT TYPE="submit" VALUE="Send the Multipart Form">
    <INPUT TYPE="reset" VALUE="Clear the Information">
    </FORM>
    <HR>
    </BODY>
    </HTML>

There are two things that are very different from what we have seen before. The first is the ENCTYPE attribute in the FORM tag. If we want the form data to be URL-encoded, then we don't have to specify ENCTYPE, in which case it defaults to application/x-www-form-urlencoded. The other is the TYPE attribute in the INPUT tag.

By specifying a TYPE of "file", Netscape will display a "Browse" button which allows you to select a file from your disk or network. Figure D.1 shows how the form will be rendered by Netscape. The following program decodes the form information and sends the user-uploaded file back to the browser for display. (That's the reason why we asked the user to send text files.)

    #!/usr/local/bin/perl5
    use CGI_Lite;

    $cgi = new CGI_Lite ();
    print "Content-type: text/plain", "\n\n";

    $cgi->set_directory ("/usr/shishir")
        || die "Directory doesn't exist.\n";

The set_directory method allows you to store the uploaded files in a specific directory. If this method is not called, CGI_Lite defaults to /tmp.

    $cgi->set_platform ("UNIX");

Since this is a text file, we can use the set_platform method to add or remove the appropriate end-of-line (EOL) characters. The EOL character is a linefeed ("\n") on UNIX, a carriage return ("\r") on the Macintosh, and a combination of carriage return and linefeed ("\r\n") on the Windows/DOS platform.

    $cgi->set_file_type ("handle");
    %data = $cgi->parse_form_data ();

The set_file_type method with an argument of "handle" returns the filehandle(s) for uploaded files that are stored in the directory specified by the set_directory method.

    $user     = $data{'username'};
    $filename = $data{'input_file'};

    print "Welcome $user, let's see what file you uploaded...", "\n";
    print "=" x 80, "\n";

Here we simply retrieve the form fields and display a welcome message. (Note that the hash key must match the form field's NAME attribute, input_file.) Remember, the variable $filename points to a filehandle.

    if (-T $filename) {
        while (<$filename>) {
            print;
        }
        close ($filename);
    } else {
        print "Sorry! you did not upload a text file.", "\n";
    }

    exit (0);

If the uploaded file is a text file, we proceed to output it. If not, an error message is output.

Back to: CGI Programming on the World Wide Web

© 2001, O'Reilly & Associates, Inc.
http://oreilly.com/openbook/cgi/appd_01.html
The nice program for the beginners
Thanks for the example program, which is so nice for beginners.

Hi, if more additional examples are provided it would be so useful. Can I have more additional examples?

Example should be explanatory
Hello, I think that the example for JUnit is not sufficient, because when I tried the same example I had to modify it before I could execute it. My suggestion is that the example should be sufficient enough so that it is very easy to run.

More examples needed
Hi, thanks for this tutorial. Can you post more samples for a better understanding?

junit
Hi, the above example is working well, but it's not sufficient to understand JUnit or to debug it. Please try to add a few more examples, so that it could help others to use it or to build their code to perfection.

Junit
Does JUnit support Struts 2.0?

Test Suite
Could you show how to run a TestSuite?

need more examples on junit
Sir, the above example is working well, but it's not sufficient to understand JUnit.

Plzzz Help
I've done exactly what you've said and I have one problem: should the extension be .java or .class? Because when it is .java it says to me the class was not found: C:\Documents and Settings\Okolo> java junit.textui.TestRunner CalculatorTest

appreciation
Nice material.

great
Whatever is given here is a great help for a beginner like me. I thank you for that.

junit test case - Java Beginners
How do I use JUnit for testing email id format? Hi Friend, please visit the following link. Hope that it will be helpful for you. Thanks.

testcases - JUNIT
Hi Deepak, can you please send 2 test cases for the following program? I'm unable to write them; please, I need it urgently.

connectivity - JUNIT
Program to create GUI for ATM Simulation.

JUnit was originally written by Erich Gamma and Kent Beck.
http://www.roseindia.net/tutorialhelp/allcomments/4747
Building a web application with Google App Engine is quick and easy, and you have the power of the Google distributed content delivery network and the 'BigTable' database at your disposal. So what's it good for?

When I was 14 my Dad bought a Commodore 64. I pored over the manuals, taught myself C64 BASIC, bought C64 programming magazines and created a stack of audio cassettes full of unfinished projects. It was awesome fun being able to create sounds and make things happen on screen. Then about 2 years later I discovered girls and didn't become interested in technology again for some years. I often wonder if I could have reached Bill Gates-like heights if only I had continued to apply myself to technology instead of fluorescent colored shirts, skinny ties, and Blue-Light discos.

Dave Winer said that because of Google App Engine, "Python is the new BASIC". And my first experience with App Engine did indeed generate that same sense of fun that C64 BASIC did way back in my embarrassing past. With zero Python experience and just the App Engine guestbook tutorial under my belt, I managed to build a functional (albeit utterly pointless), slightly amusing web service called "Now Roll…". Now Roll… simulates Dungeons and Dragons-style dice rolling. It was straightforward and simple to create — I'm talking push-button simple. Google App Engine is my new Commodore 64.

So what exactly is Google App Engine? Well, it includes most of the pieces you usually need to assemble to create a web application. It has a simple framework component called 'webapp', a database referred to as the 'datastore', and a deployment system. Once complete, you can deploy your web application to run on Google's infrastructure. It also has a fully functional local development environment that is a duplicate of the live environment.

Once you download and install the SDK you can start practicing with the local development environment straight away, but to deploy an application you have to register your app first with your Google account, which includes account confirmation via SMS. The free accounts are also limited to 3 applications — I assume to avoid 5 million variations of "Hello, world!" being deployed. But this limitation might only be during the preview-release period.

I have two big tips for beginners: complete the Getting Started tutorial and use the Launcher application (which is bundled with the SDK) to build your first project. When you add a new web application using the Launcher app, it generates an application shell for you, all the files needed for a basic web app, that you can run straight away. It only outputs "Hello world!", but it works. Once you have the shell, you can start playing!

A basic GAE application has the following components:

- An application configuration file called app.yaml, in which you specify routes (or URL patterns) and the Python script file that will be executed if the requested URL matches the pattern.
- One or more script files (as specified in the app.yaml file) that each specify a list of URL patterns and the Python classes responsible for handling the URLs.
- One or more request handling classes that define a get and/or a post method in order to be able to handle HTTP GET or POST requests to the specified URL patterns. They read the requests and compose the responses, for which there's a templating engine you can use.
- Optionally, one or more model classes that define the data structure of the entities you want to store in the datastore.

The app.yaml file allows you to specify routes (or URL patterns) called 'handlers': you match a URL pattern to a Python script file that will be executed if the requested URL matches the pattern. The default app contains a single handler:

    handlers:
    - url: .*
      script: main.py

This default configuration specifies that GAE will execute the main.py file for all request URLs. If, like me, you just want to dive in, just use this default setup; it's fine for a simple app. You can also specify static routes to things like CSS files, which is explained in the Getting Started tutorial.

The main.py script file creates an instance of the webapp.WSGIApplication class in a function called main and then calls this function. This is all boilerplate code that is automatically generated for you. Here's what it looks like:

    def main():
        application = webapp.WSGIApplication([('/', MainHandler)],
                                             debug=True)
        wsgiref.handlers.CGIHandler().run(application)

    if __name__ == '__main__':
        main()

The creation of the application object includes two parameters: a list of request handlers and the debug flag. The debug flag, if True, will cause GAE to output stack traces to the browser for runtime errors. The request handlers are each a Python tuple that matches a URL pattern (a regular expression) with a Python class. The default setup has the root URL '/' matched to the MainHandler class. You can have multiple handlers specified in this way. For my Now Roll… app I needed three handlers:

    application = webapp.WSGIApplication([('/', MainHandler),
                                          ('/rolls/(.*)', ArchivedRoll),
                                          ('/(.*)', RollHandler)],
                                         debug=True)

One handler for the home page ('/', MainHandler), one for the permalinks to archived dice rolls ('/rolls/(.*)', ArchivedRoll), and one for the dice rolling function ('/(.*)', RollHandler). You have to list them from most specific to least specific, as the first matching pattern is the one that is executed.

The specified Python classes must be subclasses of the webapp.RequestHandler class and need to define a get and/or a post method in order to be able to handle HTTP GET or POST requests to the specified URL patterns. The default MainHandler class looks like this:

    class MainHandler(webapp.RequestHandler):
        def get(self):
            self.response.out.write('Hello world!')

I have also included groups in the regular expressions, for example '/rolls/(.*)'. Handily, these matching fragments are passed as arguments to the get and post methods of the handling classes. My ArchivedRoll class uses this feature to get easy access to the key value of the archived roll:

    class ArchivedRoll(webapp.RequestHandler):
        def get(self, key):
            # retrieve roll from datastore using the key value...
            # return the roll data to the browser using a template...

To use the GAE datastore you'll need one or more model classes. There are no tables in the GAE datastore; your Python model classes define the data structure, and the datastore automatically handles storage, retrieval and index generation. Items in the datastore are known as 'entities'. You need to import the Google db module:

    from google.appengine.ext import db

and your model classes must be subclasses of db.Model. In my Now Roll… application I want to store every dice roll, and to do so I use this DiceRoll class:

    class DiceRoll(db.Model):
        roll = db.StringProperty()
        results = db.ListProperty(int)
        outcome = db.IntegerProperty()
        date = db.DateTimeProperty(auto_now_add=True)

A model class is very simple; it provides a type name (a 'kind' in GAE parlance), DiceRoll, and a list of named properties to store for each entity. To store an entity you create a new instance of your model class, set the property values and then call the object's put method:

    dr = DiceRoll()
    dr.roll = '2d6'
    dr.results = [1,2]
    dr.outcome = 3
    dr.put()

The entity is now in the database. All entities automatically receive a unique identifier that you can retrieve using the object's key() method like so:

    key = dr.key()

You can retrieve a specific entity from the datastore using its key like so:

    roll = db.get(key)

You can run datastore queries using an API or GQL, Google's SQL-like query language. If I wanted to add a feature to my Now Roll… app that listed the last 10 dice rolls in reverse date order, the query would look something like this using the API:

    rolls_query = DiceRoll.all().order('-date')
    rolls = rolls_query.fetch(10)

The equivalent query using GQL is:

    rolls = db.GqlQuery("SELECT * FROM DiceRoll "
                        "ORDER BY date DESC LIMIT 10")

This is all very simple and quite straightforward. For more complex relationships you can define an entity to be the parent of one or more other entities and then use that relationship in queries.

Once the request has been handled by your class method it can then output a response. The self.response.out.write method simply outputs the string argument it is supplied. The content type of the response is text/html unless you specify a different value using:

    self.response.headers["Content-Type"] = 'type'

before any output.

The alternative (and preferred) method of output is to use templates. The template module bundled with the GAE SDK uses Django's templating engine. To use it you need to import the template module, from google.appengine.ext.webapp import template, and the os module from the Python standard library, import os. Here's an excerpt from the template file that displays an archived dice roll:

    <h1 id="page_title">Now Roll {{ roll.roll }}</h1>
    <div id="roll_info">
        <div id="outcome">{{ roll.roll }} = <span id="result">{{ roll.outcome }}</span>
            <p id="permalink">(<a href="/rolls/{{ id }}">Result permalink</a>)</p>
        </div>
    </div>

The {{...}} tokens are placeholders for data that can be inserted by your request handler class' response like so:

    roll = db.get(key)
    template_values = {
        'roll': roll,
        'id': roll.key(),
    }
    path = os.path.join(os.path.dirname(__file__), 'archive.html')
    self.response.out.write(template.render(path, template_values))

The roll value in the template_values array above is an instance of the DiceRoll model class, and all of that class' properties are available in the template. So {{ roll.outcome }} in the template will be replaced by the outcome property value of the roll object.

The job of creating the Now Roll… API was made very easy through the use of templates. The only difference between the webpage version and the XML version of a dice roll result is the template file used to render the response and the response Content-Type value.

Deployment to the Google servers is literally a single click on the button labeled 'Deploy' if you use the Google App Engine Launcher client. There is zero setup required for this; once deployed, your app is live and distributed.

That's Google App Engine in a nutshell. There's a handful of other App Engine modules you can make use of, like image manipulation, caching and the Users API for integration with Google Accounts.

Even though GAE has a collection of useful libraries, it's noticeably missing a testing framework. Frameworks like Rails have done a lot to popularize the use of automated unit testing, so its absence may turn some developers away. It'd be a nice feature to have one-click unit testing to match the ease of the one-click deployment feature.

Kevin has mentioned previously that although it's friendly to beginners, applications need to be written specifically for GAE, which is likely to deter some developers from making use of the platform. Although I imagine it's possible to write a separate module to interface your existing application with the Google framework, I wonder, if you already have a commercial application and a team of developers, whether it's worth your time to port it to GAE.

The datastore may also trip up a few developers as they grapple with its non-relational nature. For example, the problem of counting records in the datastore has already come up in the GAE forum: you cannot simply do a COUNT(). But I expect that as more developers sign up for the platform, more usage patterns will be developed and tested, and best-practice usage patterns will eventually rise to the top.

One thing I predict GAE will be an excellent platform for is single-purpose web services. I'm not talking something frivolous like dice rolling, but something useful like a data collection service or an image manipulation service. A small generic component that can be used in a larger application but separated out and hosted on GAE to save application resources. A great example is this OpenID Provider app enabling you to sign into any site supporting OpenID using your Google Account credentials.

One aspect of GAE that I haven't seen mentioned anywhere else yet is that it integrates with Google Apps for Your Domain. When you create a GAE application you can limit access to your registered domain users: hello company intranet! With GAE, GAfYD becomes a complete software solution for organizations, especially distributed organizations. I can see huge cost savings for a company that can avoid having to purchase servers and bandwidth, and the ongoing administration costs of a self-hosted intranet.

For me though, apart from the mundane commercial aspects, Google App Engine represents something like the Commodore 64. You know, a personal computer that doesn't do much on its own but is flexible, programmable, inspiring, and fun. You use it to tinker with stuff, finding out how stuff works, trying out ideas for new gizmos, and building custom tools. It embodies the spirit of 80's era personal computing (but without the bad fashion sense).

June 12th, 2008 at 3:44 am
Although there's no testing framework, a presentation at Google I/O outlined a pretty simple set of practices. I haven't found the presentation on YouTube, but this guy blogged it. The basic idea is that you can upload a new version that is accessible by a slight change in the URL, then make it live when you're done testing.

June 13th, 2008 at 10:54 am
Thanks Mr Anon, that's a really useful link.

August 2nd, 2008 at 9:06 am
There is a testing framework..
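The article stores dice specs like '2d6' in the DiceRoll entity but never shows the rolling logic itself. As an editorial aside, here is a self-contained sketch of what that logic might look like; the function name and the 'NdS' parsing rule are my assumptions, not code from Now Roll…:

```python
import random
import re

def roll_dice(spec):
    """Roll a D&D-style spec such as '2d6' (two six-sided dice).
    Returns (results, outcome), mirroring the DiceRoll fields
    in the article. Assumed helper, not the app's actual code."""
    match = re.fullmatch(r"(\d+)d(\d+)", spec)
    if match is None:
        raise ValueError("bad dice spec: %r" % spec)
    count, sides = int(match.group(1)), int(match.group(2))
    results = [random.randint(1, sides) for _ in range(count)]
    return results, sum(results)

results, outcome = roll_dice("2d6")
print(results, outcome)  # output varies per run, e.g. [3, 5] 8
```

A handler like RollHandler would only need to call this and hand the results to the datastore and template code shown in the article.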
http://www.sitepoint.com/blogs/2008/06/11/rollin-with-google-appengine-80s-style/
The following is a step-by-step guide for beginners interested in learning Python using Windows 10.

Set up your development environment

For beginners who are new to Python, we recommend you install Python from the Microsoft Store. Installing via the Microsoft Store uses the basic Python3 interpreter, but handles set up of your PATH settings for the current user (avoiding the need for admin access), in addition to providing automatic updates. This is especially helpful if you are in an educational environment or a part of an organization that restricts permissions or administrative access on your machine.

If you are using Python on Windows for web development, we recommend a different set up for your development environment. Rather than installing directly on Windows, we recommend installing and using Python via the Windows Subsystem for Linux. For help, see: Get started using Python for web development on Windows. If you're interested in automating common tasks on your operating system, see our guide: Get started using Python on Windows for scripting and automation. For some advanced scenarios (like needing to access/modify Python's installed files, make copies of binaries, or use Python DLLs directly), you may want to consider downloading a specific Python release directly from python.org or consider installing an alternative, such as Anaconda, Jython, PyPy, WinPython, IronPython, etc. We only recommend this if you are a more advanced Python programmer with a specific reason for choosing an alternative implementation.

Install Python

To install Python using the Microsoft Store:

1. Go to your Start menu (lower left Windows icon), type "Microsoft Store", select the link to open the store.
2. Once the store is open, select Search from the upper-right menu and enter "Python".
3. Open "Python 3.9" from the results under Apps.
4. Select Get.
5. Once Python has completed the downloading and installation process, open Windows PowerShell using the Start menu (lower left Windows icon).
6. Once PowerShell is open, enter python --version to confirm that Python 3 has installed on your machine.

Install Visual Studio Code

By using VS Code as your text editor / integrated development environment (IDE), you can take advantage of IntelliSense (a code completion aid), Linting (helps avoid making errors in your code), Debug support (helps you find errors in your code after you run it), Code snippets (templates for small reusable code blocks), and Unit testing (testing your code's interface with different types of input). VS Code also contains a built-in terminal that enables you to open a Python command line with Windows Command prompt, PowerShell, or whatever you prefer, establishing a seamless workflow between your code editor and command line.

1. To install VS Code, download VS Code for Windows.
2. Once VS Code has been installed, you must also install the Python extension. To install the Python extension, you can select the VS Code Marketplace link or open VS Code and search for Python in the extensions menu (Ctrl+Shift+X).
3. Python is an interpreted language, and in order to run Python code, you must tell VS Code which interpreter to use. We recommend sticking with Python 3.7 unless you have a specific reason for choosing something different. Once you've installed the Python extension, select a Python 3 interpreter by opening the Command Palette (Ctrl+Shift+P), start typing the command Python: Select Interpreter to search, then select the command. You can also use the Select Python Environment option on the bottom Status Bar if available (it may already show a selected interpreter). The command presents a list of available interpreters that VS Code can find automatically, including virtual environments. If you don't see the desired interpreter, see Configuring Python environments.
4. To open the terminal in VS Code, select View > Terminal, or alternatively use the shortcut Ctrl+` (using the backtick character). The default terminal is PowerShell.
5. Inside your VS Code terminal, open Python by simply entering the command: python
6. Try the Python interpreter out by entering: print("Hello World"). Python will return your statement "Hello World".

Install Git (optional)

If you plan to collaborate with others on your Python code, or host your project on an open-source site (like GitHub), VS Code supports version control with Git. The Source Control tab in VS Code tracks all of your changes and has common Git commands (add, commit, push, pull) built right into the UI. You first need to install Git to power the Source Control panel.

1. Download and install Git for Windows from the git-scm website.
2. An Install Wizard is included that will ask you a series of questions about settings for your Git installation. We recommend using all of the default settings, unless you have a specific reason for changing something.
3. If you've never worked with Git before, GitHub Guides can help you get started.

Hello World tutorial for some Python basics

Python, according to its creator Guido van Rossum, is a "high-level programming language, and its core design philosophy is all about code readability and a syntax which allows programmers to express concepts in a few lines of code." Python is an interpreted language. In contrast to compiled languages, in which the code you write needs to be translated into machine code in order to be run by your computer's processor, Python code is passed straight to an interpreter and run directly. You just type in your code and run it. Let's try it!

1. With your PowerShell command line open, enter python to run the Python 3 interpreter. (Some instructions prefer to use the command py or python3; these should also work.) You will know that you're successful because a >>> prompt with three greater-than symbols will display.
2. There are several built-in methods that allow you to make modifications to strings in Python. Create a variable, with: variable = 'Hello World!'. Press Enter for a new line.
3. Print your variable with: print(variable). This will display the text "Hello World!".
4. Find out the length, how many characters are used, of your string variable with: len(variable). This will display that there are 12 characters used. (Note that the blank space is counted as a character in the total length.)
5. Convert your string variable to upper-case letters: variable.upper(). Now convert your string variable to lower-case letters: variable.lower().
6. Count how many times the letter "l" is used in your string variable: variable.count("l").
7. Search for a specific character in your string variable, let's find the exclamation point, with: variable.find("!"). This will display that the exclamation point is found in the 11th position character of the string.
8. Replace the exclamation point with a question mark: variable.replace("!", "?").
9. To exit Python, you can enter exit(), quit(), or select Ctrl-Z.

Hope you had fun using some of Python's built-in string modification methods. Now try creating a Python program file and running it with VS Code.

Hello World tutorial for using Python with VS Code

1. Open PowerShell and create an empty folder called "hello", navigate into this folder, and open it in VS Code:

        mkdir hello
        cd hello
        code .

2. Once VS Code opens, displaying your new hello folder in the left-side Explorer window, open a command line window in the bottom panel of VS Code by pressing Ctrl+` (using the backtick character) or selecting View > Terminal. By starting VS Code in a folder, that folder becomes your "workspace". VS Code stores settings that are specific to that workspace in .vscode/settings.json, which are separate from user settings that are stored globally.
3. Continue the tutorial in the VS Code docs: Create a Python Hello World source code file.

Create a simple game with Pygame

Pygame is a popular Python package for writing games - encouraging students to learn programming while creating something fun. Pygame displays graphics in a new window, and so it will not work under the command-line-only approach of WSL. However, if you installed Python via the Microsoft Store as detailed in this tutorial, it will work fine.

1. Once you have Python installed, install pygame from the command line (or the terminal from within VS Code) by typing python -m pip install -U pygame --user.
2. Test the installation by running a sample game: python -m pygame.examples.aliens
3. All being well, the game will open a window. Close the window when you are done playing.

Here's how to start writing your own game.

1. Open PowerShell (or Windows Command Prompt) and create an empty folder called "bounce". Navigate to this folder and create a file named "bounce.py". Open the folder in VS Code:

        mkdir bounce
        cd bounce
        new-item bounce.py
        code .

2. Using VS Code, enter the following Python code (or copy and paste it):

        import sys, pygame
        pygame.init()

        size = width, height = 640, 480
        dx = 1
        dy = 1
        x = 163
        y = 120
        black = (0, 0, 0)
        white = (255, 255, 255)

        screen = pygame.display.set_mode(size)

        while 1:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    sys.exit()

            x += dx
            y += dy

            if x < 0 or x > width:
                dx = -dx

            if y < 0 or y > height:
                dy = -dy

            screen.fill(black)
            pygame.draw.circle(screen, white, (x, y), 8)
            pygame.display.flip()

3. Save it as: bounce.py.
4. From the PowerShell terminal, run it by entering: python bounce.py.
5. Try adjusting some of the numbers to see what effect they have on your bouncing ball.

Read more about writing games with pygame at pygame.org.

Resources for continued learning

We recommend the following resources to support you in continuing to learn about Python development on Windows.

Online courses for learning Python

- Introduction to Python on Microsoft Learn: Try the interactive Microsoft Learn platform and earn experience points for completing this module covering the basics of how to write basic Python code, declare variables, and work with console input and output. The interactive sandbox environment makes this a great place to start for folks who don't have their Python development environment set up yet.
- Python on Pluralsight: 8 Courses, 29 Hours: The Python learning path on Pluralsight offers online courses covering a variety of topics related to Python, including a tool to measure your skill and find your gaps.
- LearnPython.org Tutorials: Get started on learning Python without needing to install or set anything up with these free interactive Python tutorials from the folks at DataCamp.
- The Python.org Tutorials: Introduces the reader informally to the basic concepts and features of the Python language and system.
- Learning Python on Lynda.com: A basic introduction to Python.

Working with Python in VS Code

- Editing Python in VS Code: Learn more about how to take advantage of VS Code's autocomplete and IntelliSense support for Python, including how to customize their behavior... or just turn them off.
- Linting Python: Linting is the process of running a program that will analyse code for potential errors. Learn about the different forms of linting support VS Code provides for Python and how to set it up.
- Debugging Python: Debugging is the process of identifying and removing errors from a computer program. This article covers how to initialize and configure debugging for Python with VS Code, how to set and validate breakpoints, attach a local script, perform debugging for different app types or on a remote computer, and some basic troubleshooting.
- Unit testing Python: Covers some background explaining what unit testing means, an example walkthrough, enabling a test framework, creating and running your tests, debugging tests, and test configuration settings.
https://docs.microsoft.com/en-us/windows/python/beginners
CC-MAIN-2021-04
refinedweb
1,905
62.68
At 2002-06-14 14:04 +0000, Arjun Ray wrote: > John Cowan <jcowan@[...].com> wrote: > |> What are the requirements for DTDs in DSDL? > | > | The title of Part 9 is all we have to go on, and it talks about > | namespace-aware and datatype-aware DTDs. > > Insanity. That is what we are trying to learn ... it was identified as an area of consideration, so a part was made for it. We are doing the due-diligence to determine if there are use-cases supporting the effort involved. While proposing straw-man solutions is a fascinating exercise, we need to determine from the use-cases if it is even worthwhile to follow through with a Part 9 for DTDs. I'd be interested to hear from others if they think DTD syntax is even needed for the features we've quoted here ... or will people just rely on Part 2 (RELAX-NG) for their grammar needs? Should we even bother with a Part 9? Why? We aren't doing this for an academic exercise ... we want to reach the constituency of document modelers out there, many of whom are still using DTD syntax. ............ Ken -- Upcoming: 3-days XSLT/XPath and/or 2-days XSL-FO: Sep 30-Oct 4, 2002 G. Ken Holman mailto:gkholman@[...] ----------------------------------------------------------------- The xml-dev list is sponsored by XML.org, an initiative of OASIS. To subscribe or unsubscribe from this list use the subscription manager.
http://aspn.activestate.com/ASPN/Mail/Message/1244161
crawl-002
refinedweb
242
74.59
hi guys, i just need clarification on some things, for report writing. i know that "public void" defines a method, so what do the following do/mean?

public class
public Insets
public boolean
final string
final int
int
Point
float
Thread
char
boolean
class classname parent

sorry to sound so stupid, i just dont know what to refer to them as when writing this report. i dont want to keep referring to everything as class or method etc. thanks for any help.

You have public and private. They determine which methods you can use when you create an object of a class. Class object = new Class() object.anyPublicMethod() If it's a private method, object cannot use it. Only the class itself may call private methods. Void means the method returns nothing. In place of void, you can have double, int, char, string, or any object type you've made. It defines the return type of the method. Class defines a new class. class MyClass { } That creates a new class called MyClass from which you can create an object. An object is an instance of a class. public Insets public boolean Those would be the beginning definitions of methods: public methods which will return an object of type Insets, and a boolean object. Boolean objects can only equal one of two things: True and False. I'd recommend you find a beginner's book for Java. Any beginner's book would discuss all this, and probably do a much better job defining it than me.

thanks for the help anyway, do you know of any good books for this?

Try this link. Free online Java Books
http://forums.devx.com/showthread.php?139546-quick-help-needed&p=412639
CC-MAIN-2016-18
refinedweb
293
76.72
Friday afternoon, I finally got tired of a bug in "Open Terminal Here", a Bash script I was using so far. I made a Python replacement... Open Terminal Here ? "Open Terminal Here" is just a simple Bash script meant to be launched from a right-click within a directory opened in Nautilus, the Gnome file manager. As the name states, the gnome-terminal should be launched with its working directory being set to the currently opened directory. Simply handy! Only one problem: setting the working directory was failing whenever the name of the directory contained non-ASCII characters. 🙁 This "Open Terminal Here" is not a standard functionality of Nautilus (as opposed to Dolphin where you can just press Shift+F4 to launch a "konsole"). Rather, you have to download an extension script from G-Scripts, a nice central repository, and drop it in a quite buried directory (~/.gnome2/nautilus-scripts). When it comes to the "Open Terminal Here" function, there are several scripts available. Choosing between these is up to the visitor... "Open Terminal Here" in Bash Here is the central part of the Bash script I was using so far: I had never paid attention to this code until Friday (especially since I have such a poor Bash understanding). I now realize that line 1 does most of the job, which is:

- retrieve the $NAUTILUS_SCRIPT_CURRENT_URI environment variable
- cut the "file://" prefix from the URI
- replace "%20" by real space characters " "

This Bash script works most of the time, except with directory names containing non-ASCII characters. Indeed, in a directory called "études notes" for example, Nautilus sets the URI variable to "". One can see how "é" gets encoded as "%C3%A9" and the Bash script doesn't perform any decoding. Now "Open Terminal Here" with Python In order to support non-ASCII, I tried to build my own script. I chose Python simply because that's the language I'm the most familiar with. The entire source is a GitHub Gist.
Here are the main lines:

import os
from urllib import unquote
from subprocess import Popen

# 1a) Retrieve the URI of the current directory:
env = os.getenv("NAUTILUS_SCRIPT_CURRENT_URI", "/home/pierre")

# 1b) Process the URI to make it just a regular path string
env = env.replace('file://', '')  # Should fail with Python 3 ?!
env = unquote(env)  # decode the URI

# 2) Launch gnome-terminal:
Popen(["gnome-terminal", '--working-directory=%s' % env])

Well, it's basically the same as in Bash, except with a useful call to the urllib.unquote function which performs all the URI-decoding work. All imports are from the Python standard library: great! And by the way, with Perl! While writing this post, I realized I had on my laptop a second "Open Terminal Here" script. I had forgotten about it because it was disabled for I don't know what reason. It's a Perl script, also coming from G-Scripts. Here are the main lines:

use strict;
$_ = $ENV{'NAUTILUS_SCRIPT_CURRENT_URI'};
if ($_ and m#^file://#) {
    s/%([0-9A-Fa-f]{2})/chr(hex($1))/eg;
    s#^file://##;
    exec "gnome-terminal --working-directory='$_'";
}

What can be said except that it also works! The URI decoding is nicely performed by "chr(hex($1))". I actually think the author wrote this script for the very same reason I wrote a Python one: "he loves Perl!" 😉 Conclusion ? I don't know if it's that useful to have three versions of a script to solve the very same tiny problem... Still, one can notice the vivid differences in the programming approach: while my Python script relies mostly on functions coming from standard modules, the Perl script is literally built on Regular Expressions. No surprise! Finally, the Bash script just calls command line utilities (cut and sed), but doesn't decode the URI. Is there a command line program for that? And what about Haskell? I'll keep this for another day 😉 I need to get a Fistful of Monads first. Would be a nice experiment though...
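As an aside beyond the original post: the comment above flags that this "should fail with Python 3", where unquote moved into urllib.parse. A possible Python 3 version of the same idea (a sketch, not from the original post; the helper name uri_to_path is invented here):

```python
import os
from subprocess import Popen
from urllib.parse import unquote, urlparse

def uri_to_path(uri):
    # Strip the file:// scheme and decode percent-escapes,
    # e.g. "%C3%A9" -> "é" and "%20" -> " "
    return unquote(urlparse(uri).path)

def open_terminal_here(default="/home/pierre"):
    # Nautilus exports the current directory as a file:// URI
    uri = os.getenv("NAUTILUS_SCRIPT_CURRENT_URI", "file://" + default)
    Popen(["gnome-terminal", "--working-directory=%s" % uri_to_path(uri)])
```

Here uri_to_path("file:///home/pierre/%C3%A9tudes%20notes") returns "/home/pierre/études notes" — exactly the decoding the Bash version lacked.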
https://pierreh.eu/tag/bash/
CC-MAIN-2021-04
refinedweb
660
63.59
On 9/18/06, Antoine Pitrou <solipsis at pitrou.net> wrote: > > Le lundi 18 septembre 2006 à 09:48 -0600, Adam Olsen a écrit : > > * Bolt-on tracing GC such as Boehm-Demers-Weiser. Totally unsupported > > by the C standards and changes cache characteristics that CPython has > > been designed with for years, likely with a very large performance > > penalty. > Has it been measured what cache effects reference counting entails ? Probably not recently. > With reference counting, each object is mutable from the point of view > of the CPU cache (refcnt is always incremented and later decremented). But each object request is only to one piece of memory, not two (obj and header separate). Just a reminder about Neil Schemenauer's (old) patch to use Boehm-Demers According to PyPy sometimes translates to the use of BDW. I also seem to remember (but can't find a reference) that someone tried using a separate immortal namespace for basic objects like None, but the hassle of deciding what to do on each object ate up the savings. -jJ
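For readers following along, the per-object refcount churn under discussion can be observed from CPython itself via sys.getrefcount (a small illustration, not part of the thread; note that getrefcount's report includes its own temporary reference):

```python
import sys

x = object()                 # a fresh object
baseline = sys.getrefcount(x)

refs = [x, x, x]             # three additional references to the same object
after = sys.getrefcount(x)

# Every new reference writes to the object's refcount field --
# the per-object mutation discussed above in the context of CPU caches.
print(after - baseline)      # 3
```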
https://mail.python.org/pipermail/python-3000/2006-September/003739.html
CC-MAIN-2021-25
refinedweb
174
61.46
#include "ntw.h"

This widget is a combination of a combo_box and an entry widget. An example would be the URL field in a browser bar, which might include a drop down menu of recently visited sites. Get combo_box_entry editability. Get the status of the entry editable flag. Get combo_box_entry maximum length. Get the bounds on the number of characters that may be typed in an entry widget. 0 means no limit. Get the current text of a combo_box_entry widget. Get combo_box_entry visibility. The visibility refers to whether or not the typed characters are echoed to the screen. If FALSE, the characters will be replaced by asterisks, as in a password field. Default is TRUE. Create combo_box_entry widget. Set combo_box_entry editability. Set the ability of a user to change the text in an entry. TRUE is editable, FALSE is non-editable. Default is TRUE. Set combo_box_entry maximum length. Set the bounds on the number of characters that may be typed in an entry widget. 0 means no limit. Set the current text of a combo_box_entry widget. Note that this is destructive, none of the old text will remain. Text string should be in UTF-8 format (ASCII text is valid UTF-8). Set combo_box_entry visibility.
http://ntw.sourceforge.net/Docs/CServer/combo__box__entry_8h.html
CC-MAIN-2018-05
refinedweb
216
71.71
A while ago, a feature was added to the cxf-codegen-plugin to automatically scan the directory and run wsdl2java on all the wsdl's there instead of explicitly having to configure wsdlOptions for every wsdl . This was a feature request and certainly helped us. The pom for our testutils shrunk a LOT as we were able to autoscan all 40+ test wsdls and not configure each of them. It also is designed to more "mimic" the standard Maven plugins that run over everything they find in directories. (like resources, compiler, javadoc, etc....) Convention over configuration. The PROBLEM is that this is causing issues for users. Several times in the last couple months, we've hit issues with JAXB or wsdl2java or similar that are caused by the plugin running wsdl2java on wsdl's that they shouldn't be and overwriting stuff generated from the wsdls that it SHOULD run on. Thus, I think something needs to be done about this, just not sure what. The easiest is to just add an "autoScanDirectory" option to the plugin to turn on/off the behavior. I'm just not sure what the default should be. "true" to match current behavior, or "false" to reduce issues? The other thing I think we should do is if there are wsdlOptions configured, process those WITHOUT configuration first, then those with configuration. Thus, the configured stuff would overwrite any non-configured things. The OTHER way to do it, which would require a LOT more work, would be to process the configured stuff first, but add hooks into the wsdl loading to record any wsdl's that are included from those and then remove those from the auto scan. That's quite a bit more complex, and I don't really think completely solves the problem. Other wsdl's NOT included could also use the same namespaces or similar and overwrite ObjectFactories and such. Thoughts? Other ideas? -- Daniel Kulp dkulp@apache.org
http://mail-archives.apache.org/mod_mbox/cxf-dev/200911.mbox/%3C200911021038.57697.dkulp@apache.org%3E
CC-MAIN-2014-10
refinedweb
325
62.48
7:44 PM, Tobias Olausson <tobsan at gmail.com> wrote: > Hello all. > I am currently implementing an emulation of a CPU, in which the CPU's > RAM is part of the internal state > that is passed around in the program using a state monad. However, the > program performs > unexpectingly bad, and some profiling information makes us believe > that the problem is the high > memory usage of the program. > > The program below is similar to our main program used when testing a > sorting algorithm in this CPU: > > module Main where > > import Control.Monad.State.Lazy > import Data.Word > import Data.Array.Diff > import Control.Concurrent (threadDelay) > > data LoopState = LoopState > { intVal :: Integer > , diff :: DiffUArray Word8 Word8 > } > > initState :: LoopState > initState = LoopState 0 (array (0x00,0xFF) [(idx,0)|idx<-[0x00..0xFF]]) > > main :: IO () > main = do > execStateT looper initState >>= putStrLn . show . intVal > > looper :: StateT LoopState IO () > looper = do > st <- get > let res = intVal st + 1 > idx = fromIntegral res > put $ st { intVal = res, diff = (diff st) // [(idx,idx)] } > if res == 13000000 > then return () > else looper > > Of course our program does more than updating a counter ;-) > Compiling and running this program yields the following result: > > [~]:[olaussot] >> ghc --make -O2 -o array ArrayPlay.hs > [~]:[olaussot] >> ./array +RTS -sstderr > ./array +RTS -sstderr > 13000000 > 313,219,740 bytes allocated in the heap > 1,009,986,984 bytes copied during GC > 200,014,828 bytes maximum residency (8 sample(s)) > 4,946,648 bytes maximum slop > 393 MB total memory in use (3 MB lost due to fragmentation) > > Generation 0: 590 collections, 0 parallel, 3.06s, 3.09s elapsed > Generation 1: 8 collections, 0 parallel, 3.56s, 4.21s elapsed > > INIT time 0.00s ( 0.00s elapsed) > MUT time 0.27s ( 0.27s elapsed) > GC time 6.62s ( 7.30s elapsed) > EXIT time 0.00s ( 0.00s elapsed) > Total time 6.89s ( 7.57s elapsed) > > %GC time 96.1% (96.4% elapsed) > > 
Alloc rate 1,155,958,754 bytes per MUT second > > Productivity 3.9% of total user, 3.6% of total elapsed > > Why does the program spend 96.1% of its total running time collecting garbage? > Any tips to make this program perform better are appreciated. > Please do tell if anything is unclear. > > -- > Tobias Olausson > tobsan at gmail.com > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > >
http://www.haskell.org/pipermail/haskell-cafe/2009-March/057041.html
CC-MAIN-2014-35
refinedweb
379
68.16
Return a column's name

Synopsis:
#include <qdb/qdb.h>

const char *qdb_column_name( qdb_result_t *res, int col );

Library:
qdb

Description:
This function returns the name of a specified column index col, as defined in a database schema when the table was created.

Returns:
A pointer to the specified column's name, or NULL if an error occurred (errno is set).

The string containing the column name is part of the results set, so the string memory is freed (along with the rest of the results set memory) by qdb_freeresult(). If you want to keep the column name longer, you must create a copy of the string (and manage that copy's memory).

Classification:
QNX Neutrino
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.qdb_en.dev_guide/topic/api/qdb_column_name.html
CC-MAIN-2018-43
refinedweb
110
67.08
Ownership and User-Schema Separation in SQL Server A core concept of SQL Server security is that owners of objects have irrevocable permissions to administer them. You cannot remove privileges from an object owner, and you cannot drop users from a database if they own objects in it. User-Schema Separation User-schema separation allows for more flexibility in managing database object permissions. A schema is a named container for database objects, which allows you to group objects into separate namespaces. For example, the AdventureWorks sample database contains schemas for Production, Sales, and HumanResources. The four-part naming syntax for referring to objects specifies the schema name. Server.Database.DatabaseSchema.DatabaseObject Schema Owners and Permissions Schemas can be owned by any database principal, and a single principal can own multiple schemas. You can apply security rules to a schema, which are inherited by all objects in the schema. Once you set up access permissions for a schema, those permissions are automatically applied as new objects are added to the schema. Users can be assigned a default schema, and multiple database users can share the same schema. By default, when developers create objects in a schema, the objects are owned by the security principal that owns the schema, not the developer. Object ownership can be transferred with ALTER AUTHORIZATION Transact-SQL statement. A schema can also contain objects that are owned by different users and have more granular permissions than those assigned to the schema, although this is not recommended because it adds complexity to managing permissions. Objects can be moved between schemas, and schema ownership can be transferred between principals. Database users can be dropped without affecting schemas. Built-In Schemas SQL Server ships with ten pre-defined schemas that have the same names as the built-in database users and roles. These exist mainly for backward compatibility. 
You can drop the schemas that have the same names as the fixed database roles if you do not need them. You cannot drop the following schemas:

dbo
guest
sys
INFORMATION_SCHEMA

If you drop them from the model database, they will not appear in new databases. Note The sys and INFORMATION_SCHEMA schemas are reserved for system objects. You cannot create objects in these schemas and you cannot drop them. The dbo Schema The dbo schema is the default schema for a newly created database. The dbo schema is owned by the dbo user account. By default, users created with the CREATE USER Transact-SQL command have dbo as their default schema. Users who are assigned the dbo schema do not inherit the permissions of the dbo user account. No permissions are inherited from a schema by users; schema permissions are inherited by the database objects contained in the schema. Note When database objects are referenced by using a one-part name, SQL Server first looks in the user's default schema. If the object is not found there, SQL Server looks next in the dbo schema. If the object is not in the dbo schema, an error is returned. External Resources For more information on object ownership and schemas, see the following resources.
https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/ownership-and-user-schema-separation-in-sql-server
CC-MAIN-2019-35
refinedweb
521
62.78
hu_ECPVSRecoverDecrypt() Decrypts a message using ECPVS. Synopsis: #include "huecpvs.h" int hu_ECPVSRecoverDecrypt(sb_Context ecpvsContext, size_t rLen, const unsigned char *rValue, size_t *recoverableMessageLen, unsigned char *recoverableMessage, sb_GlobalCtx sbCtx) Arguments: - ecpvsContext ECPVS context object pointer. - rLen The length (in bytes) of rValue. - rValue The r component from the signature computation. - recoverableMessageLen The length (in bytes) of recoverableMessage. - recoverableMessage The recoverable part of the message. - sbCtx A global context. Library:libhuapi (For the qcc command, use the -l huapi option to link against this library) Description: This is the third API function to be called during the ECPVS recovery process. It can be called multiple times to decrypt parts of the message; this is useful if the message is particularly large. The first call to this function must have an rValue whose length (i.e. rLen) is at least that of the length of the padding. If the entire padding is not passed in the initial call to this function, i.e. rLen is less than the length of the padding, then the error SB_ERR_BAD_INPUT_BUF_LEN will be returned as the expected padding was not found. If the length of the recoverable part of the message is known, a pointer to a buffer large enough to hold this part should be passed in recoverableMessage and its length in recoverableMessageLen. This function will copy the recovered data into recoverableMessage and set the actual length of it in recoverableMessageLen. If there is no recoverable data - or you just want to check the padding - set both recoverableMessage and recoverableMessageLen to NULL. The total recoverable message is a concatenation of all output recoverableMessage, in order, by this function. Returns: - SB_ERR_NULL_CONTEXT Context object is NULL. - SB_ERR_BAD_CONTEXT Context object is invalid. - SB_ERR_NULL_OUTPUT_BUF_LEN_PTR The recoverable message length is NULL. 
- SB_ERR_NULL_GLOBAL_CTX Global context is NULL.
- SB_ERR_BAD_INPUT_BUF_LEN rValue does not contain the entire padding on the initial call to this function.
- SB_ERR_BAD_OUTPUT_BUF_LEN The recoverable message length is invalid.
- SB_FAIL_INVALID_SIGNATURE Redundancy check failed. The padding value did not match the expected result.
- SB_FAIL_ALLOC Memory allocation failure.
- SB_SUCCESS Success.
Last modified: 2014-05-14
https://developer.blackberry.com/native/reference/core/com.qnx.doc.crypto.lib_ref/topic/hu_ECPVSRecoverDecrypt.html
CC-MAIN-2019-35
refinedweb
362
51.44
Provided by: libgetdata-doc_0.10.0-5build2_all NAME gd_native_type — returns the native data type of a field in a Dirfile SYNOPSIS #include <getdata.h> gd_type_t gd_native_type(DIRFILE *dirfile, const char *field_code); DESCRIPTION The gd_native_type() function queries a dirfile(5) database specified by dirfile and determines the native type of data specified by field_code, which may contain a representation suffix. The dirfile argument must point to a valid DIRFILE object previously created by a call to gd_open(3). The native data type of a field of a given entry type is calculated as: BIT INDEX GD_UINT64; CONST CARRAY the data type of the field; DIVIDE MULTIPLY if either input field is complex valued: GD_COMPLEX128, otherwise: GD_FLOAT64; INDIR the data type of the input CARRAY; LINCOM POLYNOM if any of the scalar parameters is complex valued, or if the native data type of any of the input fields is complex valued: GD_COMPLEX128, otherwise: GD_FLOAT64; LINTERP if the look-up table is complex valued: GD_COMPLEX128, otherwise: GD_FLOAT64; MPLEX WINDOW the native data type of the data field; PHASE the native data type of the input field; RAW the data type of the raw data on disk; RECIP if the dividend or the native data type of the input field is complex valued: GD_COMPLEX128, otherwise: GD_FLOAT64; SARRAY SINDIR STRING GD_STRING; SBIT GD_INT64. Furthermore, if the supplied field_code contains a representation suffix, and the native data type of the field is complex valued, the native type returned will be the corresponding real valued type. RETURN VALUE Upon successful completion, gd_native_type() returns the native data type of the field code specified. This will equal one of the symbols: GD_UINT8, GD_INT8, GD_UINT16, GD_INT16, GD_UINT32, GD_INT32, GD_FLOAT32, GD_FLOAT64, GD_COMPLEX64, GD_COMPLEX128, GD_STRING. The meanings of these symbols are explained in the gd_getdata(3) manual page. 
On error, this function returns GD_UNKNOWN and stores a negative-valued error code in the DIRFILE object which may be retrieved by a subsequent call to gd_error(3). Possible error codes are: GD_E_ALLOC The library was unable to allocate memory.. GD_E_IO An error occurred while trying to read a LINTERP table from disk. GD_E_LUT A LINTERP table was malformed. GD_E_RECURSE_LEVEL Too many levels of recursion were encountered while trying to resolve field_code. This usually indicates a circular dependency in field specification in the dirfile. A descriptive error string for the error may be obtained by calling gd_error_string(3). HISTORY The get_native_type() function appeared in GetData-0.6.0. The return type for STRING fields was GD_NULL. In GetData-0.7.0, this function was renamed to gd_native_type(). Before GetData-0.10.0, the return type for STRING fields was GD_NULL. SEE ALSO gd_error(3), gd_error_string(3) gd_getdata(3), gd_open(3), dirfile(5)
http://manpages.ubuntu.com/manpages/disco/man3/gd_native_type.3.html
CC-MAIN-2019-30
refinedweb
448
51.18
Lab Exercise 10: Classes and Inheritance

The purpose of this lab is to give you practice in creating a base class and many child classes that inherit methods and fields from the base class. In addition, we'll be modifying the L-system class to enable it to use rules that include multiple possible replacement strings and choose them stochastically (randomly). We continue to draw material from The Algorithmic Beauty of Plants. You can download the entire book from the algorithmic botany site.

Tasks

In lab today we'll be upgrading our L-systems and then building a parent class called Shape that will enable us to quickly create many other child classes that implement different kinds of shapes.

- On the Desktop, make a folder called lab10.
- Make a copy of your lsystem2.py file from Project 9 and rename it lsystem3.py.
- The only new capability we're going to add is the ability to use more than one replacement string in a rule. When making use of a rule with multiple replacement strings, the buildString function will randomly select one of the replacement strings. To make managing the rules easier, we're going to use dictionaries. The symbol to be replaced will be the dictionary key, and the dictionary value will be the list of one or more replacement strings. Start by editing your setRules function and assigning the empty dictionary to the rules field of self instead of the empty list. Because we're using the addRule method to actually add the rule itself, we don't have to change anything else. In the addRule function, you still need to copy the replacement strings (but not the symbol) from the rule into a new list. Then you can add the new symbol and replacement list pair to the dictionary.
def addRule( self, rule ):
    # set a local variable to the empty list
    # for each replacement string in the rule (2nd element on)
        # append the element to the local list
    # set the dictionary entry for the key (rule[0]) to be the new list

To add a new entry to a dictionary, you simply index the dictionary with the new key and make the assignment. For example, the following code snippet creates a new entry using the value in variable a as the key and makes the value in the variable b the entry.

d = {}
a = 'F'
b = ['FF', 'FFF']
d[a] = b

The only other function we need to edit is the replace function, which needs to make use of the new dictionary form of the rules. The new form of the replace function is simpler than the old one, because we don't have to search for the right rule. Either the key exists or it doesn't, and we can ask the dictionary. Edit your replace function so it matches the following.

def replace( self, istring ):
    # initialize the newstring to the empty string
    # for each symbol in the original string (istring)
        # if the symbol is in the rules dictionary
            # add to newstring a random choice from the dictionary entry self.rules[symbol]
        # else
            # add the symbol to newstring
    # return newstring

Test your Lsystem by running its test function using one of the following files.

- We're now going to start building a system that makes use of our transformer class to build shapes that are not necessarily the output of L-systems. Just as we made functions in the second project to draw shapes, now we're going to make classes that create shape objects. Create a new file called shapes.py. Start by including the transformer class using from transformer import *. Make sure the module name matches the name of the file in which you define Transformer.
- Begin a parent class called Shape. Then define an __init__() function that takes self and three arguments with default values: distance, angle, and color. Color should be a 3-element tuple like (0, 0, 0).
The init function should create the object fields distance, angle, color, and string and set them to the arguments of the init function, except for string, which should be set to the empty string.

- Create accessor functions setColor(self, r, g, b), setDistance(self, value), setAngle(self, angle), and setString(self, value).
- Create a function draw() that executes the following algorithm.

def draw(self, xpos, ypos, scale=1.0, orientation=0):
    # create a Transformer object
    # have the Transformer object place the turtle at (xpos, ypos)
    # have the Transformer object orient the turtle to orientation
    # have the Transformer object set the turtle color
    # have the Transformer object draw the string of self, using the distance and angle of self

- Create a child class called Square that inherits from Shape. In its __init__ method, call the parent's __init__, then use the setString method of self to set the string to 'F-F-F-F-' and use the setAngle method of self to set the angle to 90.
- In the same file, below the Square class, create a new class called Triangle that is the same as the Square class, but sets the string to 'F-F-F-' and sets the angle to 120.
- Once you have completed your Square and Triangle classes, download and run the testshapes.py program to see if your classes work properly.

When you are done with the lab exercises, go ahead and get started on the assignment.
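Putting the steps above together, a sketch of what shapes.py might look like (the Transformer-based body of draw() is left out so the sketch stands alone; the default distance value here is invented, and in the lab draw() should perform the Transformer steps listed above):

```python
class Shape:
    """Parent class holding a turtle-string description of a shape."""

    def __init__(self, distance=100, angle=90, color=(0, 0, 0)):
        self.distance = distance
        self.angle = angle
        self.color = color
        self.string = ''

    # accessor functions
    def setColor(self, r, g, b):
        self.color = (r, g, b)

    def setDistance(self, value):
        self.distance = value

    def setAngle(self, angle):
        self.angle = angle

    def setString(self, value):
        self.string = value


class Square(Shape):
    """A square: four forward moves with 90-degree turns."""

    def __init__(self, distance=100, color=(0, 0, 0)):
        Shape.__init__(self, distance, 90, color)
        self.setString('F-F-F-F-')
        self.setAngle(90)


class Triangle(Shape):
    """An equilateral triangle: three forward moves with 120-degree turns."""

    def __init__(self, distance=100, color=(0, 0, 0)):
        Shape.__init__(self, distance, 120, color)
        self.setString('F-F-F-')
        self.setAngle(120)
```

With classes shaped this way, testshapes.py only needs to construct Square() or Triangle() and call draw().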
http://cs.colby.edu/courses/S09/cs151-labs/labs/lab10/
CC-MAIN-2017-51
refinedweb
866
66.78
14 January 2010 17:42 [Source: ICIS news] TORONTO (ICIS news)--Sulzer does not expect to see a quick recovery in its key markets this year, the diversified Swiss chemicals engineering firm said on Thursday. Order intake, down 24% to Swiss francs (Swfr) 3bn ($2.9bn) in 2009 after a very strong 2008, was expected to fall further this year, although some markets would stabilise, Sulzer said. The decrease in orders is likely to be driven by lower project activity in the power generation and hydrocarbon processing industries, it said. Regionally, market activity in Europe and In 2009, Sulzer's business was hit by the economic downturn and overcapacities in the petrochemicals industry, among other factors. Demand for new equipment was significantly lower due to a decline in the consumption of oil, chemicals and plastics, it said. Sulzer said last year it would cut 11% of its global workforce, mainly in Europe and the ($1 = Swfr1
http://www.icis.com/Articles/2010/01/14/9325964/swiss-chem-engineer-sees-no-quick-recovery-in-2010.html
CC-MAIN-2015-22
refinedweb
158
52.19
For example, suppose that there is an app where you wanted a Panel control to resize on the click of a button. You would probably do this by setting the width and height of the panel. That will resize the layout, but the resize is not smooth [i.e. an immediate jump to the new size], so we have to slow down the resize so that it looks fine [smooth].

So here we have to use a tag called <mx:Resize /> to achieve this.

Example:

<mx:Resize duration="2000" id="r1"/>

Now this resize effect can be included with any layout. Here I am using a Canvas, as shown below:

<mx:Canvas id="cnv" width="50%" height="100%" resizeEffect="r1">
  <mx:Panel layout="absolute" id="p" height="100%" width="100%">
    <mx:ControlBar horizontalAlign="center">
      <mx:Button label="FullScreen" click="fullscreen()" id="f"/>
      <mx:Button label="Compact" click="compact()" id="c"/>
    </mx:ControlBar>
  </mx:Panel>
</mx:Canvas>

Here resizeEffect is the property I am using [in the Canvas] to specify the resize effect for the canvas. And here are the functions that move the canvas to full screen and back:

<mx:Script>
  <![CDATA[
    import mx.effects.easing.*;
    import mx.controls.Alert;

    private function fullscreen():void
    {
      cnv.width = this.screen.width - 100;
    }

    private function compact():void
    {
      cnv.width = (this.width) / 2.2;
    }
  ]]>
</mx:Script>

So at last the resizing of the canvas is smoothed by using the resize effect, i.e. the <mx:Resize /> tag and the resizeEffect property. Simple and small but very useful.

Very good! I have never saw it before

Thanks, this helped me "get it done" faster.

hi i cant understand

Hi, Can you tell me which part? I can rewrite the post. Can you be clear. regards, kumar.

how can we give effect move and resize for container which should increse its height only to up side ?

Containers are resizable from both ends, and to achieve this you have to move the container [x, y] to the position from where you actually want to start the resize and then do the height thing. Comment me for any queries. regards, kumar.
HI kumar , u r idea has helped me out . thank u ,

Hi Kumar, Nice blog. But distorted logo. Please do something about it

Thanks! this is exactly what i was looking for! Took me more than 40 minutes to figure it out that i had to google "resize Effect" instead of "show effect"

Hi, Yes i did my best to include keywords on the post name, so that they appear on top easily on search engines. :-) regards, kumar.

hey kumar, its really helpful. exactly what I am looking for. Gud work !!!!!!!!!

Hi Kumar, its very helpful but is there a way such that the resize can be done only to the left and the right side remains as such, becoz in my application i have tool icons on the left and the right side can be overlapped when resized.

Hi, Thanks for that info. regards, kumar.

Hi, Please guide me how to configure Flex 4.5 in eclipse Helios. Give me the full details na
http://flexonblog.wordpress.com/2008/01/25/smoothing-the-resizing-of-layouts-in-flex-with-effects/
What are Z-Boxes?

If you have heard of Z-Boxes in a lecture or read about them in a book, you probably encountered something like this:

\begin{array}{ll}(1)&\;\text{Let }s:=s_0\cdots s_{n-1}\in\Sigma^n\\(2)&\;\forall i\in\{1\cdots n-1\}:\;Z_i:=\max\{j\in[0\cdots n]: s_i\cdots s_{i+j-1}=s_0\cdots s_{j-1}\}\\(3)&\;\text{Z-Box starting at position }i:=s_i\cdots s_{i+Z_i-1}\,\text{ if }Z_i\neq0\end{array}

But let's be honest. These definitions are mainly for people who already understand what they mean. To most students this is not very useful in understanding an algorithm. Let's break it down anyway and see what we can salvage.

(1) This just says that s is a string of length n (this definition is used). Technically \Sigma is the alphabet of the string – but for all practical intents and purposes this is defined by the programming context anyway, so we'll just ignore \Sigma. So let's assume n=6. Then each of these would meet the definition:

s = "abc123"
s = "fo bar"
s = "      "
s = "abcabc"

In almost all definitions for algorithms dealing with strings, you'll find a similar definition.

(2) & (3) are slightly more complicated. I'll explain them without talking about the definition at first. Here's the easy part: The Z-Box starting at position i is just a substring of s, starting at position i. Z_i is its length. Unfortunately, the hard part is that it's not just any substring starting at position i. But how can we check if a Z-Box is valid? Let's just have a look at a fragment of Python code:

def is_valid_zbox(s, i, zi):
    # zi is the length of the zbox; compute the zbox
    zbox = s[i:i + zi]
    # Compute the prefix of the same length
    prefix = s[0:zi]
    return zbox == prefix

is_valid_zbox("abcabc", 2, 2)  # "ca" != "ab" => False
is_valid_zbox("abcabc", 2, 3)  # "cab" != "abc" => False
is_valid_zbox("abcabc", 3, 2)  # "ab" == "ab" => True
is_valid_zbox("abcabc", 3, 3)  # "abc" == "abc" => True

So a Z-Box is valid if two strings are equal: the Z-Box itself and a prefix of s of the same length.
As we can see in the examples, there can be multiple valid Z-boxes at any position i. Which one do we use? Due to the max in (2), we always use the longest one.

A simple Z-Box algorithm

So here's how to compute the Z-boxes using what we know so far:

def is_valid_zbox(s, i, zi):
    # zi is the length of the zbox; compute the zbox
    zbox = s[i:i + zi]
    # Compute the prefix of the same length
    prefix = s[0:zi]
    return zbox == prefix

def zbox(s, i):
    # Maximum length of a substring starting at position i
    # (s has only so many characters)
    maxlen = len(s) - i
    # Try out every zbox, starting at the longest,
    # i.e. maxlen, maxlen-1, ..., 1
    for zi in range(maxlen, 0, -1):
        if is_valid_zbox(s, i, zi):
            # Return the first valid zbox. As we're starting from the
            # longest, this is always the longest zbox.
            return s[i:i + zi]

# Compute zboxes
s = "abcabc"
[zbox(s, i) for i in range(1, len(s))]
# [None, None, 'abc', None, None]

Why are there so many None values in the result? Because, as you can see in the string, a, the first character in the string, only occurs again at position 3.

And why do we only compute Z-boxes starting at [1\cdots n-1]? We could just as well compute them for [0\cdots n-1]!? But if you take any substring of s of length j starting at 0 and compare it to the prefix of s of length j, they are the same, no matter how long they are! That's because the prefix of s of length j is the substring of s of length j starting at 0 (pre means before, therefore we start at 0!).

Let's have a look at another example:

# Compute zboxes
s = "ananas"
[zbox(s, i) for i in range(1, len(s))]
# [None, 'ana', None, 'a', None]

And there are also examples where we can't find any Z-box!

# Compute zboxes
s = "foobar"
[zbox(s, i) for i in range(1, len(s))]
# [None, None, None, None, None]

But for some we can find a lot!
# Compute zboxes
s = "abaabaabab"
[zbox(s, i) for i in range(1, len(s))]
# [None, 'a', 'abaaba', None, 'a', 'aba', None, 'ab', None]

Additional reading:
- Another easy-to-understand blogpost about Z-boxes including a more efficient algorithm
- Lecture-type material: section 2.5.1
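As an aside not covered in the original post: the simple algorithm above is O(n²) in the worst case, because it re-compares characters from scratch at every position. The standard linear-time Z-algorithm (the "more efficient algorithm" the additional reading alludes to) avoids this by remembering the rightmost Z-box window [l, r) found so far and reusing already-computed Z-values inside it. A sketch:

```python
def z_array(s):
    """Return [Z_0, Z_1, ..., Z_{n-1}]; Z_0 is unused and left at 0."""
    n = len(s)
    z = [0] * n
    l, r = 0, 0  # [l, r) is the rightmost Z-box window found so far
    for i in range(1, n):
        if i < r:
            # i lies inside a known Z-box, so s[i:r] equals s[i-l:r-l];
            # reuse the mirrored value, capped at the window edge
            z[i] = min(r - i, z[i - l])
        # extend the match by direct character comparison
        while i + z[i] < n and s[z[i]] == s[i + z[i]]:
            z[i] += 1
        # update the rightmost window if this Z-box reaches further
        if i + z[i] > r:
            l, r = i, i + z[i]
    return z

z_array("ananas")      # [0, 0, 3, 0, 1, 0]
z_array("abaabaabab")  # [0, 0, 1, 6, 0, 1, 3, 0, 2, 0]
```

Each character comparison either fails (at most once per i) or moves r forward, so the total work is O(n). The Z-boxes themselves are then just `s[i:i + z[i]]` wherever `z[i] != 0`, matching the results computed above.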
https://techoverflow.net/2017/02/23/an-introduction-to-z-boxes/
Introduction

In this article I will explain how to connect to an Oracle Database 11g Express Edition database from a Windows Forms application using C# and display data from the Oracle Database in a DataGridView control.

Description

Step 1: Download and install Oracle Database 11g Express Edition.

Go to and download the express edition of the Oracle Database 11g. You need to sign up to download this setup. Select the 32-bit or 64-bit setup depending on your OS.

Extract the downloaded zip file and run the setup.exe file inside the OracleXE112_Win32\DISK1 folder. Just follow the setup. This will install the database on your system.

Note: While installing this setup, it will suggest default port numbers for the TNS Port, MTS Port, and HTTP Port. But if any of the ports are already in use, change that port to a port which is not in use and click the Back and then the Next button.

Step 2: Download and install SQL Developer.

Go to and download SQL Developer (the Oracle equivalent of SQL Server's Management Studio), a GUI tool to browse and create database objects.

Step 3: Configure the sample table provided with the database.

Uncompress the downloaded zip file of SQL Developer and run sqldeveloper.exe from the sqldeveloper-3.2.20.09.87\sqldeveloper folder to open SQL Developer.

In the Connections tab, expand Auto-Generated Local Connections and then system-XE. Now it will ask for a Username and Password. Leave the default username "system", enter the password that you gave at the time of installing the Oracle Database (Step 1) and click "OK".

We will display data from the Employees table that is provided with the database inside the HR user. To be able to access this table from a .NET application we have to grant some privileges to the HR user. Expand "Other Users" and right-click on "HR". Click "Edit User" to open the "Create/Edit User" dialog box.
Go to the "Roles" tab and ensure that the "Granted" and "Default" checkboxes against the "Connect" and "Resource" roles are checked.

Step 4: Create a Windows application to fetch data from the database.

Create a Windows Forms application and drag a DataGridView control onto Form1. Set the Dock property of the DataGridView to Fill.

Step 5: Include the Oracle data access library in the project.

Add a reference to the Oracle.DataAccess dll using the Add Reference dialog box. You can find it in the .NET tab. And add the following namespace at the top:

using Oracle.DataAccess.Client;

Step 6: Write code to retrieve data from the database, as in the following:

private void Form1_Load(object sender, EventArgs e)
{
    LoadData();
}

private void LoadData()
{
    try
    {
        string ConString = "Data Source=XE;User Id=system;Password=*****;";
        using (OracleConnection con = new OracleConnection(ConString))
        {
            OracleCommand cmd = new OracleCommand("SELECT * FROM HR.Employees", con);
            OracleDataAdapter oda = new OracleDataAdapter(cmd);
            DataSet ds = new DataSet();
            oda.Fill(ds);
            if (ds.Tables.Count > 0)
            {
                dataGridView1.DataSource = ds.Tables[0].DefaultView;
            }
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.ToString());
    }
}

You may wonder about the data source name "XE". It is the name that refers to the connection information of the Oracle Database. You can always change it to any other name. You can find it at the following location:

C:\oraclexe\app\oracle\product\11.2.0\server\network\ADMIN\tnsnames.ora

Output

All the rows from the HR.Employees table are loaded in the DataGridView control.
https://www.c-sharpcorner.com/UploadFile/deepak.sharma00/connecting-to-oracle-database-from-windows-forms-application/
This article describes the procedure to update SignalR applications to version 2.0.

Introduction

As you know, SignalR is a library that adds real-time web functionality to the development of ASP.NET applications, and the current version, SignalR 2.0, offers a consistent development experience across server platforms using OWIN. This article describes the procedure to update SignalR applications to version 2.0.

In this article, I am using an existing web application developed from the MVC 5 project template; it is a real-time web app with SignalR as described in SignalR in MVC 5. We'll update it to SignalR version 2.0. So, let's proceed with the following procedure.

Step 1: Open the Library Package Manager, then select "Package Manager Console" and enter the following command:

Uninstall-Package Microsoft.AspNet.SignalR -RemoveDependencies

Step 2: Enter the following command to update SignalR:

Install-Package Microsoft.AspNet.SignalR

Step 3: Open the Chat.cshtml page.

Step 4: Update the script reference with the following reference:

<script src="~/Scripts/jquery.signalR-2.0.1.js"></script>

Step 5: Open the Global.asax file and remove the RouteTable.Routes.MapHubs() call, which is obsolete in SignalR 2.0.

Step 6: Create an OWIN Startup class.

Step 7: Open the OWIN Startup class and replace the code with the code below:

using Microsoft.Owin;
using Owin;

[assembly: OwinStartupAttribute(typeof(SignalRDemo.Startup))]
namespace SignalRDemo
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder MyApp)
        {
            MyApp.MapSignalR();
        }
    }
}

The assembly attribute registers the class as OWIN's Startup class, so that Configuration() executes when OWIN starts up. This executes MapSignalR() to create routes for all SignalR hubs in the application.

Step 8: Now, just build the solution and run the application. While in the browser chatting session, copy the URL and paste it into another browser to continue the chat.

Summary

With this article you can update your application with SignalR version 2.0.
You can enable it in your existing application or create your new application with this SignalR update. Thanks for reading.
https://www.c-sharpcorner.com/UploadFile/4b0136/updating-the-signalr-application-to-2-0/
Today's pattern is the Prototype pattern, which is used when creating an instance of a class is expensive or complicated.

Prototype in the Real World

While the prototype that you first think of is a first draft of a new product, the Prototype pattern is slightly different. The Prototype pattern involves copying something that already exists. An example of this in the real world could be the splitting of a cell, where two identical cells are created. But let's not get too hung up on the real world - let's see the pattern in a software setting.

Design Patterns Refcard

For a great overview of the most popular design patterns, DZone's Design Patterns Refcard is the best place to start.

The Prototype Pattern

The Prototype pattern is known as a creational pattern, as it is used to construct objects such that they can be decoupled from their implementing systems. The definition of Prototype as provided in the original Gang of Four book on Design Patterns states:

Create objects based on a template of an existing object through cloning.

In summary, instead of going to the trouble of creating an object from scratch every time, you can make copies of an original instance and modify them as required.

The pattern is quite simple: the Prototype interface declares a method for cloning itself, while the ConcretePrototype implements the operation for cloning itself. In practice you will add a registry to manage the finding and cloning of the objects. The detail of how it is used is best described in a code example.

When Would I Use This Pattern?

The Prototype pattern should be considered when:
- Composition, creation and representation of objects should be decoupled from the system
- Classes to be created are specified at runtime
- You need to hide the complexity of creating new instances from the client
- Creating an object is an expensive operation and it would be more efficient to copy an object
- Objects are required that are similar to existing objects

So How Does It Work In Java?
Let's use a simple example in Java to illustrate this pattern. For this example, let's use a shopping cart scenario. Let's say that we have a number of items that can go in the cart - books, CDs, DVDs. While our example doesn't include anything that's particularly expensive to create, it should illustrate how the pattern works.

First, we'll create an abstract class for our Item, which will be our Prototype that includes a clone method. Note that it must implement Cloneable, otherwise super.clone() throws CloneNotSupportedException:

//Prototype
public abstract class Item implements Cloneable {
    private String title;
    private double price;

    public Item clone() {
        Item clonedItem = null;
        try {
            // use the default Object clone
            clonedItem = (Item) super.clone();
            // specialised clone: copy the fields explicitly
            clonedItem.setPrice(price);
            clonedItem.setTitle(title);
        } catch (CloneNotSupportedException e) {
            e.printStackTrace();
        }
        return clonedItem;
    }

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public double getPrice() { return price; }
    public void setPrice(double price) { this.price = price; }
}

Next we'll create two ConcretePrototypes:

//Concrete Prototypes
public class Book extends Item {
    //extra book stuff
}

public class CD extends Item {
    //extra cd stuff
}

Now let's create a registry for item creation:

import java.util.Hashtable;

public class ItemRegistry {
    private Hashtable<String, Item> map = new Hashtable<String, Item>();

    public ItemRegistry() {
        loadCache();
    }

    public Item createBasicItem(String type) {
        return map.get(type).clone();
    }

    private void loadCache() {
        Book book = new Book();
        book.setTitle("Design Patterns");
        book.setPrice(20.00);
        map.put("Book", book);

        CD cd = new CD();
        cd.setTitle("Various");
        cd.setPrice(10.00);
        map.put("CD", cd);
    }
}

Finally, here's a Client that makes use of the Prototype. If we need to create a book, we can clone the cached implementation and modify the copy afterwards:

public class Client {
    public static void main(String[] args) {
        ItemRegistry registry = new ItemRegistry();
        Book myBook = (Book) registry.createBasicItem("Book");
        myBook.setTitle("Custom Title");
        //etc
    }
}

Watch Out for the Downsides

One of the downsides to this pattern is that the process of copying an object can be complicated.
Also, classes that have circular references to other classes are difficult to clone. Overuse of the pattern could affect performance, as the prototype object itself would need to be instantiated if you use a registry of prototypes.

Other Articles in This Series

The Observer Pattern
The Adapter Pattern
The Facade Pattern
The Factory Method Pattern
The Abstract Factory Pattern
The Singleton Pattern
The Strategy Pattern
The Visitor Pattern
The Decorator Pattern
The Proxy Pattern
The Command Pattern
The Chain of Responsibility Pattern

Next Up

We're going to look at the Memento pattern next week.
http://java.dzone.com/articles/design-patterns-prototype