Optimizing NameNode disk space with Hadoop archives

Hadoop Archives (HAR) are special format archives that efficiently pack small files into HDFS blocks. The Hadoop Distributed File System (HDFS) is designed to store and process large data sets, but it is less efficient at storing a large number of small files: each small file occupies its own entry in the namespace, so the namespace fills up while disk space remains under-utilized. Hadoop Archives address this limitation by packing small files into HDFS blocks more efficiently, reducing NameNode memory usage while still allowing transparent access to the files. Hadoop Archives are also compatible with MapReduce, which can transparently access the original files inside an archive.
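As a sketch of how this works in practice (the paths and archive name below are illustrative, not taken from this page), small files are packed with the hadoop archive command and then read back transparently through the har:// filesystem:

```shell
# Pack everything under /user/alice/smallfiles into a single archive.
# -p sets the parent path relative to which the source dirs are resolved.
hadoop archive -archiveName data.har -p /user/alice smallfiles /user/alice/archives

# The archived files remain transparently accessible via the har:// scheme
hdfs dfs -ls -R har:///user/alice/archives/data.har
```

The archive itself is a directory layout (index files plus part files) inside HDFS, which is why MapReduce jobs can read the original files through it without unpacking.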
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/data-storage/content/optimizing_namenode_disk_space_with_hadoop_archives.html
Referencing XML schemas and DTDs

Your XML file may reference an external XML schema (XSD) or DTD file. If the referenced URL or the namespace URI is "unfamiliar", it is marked as an error. To solve the problem, place the caret at the referenced URL, press Alt+Enter, and select one of the suggested options:

Fetch external resource: PhpStorm downloads the referenced resource so the XML file can be validated against it.

Ignore external resource: PhpStorm won't validate the XML file; however, it will still check that the XML file is well-formed.

Add Xsi Schema Location for External Resource: this intention action lets you complete your root XML elements. If the namespace is already specified, PhpStorm can add a couple of missing attributes when you invoke it.
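For illustration, here is what a root element looks like once a schema location has been added (the namespace URI and schema file name are invented for this example, not taken from the page):

```xml
<catalog xmlns="http://example.com/catalog"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://example.com/catalog catalog.xsd">
    <!-- elements here are validated against catalog.xsd -->
</catalog>
```

The xsi:schemaLocation value is a pair: the namespace URI followed by the location of the schema that defines it.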
https://www.jetbrains.com/help/phpstorm/referencing-xml-schemas-and-dtds.html
Dear All, Our requirement is to clear (uncheck) all the filters of the page. For a button I wrote the below script in a Script control, but it's giving the error " 'NoneType' object has no attribute 'Values' ". To my understanding this is happening because it's not able to read FilterA. I'm not able to solve this error. Request someone to please help rectify this error. Thank you. Regards, Anupam

There is already a function available for this, 'Reset All Filters'. You should see this in your 'Action Control' panel under 'Available Functions > Functions'. Nitiz

Hi Nitiz, This function selects all the checkboxes, while our requirement is to uncheck all of them.

Anupam, If you uncheck all of them then you won't get anything in the result. Why do you want to do that? Am I missing something here? Nitiz

I had a look at your script and the APIs. It looks correct to me. Then I tested it and found that it did work.

    CurPanel = Document.ActivePageReference.FilterPanel
    FilterA = CurPanel.TableGroups[0].GetFilter("component")
    CheckBoxes = FilterA.FilterReference.As[filters.CheckBoxFilter]()

I think the reason you are getting the error is that one of the values is NULL and it does not handle it gracefully. -Nitiz

Thanks Nitiz, That feature is a client requirement, so I have to implement it. :) This code I tried again and it is working fine when I'm using a single data table. I believe I'm not able to use it properly when I'm having filters from multiple data tables in the filter panel. Regards, Anupam

I got the exact reason for the error. The reason was the filter type. I was using a hierarchy to filter and I didn't know that Spotfire treats a Hierarchy filter as different from a Checkbox filter.
Below is the script which ran successfully:

    from Spotfire.Dxp.Application import Filters as filters

    CurPanel = Document.ActivePageReference.FilterPanel
    FilterA = CurPanel.TableGroups[0].GetFilter("filter1")
    CheckBoxes = FilterA.FilterReference.As[filters.CheckBoxHierarchyFilter]()
    CheckBoxes.UncheckAllNodes()

Thank you Nitiz for your kind help.
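Pulling the thread's conclusion together, here is a sketch that resets every filter in the panel regardless of type. It relies on the behavior that caused the original error: As[...]() returns None when the filter is not of the requested type. This only runs inside Spotfire's IronPython script engine, and the pattern of iterating FilterHandles is an assumption, not something quoted verbatim from the thread:

    from Spotfire.Dxp.Application import Filters as filters

    panel = Document.ActivePageReference.FilterPanel
    for group in panel.TableGroups:          # one group per data table
        for handle in group.FilterHandles:   # every filter in that group (assumed API)
            f = handle.FilterReference
            checkbox = f.As[filters.CheckBoxFilter]()
            if checkbox is not None:
                # plain check-box filter: uncheck each value individually
                for value in list(checkbox.Values):
                    checkbox.Uncheck(value)
                continue
            hierarchy = f.As[filters.CheckBoxHierarchyFilter]()
            if hierarchy is not None:
                # hierarchy filter: one call clears the whole tree
                hierarchy.UncheckAllNodes()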
http://spotfirecommunity.tibco.com/community/forums/p/2289/6752.aspx
How to: Create a Windows Communication Foundation Client

This is the fourth of six tasks required to create a Windows Communication Foundation (WCF) application. For an overview of all six tasks, see the Getting Started Tutorial topic. This topic describes how to retrieve metadata from a WCF service and use it to create a WCF proxy that can access the service. This task is completed by using the Add Service Reference functionality provided by Visual Studio. This tool obtains the metadata from the service's MEX endpoint and generates a managed source code file for a client proxy in the language you have chosen (C# by default). In addition to creating the client proxy, the tool also creates or updates the client configuration file, which enables the client application to connect to the service at one of its endpoints. The client application uses the generated proxy class to communicate with the service. This procedure is described in How to: Use a Windows Communication Foundation Client.

To create a Windows Communication Foundation client

Create a new console application project by right-clicking the Getting Started solution and selecting Add, New Project. In the Add New Project dialog, on the left-hand side, select Windows under C# or VB. In the center section of the dialog, select Console Application. Name the project GettingStartedClient.

Set the target framework of the GettingStartedClient project to .NET Framework 4.5 by right-clicking GettingStartedClient in the Solution Explorer and selecting Properties. In the dropdown box labeled Target Framework, select .NET Framework 4.5. Setting the target framework for a VB project is a little different: in the GettingStartedClient project properties dialog, click the Compile tab on the left-hand side of the screen, and then click the Advanced Compile Options button at the lower left-hand corner of the dialog. Then select .NET Framework 4.5 in the dropdown box labeled Target Framework.
Setting the target framework will cause Visual Studio 2011 to reload the solution; press OK when prompted. Add a reference to System.ServiceModel to the GettingStartedClient project by right-clicking the References folder under the GettingStartedClient project in Solution Explorer and selecting Add Reference. In the Add Reference dialog, select Framework on the left-hand side. In the Search Assemblies textbox, type System.ServiceModel. In the center section of the dialog, select System.ServiceModel, click the Add button, and click the Close button. Save the solution by clicking the Save All button below the main menu.

Next you will add a service reference to the Calculator Service. Before you can do that, you must start up the GettingStartedHost console application. Once the host is running, right-click the References folder under the GettingStartedClient project in the Solution Explorer, select Add Service Reference, type the service's URL in the address box of the Add Service Reference dialog, and click the Go button. The CalculatorService should then be displayed in the Services list box. Double-click CalculatorService and it will expand and show the service contracts implemented by the service. Leave the default namespace as is and click the OK button.

When you add a reference to a service using Visual Studio, a new item will appear in the Solution Explorer under the Service References folder under the GettingStartedClient project. If you use the ServiceModel Metadata Utility Tool (Svcutil.exe), a source code file and an app.config file will be generated. You can also use the command-line ServiceModel Metadata Utility Tool (Svcutil.exe) with the appropriate switches to create the client code. The following example generates a code file and a configuration file for the service.
The first example shows how to generate the proxy in VB and the second shows how to generate it in C#. You have now created the proxy that the client application will use to call the calculator service. Proceed to the next topic in the series: How to: Configure a Basic Windows Communication Foundation Client.
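The command lines themselves were lost from the page; a sketch of what they look like (the service address and output file names here are illustrative assumptions) is:

```bat
rem Generate the proxy and configuration file in Visual Basic
svcutil.exe /language:vb /out:generatedProxy.vb /config:app.config http://localhost:8000/ServiceModelSamples/service

rem Generate the proxy and configuration file in C#
svcutil.exe /language:cs /out:generatedProxy.cs /config:app.config http://localhost:8000/ServiceModelSamples/service
```

Svcutil.exe retrieves the metadata from the running service's MEX endpoint, just as Add Service Reference does inside Visual Studio.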
https://msdn.microsoft.com/en-us/library/ms733133.aspx
Launchpad blueprint:

There is an ongoing desire to manage TripleO containers with a set of tools designed to solve complex problems when deploying applications. The containerization of TripleO started with a Docker CLI implementation, but we are looking at how we could leverage container orchestration with a Kubernetes-friendly solution. There are three problems that this document will cover:

There is an ongoing discussion on whether or not Docker will be maintained on future versions of Red Hat platforms. There is a general move toward OCI (Open Containers Initiative) conformant runtimes, such as CRI-O (Container Runtime Interface for OCI). The TripleO community has been looking at how we could orchestrate the container lifecycle with Kubernetes, in order to bring consistency with other projects, like OpenShift for example. The TripleO project aims to work on the next version of Red Hat platforms, therefore we are looking at Docker alternatives in the Stein cycle.

The containerization of TripleO has been an ongoing effort for a few releases now, and we have always taken a step-by-step approach that tries to maintain backward compatibility for deployers and developers, and that keeps upgrade from a previous release possible without too much pain. With that said, we are looking at a proposed change that isn't too disruptive but is still aligned with the general roadmap of the container story, and that will hopefully drive us to manage our containers with Kubernetes. We use the Paunch project to provide an abstraction in our container integration. Paunch deals with container configuration formats and supports multiple backends. The goal of Podman is to allow users to run standalone (non-orchestrated) containers, which is what we have been doing with Docker until now.
Podman also allows users to run groups of containers called Pods, where a Pod is a term developed for the Kubernetes project which describes an object that has one or more containerized processes sharing multiple namespaces (Network, IPC and optionally PID). Podman doesn't have any daemon, which makes it lighter than Docker, and it uses a more traditional fork/exec model of Unix and Linux. The container runtime used by Podman is runc. The CLI has partial backward compatibility with Docker, so its integration in TripleO shouldn't be that painful. It is proposed to add support for the Podman CLI (beside the Docker CLI) in TripleO to manage the creation, deletion and inspection of our containers. We would have a new parameter called ContainerCli in TripleO that, if set to 'podman', will make container provisioning happen with the Podman CLI and not the Docker CLI. Because there is no daemon, there are some problems that we need to solve:

Automatically restart failed containers. Automatically start containers when the host is (re)booted. Start the containers in a specific order during host boot. Provide a channel of communication with containers. Run container healthchecks.

To solve the first 3 problems, it is proposed to use Systemd: Use Restart so we can configure a restart policy for our containers. Most of our containers would run with the Restart=always policy, but we'll have to support some exceptions. The systemd services will be enabled by default so the containers start at boot. The ordering will be managed by Wants, which provides Implicit Dependencies in Systemd. Wants is a weaker version of Requires. It will allow us to make sure we start HAProxy before Keepalived, for example, if they are on the same host. Because it is a weak dependency, it will only be honored if the containers are running on the same host. The way containers will be managed (start/stop/restart/status) will be familiar to operators used to controlling Systemd services.
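A minimal sketch of such a generated unit, assuming hypothetical unit and container names (none of the names below are prescribed by this spec):

```ini
# /etc/systemd/system/tripleo_keepalived.service  (hypothetical example)
[Unit]
Description=keepalived container managed by Podman
# Weak dependency: honored only when haproxy runs on the same host
Wants=tripleo_haproxy.service
After=tripleo_haproxy.service

[Service]
# Restart policy replacing what the Docker daemon used to provide
Restart=always
ExecStart=/usr/bin/podman start -a keepalived
ExecStop=/usr/bin/podman stop -t 10 keepalived

[Install]
WantedBy=multi-user.target
```

Enabling the unit (systemctl enable tripleo_keepalived.service) covers start-at-boot, Restart= covers failed-container restarts, and Wants=/After= cover the boot ordering.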
However, we probably want to make it clear that managing the containers with Systemd is not our long-term goal. The Systemd integration would be: complete enough to cover our use-cases and bring feature parity with the Docker implementation; light enough to be able to migrate our container lifecycle to Kubernetes in the future (e.g. CRI-O).

For the fourth problem, we are still investigating the options:

varlink: an interface description format and protocol that aims to make services accessible to both humans and machines in the simplest feasible way.

CRI-O: an OCI-based implementation of the Kubernetes Container Runtime Interface, without the Kubelet. For example, we could use a CRI-O Python binding to communicate with the containers.

A dedicated image which runs the rootwrap daemon, with rootwrap filters to only run the allowed commands. The controlling container will have the rootwrap socket mounted in so that it can trigger allowed calls in the rootwrap container. For pacemaker, the rootwrap container will allow image tagging. For neutron, the rootwrap container will spawn the processes inside the container, so it will need to be a long-lived container that is managed outside paunch.

    +---------+      +----------+
    |         |      |          |
    | L3Agent +------+ Rootwrap |
    |         |      |          |
    +---------+      +----------+

In this example, the L3Agent container has the rootwrap daemon socket mounted in so that it can run allowed commands inside the rootwrap container. Finally, the fifth problem is still an open question. There are some plans to support healthchecks in Podman, but nothing has been done as of today. We might have to implement something on our side with Systemd.

Two alternatives are proposed. CRI-O is meant to provide an integration path between OCI conformant runtimes and the kubelet. Specifically, it implements the Kubelet Container Runtime Interface (CRI) using OCI conformant runtimes.
Note that the CLI utility for interacting with CRI-O isn't meant to be used in production, so managing the container lifecycle with a CLI is only possible with Docker or Podman. So instead of a smooth migration from the Docker CLI to the Podman CLI, we could go straight to a Kubernetes integration and convert our TripleO services to work with a standalone Kubelet managed by CRI-O. We would have to generate YAML files for each container in a Pod format, so CRI-O can manage them. It wouldn't require Systemd integration, as the containers would be managed by the Kubelet. The operator would control the container lifecycle by using kubectl commands, and the automated deployment & upgrade process would happen in Paunch with a Kubelet backend. While this implementation would help us move to a multi-node Kubernetes-friendly environment, it remains the riskiest option in terms of the quantity of work that needs to happen versus the time that we have to design, implement, test and ship the next tooling before the end of the Stein cycle. We also need to keep in mind that CRI-O and Podman share the containers/storage and containers/image libraries, so the issues that we have had with Podman will be hit with CRI-O as well.

We could keep Docker around and not change anything in the way we manage containers. We could also keep Docker and make it work with CRI-O. The only risk here is that Docker tooling might not be supported in the future by Red Hat platforms, and we would be on our own for any issue with Docker. The TripleO community is always seeking a healthy and long-term collaboration between us and the communities of the projects that we are interacting with.

In Stein: Make Paunch support Podman as an alternative to Docker. Get our existing services fully deployable on Podman, with parity to what we had with Docker. If we have time, add Podman pod support to Paunch.

In the "T" cycle: Rewrite all of our container YAML to the pod format.
Add a Kubelet backend to Paunch (or change our agent tooling to call the Kubelet directly from Ansible). Get our existing services fully deployable via the Kubelet, with parity to what we had with Podman / Docker. Evaluate switching to Kubernetes proper.

The TripleO containers will rely on Podman security. If we don't use CRI-O or varlink to communicate with containers, we'll have to consider running some containers in privileged mode and mounting /var/lib/containers into the containers. This is a security concern and we'll have to evaluate it. Also, we'll have to make the proposed solution work with SELinux in Enforcing mode. The Docker solution doesn't enforce SELinux separation between containers. Podman does, and there's currently no easy way to deactivate that globally. So we'll basically get more secure containers with Podman, as we have to support separation from the very beginning.

The containers that were managed by the Docker engine will be removed and provisioned into the new runtime. This process will happen when Paunch generates and executes the new container configuration. The operator shouldn't have to take any manual action: the migration will be automated, mainly by Paunch. The Containerized Undercloud upgrade job will test the upgrade of an Undercloud running Docker containers on Rocky to Podman containers on Stein. The Overcloud upgrade jobs will also test this. Note: as the Docker runtime doesn't have the SELinux separation, some chcon/relabelling might be needed prior to the move to the Podman runtime.

The operators won't be able to run the Docker CLI like before and instead will have to use the Podman CLI, where some backward compatibility is guaranteed. There are different aspects of performance that we'll need to investigate: container performance (relying on Podman), and how Systemd + Podman work together and how restart works versus the Docker engine. There shouldn't be much impact for the deployer, as we aim to make this change as transparent as possible.
The only option (so far) that will be exposed to the deployer will be "ContainerCli", where only 'docker' and 'podman' will be supported. If 'podman' is chosen, the transition will be automated. There shouldn't be much impact for the developers of TripleO services, except that some things in Podman are slightly different from Docker. For example, Podman won't create missing directories when bind-mounting into the containers, while Docker creates them.

Update TripleO services to work with Podman (e.g. fix bind-mount issues). SELinux separation (relates to bind-mount rights, plus some other issues when we're calling iptables or other host commands from a container). Systemd integration. Healthcheck support. Socket / runtime: varlink? CRI-O? Upgrade workflow. Testing. Documentation for operators.

The Podman integration depends a lot on how stable the tool is and how often it is released and shipped so we can test it in CI. The healthchecks interface depends on Podman's roadmap. First of all, we'll switch the Undercloud jobs to use Podman, and this work should be done by milestone-1. Both the deployment and upgrade jobs should be switched and actually working. The Overcloud jobs should be switched by milestone-2. We'll keep Docker testing support for as long as we keep testing on the CentOS 7 platform. We'll need to document the new commands (mainly the same as Docker) and the differences in how containers should be managed (Systemd instead of the Docker CLI, for example).

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
http://specs.openstack.org/openstack/tripleo-specs/specs/stein/podman.html
Eclipse Community Forums: Child JFrames & JDialogs not showing

I have some classes that I am looking to migrate from VEP. One of these classes is a JFrame class, and the main JFrame is parsed satisfactorily in WindowBuilder. However, there are other JDialogs & JFrames within this class that are not appearing in the design screen. I have pulled out one such JFrame into a separate class file and it is now successfully parsed and appears. Must I do this for every window in the class file? If anyone could offer some tips as to how to go about this it would be much appreciated. - Ed Murray, 2012-10-12

Yes, WindowBuilder parses and shows only one top-level UI. - Konstantin Scheglov, 2012-10-12

Right, I don't think that it will display inner or anonymous classes, non-public classes, or dynamically added content in the design view. Take a look at the support for building composite widgets and using Factories for generating your widgets. That way, when you pull them out, you can put them somewhere that will allow you to leverage some of the powerful features of WindowBuilder, like adding them to your own custom Palette sections and exposing only the fields or objects that you want to allow changing. - Michael Prentice, 2012-10-16
http://www.eclipse.org/forums/feed.php?mode=m&th=399874&basic=1
Shave!

Shave, compared to other truncation plugins: ~1.5kb unminified.

npm: npm install shave --save
bower: bower install shave --save
yarn: yarn add shave

Add dist/shave.js to your html, or import it as a module: import shave from 'shave';

Shave also provides options, but only to overwrite what it uses. You can set a custom class name instead of the default .js-shave, a custom character instead of the standard ellipsis, or both, with or without spaces. You can also use shave as a jQuery or Zepto plugin; as of Shave >= v2, use dist/jquery.shave.js for jQuery/Zepto. If you're using a non-spaced language, you can support shave by setting the spaces option to false.

Codepen example with plain javascript. Codepen example with jQuery. Codepen example with a non-spaced language.

text-overflow: ellipsis is the way to go when truncating text to a single line. Shave does something very similar but for multiple lines, implementing a binary search to truncate text in the most optimal way possible. Shave is meant to truncate text within a selected html element: it will overwrite the html within that element with just the text it contains. There are also some super basic examples of shave with window resize and click events. 🙌 Shave works in all modern browsers and was tested in some not so modern browsers (like Internet Explorer 8) - it works there too. 🍻 Created and maintained by Jeff Wainwright with Dollar Shave Club Engineering.
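The inline snippets were stripped from this page; below is a sketch of typical calls matching the options described above (selector names and option values are illustrative, and this runs only in a browser with a DOM):

```javascript
import shave from 'shave';

// Basic setup: truncate matched elements to a 100px max height
shave('.headline', 100);

// Custom class name instead of the default .js-shave
shave('.headline', 100, { classname: 'my-shave' });

// Custom character instead of the standard ellipsis
shave('.headline', 100, { character: '✁' });

// Both custom class name and character
shave('.headline', 100, { classname: 'my-shave', character: '✁' });

// Non-spaced languages: disable space-based word breaking
shave('.headline', 100, { spaces: false });

// jQuery/Zepto plugin form (dist/jquery.shave.js, shave >= v2)
$('.headline').shave(100, { character: '…' });
```

Re-running shave inside a debounced window resize handler keeps the truncation height correct as the layout changes.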
https://www.npmjs.com/package/shave
Project Euler 172: Investigating numbers with few repeated digits

How many 18-digit numbers n (without leading zeros) are there such that no digit occurs more than three times in n?

Project Euler 172 Solution. Runs < 2 seconds in Python.

    from Euler import binomial

    digits, base, max_r = 18, 10, 3

    def nd(d, b):
        if b > 1:
            return sum(nd(d-r, b-1)*binomial(d, r) for r in xrange(min(d+1, max_r+1)))
        return d <= max_r

    print "There are", nd(digits, base) * (base-1)/base,
    print digits, "digit numbers such that no digit occurs more than", max_r,
    print "times in base", base

Answer: 227485267000992000
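The solution above is Python 2 and leans on a local Euler module for binomial; here is a self-contained Python 3 rendering of the same recursion, using the standard library's math.comb in place of binomial:

```python
from functools import lru_cache
from math import comb

MAX_R = 3  # no digit may occur more than three times

@lru_cache(maxsize=None)
def nd(d, b):
    """Count length-d digit strings over b symbols, each symbol used <= MAX_R times."""
    if b > 1:
        # choose r positions for the current symbol, recurse on the remaining positions
        return sum(nd(d - r, b - 1) * comb(d, r) for r in range(min(d, MAX_R) + 1))
    return 1 if d <= MAX_R else 0

def count_numbers(digits=18, base=10):
    # By symmetry among digits, (base-1)/base of the valid strings avoid a leading zero.
    return nd(digits, base) * (base - 1) // base

print(count_numbers())  # 227485267000992000
```

The recursion counts all valid digit strings of length 18, then the symmetry argument removes those with a leading zero; memoization keeps the search instant.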
https://blog.dreamshire.com/project-euler-172-solution/
This document is a FAQ list for the newsgroup alt.comp.lang.learn.c-c++. It provides readers with a framework and a set of guidelines for posting to the newsgroup, in addition to answering a number of questions newcomers tend to ask. This FAQ is meant to be read in its entirety. Planned distribution: fortnightly to alt.comp.lang.learn.c-c++, and monthly to comp.lang.c, comp.lang.c++, comp.answers, alt.answers and news.answers. This is the HTML version of the FAQ. It is updated with each alteration in the plain text version, the latest release of which can always be obtained via anonymous FTP or via HTTP.

The following other FAQs may be considered essential reading: the C FAQ and the C++ FAQ.

This document was originally compiled and maintained by Sunil Rao, and revised & updated in 2001 by Rich Churcher - see Appendix B for a list of contributors. Comments, suggestions, corrections, constructive criticism and requests for clarification will be gratefully received. The alt.comp.lang.learn.c-c++ regulars would like to thank Sunil for his hard work developing this FAQ, and his generosity in donating it for future use by the newsgroup. Current maintainer: Rich Churcher. Last update: 29 September 2001

alt.comp.lang.learn.c-c++ is a self-moderated newsgroup for the discussion of issues that concern novice to intermediate C and C++ programmers. We ask and answer questions about those languages as defined by their respective standard documents. Please note: in speaking of C and C++ throughout this document, we are referring to the standard versions of each language. We are interested in the core languages, not in their respective compilers, implementations, or add-ons. See 2.5. Exactly what questions are considered to be about the standard C and C++ languages may be difficult for a newcomer to understand. We recommend you read the remainder of this FAQ, which should help you understand what we would consider "off-topic". That probably looks scary.
Basically, anybody can post to the group. Provided the questions you post are on-topic and the answers you provide are accurate, you should not have a problem. If your question is off-topic, you will typically be redirected to a newsgroup that is more appropriate (see 8.1); if your answers are inaccurate, you risk being corrected politely but firmly. Back to Contents This newsgroup is intended for discussion related to the practice and process of learning C and C++. The other two groups are primarily intended for general discussion of the features of those languages. Naturally, some overlap does occur. This group does tend to be slightly more informal, though. Most regulars on this group show great patience with many common beginners' questions and will willingly expound on many topics of interest or particular difficulty, referring to appropriate reference material, either in printed or in electronic form, as necessary. Back to Contents Please take the following steps before posting your question to alt.comp.lang.learn.c-c++: If after doing all these things you STILL can't find the answer, post by all means. Note that you might not be the first ever learner to have run into your problems. There is a good chance that your question has been answered before. See also: Google Advanced Search: C FAQ: C++ FAQ: Back to Contents Please indicate whether your question is about C or C++, as the intended language is not always clear from context. Before posting code here, try and make sure that it at least compiles correctly, even if it does not quite behave the way you intended it to. If you cannot achieve this, include all error messages and mark the lines that they refer to. Essentially, POST THE SMALLEST COMPLETE PROGRAM THAT MANIFESTS THE PROBLEM. This makes it easier for the reader to answer your question. You might find that doing this enables you to answer your question yourself!
It usually helps if you set the warning levels to the highest possible for your compiler - let the compiler pick out errors and warn you of any potential problems. Do learn how to use the debugger that came with your compiler. Not only will you learn how to solve this problem, but also many others that you'll encounter as you begin to write more ambitious programs. Back to Contents Please observe basic Netiquette guidelines. If you're not sure of what these are, subscribe to news.announce.newusers, and read *ALL* of the posts there. A good reference is available online. To summarise these points very briefly: Please do not make any MIME or UUENCODED posts (this includes HTML). Many newsreaders cannot handle such posts correctly. You will only make it impossible for many to read your posts. In addition, please ensure that your news software is set to post lines of a maximum length of around 60 to 70 characters (and even 70 is getting a bit long). This may seem pedantic, but when source code is being posted it becomes harder to read with each reply if the original lines were long. Make sure that your subject line contains an accurate description of your problem or the topic of your post. Including one of the following tags in the subject line will assist regular readers in answering your question: [C], [C++] or [C,C++]. Please do not e-mail your questions directly to regulars; you have no insurance against any potential mistakes. Remember that no-one is infallible. And please try not to flame. This is a learners' group. Not everyone who posts here is aware of all the issues involved. A grumpy attitude only makes things difficult for everyone concerned. Back to Contents Any question relating to any aspect of STANDARD C or C++ that you're having trouble understanding is on topic here. By C, what is meant is the standard language and its standard library as defined by ISO/IEC 9899:1999.
By C++, what is meant is the standard language and its standard library as defined by ISO/IEC 14882:1998. Although these published standards form the basis for our discussions, we will sometimes address issues related to previous versions of each language. Any questions relating to specific compilers, third-party or non-standard libraries, compiler extensions etc are unwelcome here, and will probably be answered with a redirection to a more appropriate newsgroup (see 8.1-5). Back to Contents Sometimes it's tempting to leap right in and start trying to solve problems you see posted to alt.comp.lang.learn.c-c++. Sometimes the answer seems obvious. However, please wait awhile before you do this, especially if you are just beginning to learn the language in question. We recommend that you spend a good deal of time reading the group silently prior to contributing an answer - say, one or two months. A good rule of thumb is to write a few responses but keep them saved on disk. Then, as the responses from newsgroup regulars trickle in, compare your answer with theirs. You'll learn a lot that way, and save yourself some embarrassment! When you feel you're ready to contribute to answering questions in this newsgroup, please remember your target audience - students, often beginners. With that in mind, please take care to be clear and accurate. Please check your code carefully and compile it before posting. And please don't send answers to private e-mail. It's best if your work is checked - even experienced programmers make mistakes. Back to Contents Please do not ask for replies by e-mail. If you haven't got the time or patience to read the newsgroup, that's tough. The answers you receive might benefit other readers of the newsgroup as well, and you yourself might learn more from the discussions your question might generate.
In addition, alt.comp.lang.learn.c-c++ operates a kind of informal peer review whereby any answers posted publicly to the group are checked by seasoned programmers for their accuracy and clarity. Obviously, we can't check an answer sent to your private e-mail address. Back to Contents First of all, C and C++ are different languages. C was created by Dennis Ritchie as an efficient language for systems programming. Bjarne Stroustrup later extended C by adding features to support object-oriented programming. C++ can be considered to be a superset of C, but there are real differences between them. It can usually (though not always) be assumed that anybody who talks about "C/C++" as one language is no expert - this extends to book authors too. It is normally unclear whether somebody is referring to "C OR C++" or "C AND C++" when using this expression, so it is probably best avoided. Back to Contents They are indeed similar to a great extent. Incompatibilities do exist, though, and many idiomatic constructs used in C are frowned upon by C++ experts, and vice versa. C++ programmers generally consider C++ code that does not exploit those features of C++ that make it possible to write better programs - programs that are more readable and easier to write and maintain - to be in poor style. The differences between the two languages are significant enough to ensure that one has to be clear about the language being used. However, it must not be forgotten that C++ is largely a superset of C, and that it is possible (though perhaps not desirable) to write code that works correctly in both languages. A lot of people believe incorrectly that object-oriented programs cannot be written in C; this is not true. What is true is that C++ provides features that make it easier to write in a style that is object-oriented; in other words, C++ supports programming in an object-oriented style.
However, don't make the mistake of thinking that object-oriented programming is the only way to program in C++. C++ supports generic programming with its template features (using the same code for different types); object-based programming; procedural programming (as often found in C programs); and other programming paradigms. See also: The C++ Programming Language, Appendix B [ Discusses incompatibilities between C and C++ ] Back to Contents Standard C++ as a New Language, a paper by Stroustrup, examines this much-debated issue in great depth. The paper is aimed more at educators than at beginners. While some recommend learning C before C++, others as thoughtful as Stroustrup and Cline disagree. If you believe you will ever need to program in C, then it might make sense to learn C first. Back to Contents The answer to this depends on your own inclinations. C is a smaller, less complex language than C++, but that does not necessarily make it easier to master. There are C++ features which accomplish tasks that would have to be coded "by hand" in C. The more extensive C++ standard library provides features which can lead students to writing useful programs fairly quickly. However, C++ syntax can be daunting and its behaviour mysterious at times. Some find C to be more elegant than C++, others think it to be too "unsafe". Some C++ programmers feel that it has features that make it easier to write good, robust, readable and maintainable code in it than in C. But there are a vast number of programs written in idiomatic C, and some C++-only programmers find them difficult to read easily, providing an argument that you might like to become fluent in both languages, and this might influence the order in which you learn them. Back to Contents Newcomers to the newsgroup often ask this without realising that Visual C++ isn't a language at all. Visual C++ is the Microsoft C++ implementation for Windows, one vendor's C++ distribution with extensions that are intended to aid programming in the MS Windows environment.
If the word "implementation" is confusing, just think "compiler". The words are sometimes used interchangeably, although strictly speaking a compiler is only one part of an implementation. One of the tasks new C and C++ programmers have to accomplish is learning the difference between the standard languages and their various implementations. This task is often made more difficult due to bugs or incompatibilities in the implementation, as well as the many extensions and additional facilities that vendors make available. On alt.comp.lang.learn.c-c++, we try to point out the differences wherever we can. Newcomers to the group often complain that this is unnecessarily pedantic or harsh, but in actual fact we're doing you a favour - if you understand the difference, you will find the process of moving between operating systems in your programming career is far easier! You will even be able to move more smoothly between compilers on the same platform. C and C++ are languages. They exist as an idea, a set of rules, a grammar which is documented in their respective standard documents. In order to do anything useful, they must be IMPLEMENTED. Get used to this word, you'll be hearing it a lot more. Simply put, implementation means to take an idea or a set of instructions, and make it concrete. Make it happen. Languages: C, C++. Implementations: MS Visual C++, Borland C++ Builder, GCC. See the difference? This newsgroup deals with the languages, not the implementations. See also: Back to Contents Certain questions come up often. Our classic example is one which we are confronted with many times on the newsgroup: "How do I clear the screen?" (see 4.2) Most beginners think that this seemingly simple task should be easily accomplished. However, it's important to remember that C and C++ compilers exist on a huge number of different hardware architectures and operating systems. What works on Windows won't work on an iMac.
The hardware call which wipes the screen of a Point Of Sale terminal is different to that which removes objects from air traffic control screens. Because there is no "one size fits all" solution for clearing the screen, a good programmer might try to restrict all screen clearing code to one part of her program, then call that part repeatedly from elsewhere. When she came to port the program to another operating system, only that small part would need to be replaced. Back to Contents You cannot obtain copies of the standards for free. This is because the standards organisations earn a large part of their revenue from selling printed copies. The C standard (ISO/IEC 9899:1999) can be purchased online directly from the American National Standards Institute (ANSI) Electronic Standards store. After registering yourself for free, you can download the document in Adobe PDF format on payment of $20.00 (US) by credit card. The standard is also available from the International Standards Organisation (ISO) website, but it will cost you more. The C++ standard (ISO/IEC 14882:1998) can be downloaded from the ANSI store for $18 (US), or you can order through ISO (again, at a higher cost). The standard documents can be daunting at first sight. They are intended to be as formal and precise as possible. They are NOT suitable for learning from, but are to be used as the ultimate authority with regard to any language issue. Your country might have a standards organization from which the language standards can be obtained. See the comp.std.c++ FAQ for a brief list of some of those organizations.
See also: The comp.std.c++ FAQ Back to Contents (First part of answer adapted from a March 1998 comp.lang.c post by Kaz Kylheku on Why Has C Proved To Be Such A Successful Language) C has always been a language that never attempts to tie a programmer down - it allows for easy implementation, it comes with a genuinely useful standard library that can itself be implemented in C, and it is both efficient and portable. C has always appealed to systems programmers who like the terse, concise manner in which powerful expressions can be coded. C was widely distributed with an Operating System (Unix) that was actually largely written in C itself. Also, C allowed programmers to (while sacrificing portability) have direct access to many machine-level features that would otherwise require the use of Assembly Language. As Dennis Ritchie writes in his paper, The Development of the C Language, C is quirky, flawed, and an enormous success. While accidents of history surely helped, it evidently satisfied a need for a system implementation language efficient enough to displace assembly language, yet sufficiently abstract and fluent to describe algorithms and interactions in a wide variety of environments. A somewhat different argument is made in section 2.1 of Worse is Better, by Richard Gabriel. C++ has its basis in C - extending it by providing features meant to encourage and support the development of large programs. One of its most appealing attributes is its multi-paradigmed nature - it supports a variety of different programming styles, especially object oriented programming, generic programming and procedural programming. In The C++ Programming Language, Bjarne Stroustrup writes: By supporting several programming paradigms, C++ supports productive programming at several levels of expertise. Each new style of programming adds another tool to your toolbox, but each is effective on its own and each adds to your effectiveness as a programmer.
C++ is organized so that you can learn its concepts in a roughly linear order and gain practical benefits along the way. This is important because it allows you to gain benefits roughly in proportion to the effort expended.[ Special edition, 1.2, p7 ] Back to Contents Get out of the habit of thinking that one language is "better" than another. Better for what? You will find that most alt.comp.lang.learn.c-c++ regulars advise picking the right language for the right job. It pays to become conversant in more than one language, and use different techniques to solve different problems. Many real-world programming tasks are accomplished using a variety of languages and techniques together. Besides, C++ is not solely an object-oriented language. It provides facilities for a number of different styles (see 2.8). Again, it's a matter of choosing the best method for the task at hand. Most programmers will find that to become truly effective they need to learn more than one language. Many professionals know and use both C and C++, although most express a preference for one or the other. Back to Contents Wrong. The C language is in widespread use throughout the world. In fact, it's quite likely that you have some device in your household programmed in C, whether you realise it or not. There is a massive amount of C legacy code which will need to be maintained for many years to come. Don't buy in to those arguments that claim C is dead or dying. C is one of the most successful languages ever designed. It will be around for some time to come for two simple reasons: first, there exist compilers for virtually every architecture you care to name; second, for many tasks no-one has come up with a viable alternative. Furthermore, the C standard was revised in 1999. Its popularity shows little sign of waning. Back to Contents Yes. Many games are written using these languages. One of the more well-known examples is the Quake series, written in C. 
However, writing games often involves using platform-specific extensions, and writing good games takes good programmers. For many beginners it is much better to first learn standard C or C++, learn programming, and only then move on to writing that sequel for Carmageddon. See also: 4.6 How do I create and display graphics? Back to Contents You name it. Although perhaps best suited for systems-level programming, both C and C++ are also used in high-level applications development. Common areas of use include: Back to Contents 8.4.. Back to Contents Many students post requests for us to do their homework for them - attempts to disguise this usually do not work particularly well. You will typically receive no help unless you can demonstrate that you have made an honest attempt to solve the problem yourself, by posting some code you have trouble with, for instance. Questions about homework assignments are generally welcome, as long as you show some effort and have bothered to think about the problem prior to posting There is little point in a regular supplying you with code to fulfil an assignment if you are going to pass the course and come out and work on real-world projects without knowing how to even tackle a basic homework problem. See also: Back to Contents First of all, that uppercase "LEARN" is a cliche, so don't think you're being clever. We see it approximately once a fortnight. Please see 2.5. Note the emphasis on LANGUAGE vs. IMPLEMENTATION. Implementation-related questions (such as those regarding the Microsoft or Borland compilers, or the MS Windows operating system) are better answered in newsgroups designed specifically for such tasks - see 4.2-7 for examples of such questions, and 8.1-5 for the newsgroups in which they belong. Newcomers to alt.comp.lang.learn.c-c++ frequently want information on these topics or help with other system-specific issues. We often receive requests for help with compilers, debuggers, linkers, libraries, and so on. 
However, most of the regulars believe that learning the core language without all those other bits and pieces the implementation provides (graphics, sound, lights, magic) is the best way to begin. We foster that approach by discussing ONLY the C and C++ languages as defined by their respective standards. See also: Back to Contents Because the return type of the main() function must be int in both C and C++. Anything else is undefined. Bottom line - don't try to start a thread about this in alt.comp.lang.learn.c-c++ as it has already been discussed many, many times and generates more flamage than any other topic. See also: Back to Contents What's the difference between <iostream> and <iostream.h>? <iostream> is the correct standard C++ header name. <iostream.h> does not exist according to the ISO standard, but many compilers still provide it because it remains common in legacy code. It should not be used where an implementation provides the header <iostream>. Standard C++ headers do not have the `.h' suffix. They look like this:
#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
These headers can be implemented in any fashion, as long as they meet the standard requirements. Theoretically, they could be a block of memory, a strip of magnetic tape, or stored on a Babbage Difference Engine for all we know. Of course, the actual representation of the header is likely to be an ordinary text file under most implementations, but it does not need to be named "iostream.h". For this reason, and to avoid name clashes with C headers, the `.h' has been dropped. See also: 3.6 What's the difference between <string> and <string.h>? Back to Contents What's the difference between <string> and <string.h>? <string> is the C++ header which defines the std::string class. <string.h> is a C header which defines such functions as strcpy(), strlen(), and strcmp(). Although standard C++ does provide a <string.h> header, the use of it is deprecated in favour of <cstring>. Confused?
Well, when C++ was standardised it was decided to retain the C standard library. If the `.h' suffix was dropped from the C headers, this would cause name clashes with C++ headers. To avoid this, the letter `c' was added to the beginning of each C header name. Thus: <string.h> became <cstring>, <stdio.h> became <cstdio>, <stdlib.h> became <cstdlib>, and so on. What does this mean for your own programs? Well, you should always prefer the C header names without the suffix. These provide functions defined in namespace std, as opposed to the old headers whose contents are defined in global space. In addition, <string.h> and its kin are deprecated by the C++ standard, which means that they might disappear in a later version. See also: 3.5 What's the difference between <iostream> and <iostream.h>? Back to Contents This can sometimes occur when you are developing programs under Microsoft Windows using an IDE. A command prompt window opens and displays the output, and control is passed back immediately to the IDE. To get around this, you can look through the various menus to find a "View Output Screen" option. Alternatively, you could open a command prompt window, change directory to the one containing your executable, and run it from there. Back to Contents Short answer - this is not possible using standard C or C++. This is the issue newcomers to our newsgroup have the most trouble with. We're sorry, really we are, but there is just NO WAY of clearing the screen in standard C or C++. Truly. Why? Well, to begin with, define "clear the screen". Do you mean, clear one window of a windowed operating system? Clear every pixel currently displayed on a monitor? Remove all text from a command line interface? Scroll dots off an LED display? There are so many different combinations of hardware and software which C and C++ must act on that it is not possible to provide one solution that fits them all.
For this reason, you'll need to ask the question in a newsgroup which deals with your particular operating system, hardware, or compiler. Because there is no standard way to clear the screen, we can't provide you with an appropriate answer in alt.comp.lang.learn.c-c++. Please don't ask us how to clear the screen. The question has been asked so many times it has passed through the realms of cliche and out the other side into mythology. See also: Back to Contents Because the means of doing this differs between operating systems, you'll need to ask this question in a forum for your C or C++ implementation or operating system. We simply can't give you an answer that will work on all systems - directories are not part of the C and C++ standards. Back to Contents C and C++ provide stream-based I/O. In simple terms, this means that characters accumulate until they are read by a program or are destroyed - usually, this process involves a buffer of some kind so that data is made available one line at a time, not character by character. In addition, there can be multiple layers of buffering, some of which are part of the underlying operating system and are therefore outside the direct control of your program. The program does not have to know about where the characters come from. They could be in a file, typed by someone sitting at a keyboard, or sketched on a touchpad for all we care. This approach has the effect of distancing C and C++ programs from the hardware they run on. Now, in practice we realise that there are ways to read an individual key press or other single event such as a switch being thrown or a mouse movement - BUT those events are detected using code specific to the hardware in question, and as such are not a part of standard C or C++. Because there are so many different ways of accomplishing such things, the best place to ask about them is in a newsgroup or mailing list which discusses your particular hardware or operating system. 
See also: Back to Contents Because the means of doing this differs between operating systems, you'll need to ask this question in a forum for your C or C++ implementation or operating system. We simply can't give you an answer that will work on all systems - serial ports are not part of the C and C++ standards. Back to Contents We lose count of the number of beginners who want to progress immediately from "Hello, world!" to writing the latest greatest turn-based strategy game, with all the trimmings. Usually, our response to such yearnings is to counsel patience and a systematic approach to learning C or C++. First learn the standard language, THEN go nuts with graphics, sound, animation, windows, communications, or whatever takes your fancy. C and C++ are not simple languages. They take time and perseverance to learn. If you make the effort to learn the standard language first, sans graphics or any of the other tempting optional extras, you will find life much easier later on when creating more extensive programs. You should be aware that the process of learning these languages to the point where you are able to write such complex programs may take a year or more, and a lot of hard work will be necessary to acquire the diverse skill set needed. Having said that, let's be quite clear - we don't discuss graphics, sound, animation, or any of that stuff on alt.comp.lang.learn.c-c++ as they are not part of the C or C++ standards. You'll need to find a newsgroup or mailing list that deals with your particular compiler or operating system and ask your questions there. Back to Contents In standard C or C++, you probably can't. You'll need to use whatever additional libraries that came with your compiler to accomplish such tasks. There is one caveat - under certain operating systems, you may be able to get away with opening a file and writing to it, which in turn writes to the printer. 
However, this is not something we can guarantee for every OS, therefore the question is off-topic in this newsgroup. Back to Contents Several free implementations are available for MS Windows, among them: GCC distributed with native MS Windows32 libraries; a port of GCC and relevant tools providing a UNIX-like API on top of the Win32 API; and an IDE distributed with an MS Windows port of GCC and the Mingw32 runtime library. A free version of Borland's C++ compiler is now available for download. lcc-win32 is a free C compiler available for 32-bit Windows, based on the retargettable lcc system. The Pacific C compiler is available for free for personal use. Back to Contents If you're programming for the Apple Macintosh, you can obtain the Macintosh Programmers' Workshop for free. Back to Contents GCC, the Gnu Compiler Collection, includes a free C and C++ compiler from the Free Software Foundation available for most Unix-based systems. It has been ported to many other systems (including Microsoft operating systems). Back to Contents For the Amiga, BeOS, and pOS, look at the GG port of GCC. A popular GCC port for MS-DOS also exists, and you can test-compile code using a web interface for Comeau C++. Back to Contents Often C and C++ implementations can be had for surprisingly little, especially if you happen to belong to a major university or college. Academic discount can get you very reasonable deals on "learning editions" of certain major compilers, as well as full versions. We suggest you ask your computer science department or college bookstore for recommendations and specials. It's not really our place to endorse one implementation over another. The regulars use a wide variety of tools in their daily work. You'll need to do a bit of research to find the one that's right for you. Back to Contents Opinions vary widely. Many readers recommend the book(s) they learned from, regardless of whether or not they might actually be suitable for the student.
The fact that many commonly recommended books are either full of errors or hopelessly out of date (or even both!) makes matters worse. Fluency of writing style and easy to understand explanations can hide technical inaccuracies, especially to a beginner. Beware of books that claim to teach you both C and C++ - they might end up teaching you a horrible hybrid instead. It is also probably better to stick to books that conform to the C and C++ standards, at least while beginning. It's also valuable to get a recent text, given the changing nature of both languages, so check the copyright date. It pays to keep more than one good book handy; many books known for their technical accuracy can seem dense and unreadable in places, and you might at times need to back up a primer with a reference. Most alt.comp.lang.learn.c-c++ regulars will recommend that you learn from more than one book, and any serious programmer will quickly accumulate a library of texts. The Association of C and C++ Users (ACCU) maintains a collection of book reviews taken from its journals. Many of the reviews are fair and excellent in their criticism, though there are a few minor inconsistencies and a number of truly awful books have escaped with favourable reviews. It's a useful starting point, though. C and C++ experts recommend against using ANY book written by a certain Herbert Schildt. To see why, read 6.4. The "Dummies"/"Complete Idiots" series of books are not particularly well-regarded either. Back to Contents If you wish to learn C, the classic text - the "Bible" - is The C Programming Language, 2nd edition, by Brian Kernighan and Dennis Ritchie. This hallowed text describes and explains ANSI C, but does not cover the C99 standard (ISO/IEC 9899:1999); a FAQ explaining some of the changes contained in C99 is available online. The book is renowned for its brevity, clarity, elegance and completeness; but these very factors can make it heavy going for the beginner.
K. N. King's C Programming: A Modern Approach is another text frequently recommended on comp.lang.c. This book is a good, thorough introduction to C that is a lot easier to work with from a beginner's perspective. Back to Contents The canonical text for C++ is The C++ Programming Language, 3rd edition, by Bjarne Stroustrup. Experienced C++ programmers love it; however, many beginners seem to find it very hard going indeed. Like K&R2, it assumes basic familiarity with programming concepts and is not really intended for the absolute beginner. It does not assume any previous knowledge of C. Readers are advised to obtain Stroustrup's hardcover "Special Edition", as it contains many corrections and new appendices which will be valuable later on. A good starting point for C++ is Stan Lippman and Josee Lajoie's C++ Primer, a solid text with a strong focus on text processing and standard C++ programming. Both authors were active in development of the C++ standard. The book is eminently readable, and would be a good beginning for those with minimal programming experience. Published in 1998, it remains a valuable resource. More recently, Stan Lippman has written a shorter introductory text aimed at experienced programmers. Andrew Koenig and Barbara Moo's Accelerated C++ is a valuable addition to the ranks of modern C++ primers. It takes a refreshing approach to the use of the standard library, introducing useful programming techniques early and providing many real-world examples. Absolute beginners to programming may find the pace intimidating, however. Bruce Eckel's Thinking in C++ is often recommended in posts to alt.comp.lang.learn.c-c++, and the online version is available for download. For a C++ standard library reference, Nicolai Josuttis' The C++ Standard Library is one of the best. Matthew Austern's Generic Programming and the STL is also well thought of, although not specifically a standard library reference. The list could go on.
As you progress in your studies, you will want to add Scott Meyers and Herb Sutter (amongst others) to your arsenal of authors. The C++ FAQ also contains some recommendations for C++ books. See also: Back to Contents A good answer to this question could fill a book by itself. While no book is perfect, Schildt's books, in the opinion of many gurus, seem to positively aim to mislead learners and encourage bad habits. Schildt's beautifully clear writing style only makes things worse by causing many "satisfied" learners to recommend his books to other students. Do take a look at the following scathing articles before deciding to buy a Schildt text. The above reviews are admittedly based on two of Schildt's older books. However, the language they describe has not changed in the intervening period, and several books written at around the same time remain highly regarded. Back to Contents We often get questions from students who would prefer to learn from a web page instead of buying a book. Our standard warning - while there is some valuable material out there on the web, there is also an enormous amount of nonsense written about both languages. The trouble is, as a beginner you may not have the ability to tell the difference! For this reason, we usually recommend that you obtain at least one good book to supplement your learning. One of the best online C language tutorials is Steve Summit's site, where he makes available class notes for the C courses he teaches. There are references to other C tutorials in his C FAQ as well. Vinit Carpenter maintains a list of resources for learning C and C++. Do note, however, that a fair number of the tutorials placed online contain mistakes and/or are out of date. Ted Jensen's tutorial on pointers and arrays in C is also available online. Torfs has written an excellent, complete tutorial, meant to complement a good introduction to C. It's not primarily intended for the complete beginner to the language, though.
There are few, if any, credible tutorials for standard C++ online. Your best bet is a reliable textbook (see 6.3). Back to Contents There is an excellent resource for C online, containing a number of extremely useful links and pointers. For beginners to C and C++, Jack Klein has put up an excellent page with tips, suggestions and expanded answers to a number of commonly asked beginners' questions. The Comeau Computing web site features several highly informative and useful resource pages. Fischer maintains a C++ FAQ. Steve Summit has archived some of his longer and more informative Usenet posts. Posters often request information on file formats; a good resource for this can be found online. A large C++ link farm is also maintained online, and questions and answers from Herb Sutter's popular Guru Of The Week series are archived there. Bjarne Stroustrup also maintains a homepage. Back to Contents Because we don't answer questions like the one you asked. You wanted to know about Windows, right? Or comboboxes. Or serial ports. Or something else that has nothing whatsoever to do with the C or C++ standard languages. Please don't be offended when someone asks you to post in another group. All they are saying is that there is another resource that would be more appropriate for your question. If you walk into a delicatessen and ask for roofing tiles, the staff will look at you askance and direct you to the nearest building supplies outlet. It's the same principle. Please read the FAQ of any group prior to posting your query there. Note: where you see an `*' at the end of a name in the lists that follow, it means "hierarchy". You will find more than one newsgroup beginning with that root name. For example, comp.os.ms-windows.programmer.* is a hierarchy containing 10 other groups or sub-hierarchies. See also: Back to Contents The following newsgroups include the "Big Six" of standard C and C++. Some of the best, smartest programmers in the world read them.
You will find that the .moderated versions tend to have less traffic, but are free of spam and off-topic posts. The so-called "signal to noise ratio" is higher, making them excellent resources. Try searching the archives of these before asking a question on alt.comp.lang.learn.c-c++. Please note that these forums are not always the most appropriate choice for asking absolute beginner questions, like "Why won't hello.c compile?" The comp.std.* groups are moderated, and tend to discuss language features on a rather esoteric level.

comp.lang.c
comp.lang.c.moderated
comp.std.c

Discussions on C in other languages include:
de.comp.lang.c
fj.comp.lang.c
fr.comp.lang.c
han.comp.lang.c

comp.lang.c++
comp.lang.c++.moderated
comp.std.c++

Discussions on C++ in other languages include:
de.comp.lang.c++
it.comp.lang.c++
es.comp.lang.c++
han.comp.lang.c++

See also:
8.3 General programming groups
8.4 Compilers and libraries
8.5 Operating systems

Back to Contents These newsgroups address programming issues without a particular language focus. They are more appropriate for general programming questions.

Algorithms: comp.programming
Games: alt.games.programming, comp.games.development.programming.algorithms
Graphics: comp.graphics.algorithms
Object oriented programming: comp.object

See also: Back to Contents These newsgroups discuss a particular compiler or development environment - an implementation of C or C++ (or both). They are the place to go when you have a question that does not relate to the language itself, but to facilities provided by the implementation. In addition, listed below are groups which discuss common libraries in use for various environments. Both Microsoft and Borland operate their own public servers to facilitate access to their respective newsgroup hierarchies.
Microsoft: msnews.microsoft.com
Borland: forums.inprise.com

Borland C++: borland.public.cpp, borland.public.cpp.language
Borland C++Builder: borland.public.cppbuilder, borland.public.cppbuilder.language
CodeWarrior: comp.sys.mac.programmer.codewarrior
DJGPP: comp.os.msdos.djgpp
GCC: gnu.gcc, gnu.gcc.help, gnu.g++.help
LCC: comp.compilers.lcc
Visual C++: microsoft.public.vc.ide_general, microsoft.public.vc.language
MFC: microsoft.public.vc.mfc, comp.os.ms-windows.programmer.tools.mfc
OWL: borland.public.cpp.owl, comp.os.ms-windows.programmer.tools.owl
VCL: borland.public.cppbuilder.vcl
LEDA: comp.lang.c++.leda

Back to Contents
Amiga: comp.sys.amiga.programmer
DOS: comp.os.msdos.programmer
GNU/Linux: comp.os.linux.development.*
Macintosh: comp.sys.mac.programmer.*
MS Windows: comp.os.ms-windows.programmer.*, microsoft.public.win16.programmer.*, microsoft.public.win32.programmer.*
OS/2: comp.os.os2.programmer.misc
UNIX: comp.unix.programmer

Back to Contents This is because other better, more comprehensive resources exist for this purpose - in particular, the comp.lang.c and comp.lang.c++ FAQs: The comp.lang.c FAQ and the C++ FAQ Lite. Back to Contents A list of changes covering the revision from "old" FAQ to "new" can be found at, or via anonymous FTP at. Back to Contents The following people have contributed to this FAQ with helpful comments, suggestions, advice, corrections and constructive criticism, and (in the case of some) with permission to quote from their papers/posts: Hecking, Brody Hurst, Jack Klein, Kaz Kylheku, Martijn Lievaart, Daniel Longest, Bernd Luevelsmeyer, Michael McGoldrick, Chris Newton, Dennis Ritchie, Wieland St, Bjarne Stroustrup, Dennis Swanson. Back to Contents Last modified: Sat Sep 29 11:57:24 EST 2001
http://www.comeaucomputing.com/learn/faq/
CC-MAIN-2015-14
refinedweb
7,183
63.8
XHTML - Version 1.1

The W3C has helped move the internet content-development community from the days of malformed, non-standard mark-up into the well-formed, valid world of XML. In XHTML 1.0, this move was moderated by the goal of providing easy migration of existing HTML 4 (or earlier) based content to XHTML and XML. The W3C has removed support for deprecated elements and attributes from the XHTML family. These elements and attributes had largely presentation-oriented functionality that is better handled via style sheets or client-specific default behavior. Now the W3C's HTML Working Group has defined an initial document type, XHTML 1.1, based solely upon modules. This document type is designed to be portable to a broad collection of client devices, and applicable to the majority of internet content.

Document Conformance

XHTML 1.1 provides a definition of strictly conforming XHTML documents, which MUST meet all the following criteria −

The document MUST conform to the constraints expressed in the XHTML 1.1 Document Type Definition.

The root element of the document MUST be <html>.

The root element of the document MUST designate the XHTML namespace using the xmlns attribute. The root element MAY also contain a schema location attribute as defined in the XML Schema.

There MUST be a DOCTYPE declaration in the document prior to the root element. If it is present, the public identifier included in the DOCTYPE declaration MUST refer to the DTD found in the XHTML 1.1 Document Type Definition.

Here is an example of an XHTML 1.1 document −

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
   "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:
   <head>
      <title>This is the document title</title>
   </head>
   <body>
      <p>Moved to <a href="http://example.org/">example.org</a>.</p>
   </body>
</html>

Note − In this example, the XML declaration is included. An XML declaration like the one above is not required in all XML documents, but XHTML document authors are encouraged to use one in all their documents.

Modules

The XHTML 1.1 document type is made up of the following XHTML modules.
Structure Module − The Structure Module defines the major structural elements for XHTML. These elements effectively act as the basis for the content model of many XHTML family document types. The elements and attributes included in this module are − body, head, html, and title.

Text Module − This module defines all of the basic text container elements, attributes, and their content model − abbr, acronym, address, blockquote, br, cite, code, dfn, div, em, h1, h2, h3, h4, h5, h6, kbd, p, pre, q, samp, span, strong, and var.

Hypertext Module − The Hypertext Module provides the element that is used to define hypertext links to other resources. This module supports element a.

List Module − As its name suggests, the List Module provides list-oriented elements. Specifically, the List Module supports the following elements and attributes − dl, dt, dd, ol, ul, and li.

Object Module − The Object Module provides elements for general-purpose object inclusion. Specifically, the Object Module supports − object and param.

Presentation Module − This module defines elements, attributes, and a minimal content model for simple presentation-related markup − b, big, hr, i, small, sub, sup, and tt.

Edit Module − This module defines elements and attributes for use in editing-related markup − del and ins.

Bidirectional Text Module − The Bi-directional Text module defines an element that can be used to declare the bi-directional rules for the element's content − bdo.

Forms Module − It provides all the form features found in HTML 4.0. Specifically, it supports − button, fieldset, form, input, label, legend, select, optgroup, option, and textarea.

Table Module − It supports the following elements, attributes, and content model − caption, col, colgroup, table, tbody, td, tfoot, th, thead, and tr.

Image Module − It provides basic image embedding and may be used in some implementations of client side image maps independently. It supports the element − img.
Client-side Image Map Module − It provides elements for client side image maps − area and map.

Server-side Image Map Module − It provides support for image-selection and transmission of selection coordinates. The Server-side Image Map Module supports − attribute ismap on img.

Intrinsic Events Module − It supports all the events discussed in XHTML Events.

Meta information Module − The Meta information Module defines an element that describes information within the declarative portion of a document. It includes element meta.

Scripting Module − It defines the elements used to contain information pertaining to executable scripts or the lack of support for executable scripts. Elements and attributes included in this module are − noscript and script.

Style Sheet Module − It defines an element to be used when declaring internal style sheets. The element and attribute defined by this module is − style.

Style Attribute Module (Deprecated) − It defines the style attribute.

Link Module − It defines an element that can be used to define links to external resources. It supports link element.

Base Module − It defines an element that can be used to define a base URI against which relative URIs in the document are resolved. The element and attribute included in this module is − base.

Ruby Annotation Module − XHTML also uses the Ruby Annotation module as defined in RUBY and supports − ruby, rbc, rtc, rb, rt, and rp.

Changes from XHTML 1.0 Strict

This section describes the differences between XHTML 1.1 and XHTML 1.0 Strict. XHTML 1.1 represents a departure from both HTML 4 and XHTML 1.0. The most significant change is the removal of features that were deprecated. The changes can be summarized as follows −

On every element, the lang attribute has been removed in favor of the xml:lang attribute.

On the <a> and <map> elements, the name attribute has been removed in favor of the id attribute.

The ruby collection of elements has been added.
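To illustrate the changes listed above, here is a small hypothetical fragment (the element content and id value are invented for the example) contrasting the deprecated XHTML 1.0 attributes with their XHTML 1.1 replacements −

```xml
<!-- XHTML 1.0 (deprecated forms, no longer valid in XHTML 1.1) -->
<a name="top" lang="en">Top of the page</a>

<!-- XHTML 1.1 replacements -->
<a id="top" xml:Top of the page</a>
```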
http://www.tutorialspoint.com/cgi-bin/printversion.cgi?tutorial=xhtml&file=xhtml_version-1.1.htm
CC-MAIN-2016-07
refinedweb
950
57.37
Automatically import namespaces

Whenever you use a type from a namespace that hasn't been added with a using statement, ReSharper will offer to add the corresponding statement at the top of the file you're in. This is indicated by a blue box shown above the type being used. To add the corresponding reference, simply press Alt+Enter.

The above assumes that the project you're in actually references the corresponding DLL. In case it does not, ReSharper can still help you add both a DLL reference and a using statement, provided the necessary DLL is referenced by some other project in your solution and the project you're in references that project. If that happens to be the case, you won't get a blue pop-up. Instead, the type being used will be highlighted in red, and ReSharper will offer you a quick-fix to add a reference to the corresponding assembly:

Selecting the top option will add a reference to System.Windows.Forms in your current project and will add a using System.Windows.Forms; statement at the top of the file.
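As a sketch of the scenario described above (the class and method names here are invented for illustration), the quick-fix applies to code like this:

```csharp
// Before the quick-fix: 'MessageBox' is unresolved because no using
// directive brings System.Windows.Forms into scope. Pressing Alt+Enter
// on it offers to add the directive (and the assembly reference, if needed).
public class Greeter
{
    public void SayHello()
    {
        MessageBox.Show("Hello");
    }
}

// After accepting the quick-fix, the file starts with:
// using System.Windows.Forms;
```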
https://www.jetbrains.com/help/resharper/2016.1/Automatically_import_namespaces.html
CC-MAIN-2016-50
refinedweb
186
60.35
> On Sept. 2, 2015, 3:07 a.m., Jiang Yan Xu wrote:
> > src/slave/containerizer/provisioners/backends/copy.cpp, lines 108-116
> > <>
> >
> > Wow, I guess I hadn't realized what you meant by "making sure the layers have the same basename" and I overlooked the fact that when multiple layers are applied every layer other than the 1st one is going to have the rootfs dir already created...
> >
> > I think it's too much of a restriction to impose the same basename for all rootfses and it's not the backend's position to know that they should all be the same.
> >
> > Given that there is unfortunately no standard way of doing this, I think I'll be OK with either of the following two approaches:
> >
> > 1) The following two commands can make sure the source overwrites the target even if it is a directory.
> >
> > GNU cp:
> > cp -aT source target // -T makes sure the contents are copied.
> >
> > OSX cp:
> > cp -a source/ target // The trailing slash makes sure the contents instead of the directory are copied.
> >
> > So we can do something like
> >
> > #ifdef __APPLE__
> > source += "/"
> > #else
> > options += "T"
> > #endif
> >
> > 2) Use your previous approach with "/*" expansion but with quotes around the paths.
> >
> > Neither is ideal but is no worse than the system commands we are already invoking.
> >
> > I am OK with either. What do you think?

hmm, I think I'll go for 1) option.

- Timothy
> > > Bugs: MESOS-2968 > > > > Repository: mesos > > > Description > ------- > > Add Copy backend for provisioners. > > > Diffs > ----- > > src/Makefile.am 7b4d9f65506e7fa8425966009401aae73cdb79a5 > src/slave/containerizer/provisioners/backend.cpp > 2f7c335f62fdeb27526ab9a38a07c097422ae92b > src/slave/containerizer/provisioners/backends/copy.hpp PRE-CREATION > src/slave/containerizer/provisioners/backends/copy.cpp PRE-CREATION > src/tests/containerizer/provisioner_backend_tests.cpp > d321850613223a2357ca1646a9d988d05171772c > > Diff: > > > Testing > ------- > > make check > > > Thanks, > > Timothy Chen > >
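For illustration, the layered-copy behavior discussed in the review can be sandbox-tested with a small script (GNU cp assumed, since -T is not available in BSD/macOS cp; the paths are invented for the demo):

```shell
# Demonstrate that copying the *contents* of each layer into an existing
# rootfs directory overwrites files rather than nesting a directory.
set -e
work=$(mktemp -d)
mkdir -p "$work/layer1/etc" "$work/layer2/etc" "$work/rootfs"
echo one > "$work/layer1/etc/version"
echo two > "$work/layer2/etc/version"
# -T treats the target as a normal file/dir, so the layer contents are
# merged into rootfs; a later layer overwrites files from earlier ones.
cp -aT "$work/layer1" "$work/rootfs"
cp -aT "$work/layer2" "$work/rootfs"
result=$(cat "$work/rootfs/etc/version")
echo "$result"   # two
```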
https://www.mail-archive.com/reviews@mesos.apache.org/msg08625.html
CC-MAIN-2018-05
refinedweb
341
59.5
I'm trying to figure out how to get the max and min of the values that are entered if I run this program. I want the user to enter a series of numbers until they are finished, typing -99 to terminate. I'm having quite a few issues. First, I can only get it to go through the while statement if I use -99 as my first number. Once I get through the while statement I don't know how to come up with the min or max of the values. Help please.

import javax.swing.JOptionPane;

public class maxandmin {

    public static void main(String[] args) {
        double number;
        String input;
        int cap = -99;

        input = JOptionPane.showInputDialog("Enter the first number");
        number = Integer.parseInt(input);

        while (number != cap) {
            JOptionPane.showInputDialog("Enter the next number");
            number = Integer.parseInt(input);
        }

        JOptionPane.showInputDialog("The smallest of the numbers entered was : "
                + "\n" + "The largest of the numbers entered was : ");
    }
}
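For what it's worth, the min/max tracking the question asks about can be sketched like this (using a plain array in place of the dialog input so the logic is easy to run anywhere; the class and method names are illustrative):

```java
public class MaxAndMin {

    // Returns {min, max} of the values entered before the -99 sentinel.
    static int[] minMax(int[] entries) {
        int min = Integer.MAX_VALUE;
        int max = Integer.MIN_VALUE;
        for (int number : entries) {
            if (number == -99) {
                break;                      // sentinel terminates input
            }
            if (number < min) min = number; // track running minimum
            if (number > max) max = number; // track running maximum
        }
        return new int[] { min, max };
    }

    public static void main(String[] args) {
        int[] result = minMax(new int[] { 4, -2, 7, 3, -99 });
        System.out.println("smallest: " + result[0] + ", largest: " + result[1]);
    }
}
```

With dialog input, the key point is to reassign the loop variable from a fresh showInputDialog call on every iteration, then update min/max before checking the sentinel again.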
https://www.daniweb.com/programming/software-development/threads/304636/max-min-value
CC-MAIN-2018-09
refinedweb
177
54.63
Alexander Myodov <maa_public at sinn.ru> wrote:
[snip Alexander Myodov complaining about how Python works]

> i = 0
> while i != 1:
>     i += 1
>     j = 5
> print j

Maybe you don't realize this, but C's while also 'leaks' internal variables...

int i = 0, j;
while (i != 1) {
    i++;
    j = 5;
}
printf("%i %i\n", i, j);

If you haven't yet found a good use for such 'leakage', you should spend more time programming and less time talking; you would find (quite readily) that such 'leaking' is quite beneficial.

> I made several loops, one by one, using the "i" variable for looping.
> Then in the latest loop I changed the "i" name to more meaningful
> "imsi" name in the "for" declaration and whenever I found inside the loop.
> As I use "i" name *for loops exclusively*, I didn't wittingly reuse the
> same name for different purposes. The problem was that I missed one
> occurance of "i" variable inside the loop code, so it gained the same
> value (from the completion of previous loop) throughout all the "imsi"
> loop. And the interpreter didn't notice me that "I am using the
> undefined variable" (since it is considered defined in Python), as
> accustomed from other languages. That's my sorrowful story.

So you mistyped something. I'm crying for you, really I am.

> But for the "performance-oriented/human-friendliness" factor, Python
> is anyway not a rival to C and similar lowlevellers. C has
> pseudo-namespaces, though.

C does not have pseudo-namespaces or variable encapsulation in for loops. Ah hah hah! Look ladies and gentlemen, I caught myself a troll! Python does not rival C in the performance/friendliness realm? Who are you trying to kid? There is a reason why high school teachers are teaching kids Python instead of Pascal, Java, etc., it's because it is easier to learn and use. On the performance realm, of course Python is beat out by low-level languages; it was never meant to compete with them.
Python does what it can for speed when such speed does not affect the usability of the language. What you are proposing both would reduce speed and usability, which suggests that it wasn't a good idea in the first place.

> JC> Python semantics seem to have been following the rule of "we are all
> JC> adults here".
> I always believed that the programming language (as any computer
> program) should slave to the human, rather than a human should slave
> to the program.

Your beliefs were unfounded. If you look at every programming language, there are specific semantics and syntax for all of them. If you fail to use and/or understand them, the language will not be your 'slave'; it will not run correctly, if at all.

> "for (int i = 0; i < 10; i++)" works fine nowadays.

I'm sorry, but you are wrong. The C99 spec states that you must define the type of i before using it in the loop. Maybe you are thinking of C++, which allows such things.

> JC> Also: python-dev is a mailing list for the development /of/ Python.
> JC> Being that your questions as of late have been in the realm of "why does
> JC> or doesn't Python do this?", you should go to python-list (or the
> JC> equivalent comp.lang.python newsgroup) for answers to questions
> JC> regarding current Python behavior, and why Python did or didn't do
> JC> something in its past.
> I'm sorry for wasting the time of developers. For "for/while/if"
> statements, I just had an idea which (I believed) could be useful for
> many peoples,

Test your ideas on comp.lang.python first; when more than a handful of people agree with you, come back.

- Josiah
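For illustration, the post-loop "leakage" defended in the thread is what makes idioms like the following work (this sketch is mine, not from the original exchange):

```python
# After `break`, the loop variable still names the item that matched,
# so it can be used once the loop is over.
def first_negative(values):
    for v in values:
        if v < 0:
            break
    else:
        return None   # no negative value found (or empty input)
    return v          # `v` survives the loop


print(first_negative([3, 1, -4, 2]))  # -4
print(first_negative([1, 2, 3]))      # None
```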
https://mail.python.org/pipermail/python-dev/2005-September/056693.html
CC-MAIN-2020-05
refinedweb
629
73.58
Binary Search for Technical Interviews

The binary search algorithm is a widely used searching algorithm for finding data in a sorted collection. Binary search is also referred to as half-interval search. If you can, roughly, eliminate half of the search area with a condition (invariant), you can use binary search to find the target solution. The algorithm runs in O(log n) in the worst and average cases, making it efficient for solving many search-related problems. You can implement it iteratively or recursively. In this article, you will use the iterative implementation to have O(1) space complexity.

The primary examples of binary search usage show how to implement it and find some data in a sorted collection. However, the algorithm can be used in more complicated scenarios. For example, it can be used to solve the following problem types:

- find the maximum element not greater than x (leftmost element)
- find the minimum element not less than x (rightmost element)
- find the kth element
- minimax problems
- maximize/minimize the arithmetic mean of a subset with some properties

At first look, binary search can seem straightforward to implement. However, real-world usage has shown the opposite.

"Although the basic idea of binary search is comparatively straightforward, the details can be surprisingly tricky". — Donald Knuth

The famous issue with a tricky implementation relates to number overflow when calculating a middle point as (left + right) / 2. The problem affected many textbooks and programming language implementations. If you are interested in learning more about the overflow issue, check the Google research blog post Nearly All Binary Searches and Mergesorts are Broken, which shows details of that problem.
Also, common issues with using binary search to solve interview problems are related to edge cases such as:

- the left pointer is pointing to the solution
- the right pointer is pointing to the solution
- a pointer is out of the collection range
- the invariant doesn't cover rare inputs

It's critical to understand and be able to implement variations of binary search depending on the problem you need to solve. In this article, you will learn how to implement binary search and use it to solve real interview problems.

Binary Search Implementation

Before solving problems, you need to understand how binary search works and implement the algorithm. For the explanation, I will use pseudocode and, for problem solutions, Rust, but the implementation would be similar in other programming languages.

First, you need to have a sorted collection or, in more complicated scenarios, a sorted slice (subcollection). A collection can be sorted in ascending or descending order. The order will impact the conditions under which you move the left and right pointers.

The search starts by comparing the element in the middle of the collection with the target value. To find the middle, use middle = left + (right - left) / 2. This formula calculates the middle point and is a safe way of doing it: it will protect you from having an overflow issue.

The left pointer initially holds zero as an index and the right pointer takes the collection size.

left = 0
right = arr.len()

In many problems, you may see that the left and right pointers point outside of the collection or represent the search space for problems not related to arrays. Setting the left and right pointers outside the collection can be helpful in some cases and may simplify the code, but you have to be careful when accessing the collection values using the left and right pointers. If they point to values that are not valid indexes (collection bounds), you will get an index out of bounds error.
left = -1
right = arr.len()

When you calculate the middle value, compare it with the target value; if it matches, you have found the value you are seeking.

if arr[middle] == target {
    return middle
}

If arr[middle] < target holds, the search continues in the right half of the collection; otherwise, it searches in the left half.

if arr[middle] < target {
    left = middle + 1
} else {
    right = middle
}

On each iteration, the algorithm eliminates the half of the collection in which the target value can't be found.

left = 0
right = arr.len()

while left < right {
    middle = left + (right - left) / 2
    if arr[middle] == target {
        return middle
    }
    if arr[middle] < target {
        // the searched value is in the right part
        left = middle + 1
    } else {
        // the searched value is in the left part
        right = middle
    }
}

// in the case when the element is not found
return -1

The algorithm steps can be summarised as follows:

- The left pointer points to the 0 index (in most cases)
- The right pointer points to the collection size
- The while loop condition is left < right
- Move the left or right index to the middle index

Note that the above binary search implementation is one of many possible implementations. Of course, not every problem can be solved with one common pattern, but when you try to consider the specifics of each problem, you can easily fail on tricky edge cases. The next step is to apply your knowledge to solve frequently asked binary search problems.

Binary Search Problems

Many binary search problems are written in a way that, at first look, makes it unclear whether binary search can be used. The complexity can lie in defining a search condition, or even in doing manipulations unrelated to binary search to transform the input data before applying it. However, pay attention to small details. When you see a problem that requires a solution in O(log n), or a sequence that is sorted or can be sorted, this can be a good indicator of possible binary search usage.
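Before moving on to the problems, the implementation pseudocode above translates almost directly into Rust; a minimal runnable sketch (the function name and sample data are mine):

```rust
// Iterative binary search over a sorted slice; returns the index of
// `target`, or None if it is absent. Mirrors the pseudocode above.
pub fn binary_search(arr: &[i32], target: i32) -> Option<usize> {
    let mut left = 0;
    let mut right = arr.len();
    while left < right {
        let middle = left + (right - left) / 2; // overflow-safe midpoint
        if arr[middle] == target {
            return Some(middle);
        }
        if arr[middle] < target {
            left = middle + 1; // target can only be in the right half
        } else {
            right = middle;    // target can only be in the left half
        }
    }
    None
}

fn main() {
    let data = [1, 3, 5, 7, 9, 11];
    println!("{:?}", binary_search(&data, 7)); // Some(3)
    println!("{:?}", binary_search(&data, 4)); // None
}
```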
Alright, let's try to solve some binary search problems. I would encourage you to solve them on your own and then check the solution.

Find First and Last Position of Element in Sorted Array

Given an array of integers nums sorted in non-decreasing order, find the starting and ending position of a given target value. If target is not found in the array, return [-1, -1]. You must write an algorithm with O(log n) runtime complexity.

Solution

To solve this problem, you first need to solve two subproblems. The solutions to the subproblems will be the solution to the main problem. The subproblems are:

- find the leftmost element (lower_bound)
- find the rightmost element (upper_bound)

To find lower_bound, you can use the binary search algorithm and return the left index. After the search completes, it's guaranteed that the left pointer will point to the sought target element, if the element exists.

pub fn lower_bound(nums: &[i32], target: i32) -> i32 {
    let mut l = 0;
    let mut r = nums.len();
    while l < r {
        let m = l + (r - l) / 2;
        if nums[m] < target {
            l = m + 1;
        } else {
            r = m;
        }
    }
    l as i32
}

Finding upper_bound can be trickier. The array is sorted in non-decreasing order, meaning that we need to find the element from the right side. You need to change the pointer-moving condition to find the rightmost element. If the middle element is greater than the target, you move right to the middle and continue searching in the left part; otherwise, in the right part.

if nums[m] > target {
    r = m;
} else {
    l = m;
}

When the loop while l + 1 < r completes, the left and the right pointers will hold the indexes of adjacent elements, and the left pointer's value is the rightmost element's index.

pub fn upper_bound(nums: &[i32], target: i32) -> i32 {
    let mut l = 0;
    let mut r = nums.len();
    while l + 1 < r {
        let m = l + (r - l) / 2;
        if nums[m] > target {
            r = m;
        } else {
            l = m;
        }
    }
    l as i32
}

Now, when you have functions to find lower_bound and upper_bound, you can use them to find the final result.
let l = lower_bound(&nums, target);
let r = upper_bound(&nums, target);

The left index from the lower_bound function result can be outside the array length or point to a non-target array element. To prevent an out-of-bounds error and to check that the target value exists, you need to check that l is within the array size bounds and that nums[l] == target. If either check fails, return [-1, -1].

if l == size || nums[l as usize] != target {
    return vec![-1, -1];
}

The completed solution code:

pub fn lower_bound(nums: &[i32], target: i32) -> i32 {
    let mut l = 0;
    let mut r = nums.len();
    while l < r {
        let m = l + (r - l) / 2;
        if nums[m] < target {
            l = m + 1;
        } else {
            r = m;
        }
    }
    l as i32
}

pub fn upper_bound(nums: &[i32], target: i32) -> i32 {
    let mut l = 0;
    let mut r = nums.len();
    while l + 1 < r {
        let m = l + (r - l) / 2;
        if nums[m] > target {
            r = m;
        } else {
            l = m;
        }
    }
    l as i32
}

pub fn search_range(nums: Vec<i32>, target: i32) -> Vec<i32> {
    let size = nums.len() as i32;
    if size == 0 {
        return vec![-1, -1];
    }
    let l = lower_bound(&nums, target);
    let r = upper_bound(&nums, target);
    if l == size || nums[l as usize] != target {
        return vec![-1, -1];
    }
    vec![l, r]
}

Search in Rotated Sorted Array

Given a possibly rotated sorted integer array nums and an integer target, return the index of target if it is in nums, or -1 if it is not in nums. You must write an algorithm with O(log n) runtime complexity.

Solution

In this problem, the nums array can be rotated, meaning you will have two sorted chunks. The adjacent elements, after rotation, will form a peak and a drop. If you find the peak, you can apply binary search on the left and the right parts of the peak to find the target value. So how do you find the peak value? If the array is sorted in ascending order, the value nums[m] is greater than nums[m - 1], and if the sequence is without rotation, the leftmost value is less than nums[m]. If the leftmost value (nums[l]) is greater than nums[m], the peak value is in the left part; otherwise, it is in the right part.
let size = nums.len();

let mut l = 0;
let mut r = size;

while l + 1 < r {
    let m = l + (r - l) / 2;
    if nums[m] > nums[m - 1] && nums[m] > nums[l] {
        l = m;
    } else {
        r = m;
    }
}

let pivot = l;

After finding the peak value (pivot), the problem is reduced to finding the element in the subarray to the left of the pivot and the one to its right.

&nums[0..=pivot]
&nums[pivot + 1..]

To find the target element, you can implement binary search yourself, but in Rust you can use the standard library implementation, which simplifies the code.

if let Ok(idx) = &nums[0..=pivot].binary_search(&target) {
    return *idx as i32;
}

if let Ok(idx) = &nums[pivot + 1..].binary_search(&target) {
    return (*idx + l + 1) as i32;
}

If the target element is found in the left or right part, return its index.

The completed solution code:

pub fn search(nums: Vec<i32>, target: i32) -> i32 {
    let size = nums.len();
    let mut l = 0;
    let mut r = size;
    while l + 1 < r {
        let m = l + (r - l) / 2;
        if nums[m] > nums[m - 1] && nums[m] > nums[l] {
            l = m;
        } else {
            r = m;
        }
    }
    let pivot = l;
    if let Ok(idx) = &nums[0..=pivot].binary_search(&target) {
        return *idx as i32;
    }
    if let Ok(idx) = &nums[pivot + 1..].binary_search(&target) {
        return (*idx + l + 1) as i32;
    }
    -1
}

Wrapping up

Binary search is an efficient search algorithm and can be used to solve various problems. Still, you need to be able to evaluate possible edge cases and adapt the binary search implementation to find the desired solution. Luckily, you have a lot of free resources for practicing binary search problems. LeetCode provides a study plan and a collection of binary search problems. You can also try to solve competitive programming problems from Codeforces related to binary search. Remember that Codeforces problems can be tough to solve and require knowledge of topics that are not covered in this article. Don't be discouraged when you cannot solve some of the problems. LeetCode problems with an easy tag don't mean they are easy.
Keep practicing, and eventually the problems that once seemed hard to solve will become obvious to you.
Extensible OLE Property Pages in .NET

Introduction

I've written a lot of COM code over the years. One of the things I used quite liberally was OLE property pages. They were a handy way to configure an otherwise invisible COM component safely and reliably. I know, most people associate property pages with ActiveX controls, but property pages are based on interfaces, and those interfaces are completely independent of the ActiveX control architecture. One of the best things about OLE property pages is that you don't have to know anything about the object you've just instantiated to use them. You can just query for the supported interface (ISpecifyPropertyPages) and, if it exists, call the GetPages() method to retrieve the CLSIDs of the property pages for the object. From there, it's just a little jump to OleCreatePropertyFrame() and voilà, you've just given that COM component the ability to configure itself rather than you having to do all the work.

The Problems

The first obvious problem was porting the various interfaces and structs to .NET. There are three primary interfaces required to implement this fully: ISpecifyPropertyPages, IPropertyPage, and IPropertyPageSite. When scanning all the methods of those interfaces, I quickly narrowed down what structs I would have to implement. Fortunately for me, .NET already knows about the RECT (Rectangle), MSG (Message), SIZE (Size), and POINT (Point) structs and how to marshal them, so all we really needed to implement were the PROPPAGEINFO struct and the CAUUID struct. Both structures allocate and free COM memory in ways that are counter-intuitive for straight conversion to strings and Guid arrays. I opted for the IntPtr approach for the three strings in the PROPPAGEINFO struct and for the Guid array in the CAUUID structure. From there, I just implemented a few helpers within each struct to assign and extract the data out of the IntPtrs.
A Sanity Check

After I had successfully mapped the structs and interfaces, it was time to test this baby out! Quickly, I hashed together a few lines of code:

```
Type typ;
object obj;
Guid[] g;

typ = Type.GetTypeFromProgID("MSComDlg.CommonDialog");
obj = Activator.CreateInstance(typ);
ActiveXMessageFormatter.InitStreamedObject(obj);

ISpecifyPropertyPages pag = (ISpecifyPropertyPages)obj;
CAUUID cau = new CAUUID(0);
pag.GetPages(ref cau);
g = cau.GetPages();

// The method below was added in a base class mentioned later in
// the article
PropertyPage.CreatePropertyFrame(IntPtr.Zero, 100, 100, "Hello World",
                                 new object[] {obj}, g);
```

I ran it, and voilà! The Common Dialog control's property pages popped up on the screen. Encouraged, it was time to get down to implementing my own property pages in my existing .NET apps.

Implementing Property Pages in .NET

The first thing I needed was a base class to handle the various aspects of implementing a property page. It would originally derive from UserControl but, for reasons I will explain later, I instead inherited from Form. This base class would implement IPropertyPage, handle the management of the underlying objects whose properties are being exposed, and provide a few virtual methods for a derived class to receive events on. This base class also had to do a little bit of magic to get the UserControl to display inside the requested container (IPropertyPage::Activate passes a parent window inside which this control needs to reside).

The Derived Classes

I created two pages, derived from my PropertyPage base class. I threw a couple of controls on them, gave them both [Guid("")] attributes, and made them public. I also opted to register the class library for COM interop because we're writing what is essentially a COM-callable set of classes. I also chose to give them the [ClassInterface(ClassInterfaceType.None)] attribute. I then created a class that would expose the ISpecifyPropertyPages interface.
The single method, GetPages(), is really simple:

```
public void GetPages(ref CAUUID pPages)
{
    Guid[] g = new Guid[2];
    g[0] = typeof(MyPropertyPage1).GUID;
    g[1] = typeof(MyPropertyPage2).GUID;
    pPages.SetPages(g);
}
```

I didn't think it merited any kind of base class. Okay! So, after giving this class a Guid() attribute and setting its ClassInterfaceType to None, it was time to get cracking!

Interop Blues

I re-ran my test application, replacing the MSComDlg.CommonDialog ProgID with the ProgID of my new class (the one that exposed ISpecifyPropertyPages). To my horror, it failed to cast the resultant object to ISpecifyPropertyPages. It seems that because I had compiled both the class library and the test application against common source containing the interfaces and structs, each of those declarations became specific to its own assembly. Therefore, I couldn't do a straight cast to the interface even though both declarations had the same Guid. This was totally understandable, of course, so I quickly regrouped. Because I was using the Activator class to activate the object, what would happen if I instead used CoCreateInstance() directly to create the COM object? Would the .NET marshaler realize I was creating a .NET object in spite of my using the API directly? It turns out the answer to that question is YES. On the successful return of CoCreateInstance, .NET helpfully unwrapped my class and handed me a .NET object back. It turns out that, no matter how hard I tried, or how indirectly I attempted to marshal this object, .NET would always give me back the base .NET class object. I tried getting an IntPtr to the object's IUnknown, calling Marshal.QueryInterface on it to get the interface, and putting the result back into object form. I tried every conceivable method I could think of, and none of them worked. This was a major stumbling block. There was no way I could guarantee that any arbitrary .NET class would use a common assembly that implemented a "common" version of the interfaces I wanted.
Furthermore, there was no way I could guarantee that one person's implementation of the interfaces would be identical to mine. I simply had to use .NET's marshaling code to work with these objects through COM; COM was the only commonality I had, so I had to use it regardless of what language the actual object was written in. Because .NET would insist on giving me a .NET object no matter how I cast, cajoled, or CoCreateInstance'd the .NET class, I had to resort to drastic measures: I'd have to do something unmanaged.

Reborn in Managed C++

C++ is the only place I can mix managed and unmanaged code, so this is the only place where my unorthodox requirements could be met. I quickly threw the PropertyPage base class, interfaces, and structs into C++ (when I say "quickly," I mean I hacked at it endlessly until I managed to get the clean C# code into clunky managed C++ form), and created an unmanaged global function to retrieve the ISpecifyPropertyPages interface given a CLSID and retrieve the pages from it. After all that, I revisited my original C# class library. A quick reference to my managed C++ DLL and a recompile later, and my test application was working perfectly! I was home free! Wait! Uh oh ...

UserControls, Events, & Hosting Environments

Once my property page showed up on the screen, I was convinced my work was done. However, as soon as I began typing keys, I realized I was far from finished. The Tab key didn't work properly, the UserControl on the page wasn't receiving events properly, and things were Not At All Right. I thought the Tab key would be a quick fix. I ran SPY++ and analyzed the windows of the property pages. I noticed that the UserControl upon which my PropertyPage base class was based hadn't been given the WS_EX_CONTROLPARENT extended window style, so I modified the base class to give the property page this style when I show the page. After making that change, the Tab key started working, albeit badly.
I noticed that none of the controls received an OnFocus event, and the Tab key seemed to move randomly among the controls on the page, ignoring my tab order completely. It wasn't so random, actually: it was using the Z-order of the controls on the page rather than the tab order. It seemed to me that Windows was cycling through the controls rather than using the form's inner logic. I started looking for ways to properly apply the TranslateAccelerator() messages I was receiving from the page's site. In the end, I had to rip out the code that set WS_EX_CONTROLPARENT (it was interfering with .NET's tabbing internals) and write my own tabbing logic in the base class.

From UserControl to Form

Once I got tabbing right, I again thought I was home free! It wasn't until I started adding mnemonics to controls that things once again went south. In spite of my improved TranslateAccelerator() handling code, I was still unable to use mnemonics. In desperation, I decided to change my inheritance from UserControl to Form. To make the transition, I had to radically alter the window styles using the API before showing the window, set the form's parent, and pray. I wasn't confident this would work; I was delving deep into the "unsupported" section of Windows.Forms, so who knew what kind of mess I was creating? Fortunately for me, however, things started looking up. I recompiled the class library and re-ran my test application. To my delight, all was working beautifully! The Tab key was tabbing properly, mnemonics were working, and I sat down to start writing this article.

COM and .NET Interop Gotchas

Because OLE property pages are a COM animal, I decided to see just how well native COM could deal with these property pages. I wrote a UserControl in C# for use as an ActiveX control to see what would happen if I added property page support to it. ActiveX controls in .NET aren't officially supported, and you're about to find out one of the reasons why!
The ActiveX Control Test Container and .NET

After plugging in all the code I'd need, it was time to break open the test container and see what would happen. I successfully added my control to the container, and it displayed properly. I right-clicked the control, and lo and behold, a Properties menu option was there and enabled! I clicked it, and was quite pleased to find my property pages flashing up on the screen before me. Success! Not. Although the test container calls the standard OleCreatePropertyFrame API just like my .NET test application did, the objects it passed in as parameters were, to my horror, derived from __ComObject, which meant that .NET for some reason wasn't able to unwrap my objects to their native .NET form. As a result, when a page tried to access or modify any member of the underlying object, it failed. It was time to dig out the source for the test container and start scraping. I quickly found out that the test container aggregates the ActiveX control if it can. That is, when it creates the ActiveX control, it creates it with an outer unknown. This outer unknown intercepts IUnknown, IDispatch, and the container's own extended interface, but delegates the remaining queries to the inner object. Because the control container (and many other ActiveX control hosting environments) creates controls as aggregates, .NET has no way of properly marshaling and unwrapping the object before handing it to native .NET code! This meant that none of my property pages could talk to the underlying control properly. I tried a lot of different hacks to make this work, and finally came across an easy fix. I added a new interface, called IProvideObjectHandle, to my base C++ class library. It has a single property, ObjectHandle, that returns (what else) an ObjectHandle! Because ObjectHandle is a MarshalByRef object that wraps up other objects for remoting, I thought this would be the perfect medium to pass my base .NET object to my property pages.
I found that even though my property page received __ComObjects from the control test container, I could still query for public interfaces and receive pointers (still __ComObject-based, but valid pointers nonetheless). Therefore, by implementing IProvideObjectHandle in my control, my property pages could query for it and then call the ObjectHandle property. Here's how my control implemented it:

```
public ObjectHandle ObjectHandle
{
    get { return new ObjectHandle(this); }
}
```

If, during the COM-to-.NET interop transition, the original aggregated version of the control could not be passed properly, the solution was to pass an object that could be marshaled and translated to a native object properly! ObjectHandle takes care of the interop transition for us and, once we are back in all-native land, provides its Unwrap() method to safely hand us a native object. There was one other issue left unresolved regarding the ActiveX Control Test Container: it never released the ActiveX control entirely. There was always one open reference on the control itself. This seemed to be the case even if I created an empty control and instantiated it in the test container. The refcount on the control never reached zero, and I assume this is because of the aggregation occurring and the references being passed into and out of the control. Resolving that was beyond the scope of this article, and it is just one of the many reasons authoring ActiveX controls is not officially supported in .NET.

What's Left

There are a few things I left out of this implementation, persistence being a major one. I'm leaving that alone because persisting objects in .NET is a pretty well-covered topic; you can decide for yourself how to persist the objects you instantiate and configure via property pages. The other thing I left out was implementing custom property page containers.
If the standard OleCreatePropertyFrame() function isn't your cup of tea and you want to roll your own, there are a few gotchas in store for you along the way, all having to do with how .NET native objects talk to other .NET native objects and pass interfaces around. Most of the hoops I had to jump through had to do with that subject.

The Source Code

The source code associated with this article has three projects:

- The PropertyPages project, a managed C++ project complete with a strong name. It defines the interfaces and the base class the other projects use.
- The ControlWithPropSheet project, a C# class library that exposes three public classes, registered for COM interop. The MyUserControl control implements the ISpecifyPropertyPages and IProvideObjectHandle interfaces, both of which have only a single method and are trivial to implement yourself. The other two classes are the PropertyPage-derived classes that provide property pages to the control. They are intentionally simple so you can get an understanding of what's been done.
- The final project is DisplayPropSheets, a C# console application that displays the property sheets of any COM object you give it, provided the COM object implements ISpecifyPropertyPages.

Property pages displayed shifted (posted by mfagadar on 02/22/2006 11:41am):

First, I would like to congratulate you for this great article, which helped me a lot. Second, I noticed that when using the property pages written in C# in an external client written in C++, they are displayed shifted to the right, and refresh is very poor (even when enabling double-buffering). Do you have any hint on what the problem might be? The client displays pages written in C++ and ATL correctly. Thanks and best regards, Mike

easy fix (posted by nbarbosa on 10/10/2006 06:43pm)
Lab 2: Lambdas and Higher-Order Functions

Due at 11:59pm on 06/28/2016.

- Questions 1, 2, 3, and 4 must be completed in order to receive credit for this lab. Starter code for questions 3 and 4 is in lab02.py.
- Question 5 (What Would Python Display?) is optional. It is recommended that you work on this should you finish the required section early, or if you are struggling with the required questions.
- Questions 6, 7, 8, and 9 (Coding) are optional. It is recommended that you complete these problems on your own time. Starter code for questions 7, 8, and 9 is in lab02_extra.py.

Topics

Consult this section if you need a refresher on the material for this lab. It's okay to skip directly to the questions and refer back here should you get stuck.

Lambdas

Lambda expressions are one-line functions that specify two things: the parameters and the return value.

lambda <parameters>: <return value>

While both lambda and def statements are related to functions, there are some differences. A lambda expression by itself is not very interesting. As with any values such as numbers, Booleans, and strings, we usually:

- assign lambdas to variables (foo = lambda x: x)
- pass them in to other functions (bar(lambda x: x))

Higher Order Functions

A higher order function is a function that manipulates other functions by taking in functions as arguments, returning a function, or both. We will be exploring many applications of higher order functions.

Required Questions

Question 1: WWPD: Lambda the Free

Use OK to test your knowledge with the following "What Would Python Display?" questions:

python3 ok -q lambda -u

Hint: Remember, for all WWPD questions, input Function if you believe the answer is <function...>, Error if it errors, and Nothing if nothing is displayed.
>>> lambda x: x
______
<function <lambda> at ...>
>>> a = lambda x: x
>>> a(5)  # x is the parameter for the lambda function
______
5
>>> b = lambda: 3
>>> b()
______
3
>>> c = lambda x: lambda: print('123')
>>> c(88)
______
<function <lambda> at ...>
>>> c(88)()
______
123
>>> d = lambda f: f(4)  # They can have functions as arguments as well.
>>> def square(x):
...     return x * x
>>> d(square)
______
16

>>> t = lambda f: lambda x: f(f(f(x)))
>>> s = lambda x: x + 1
>>> t(s)(0)
______
3
>>> bar = lambda y: lambda x: pow(x, y)
>>> bar()(15)
______
TypeError: <lambda>() missing 1 required positional argument: 'y'
>>> foo = lambda: 32
>>> foobar = lambda x, y: x // y
>>> a = lambda x: foobar(foo(), bar(4)(x))
>>> a(2)
______
2
>>> b = lambda x, y: print('summer')  # When is the body of this function run?
______
# Nothing gets printed by the interpreter
>>> c = b(4, 'dog')
______
summer
>>> print(c)
______
None

>>> a = lambda b: b * 2
______
# Nothing gets printed by the interpreter
>>> a
______
Function
>>> a(a(a(2)))
______
16
>>> a(a(a()))
______
TypeError: <lambda>() missing 1 required positional argument: 'b'
>>> def d():
...     print(None)
...     print('whoo')
>>> b = d()
______
None
whoo
>>> b
______
# Nothing gets printed by the interpreter

>>> x, y, z = 1, 2, 3
>>> a = lambda b: x + y + z
>>> x += y
>>> y -= z
>>> a('b')
______
5

>>> z = 3
>>> e = lambda x: lambda y: lambda: x + y + z
>>> e(0)(1)()
______
4

Question 2: WWPD: Higher Order Functions

Use OK to test your knowledge with the following "What Would Python Display?" questions:

python3 ok -q hof -u

Hint: Remember, for all WWPD questions, input Function if you believe the answer is <function...>, Error if it errors, and Nothing if nothing is displayed.

>>> def first(x):
...     x += 8
...     def second(y):
...         print('second')
...         return x + y
...     print('first')
...     return second
>>> f = first(15)
______
first
>>> f
______
<function ...>
>>> f(16)
______
second
39

>>> def even(f):
...     def odd(x):
...         if x < 0:
...             return f(-x)
...         return f(x)
...     return odd
>>> stevphen = lambda x: x
>>> stewart = even(stevphen)
>>> stewart
______
<function ...>
>>> stewart(61)
______
61
>>> stewart(-4)
______
4

>>> def cake():
...     print('beets')
...     def pie():
...         print('sweets')
...         return 'cake'
...     return pie
>>> a = cake()
______
beets
>>> a
______
Function
>>> a()
______
sweets
'cake'
>>> x, b = a(), cake
______
sweets
>>> def snake(x):
...     if cake == b:
...         x += 3
...         return lambda y: y + x
...     else:
...         return y - x
>>> snake(24)(23)
______
50
>>> cake = 2
>>> snake(26)
______
Error
>>> y = 50
>>> snake(26)
______
24

Coding Practice

Question 3: lambda_curry2

Write a function lambda_curry2 that will curry any two-argument function using lambdas: given a two-argument function f, lambda_curry2(f) should return a function g such that g(x)(y) is equivalent to f(x, y). (See the doctests in the starter file if you're not sure what this means.)

Use OK to test your code:

python3 ok -q lambda_curry2

Question 4: composite_identity

Define a function composite_identity that takes in two single-argument functions, f and g, and returns a function that takes one argument x and returns True if f(g(x)) is equal to g(f(x)), and False otherwise. You may use the compose1 function from lecture, re-defined below.

def compose1(f, g):
    return lambda x: f(g(x))

def composite_identity(f, g):
    def identity(x):
        return compose1(f, g)(x) == compose1(g, f)(x)
    return identity
    # Alternative solution:
    # return lambda x: f(g(x)) == g(f(x))

Use OK to test your code:

python3 ok -q composite_identity

Optional Questions

What Would Python Display?

Question 5: Lambda the Environment Diagram

Try drawing an environment diagram for the following code and predict what Python will output. You can check your work with the Online Python Tutor, but try drawing it yourself first!

>>> a = lambda x: x * 2 + 1
>>> def b(b, x):
...     return b(x + a(x))
>>> x = 3
>>> b(a, x)
______
21

Environment Diagrams

Question 6:

- What is the intrinsic name of that function value, and what frame is its parent?
- In frame f2, what name is the frame labeled with (add_ten or λ)? Which frame is the parent of f2?
- What value is the variable result bound to in the Global frame?

You can try out the environment diagram at tutor.cs61a.org.

Coding Practice

Note: The following questions are in lab02_extra.py.

Question 7: Foldl

Write a function that takes in a list s, a function f, and an initial value start. This function will fold s starting at the beginning.
If s is [1, 2, 3, 4, 5], then the function f is applied as follows:

f(f(f(f(f(start, 1), 2), 3), 4), 5)

You may assume that the function f takes in two parameters.

from operator import add, sub, mul

def foldl(s, f, start):
    """Return the result of applying the function F to the initial value
    START and the first element in S, and repeatedly applying F to this
    result and the next element in S until we reach the end of the list.

    >>> s = [3, 2, 1]
    >>> foldl(s, sub, 0)  # sub(sub(sub(0, 3), 2), 1)
    -6
    >>> foldl(s, add, 0)  # add(add(add(0, 3), 2), 1)
    6
    >>> foldl(s, mul, 1)  # mul(mul(mul(1, 3), 2), 1)
    6
    >>> foldl([], sub, 100)  # return start if s is empty
    100
    """
    total = start
    for number in s:
        total = f(total, number)
    return total

Use OK to test your code:

python3 ok -q foldl

Question 8: Count van Count

Consider the following implementations of count_factors and count_primes:

def count_factors(n):
    """Return the number of positive factors that n has."""
    i, count = 1, 0
    while i <= n:
        if n % i == 0:
            count += 1
        i += 1
    return count

def count_primes(n):
    """Return the number of prime numbers up to and including n."""

Define a function count_cond that takes in a two-argument predicate function condition and returns a one-argument function that counts all the numbers from 1 to n that satisfy condition.

def count_cond(condition):
    """Returns a function with one parameter N that counts all the numbers
    from 1 to N that satisfy the two-argument predicate function CONDITION.
    """
    def counter(n):
        i, count = 1, 0
        while i <= n:
            if condition(n, i):
                count += 1
            i += 1
        return count
    return counter

Use OK to test your code:

python3 ok -q count_cond

Question 9: Cycle

Define a function cycle that takes in three functions f1, f2, and f3 and returns a function that takes an integer n and returns another function. That final function takes an argument x and applies f1, f2, and f3 to x in cyclic order, n applications in total.

def cycle(f1, f2, f3):
    def switch(i):
        return [f1, f2, f3][i % 3]
    def make_cycle(n):
        def apply_n_times(x):
            for i in range(n):
                x = switch(i)(x)
            return x
        return apply_n_times
    return make_cycle

Use OK to test your code:

python3 ok -q cycle
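If you want to sanity-check the optional solutions outside of OK, they can be pasted into one self-contained script. This harness is my own addition and is not part of the lab:

```python
# Standalone checks for the three optional questions.

def foldl(s, f, start):
    # Fold s from the left: f(...f(f(start, s[0]), s[1])..., s[-1])
    total = start
    for number in s:
        total = f(total, number)
    return total

def count_cond(condition):
    # Count the numbers 1..n satisfying the two-argument predicate.
    def counter(n):
        i, count = 1, 0
        while i <= n:
            if condition(n, i):
                count += 1
            i += 1
        return count
    return counter

def cycle(f1, f2, f3):
    # Apply f1, f2, f3 cyclically, n applications in total.
    def switch(i):
        return [f1, f2, f3][i % 3]
    def make_cycle(n):
        def apply_n_times(x):
            for i in range(n):
                x = switch(i)(x)
            return x
        return apply_n_times
    return make_cycle

assert foldl([3, 2, 1], lambda a, b: a - b, 0) == -6
count_factors = count_cond(lambda n, i: n % i == 0)
assert count_factors(12) == 6   # 1, 2, 3, 4, 6, 12
add1, times2, add3 = (lambda x: x + 1), (lambda x: x * 2), (lambda x: x + 3)
# n = 4: add1, times2, add3, then add1 again: (((5 + 1) * 2) + 3) + 1 = 16
assert cycle(add1, times2, add3)(4)(5) == 16
print('all checks passed')
```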
Update of /cvsroot/gc-linux/linux/arch/ppc/platforms
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv29152/arch/ppc/platforms

Modified Files:
	gamecube.c
Log Message:
include/asm-ppc/io.h expects all platforms that are !APUS to provide
definitions for isa_{io,mem}_base, pci_dram_offset. Instead of working around
this with even more #ifdef kludge, just provide these definitions in the
platform-specific file. Besides, this will be moving to include/asm-powerpc/
RSN, and, having to maintain ~550 lines for a 3 line delta doesn't seem very
practical.

Index: gamecube.c
===================================================================
RCS file: /cvsroot/gc-linux/linux/arch/ppc/platforms/gamecube.c,v
retrieving revision 1.34
retrieving revision 1.35
diff -u -d -r1.34 -r1.35
--- gamecube.c	10 Aug 2005 11:54:35 -0000	1.34
+++ gamecube.c	10 Sep 2005 21:18:24 -0000	1.35
@@ -26,6 +26,15 @@
 
 #include "gamecube.h"
 
+/*
+ * include/asm-ppc/io.h assumes everyone else that is not APUS provides
+ * these. Since we don't have either PCI or ISA busses, these are only
+ * here so things actually compile.
+ */
+unsigned long isa_io_base = 0;
+unsigned long isa_mem_base = 0;
+unsigned long pci_dram_offset = 0;
+
 static unsigned long gamecube_find_end_of_memory(void)
 {
Red Hat Bugzilla – Full Text Bug Listing

Description of problem:
The multihost connectathon testsuite is passing for both client and server on rawhide-20060908, but when running as the server I am seeing AVC error messages from running portmap.

Version-Release number of selected component (if applicable):
rawhide-20060908
kernel-2.6.17-1.2630.fc6.i686
nfs-utils-1.0.9-5.fc6.i386

How reproducible:
Every time

Additional info:
This also happens with the RHEL5 Beta.

device eth1 left promiscuous mode
audit(1157717260.967:99): dev=eth1 prom=0 old_prom=256 auid=4294967295
audit(1157717278.228:100):
audit(1157717278.340:101):

Is portmap actually trying to write to a tmp file created in init? Or is this tmp file just grabbing STDOUT, and therefore portmap tries to talk to the TTY and this gets caught? If it is the second, it can probably be ignored since this will only happen in the test scenario. Dan, any clue?

Yes, read Comment #1. The portmapper opens /dev/null then dups stdin, stdout, and stderr iff the open is successful... which in this case it appears the open fails... so this is not fatal, but it seems a bit overkill to stop a process from opening /dev/null... imho...

steved: are you sure? I ended up changing the test as follows:

service portmap start | cat > $OUTPUTFILE 2>&1

This stops the AVC message from appearing.

The following routine is called like daemon(0,0) from main():

#define _PATH_DEVNULL	"/dev/null"

daemon(nochdir, noclose)
	int nochdir, noclose;
{
	int cpid;

	if ((cpid = fork()) == -1)
		return (-1);
	if (cpid)
		exit(0);
	(void) setsid();
	if (!nochdir)
		(void) chdir("/");
	if (!noclose) {
		int devnull = open(_PATH_DEVNULL, O_RDWR, 0);

		if (devnull != -1) {
			(void) dup2(devnull, STDIN_FILENO);
			(void) dup2(devnull, STDOUT_FILENO);
			(void) dup2(devnull, STDERR_FILENO);
			if (devnull > 2)
				(void) close(devnull);
		}
	}
	return(0);
}

These AVCs have little to do with the portmapper.
They are basically caused by the kernel checking how stdin, stdout, and stderr were opened and whether the domain (portmap_t) has the right access to these open file descriptors. If it does not, the kernel closes the descriptor and hands the process a file descriptor to /dev/null. If you have redirected stdout to a file in a random location, the kernel will check whether portmap can write to that location (file context). This is what is generating the AVC. By default, the kernel would have checked an open file descriptor to tty_device_t, which is dontaudited; writing to /tmp/t, however, is audited.
ESAPI.NET Build Troubleshooting

Targeting .NET Framework < v3.5

This doesn't appear to work at this time. The MS Anti-XSS library v3.1 (the latest version) seems to require v3.5 of the .NET Framework in order to build properly. As the .NET ESAPI depends on this, it doesn't appear to be possible to use the .NET ESAPI with previous versions of the .NET Framework. I tried installing and referencing v1.5 (instead of v3.1) of the MS Anti-XSS library, but still get this error when trying to build ESAPI targeting .NET v2 or v3.0:

"Error 2: The type or namespace name HashSet could not be found (are you missing a using directive or an assembly reference?) Esapi\Runtime\ContextRuleHandler.cs 15 17 Esapi"
I was trying to solve the KFORK problem. I have a solution in C++ that I got AC with, but my identical Python 3 solution gets an NZEC. I'm new to using Python on CodeChef, so maybe there is something basic I am missing. Can someone please take a look at the following code and see if something is wrong?

```python
def is_X_Y_valid(board, X, Y):
    return (X >= 0 and X < len(board) and Y >= 0 and Y < len(board[X]))

T = int(input())
for _ in range(0, T):
    N, M = (int(i) for i in input().split())
    board = [[False] * N for x in range(N)]
    xAttacked = [False] * N
    yAttacked = [False] * N
    mainDiagAttacked = [False] * (2 * N - 1)
    offDiagAttacked = [False] * (2 * N - 1)
    for _ in range(0, M):
        X, Y = (int(i) for i in input().split())
        X -= 1
        Y -= 1
        board[X][Y] = True
        xAttacked[X] = True
        yAttacked[Y] = True
        mainDiagAttacked[N - 1 + X - Y] = True
        offDiagAttacked[X + Y] = True
    forkedKnights = 0
    for x in range(0, N):
        for y in range(0, N):
            if board[x][y]:
                continue
            if (xAttacked[x] or yAttacked[y] or
                    mainDiagAttacked[N - 1 + x - y] or offDiagAttacked[x + y]):
                continue
            attacking = 0
            offsets = [[[-2, 2], [-1, 1]], [[-1, 1], [-2, 2]]]
            for a, b in offsets:
                for i in a:
                    for j in b:
                        if is_X_Y_valid(board, x + i, y + j):
                            if board[x + i][y + j]:
                                attacking += 1
            if attacking >= 2:
                forkedKnights += 1
    print(forkedKnights)
```
The RooSimWSTool is a tool operating on RooWorkspace objects that can clone PDFs into a series of variations that are joined together into a RooSimultaneous PDF. The simplest use case is to take a workspace PDF as prototype and "split" a parameter of that PDF into two specialized parameters depending on a category in the dataset. For example, given a Gaussian PDF \( G \), one can build specialized clones \( G_a \) and \( G_b \) from \( G \) with a single build command that splits the mean on a two-state category. From this simple example one can go to builds of arbitrary complexity by specifying multiple SplitParam arguments on multiple parameters involving multiple splitting categories. Splits can also be performed in the product of multiple categories, i.e., a SplitParam specification can split the parameter \( m \) in the product of the states of \( c \) and \( d \). Another possibility is the "constrained" split, which clones the parameter for all but one state and inserts a formula specialization in a chosen remainder state that evaluates to \( 1 - \sum_i a_i \), where the \( a_i \) are all the other specializations. For example, given a category \( c \) with the states "A","B","C","D", a constrained split of \( m \) on \( c \) with remainder state "D" will create the parameters \( m_A, m_B, m_C \) and a formula expression \( m_D \) that evaluates to \( (1-(m_A+m_B+m_C)) \). Constrained splits can also be specified in the product of categories. In that case, the name of the remainder state follows the syntax "{State1;State2}", where State1 and State2 are the state names of the two splitting categories. The examples so far deal with a single prototype PDF. It is also possible to build with multiple prototype PDFs by specifying a mapping between the prototype to use and the names of states of a "master" splitting category. To specify these configurations, an intermediate MultiBuildConfig must be composed with all the necessary specifications. Such a configuration can, for example, describe a build with two prototype PDFs \( G \) and \( F \).
Prototype \( G \) is used for state "I" of the master split category mc, and prototype \( F \) is used for states "II" and "III" of the master split category mc. Furthermore, the parameters \( m,s \) of prototype \( G \) are split in category \( c \), while the parameter \( a \) of prototype \( F \) is split in the product of the categories \( c \) and \( d \). The actual build is then performed by passing the build configuration to RooSimWSTool. By default, a specialisation is built for each permutation of states of the splitting categories that are used. It is possible to restrict the building of specialised PDFs to a subset of states by adding a restriction on the states to build. The restrictBuild method can be called multiple times, but at most once for each splitting category in use. For simple builds with a single prototype, a restriction can be specified with a Restrict() argument on the build command line.

Some member functions of RooSimWSTool that take a RooCmdArg as argument also support keyword arguments. So far, this applies to RooSimWSTool::build, whose RooCmdArg-based arguments can be passed as equivalent keyword arguments in PyROOT.

Definition at line 37 of file RooSimWSTool.h.

#include <RooSimWSTool.h>

Constructor of SimWSTool on a given workspace. All input is taken from the workspace; all output is stored in the workspace.

Definition at line 150 of file RooSimWSTool.cxx.

Destructor.

Definition at line 159 of file RooSimWSTool.cxx.

Build a RooSimultaneous PDF with name simPdfName from cloning specializations of prototype PDF protoPdfName. Use the provided BuildConfig or MultiBuildConfig object to configure the build.

Definition at line 190 of file RooSimWSTool.cxx.

Build a RooSimultaneous PDF with name simPdfName from cloning specializations of prototype PDF protoPdfName. The RooSimWSTool::build() function is pythonized with the command argument pythonization: the keywords must correspond to the CmdArgs of the function.
Definition at line 177 of file RooSimWSTool.cxx. Internal build driver from a validated ObjBuildConfig. Definition at line 398 of file RooSimWSTool.cxx. Construct name of composite split. Definition at line 635 of file RooSimWSTool.cxx. Validate build configuration. If no syntax errors or missing objects are found, return an ObjBuildConfig in which all names are replaced with object pointers. Definition at line 211 of file RooSimWSTool.cxx. Definition at line 74 of file RooSimWSTool.h.
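The constrained-split naming scheme described in the overview is purely mechanical, so it can be sketched in a few lines of plain Python (this only mimics the bookkeeping of names, not RooFit itself; the function name is invented for illustration):

```python
def split_param(param, states, constrained_state=None):
    """Names of the specialized parameters produced by splitting `param`
    over `states`.  In a constrained split, the chosen state becomes a
    remainder formula 1 - (sum of the other specializations)."""
    names = {s: "%s_%s" % (param, s) for s in states}
    if constrained_state is not None:
        others = [names[s] for s in states if s != constrained_state]
        names[constrained_state] = "1-(%s)" % "+".join(others)
    return names

# Matches the m_A, m_B, m_C, m_D example from the class description:
print(split_param("m", ["A", "B", "C", "D"], constrained_state="D"))
```

The "D" entry comes out as 1-(m_A+m_B+m_C), which is exactly the remainder expression quoted in the description above.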
https://root.cern.ch/doc/master/classRooSimWSTool.html
Scheme defines a “numerical tower” of numerical types: number, complex, real, rational, and integer. Java has primitive “unboxed” number types (such as int), just like C, and also has some “wrapper” classes that are basically boxed versions of the unboxed number types. Specifically, the standard Java number classes are not organized in any particularly useful hierarchy, except that they all inherit from Number. Kawa implements the full “tower” of Scheme number types, using its own set of sub-classes of the abstract class Quantity, a sub-class of Number we will discuss later. public class Complex extends Quantity { ...; public abstract RealNum re(); public abstract RealNum im(); } Complex is the class of abstract complex numbers. It has three subclasses: the abstract class RealNum of real numbers; the general class CComplex where the components are arbitrary RealNum fields; and the optimized DComplex where the components are represented by double fields. public class RealNum extends Complex { ...; public final RealNum re() { return this; } public final RealNum im() { return IntNum.zero(); } public abstract boolean isNegative(); } public class DFloNum extends RealNum { ...; double value; } Concrete class for double-precision (64-bit) floating-point real numbers. public class RatNum extends RealNum { ...; public abstract IntNum numerator(); public abstract IntNum denominator(); } RatNum, the abstract class for exact rational numbers, has two sub-classes: IntFraction and IntNum. public class IntFraction extends RatNum { ...; IntNum num; IntNum den; } The IntFraction class implements fractions in the obvious way. Exact real infinities are identified with the fractions 1/0 and -1/0. public class IntNum extends RatNum { ...; int ival; int[] words; } The IntNum concrete class implements infinite-precision integers. The value is stored in the first ival elements of words, in 2's complement form (with the low-order bits in word[0]). 
There are already many bignum packages, including one that Sun added for JDK 1.1. What are the advantages of this one? A complete set of operations, including gcd and lcm; logical, bit, and shift operations; power by repeated squaring; all of the division modes from Common Lisp (floor, ceiling, truncate, and round); and exact conversion to double. Consistency and integration with a complete “numerical tower.” Specifically, consistency and integration with “fixnum” (see below). Most bignum packages use a signed-magnitude representation, while Kawa uses 2's complement. This makes for easier integration with fixnums, and also makes it cheap to implement logical and bit-fiddling operations. Use of all 32 bits of each “big-digit” word, which is the “expected” space-efficient representation. More importantly, it is compatible with the mpn routines from the Gnu Multi-Precision library [gmp]. The mpn routines are low-level algorithms that work on unsigned pre-allocated bignums; they have been transcribed into Java in the MPN class. If better efficiency is desired, it is straight-forward to replace the MPN methods with native ones that call the highly-optimized mpn functions. If the integer value fits within a signed 32-bit int, then it is stored in ival and words is null. This avoids the need for extra memory allocation for the words array, and also allows us to special-case the common case. As a further optimization, the integers in the range -100 to 1024 are pre-allocated. Many operations are overloaded to have different definitions depending on the argument types. The classic examples are the functions of arithmetic such as “ +”, which needs to use different algorithms depending on the argument types. If there is a fixed and reasonably small set of number types (as is the case with standard Scheme), then we can just enumerate each possibility. However, the Kawa system is meant to be more extensible and support adding new number types. 
The solution is straight-forward in the case of a one-operand function such as “negate”, since we can use method overriding and virtual method calls to dynamically select the correct method. However, it is more difficult in the case of a binary method like “+”, since classic object-oriented languages (including Java) only support dynamic method selection using the type of the first argument (“this”). Common Lisp and some Scheme dialects support dynamic method selection using all the arguments, and in fact the problem of binary arithmetic operations is probably the most obvious example where “multi-dispatch” is useful. Since Java does not have multi-dispatch, we have to solve the problem in other ways. Smalltalk has the same problems, and solved them using “coercive generality”: Each number class has a generality number, and operands of lower generality are converted to the class with the higher generality. This is inefficient because of all the conversions and temporary objects (see [Budd91Arith]), and it limits to what extent you can add new kinds of number types. In “double dispatch” [Ingalls86] the expression x-y is implemented as x.sub(y). Assuming the (run-time) class of x is Tx and that of y is Ty, this causes the sub method defined in Tx to be invoked, which just does y.subTx(x). That invokes the subTx method defined in Ty, which can without further testing do the subtraction for types Tx and Ty. The problem with this approach is that it is difficult to add a new Tz class, since you have to also add subTz methods in all the existing number classes, not to mention addTz and all the other operations. In Kawa, x-y is also implemented by x.sub(y). The sub method of Tx checks if Ty is one of the types it knows how to handle. If so, it does the subtraction and returns the result itself. Otherwise, Tx.sub does y.subReversed(x). This invokes Ty.subReversed (or subReversed as defined in a super-class of Ty).
Now Ty (or one of its super-classes) gets a chance to see if it knows how to subtract itself from a Tx object. The advantage of this scheme is flexibility. The knowledge of how to handle a binary operation for types Tx and Ty can be in either of Tx or Ty or either of their super-classes. This makes it easier to add new classes without having to modify existing ones. The DSSSL language [DSSSL] is a dialect of Scheme used to process SGML documents. DSSSL has “quantities” in addition to real and integer numbers. Since DSSSL is used to format documents, it provides length values that are a multiple of a meter (e.g. 0.2m), as well as derived units like cm and pt (point). A DSSSL quantity is a product of a dimension-less number with an integral power of a length unit (the meter). A (pure) number is a quantity where the length power is zero. For Kawa, I wanted to merge the Scheme number types with the DSSSL number types, and also generalize the DSSSL quantities to support other dimensions (such as mass and time) and units (such as kg and seconds). Quantities are implemented by the abstract class Quantity. A quantity is a product of a Unit and a pure number. The number can be an arbitrary complex number. public class Quantity extends Number { ...; public Unit unit() { return Unit.Empty; } public abstract Complex number(); } public class CQuantity extends Quantity { ...; Complex num; Unit unt; public Complex number() { return num; } public Unit unit() { return unt; } } A CQuantity is a concrete class that implements general Quantities. But usually we don't need that much generality, and instead use DQuantity. public class DQuantity extends Quantity { ...; double factor; Unit unt; public final Unit unit() { return unt; } public final Complex number() { return new DFloNum(factor); } } public class Unit { ...; String name; // Optional.
Dimensions dims; double factor; } A Unit is a product of a floating-point factor and one or more primitive units, combined into a Dimensions object. The Unit may have a name (such as “kg”), which is used for printing, and when parsing literals. public class BaseUnit extends Unit { ...; int index; } A BaseUnit is a primitive unit that is not defined in terms of any other Unit, for example the meter. Each BaseUnit has a different index, which is used for identification and comparison purposes. Two BaseUnits have the same index if and only if they are the same BaseUnit. public class Dimensions { BaseUnit[] bases; short[] powers; } A Dimensions object is a product and/or ratio of BaseUnits. You can think of it as a data structure that maps every BaseUnit to an integer power. The bases array is a list of the BaseUnits that have a non-zero power, in order of the index of the BaseUnit. The powers array gives the power (exponent) of the BaseUnit that has the same index in the bases array. Two Dimensions objects are equal if they have the same list of bases and powers. Dimensions objects are “interned” (using a global hash table) so that they are equal only if they are the same object. This makes it easy to implement addition and subtraction: public static DQuantity add (DQuantity x, DQuantity y) { if (x.unit().dims != y.unit().dims) throw new ArithmeticException ("units mis-match"); double r = y.unit().factor / x.unit().factor; double s = x.factor + r * y.factor; return new DQuantity (s, x.unit()); } The Unit of the result of an addition or subtraction is the Unit of the first operand. This makes it easy to convert units: (+ 0cm 2.5m) ==> 250cm Because Kawa represents quantities relative to user-specified units, instead of representing them relative to primitive base units, it can display quantities using the user's preferred units, rather than having to use primitive units.
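The add routine and the interning trick are easy to mimic outside Java. A minimal Python sketch (all names here are invented; a real implementation would also intern the results of multiplication and division):

```python
_dim_cache = {}

def dimensions(powers):
    # Intern: one canonical object per base->power mapping, mirroring
    # Kawa's globally hash-consed Dimensions objects.
    key = tuple(sorted(powers.items()))
    return _dim_cache.setdefault(key, key)

class Unit:
    def __init__(self, name, factor, dims):
        self.name, self.factor, self.dims = name, factor, dims

class Quantity:
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def add(self, other):
        # Interning makes the dimension check a plain identity test.
        if self.unit.dims is not other.unit.dims:
            raise ArithmeticError("units mis-match")
        r = other.unit.factor / self.unit.factor
        # The result keeps the unit of the first operand.
        return Quantity(self.value + r * other.value, self.unit)

m = Unit("m", 1.0, dimensions({"m": 1}))
cm = Unit("cm", 0.01, dimensions({"m": 1}))  # same interned dims as m

q = Quantity(0, cm).add(Quantity(2.5, m))
print(q.value, q.unit.name)  # 250.0 cm
```

Because the Dimensions stand-ins are interned, the units check is a single identity comparison, just as in the Java code above, and the result reproduces the (+ 0cm 2.5m) ==> 250cm example.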
However, this does make multiplication and division a problem. The actual calculation (finding the right Dimensions and multiplying the constant factors) is straight-forward. The difficulty is that we have to generate a new compound Unit, and print it out in a reasonable fashion. Exactly how this should best be done is not obvious.
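Returning to the sub/subReversed protocol described earlier: the whole idea fits in a short sketch. This Python version uses invented Int and Flo classes as stand-ins for Kawa's IntNum and DFloNum:

```python
class Num:
    # x.sub(y) computes x - y.  If x's class does not know y's type,
    # it hands control to y with the operands flipped -- the
    # "subReversed" half of the protocol.
    def sub(self, y):
        if self.handles(y):
            return self.do_sub(y)
        return y.sub_reversed(self)

    def handles(self, y):
        return False

class Int(Num):
    def __init__(self, v): self.v = v
    def handles(self, y): return isinstance(y, Int)
    def do_sub(self, y): return Int(self.v - y.v)

class Flo(Num):
    def __init__(self, v): self.v = v
    def handles(self, y): return isinstance(y, (Int, Flo))
    def do_sub(self, y): return Flo(self.v - y.v)
    def sub_reversed(self, x):
        # x - self, reached because x (an Int) did not know about floats.
        return Flo(x.v - self.v)

print(Int(5).sub(Int(2)).v)    # 3   : Int handles Int directly
print(Int(5).sub(Flo(2.5)).v)  # 2.5 : Int defers; Flo.sub_reversed runs
```

The point of the design shows in the second call: Int knows nothing about floats, so it defers to Flo.sub_reversed. A later Frac class would only need its own handles and sub_reversed methods, leaving Int and Flo untouched, which is exactly the extensibility the text highlights.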
http://www.gnu.org/software/kawa/internals/numbers.html
And here is why:

It is against the spirit of web standards
The whole reason that web standards exist is so that we don't have to write specific code for specific environments. We should write code that adheres to established standards, and software in charge of displaying our code should display it as the standards dictate.

It relies on the browser user-agent string
... which has a hilariously disastrous history and is easily spoofable.

It can hinder devices
Example: you detect for the iPhone and serve it special content. Now the iPhone can never see the web page as other browsers see it, despite it being fully capable of doing so.

So why do we do it?
We do it because different browsers handle things differently, and browser detection can get us out of a pinch and get things working how they should. You can hardly blame us, right? Often the situations leading up to us resorting to browser detection are rage-inducing. But remember it's often not the browser that is at fault. Even in the case of IE 6, it was the most standards-compliant and advanced browser of its time when it was released. And some of the standards that we have today were not complete at that time.

What should we do instead?
I'm the first to admit that real-world web design sometimes needs quick fixes, budget-acceptable solutions, and making damn sure features work as intended. This doesn't always allow for altruistic choices that leave out some functionality because it is the "right thing to do."

Ideally...
... we would do capability testing. That's the information we really need, right? Test if the environment we are in is capable of what we want to do. If it is, do it. Easier said than done, I'm sure, and myself I'd hardly know where to begin. But I'm sure some of ya'll are very smart folks and can get it done (or are already doing it!)

More
Here is a bit about capability testing from Quirksmode. And here is Dave Shea with a good example of why browser detection ain't good.
Chris, Are you for or against browser detection to serve up different style sheets? For example: I currently use an HTC Wizard (O2 Xda Mini II S) and some sites (even those with a mobile style sheet) look abysmal due to resized graphics, fixed sizes that are too wide for the screen, etc. I would personally prefer to detect the UA of mobile devices and provide a different style sheet for large-screen mobiles (Win Mobile devices, LG Viewty) and smaller-screen mobiles (Sony Ericsson C905). Just my 2 cents

Chris: I don't use browser detection for that, but screen-width detection. I start with the stylesheet appropriate for a skinny screen and then use js to detect the width and change the stylesheet if required.

Then why do you use <!--[if IE 6]>...<![endif]--> and <!--[if IE 7]>...<![endif]-->? I'd rather do: ha! :) Yeah, that's much nicer. But since we can't do things this way, browser detection is the only way out.

right click, view source

Conditional comments are only executed by IE; all other browsers treat them as any other comment. So using them at all is implying IE. In this way they are also an implied capability detection: note that they are nested in a comment tag. This is similar to (but not exactly) saying

When we normally do capability detection, there are some techniques that inadvertently can identify the browser, such as checking for event listeners. Here we are just taking advantage of the fact that if this conditional executes, then the browser is IE. The browser detection Chris is writing about is done with javascript, using built-in methods that are insufficient at actually detecting the client identity. And those methods require that future changes in the browser mean a complete rewrite of the code. Capability detection allows for future standards compliance in a browser that lacks it today. With conditional comments we can also easily replace a link to a css file, or js file, or what have you.
So while you have a point that CC's are detecting a browser, their use is a much lesser evil than using traditional browser-detecting methods.

Post Title is too generous. You can use browser detection to improve users' interaction with your site. Simple example: if I have a file-upload input on the page, it would be great for the user to know that he can drag and drop files onto the input. But this is only a Chrome and Safari feature. So, I have to use browser detection or browser-specific css-hacks.

Exactly the point: wouldn't it be better to detect if the browser was capable of drag and drop, instead of detecting the browser itself? That way, when a new browser comes out that supports it, you are already good to go.

I don't really agree with the title of the post, but I do see Chris' point. Browser detection and manipulating a user agent string does have its upsides, i.e. detecting cloaking and other mischievous acts on the internet. :)

User agent detection is bad. Feature-based detection is good.

That sums it up pretty well, David. I completely agree. Well said! I think this is what the title should read :)

That really should be the title of the post!

I also think I see the point here, but at this point I still want some sites to give me that streamlined mobile site if it's available. Maybe a site like Twitter is doing it right when they default to the mobile site for phones, but give the option for standard view… Better yet, a site could remember a user's preference and give each what s/he wanted.

I think I understand what you are trying to say and I would agree with you for the most part. I do however believe that the internet is there to access information in the best way possible. I do believe that the right solution for one device is not necessarily the right solution for another device. For example, iPhone vs a wide-screen monitor. A site that does this is Amazon.
When you first go to the site on the iPhone it takes you to the iPhone page with an option to go to the "PC" page. So I do believe that capability testing is a great solution, as well as device detection. That is just my humble opinion though.

…is bad, and inevitable (as fate). Thanks Microsoft!

Might be bad, but sometimes it's needed.

Where can I download IE6 to test a website?

If you are on Windows, use IETester:

For some reason, IETester was not giving me the appropriate IE6 results. So I'm using IE6 standalone along with IE7 in Windows XP. Here is the link for standalone IE6

My recent experience with the BlackBerry Browser is changing my attitude towards browser detection: sometimes you just need to know when you are dealing with a broken piece of software.

What do you think about browser-specific CSS? As maintenance goes on, it seems to be a necessary evil.

I do think it's a necessary evil. CSS isn't really capable of browser detection or capability testing all by itself, so it's pretty much our only option. I guess I am talking more about JavaScript detection (or any means that could detect the user-agent string and do something about it) and redirect or serve up special content or stuff like that.
Then again, Google is all for the table-layouts, doesn’t use DOCTYPE-declarations and uses deprecated tags like <center>.I guess the rule goes like this: If you want to reach everyone on any computer (especially old ones with old browsers), use old markup. Design like it’s ’97 all over again… Chris, I’ll have to respectfully disagree with you here. Many times the “capability detection” is exactly done via “browser detection.” I say that you should always serve up the same (X)HTML content, regardless of the access device. The site should be at least usable in Lynx. The decision with regards to CSS (the presentation layer) absolutely SHOULD depend on the browser. Take your example: iPhone. While you CAN access the same web page with the iPhone, being able to offer an interface that caters to the device is most certainly appreciated by heavy mobile users. Navigation comes to mind here. Large fingers + small screens = hard to navigate, even if the user’s device “can” display “real” web pages. Also, with regard to conditional styles, ala IE6, there’s nothing wrong with a little tweak here and there. I wouldn’t go so far as to say people should “cater” to IE6, by any means. But you really need to think about progressive enhancement, and sometimes you can’t really “detect” the capabilities without first detecting the browser. Then, of course, you’ve got your behavior layer. Although with AJAX frameworks like Prototype, jQuery and the like, we’re already abstracting away the differences between IE and, well, everything else with the remote calls. This capability detection with straight JS is pretty easy for the most part: they either have it turned on, or they don’t. However, really, really heavy client-side processing will again suffer on the mobile side. If you know your user is on a mobile browser, and thus a very low-powered mobile device, you probably wouldn’t want to offload a lot of processing from the server to the client. 
The iPhone is pretty capable, but like all other smartphones, suffers quite a bit in the processor and memory department. I understand what you’re trying to get at; however, it does come across as being pretty strict in theory, but not beneficial in practice. As another poster mentioned, you yourself are using browser detection for IE issues ;P The first JavaScript library to remove browser sniffing is jQuery in 1.3. A funny story is that there was a panel at The Ajax Experience where John Resig was laughed at for mentioning removing browser sniffing. They said it was impossible, the fact is he just did it! I can’t agree more! So a person with Firefox at home should get the exact same webpage for the NY Times as someone on a Blackberry? The screens are completely different sizes. Javascript support on a BB is spotty. I’m pretty sure this is not what Chris is suggesting, and why instead of browser detection you should do capabilities detection. One of the things to detect is the window size, assume your user is going to be using the smallest, crappiest browser ever invented and move up from there. By using javascript you can more or less determine just how good the browser/display of the user is and get the site to change the stylesheet accordingly. Of course, that’s assuming you CAN use JS to detect these things. In the small percentage of cases that a user has JS off, then the next best thing would be the user agent string on the server-side. And in the even smaller case that the user has JS off, and is spoofing their user agent…well, let the chips fall where they may, I suppose. Just make sure all content is accessible, and the whole site basically works with no JS, no CSS, and no images. If you’re talking about detection with Javascript, I imagine most frameworks use capability detection. For instance, Mootools determines browser engine. Example for detecting trident (Internet Explorer): return (!window.ActiveXObject) ? false : ((window.XMLHttpRequest) ? 
5 : 4); sets Browser.Engine.trident to trident4/trident5 if in IE. But it feels so good to tell the people on IE that their browser is lousy… Sure I’m against not allowing the people on a bad browser, but still, I can discreetly tell them… On my website () I used a small picture on the top-right to invite my visitors to switch on FireFox insted of being on IE. But if they dan’t want to, they can visit it anyway. It’s what I call my little fight against IE… Nice post… detecting browsers is a pain in the butt! jQuery is no longer using browser sniffing, they are using featured detection as Chris mentioned to avoid headaches and problems in the future. Sorry if someone mentioned this, but I didn’t see it. At handsetdetection.com we use the user-agent plus other headers to provide capabilities on mobile/cell phones. I disagree with your assertion that special content is evil. If a customer wants an iPhone optimized experience then why not give it to them ? Sure the iPhone can display normal content, but why not tailor the experience if you know its an iPhone ? I think when you’re using the browser detection to detect if the user uses a PC or an IPhone thats just fine. Probably you can see it like thats only used as a method to detect the used device or machine and the Browser isn’t the important thing about it. I think browser detection for the iphone is good. Im going to have a full flash site soon enough — never — so Id like to have this as a backup…but what do I know…I suck at this stuff anyway but IE6 is just too bad for me. I’m new in designing world.. IE6 is not my era I could say.. i’m not using anything to solve problem with IE stuff.. just leave it that way.. maybe it is not good thing.. but at least that how give statement”don’t use IE”. I agree that browser detection is bad. And I just wanted to share this example of browser detection gone horribly wrong. 
The version of the Ubiquity extension is treated as the version of Firefox: I don’t totally understand feature detection. Most of the browser detection code in my scripts are to work around very obscure browser bugs and usually are only for a single browser at a time. I doubt libraries are going to detect all of these little browser bugs and then mark them as features for me to check against. What if multiple browsers have implemented a feature but one is buggy in places and I need to work with it differently? What I see happening is that I will end up having to grab and test multiple features so I can target a specific browser and try to fix the bug. In the end it is still going to be browser detection but a roundabout way of doing it buy looking at features to determine what browser I am working with. I see how it gets us away from trusting the user agent strings to detect browsers. Either way I am still going to want some way to do browser and version detection whether it is done by detecting user agent strings or feature detection. I would love to see libraries keeping browser detection functionality but basing it off of feature detection. Give us a more accurate browser detection and let us use it or feature detection depending on what works best for us. I know this is an old thread but it still appears towards the top of Google. Firefox font rendering is much denser than that of Chrome or Safari for some Serif fonts. Tweaking the css letter-spacing property specifically for Firefox makes a big difference to readability Browser detection is useful in this case, unless anyone can suggest a better method?
https://css-tricks.com/browser-detection-is-bad/
Can mfix be considered as "just a fix point combinator", without any trace of effect? The recent discussion about continuations and implementations of Scheme in Haskell highlighted that question. The point of the discussion is the difference between letrec implemented using the fixpoint combinator, and a letrec implemented via an updateable cell (as in Scheme, according to R5RS).

Ashley Yakeley wrote:
> > The difference between the Y and set! approaches to letrec *is*
> > observable.
> I don't believe you. My implementation uses Haskell's "mfix", which
> looks like a Y to me. I certainly don't use anything like "set!".

That is, Ashley Yakeley claimed that his Haskell implementation of Scheme implements letrec via Y -- and yet his implementation is consistent with the R5RS semantics, which mandates the updateable-cell letrec. Indeed, his implementation passes the tests designed to discern the way letrec is implemented. There seems to be a contradiction in the above statements.

The contradiction is resolved by noting that "mfix", although it may look like "Y", is fundamentally different. There is a latent "set!" in mfix, which can be pried open with call/cc. There is an effect in mfix. The following is a simple test that shows it.

Let us consider the following test (Scheme code first, Haskell code follows).

(letrec ((fact (cons #f
                     (lambda (n)
                       (set-car! fact #t)
                       (if (zero? n)
                           1
                           (* n ((cdr fact) (- n 1))))))))
  (let* ((before (car fact))
         (res ((cdr fact) 5)))
    (list before res (car fact))))

The test has been introduced and discussed in

If letrec is implemented via updates, the test returns (#f 120 #t)
If letrec is implemented via Y, the test should return (#f 120 #f).

It is easy to see that if we use Y defined by (Y f) = f (Y f), the result indeed must be (#f 120 #f). Now, let us implement letrec in Haskell via fix and mfix, and compare the results.
First, the implementation of the test via fix

> import Data.IORef
> import System.IO.Unsafe
> import Control.Monad.Fix (mfix)

> fix f = f g where g = f g

> g = \f -> newIORef
>       (False, \n -> do
>           (flag,body) <- readIORef f
>           writeIORef f (True,body)
>           res <- if n == 0 then return 1 else
>                    body (n-1) >>= return . (* n)
>           return res)
>
> g1 = \f -> unsafePerformIO (g f)
>
> test = let fact = fix g1 in do
>          (flag,body) <- readIORef fact
>          res <- (body 5)
>          (flag',_) <- readIORef fact
>          return (flag, res, flag')

The code matches the Scheme code in every detail. If we try

> *Main> test >>= putStrLn . show
> (False,120,False)

we obtain the expected result. Now, we bring in mfix:

> test2 = do
>   fact <- mfix g
>   (flag,body) <- readIORef fact
>   res <- (body 5)
>   (flag',_) <- readIORef fact
>   return (flag, res, flag')

And, quite predictably,

> *Main> test2 >>= putStrLn . show
> (False,120,True)
http://www.haskell.org/pipermail/haskell/2004-January/013375.html
Glen,

I can't see how it could seem more confusing to the end user than the way it was. A case in point: I've been working with Axis RPC for some time and have never needed to list my methods. Now I start working with messaging and expect pretty much the same behavior. A requirement is added to call a method in my message-based service from another package, and I now have to list those methods? Huh? I've never had to do that with RPC? The whole "*" is now useless?

A new design criterion comes up and the best solution is for my service-exposed class to extend some other class. Oh, because it happens to have public methods that DON'T CONFORM TO AXIS's rules, I'm now mandated to list my methods? Or, if I still want the functionality of not listing them, create some scaffolding class? WHY?

If I want security and that's my concern, then I would be listing the allowed methods from the beginning. Let me please develop/hack without those restrictions, and when it comes time to deploy to the real world and I'm interested in security and what methods are being called, THEN I'll list them. Thank you.

Rick Rineholt
"The truth is out there...  All you need is a better search engine!"
rineholt@us.ibm.com

Glen Daniels <gdaniels@macromedia.com> on 09/26/2002 09:38:28 AM
Please respond to axis-dev@xml.apache.org

To:      "'axis-dev@xml.apache.org'" <axis-dev@xml.apache.org>
cc:
Subject: RE: cvs commit: xml-axis/java/test/MSGDispatch TestService.java

In the RPC case, if you specify "*", you still shouldn't have public methods in your class which you don't want exported - the failure case here is that people can remotely call dangerous/inappropriate code. I'm just saying that the same idea should apply for MSG services with "*", except that we should notice the non-matching methods right away.

I think the question comes down to this: which is more confusing/difficult to the user?
To specify "all methods should be exported" and then have deployment silently ignore non-matching methods for message services, or to get a deployment failure which makes it very clear that you've allowed more than is valid in your class signature. To me, it seems like the former is more opaque and error-prone - to you, it seems like the latter is annoying. Maybe we should [VOTE]?

No matter which way this goes, we need to explain VERY CLEARLY in the documentation and the FAQ how this works.

Incidentally, I think your case below reminds us that we need two further abilities in WSDD: 1) specify an interface which contains all the allowed methods in the implementation class, and 2) a "no-superclasses" option.

--Glen

> -----Original Message-----
> From: Rick Rineholt [mailto:rineholt@us.ibm.com]
> Sent: Thursday, September 26, 2002 8:58 AM
> To: axis-dev@xml.apache.org
> Subject: RE: cvs commit: xml-axis/java/test/MSGDispatch TestService.java
>
> Why make it difficult on users! If I have in the RPC case "*" I'm done,
> because all public methods ARE valid signatures. If in a message
> provider I specify "*" then all methods that make sense are valid too.
>
> In addition, if I need to inherit from a class that does not have a
> "valid" signature and I specify "*", I get an error that this is not
> a valid signature!!! NO WAY.
>
> Rick Rineholt
> "The truth is out there...  All you need is a better search engine!"
> rineholt@us.ibm.com
>
> Glen Daniels <gdaniels@macromedia.com> on 09/26/2002 08:41:03 AM
>
> Please respond to axis-dev@xml.apache.org
>
> To: "'axis-dev@xml.apache.org'" <axis-dev@xml.apache.org>
> cc:
> Subject: RE: cvs commit: xml-axis/java/test/MSGDispatch TestService.java
>
> We're now changing the semantics of what "allowedMethods='*'" means -
> for RPC/Doc services, it literally means all public methods should be
> web methods. Now for Message services it means "just the ones that
> match these signatures". I think that's confusing.
("why is it saying 'no such > method'?") > > I'm very close to -1 on it. Why is this a good idea instead of having > people just specify the legal methods? > > --Glen > > > -----Original Message----- > > From: dug@apache.org [mailto:dug@apache.org] > > Sent: Thursday, September 26, 2002 8:21 AM > > To: xml-axis-cvs@apache.org > > Subject: cvs commit: xml-axis/java/test/MSGDispatch TestService.java > > > > > > dug 2002/09/26 05:20:39 > > > > Modified: java/test/MSGDispatch TestService.java > > Log: > > Expand the test a little to make sure we don't restrict too much. > > From the comment in the test: > >) > > > > Revision Changes Path > > 1.2 +9 -0 > xml-axis/java/test/MSGDispatch/TestService.java > > > > Index: TestService.java > > > =================================================================== > > RCS file: > > /home/cvs/xml-axis/java/test/MSGDispatch/TestService.java,v > > retrieving revision 1.1 > > retrieving revision 1.2 > > diff -u -r1.1 -r1.2 > > --- TestService.java 24 Sep 2002 20:45:20 -0000 1.1 > > +++ TestService.java 26 Sep 2002 12:20:39 -0000 1.2 > > @@ -75,6 +75,15 @@ > > * @author Glen Daniels (gdaniels@apache.org) > > */ > > public class TestService { > > + //) > > + public void testBody(int t) {} > > + public void testElement(int t) {} > > + public void testEnvelope(int t) {} > > + > > public SOAPBodyElement [] testBody(SOAPBodyElement [] bodies) > > throws Exception { > > > > > > > > > > > > > >
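For readers landing on this thread from the archive: the option under debate lives in a service's WSDD deployment descriptor. A minimal sketch of a message-style service that exports everything (the service name and class name here are invented for illustration):

```xml
<deployment xmlns="http://xml.apache.org/axis/wsdd/"
            xmlns:java="http://xml.apache.org/axis/wsdd/providers/java">
  <service name="MyMsgService" provider="java:MSG">
    <!-- "*" exports every method; the thread debates whether methods
         whose signatures don't fit a message service should be silently
         skipped or should fail deployment -->
    <parameter name="className" value="samples.MyMessageService"/>
    <parameter name="allowedMethods" value="*"/>
  </service>
</deployment>
```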
http://mail-archives.apache.org/mod_mbox/axis-java-dev/200209.mbox/%3COF8170DD40.2424A1C9-ON85256C40.004BA831@boulder.ibm.com%3E
Example 1: Top 3 Occurrences:

In this tutorial we will generate 400,000 lines of data that consist of Name,Country,JobTitle. Then we have a scenario where we would like to find out the top 3 occurrences from our dataset.

Our application to generate data:

#!/usr/bin/python
from faker import Factory
import time

timestart = time.strftime("%Y%m%d%H%M%S")
destFile = "dataset-" + timestart + ".txt"
print "Generating File: " + destFile

numberRuns = 100000
file_object = open(destFile, "a")

def create_names(fake):
    for x in range(numberRuns):
        genName = fake.first_name()
        genCountry = fake.country()
        genJob = fake.job()
        file_object.write(genName + "," + genCountry + "," + genJob + "\n")

if __name__ == "__main__":
    fake = Factory.create()
    create_names(fake)
    file_object.close()

Our PySpark application:

from pyspark import SparkContext, SparkConf

conf = SparkConf().setAppName("RuanSparkApp01")
sc = SparkContext(conf=conf)

lines = sc.textFile("dataset-*")
wordCounts = lines.flatMap(lambda line: line.strip().split(",")) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a + b, 1) \
    .map(lambda (a, b): (b, a)) \
    .sortByKey(1, 1) \
    .map(lambda (a, b): (b, a))

output = wordCounts.map(lambda (k, v): (v, k)).sortByKey(False).take(3)
for (count, word) in output:
    print "%i: %s" % (count, word)

First, we will generate our data:

$ for x in {1..4}; do python generate.py; done

Now that we have our dataset generated, run the pyspark app:

$ spark-submit spark-app.py

Then we will get output that will more or less look like this:

1821: Engineer
943: Teacher
808: Scientist

Example 2: How many from New Zealand:

We will use the same dataset; below is our pyspark application:

#!/usr/bin/python
from pyspark import SparkContext, SparkConf

logDataset = "dataset*"
conf = SparkConf().setAppName("RuanSparkApp01")
sc = SparkContext(conf=conf)

logActionData = sc.textFile(logDataset).cache()
findCountry = logActionData.filter(lambda s: 'New Zealand' in s).count()
print("New Zealand has been found: %i " % (findCountry) + "times")

And our output will look like this:

New Zealand has been found: 178 times

Note: This post is still in progress, so I will add more examples as time goes by.
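The flatMap/map/reduceByKey pipeline above can be sanity-checked without a Spark cluster. A plain-Python sketch of the same top-3 counting, over a tiny made-up stand-in for the generated dataset:

```python
from collections import Counter

def top_n(lines, n=3):
    """Count every comma-separated field across all lines and return
    the n most frequent as (count, word) pairs, mirroring the
    flatMap -> map -> reduceByKey -> sort pipeline above."""
    counts = Counter(word for line in lines for word in line.strip().split(","))
    return [(c, w) for w, c in counts.most_common(n)]

# Tiny stand-in for the generated dataset.
sample = [
    "Alice,New Zealand,Engineer",
    "Bob,New Zealand,Engineer",
    "Carol,Norway,Teacher",
]
print(top_n(sample))
```

Here "Engineer" and "New Zealand" each appear twice, so they come out on top, just as the most common job titles do in the Spark run.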
https://sysadmins.co.za/spark-pyspark-examples/
While learning and experimenting with SwiftUI, I use the canvas assistant editor to preview SwiftUI views extensively. It is an amazing feature of Xcode 11 and I love it. There is a quirk that gets difficult for me though – the default behavior of the preview provider uses a gray background. I frequently use multiple previews while making SwiftUI elements, wanting to see my creation on a background supporting both light and dark modes. The following little stanza is a lovely way to iterate through the modes and display them as previews:

#if DEBUG
struct ExampleView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ForEach(ColorScheme.allCases, id: \.self) { scheme in
                Text("preview")
                    .environment(\.colorScheme, scheme)
                    .frame(width: 100, height: 100, alignment: .center)
                    .previewDisplayName("\(scheme)")
            }
        }
    }
}
#endif

Results in the following preview:

The gray background doesn't help all that much here. It is perfect when you are viewing a fairly composed element set, as you are often working over an existing background. But when you are creating an element to stand alone, or moving an element, I really want a background for the element. And this is exactly what PreviewBackground provides.

I made PreviewBackground into a SwiftPM package. While I could have created this effect with a ViewModifier, I tried it out as a ViewBuilder instead, thinking it would be nice to wrap the elements I want to preview explicitly. The same example, using PreviewBackground:

import PreviewBackground

#if DEBUG
struct ExampleView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ForEach(ColorScheme.allCases, id: \.self) { scheme in
                PreviewBackground {
                    Text("preview")
                }
                .environment(\.colorScheme, scheme)
                .frame(width: 100, height: 100, alignment: .center)
                .previewDisplayName("\(scheme)")
            }
        }
    }
}
#endif

The code is available on Github, and you may include it within your own projects by adding a swift package with the URL:

Remember to import PreviewBackground in the views where you want to use it, and work away!

Explaining the code

There are not many examples of using ViewBuilder to construct a view, and this is a simple use case. Here is how it works:

import SwiftUI

public struct PreviewBackground<Content>: View where Content: View {
    @Environment(\.colorScheme) public var colorSchemeMode

    public let content: () -> Content

    public init(@ViewBuilder content: @escaping () -> Content) {
        self.content = content
    }

    public var body: some View {
        ZStack {
            if colorSchemeMode == .dark {
                Color.black
            } else {
                Color.white
            }
            content()
        }
    }
}

The heart of using ViewBuilder is using it within a View initializer to return a (specific but) generic instance of View, and using the returned closure as a property that you execute when composing a view. There is a lot of complexity in that statement. Allow me to try and explain it:

Normally when creating a SwiftUI view, you create a struct that conforms to the View protocol. This is written in code as struct SomeView: View. You may use the default initializer that swift creates for you, or you can write your own – often to set properties on your view. ViewBuilder allows you to take a function in that initializer that returns an arbitrary View. But since the kind of view is arbitrary, we need to make the struct generic – since we can't assert exactly what type it will be until the closure is compiled. To tell the compiler it'll need to do the work to figure out the types, we label the struct as being generic, using the <SomeType> syntax:

struct SomeView<Content>: View where Content: View

This says there is a generic type that we're calling Content, and that generic type is expected to conform to the View protocol. There is a more compact way to represent this that you may prefer:

struct SomeView<Content: View>: View

Within the view itself, we have a property – which we name content. The type of this content isn't known up front – it is the arbitrary type that the compiler gets to infer from the closure that will be provided in the future. This declaration is saying the content property will be a closure – taking no parameters – that returns an arbitrary type we are calling Content:

public let content: () -> Content

Then in the initializer, we use ViewBuilder:

public init(@ViewBuilder content: @escaping () -> Content) {
    self.content = content
}

In case it wasn't obvious, ViewBuilder is a function builder, the swift feature that is enabling this declarative structure with SwiftUI. This is what allows us to ultimately use it within that declarative syntax form.

The final bit of code to describe is the @Environment property wrapper:

@Environment(\.colorScheme) public var colorSchemeMode

The property wrapper is not in common use, but perfect for this need. It exposes a specific part of the existing environment as a local property for this view. This is what enables PreviewBackground to choose the color for the background appropriate to the mode: by reading the environment it chooses an appropriately colored background. It then assembles a view by invoking the property named content (which was provided by the function builder) within a ZStack.

By using ViewBuilder, we can use the PreviewBackground struct like any other composed view within SwiftUI:

var body: some View {
    PreviewBackground {
        Text("Hello there!")
    }
}

If we had created this code as a ViewModifier, then using it would look different – instead of the curly-bracket syntax, we would be chaining on a method. The default set up for something like that looks like:

var body: some View {
    Text("Hello there!")
        .modifier(PreviewBackground())
}

I wanted to enable the curly-bracket syntax for this, hence the choice of using a ViewBuilder.

A side note about moving code into a Swift package

When I created this code, I did so within the context of another project. I wanted to use it across a second project, and the code was simple enough (a single file) to copy/paste – but instead I went ahead and made it a Swift package. Partially to make it easier for anyone else to use, but also just to get a bit more experience with what it takes to set up and use this kind of thing.

The mistake that I made immediately on moving the code was not explicitly making all the structs and properties public. It moved over, compiled fine, and everything was looking great as a package, but then when I went to use it – I got some really odd errors:

Cannot call value of non-function type 'module<PreviewBackground>'

In other instances (yes, I admit this wasn't the first time I made this mistake – and it likely won't be the last) the swift compiler would complain about the scope of a function, letting me know that it was using the default internal scope, and was not available. But SwiftUI and this lovely function builder mechanism is making the compiler work quite a bit more, and it is not nearly as good at identifying why this mistake might have happened, only that it was failing.

If you hit the error Cannot call value of non-function type when moving code into a package, you may have forgotten to make the struct (and relevant properties) explicitly public.
https://rhonabwy.com/2020/03/17/introducing-and-explaining-the-previewbackground-package/?shared=email&msg=fail
> First: What do you get when you make
> man Top

No manual entry for Top.

> And I assume that this returns the manpage for Top, which is here on my
> system the same as man top.

Sounds right to me.

I'm just guessing, since I still can't reproduce this bug any more, but how about this (checked in to cvs):

--- nodes.c.~1.6.~	2006-02-23 16:29:45.000000000 -0800
+++ nodes.c	2006-02-25 15:04:41.000000000 -0800
@@ -141,4 +141,5 @@
 {
   NODE *node = NULL;
+  int implicit_nodename = 0;

   /* If we are unable to find the file, we have to give up.  There isn't
@@ -153,5 +154,8 @@
   /* If NODENAME is not specified, it defaults to "Top". */
   if (!nodename)
-    nodename = "Top";
+    {
+      nodename = "Top";
+      implicit_nodename = 1;
+    }

   /* If the name of the node that we wish to find is exactly "*", then the
@@ -172,7 +176,7 @@
   /* If the file buffer is the magic one associated with manpages, call
      the manpage node finding function instead. */
-  else if (file_buffer->flags & N_IsManPage)
+  else if (!implicit_nodename && file_buffer->flags & N_IsManPage)
     {
-      node = get_manpage_node (file_buffer, nodename);
+      node = get_manpage_node (file_buffer, nodename);
     }
 #endif /* HANDLE_MAN_PAGES */
http://lists.gnu.org/archive/html/bug-texinfo/2006-02/msg00078.html
Suppose we have a string s of lowercase alphabet characters, and another number k; we have to find the minimum number of required changes in the string so that the resulting string has at most k distinct characters. In this case a change means changing a single character to any other character.

So, if the input is like s = "wxxyyzzxx", k = 3, then the output will be 1, as we can change the single "w" to get 3 distinct characters (x, y, and z).

To solve this, we will follow these steps −

count := a map of each character in s and their frequency
sv := sorted list of frequency values
ans := 0
for i in range 0 to (size of count) - k - 1, do
    ans := ans + sv[i]
return ans

Let us see the following implementation to get a better understanding −

from collections import Counter

class Solution:
    def solve(self, s, k):
        count = Counter(s)
        sv = sorted(count.values())
        ans = 0
        for i in range(len(count) - k):
            ans += sv[i]
        return ans

ob = Solution()
s = "wxxyyzzxx"
k = 3
print(ob.solve(s, k))

Input
s = "wxxyyzzxx", k = 3

Output
1
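The loop that sums the smallest frequencies can also be written with heapq.nsmallest; a short equivalent sketch (the function name is just for illustration):

```python
import heapq
from collections import Counter

def min_changes(s, k):
    """Sum the smallest (distinct - k) frequencies: every occurrence of
    the rarest surplus letters must be changed to a kept letter."""
    freqs = list(Counter(s).values())
    return sum(heapq.nsmallest(max(len(freqs) - k, 0), freqs))

print(min_changes("wxxyyzzxx", 3))  # 1
```

The max(..., 0) guard covers the case where the string already has at most k distinct characters, in which case no changes are needed.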
https://www.tutorialspoint.com/program-to-find-minimum-required-chances-to-form-a-string-with-k-unique-characters-in-python
Hi Guys,

Love the Zendesk. Thanks for setting this up for your users.

Having an issue with RVIO and TGA support. Firstly we really, really, really need accelerated support for TGA files in RVIO and RV. To compound the slowness of TGA support in RVIO on Windows, the -rthreads flag crashes rvio. Attached is a sample python script and a screen grab of the crash.

import os

inPath = "P:/Desktop/testFootage/testFootage.#.tga"
outPath = "P:/Desktop/testFootage/testFootage.mov"

cmd = "rvio"
cmd = cmd + " -rthreads " + "2"
cmd = cmd + " " + inPath
cmd = cmd + " -o " + outPath

os.system(cmd)

If I remove the -rthreads flag, RVIO works but it's pretty dog slow. If I swap the footage for a dpx sequence the -rthreads flag works and RVIO is still slow. I can do some performance comparisons, but honestly RVIO is entirely unreasonably slow compared to Nuke, DJV_Convert, After Effects on our Windows XP 64-bit systems.

Please Help!
-Romey

rvioError.jpg
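As an aside, building the command as an argument list and handing it to subprocess (instead of concatenating a string for os.system) sidesteps quoting problems with spaces in Windows paths. A sketch using the same paths and flags as the script above — it only assembles the command, it does not assume anything about rvio itself:

```python
import subprocess

def build_rvio_cmd(in_path, out_path, rthreads=None):
    """Assemble the rvio command as an argument list,
    optionally including the -rthreads flag."""
    cmd = ["rvio"]
    if rthreads is not None:
        cmd += ["-rthreads", str(rthreads)]
    cmd += [in_path, "-o", out_path]
    return cmd

cmd = build_rvio_cmd("P:/Desktop/testFootage/testFootage.#.tga",
                     "P:/Desktop/testFootage/testFootage.mov",
                     rthreads=2)
print(cmd)
# subprocess.call(cmd)  # uncomment to actually run rvio
```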
https://support.shotgunsoftware.com/hc/en-us/community/posts/209495608-RVIO-and-rthread-and-tga-support
I have the following situation where a client class executes different behavior based on the type of message it receives. I'm wondering if there is a better way of doing this, since I don't like the instanceof checks and the if statements.

One thing I thought of doing was pulling the methods out of the client class and putting them into the messages. I would put a method like process() in the IMessage interface and then put the message-specific behavior in each of the concrete message types. This would make the client simple because it would just call message.process() rather than checking types. However, the only problem with this is that the behavior contained in the conditionals has to do with operations on data contained within the Client class. Thus, if I did implement a process method in the concrete message classes I would have to pass it the client, and I don't know if this really makes sense either.

public class Client {
    void messageReceived(IMessage message) {
        if (message instanceof ConcreteMessageA) {
            ConcreteMessageA msg = (ConcreteMessageA) message;
            // do ConcreteMessageA operations
        }
        if (message instanceof ConcreteMessageB) {
            ConcreteMessageB msg = (ConcreteMessageB) message;
            // do ConcreteMessageB operations
        }
    }
}

The simple way to avoid instanceof testing is to dispatch polymorphically; e.g.

public class Client {
    void messageReceived(IMessage message) {
        message.doOperations(this);
    }
}

where each message class defines an appropriate doOperations(Client client) method.

EDIT: second solution which better matches the requirements.

An alternative that replaces a sequence of instanceof tests with a switch statement is:

public class Client {
    void messageReceived(IMessage message) {
        switch (message.getMessageType()) {
            case TYPE_A:
                // process type A
                break;
            case TYPE_B:
                ...
        }
    }
}

Each IMessage class needs to define an int getMessageType() method to return the appropriate code. Enums work just as well as ints, and are more elegant, IMO.
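A runnable sketch of the polymorphic doOperations dispatch suggested in the answer — the message classes and the journal used to observe the calls are invented for illustration; the point is that each message carries its own handling code and receives the client so it can operate on the client's data:

```java
interface IMessage {
    void doOperations(Client client);
}

class MessageA implements IMessage {
    @Override
    public void doOperations(Client client) {
        client.record("A-specific work"); // operates on the client's data
    }
}

class MessageB implements IMessage {
    @Override
    public void doOperations(Client client) {
        client.record("B-specific work");
    }
}

public class Client {
    private final StringBuilder journal = new StringBuilder();

    void record(String entry) {
        journal.append(entry).append('\n');
    }

    String journalContents() {
        return journal.toString();
    }

    void messageReceived(IMessage message) {
        message.doOperations(this); // no instanceof, no casts
    }

    public static void main(String[] args) {
        Client client = new Client();
        client.messageReceived(new MessageA());
        client.messageReceived(new MessageB());
        System.out.print(client.journalContents());
    }
}
```

Adding a new message type now means writing one new class; the Client never changes.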
https://codedump.io/share/FJlLgIoAwiCP/1/avoiding-instanceof-when-checking-a-message-type
FRIDAY, NOVEMBER 26, 2010 • 50¢

SPORTS: BLACK Friday
By The Associated Press
SHOWDOWN TIME: USM on CBS College Sports at 5:30 tonight, B1

WEATHER
Tonight: Partly cloudy; lows in the 20s
Saturday: Partly cloudy; highs in the 60s
Mississippi River: 10.8 feet; Rose: 0.5 foot; Flood stage: 43 feet
A9

DEATHS
• Theadore C. Bowman
• Willie Joe Guise
• Kathy Simmons Patty
• Betty O. Wood

KATIE CARTER • The Vicksburg Post
Velma Johnson of Jackson leans on a stack of kitchen appliances as she waits in the checkout line at JC Penney at 5:30 this morning. A9

TODAY IN HISTORY
1789: This is a day of thanksgiving set aside by President George Washington to observe the adoption of the Constitution of the United States.
1825: The first college social fraternity, the Kappa Alpha Society, is formed at Union College in Schenectady, N.Y.
1842: The founders of the University of Notre Dame arrive at the school's present-day site near South Bend, Ind.
1933: A judge in New York decides the James Joyce book "Ulysses" was not obscene and could be published in the U.S.
1973: President Richard Nixon's personal secretary, Rose Mary Woods, tells a federal court that she'd accidentally caused part of the 18-1/2-minute gap in a key Watergate tape.

INDEX
Classifieds............................ B8
Comics.................................. B5
Puzzles.................................. B7
Dear Abby ........................... B7
Editorial................................A4
People/TV............................ B6

Advertising ...601-636-4545
Classifieds...... 601-636-SELL
Circulation.....601-636-4545
News................601-636-4545
See A2 for e-mail addresses

ONLINE

VOLUME 128, NUMBER 330, 2 SECTIONS

N.
Korea says area on brink of war

Shoppers line up across Vicksburg
By Manivanh Chanprasith
mchan@vicksburgpost.com

Dropping temperatures and a heavy, blowing rain did not keep deal-seekers from Vicksburg's shopping centers for the annual Black Friday retail frenzy that officially kicked off the holiday shopping season. While some stores were open throughout Thanksgiving Day, many opened later — some near midnight — to usher in the time of year when retailers hope the ringing cash registers will soar profits.

Shoppers line up across the nation
On A9

Newly opened Carter Jewelers on Pemberton Square Boulevard opened at 8 a.m. Thursday, offering free earrings to the first 100 customers as an incentive for shopping on a day that usually sees a day off for retailers. "This was our grand opening," store manager Ginger Richards said.

See Shopping, Page A9.
Such events forced the nation’s eyes on the upheaval RogElIo SolIS•The •The associa associaTed Press See Civil, Page A2. Continuing the Tradition � QUALITY SERVICE WITH AFFORDABLE CHOICES Frank J. FISHER FUNERAL HOME (601) 636-7373 1830 Cherry Street Vicksburg, Mississippi A2 Friday, November 26, 2010 busINEss community calendar cluBS Store owner Pauli Dhawan stands in his new business, Halls Ferry Liquor Store at 14135 N. Frontage Road. The store hours are from 12 p.m. to 10 p.m.; the phone number is 601638-9020. DavID JaCkSoN•The Vicksburg PosT City woman charged in ex-boyfriend’s shooting A Vicksburg man was taken to a Jackson hospital Thanksgiving night after being shot several times by his ex-girlfriend, Vicksburg police Sgt. Sandra Williams said. Jackie Wesley, 35, 100 Sherwood Drive, was at his home around 7:45 when his ex-girlfriend, Canary Doss, 28, 2619 Togo St., drove into his driveway and asked to see him, Williams said. Wesley came out with several other people to speak to Doss. Doss pulled out a handgun and chased Wesley around the house to the backyard where he was shot, Williams said. Wesley was taken to River Region Medical Center, then transferred to University Medical Center in Jackson, where a hospital official said this morning he was in stable condition. Doss left the house before police arrived but was arrested just before 10 p.m. when she waved a gun in the 1200 block of Jefferson Street. Police were responding to a report of attempted crime ParkView Regional Medical Center building on McAuley Drive and Grove Street. All three were arrested during a traffic stop on Clay and Hope streets after a police officer on patrol saw the pickup they were in leave the hospital parking lot, Williams said. All three were being held without bond at the Warren County Jail this morning pending an initial court hearing. from staff reports suicide. After being taken into custody, police found the gun Doss was holding had been stolen from a car in Vicksburg. 
No details were available about when and where the gun had been stolen. Doss was charged with aggravated assault and possession of a stolen weapon and was being held without bond in the Warren County Jail pending an initial court hearing. City man arrested for felony DUI Three jailed in theft from old hospital Shots were fired into an unoccupied vehicle outside the American Legion at 1712 Monroe St. early this morning, Vicksburg police Sgt. Sandra Williams said. A teenager who said he was in a fight when the shots were fired at about 1:40 could not say if the other person in the fight fired the gun, which was not recovered, Williams said. Two thefts reported on Thanksgiving A Vicksburg man was charged Thursday morning with DUI third offense, a felony, Vicksburg police Sgt. Sandra Williams said. Antonio Peters, 24, 304 Enchanted Drive, was pulled over for careless driving in the 4000 block of U.S. 61 South, Williams said. Peters was being held in the Warren County Jail this morning pending an initial court hearing. Two men and one woman were arrested at 12:12 this morning and charged with business burglary, Vicksburg police Sgt. Sandra Williams said. Rory Beard, 42, 34 Redhawk Road, William Carrway, 38, and Tiffeny Monk, 32, both of 5770 North Washington St., are accused of stealing copper wire from the vacant Shots fired into car parked on Monroe Two burglaries were reported in the city on Thanksgiving Day, Vicksburg police Sgt. Sandra Williams said. At 6:33 a.m., a small amount of change was reported missing from Ace Muffler Shop, 1300 Clay St. Around 6:30 p.m., a laptop computer was reported missing from the Vicksburg-Warren County Chamber of Commerce at 2020 Mission 66. 
Vicksburg child dies in Arkansas motel room From staff reports Political advertising payable in advance Periodicals Postage Paid At Vicksburg, Mississippi The Vicksburg Post A 3-year-old child placed in foster care following his mother’s arrest for shoplifting last week was found dead in a Little Rock motel room early Wednesday. Shacolby Savell was pronounced dead of unknown causes after foster parents found the child unresponsive at a Comfort Inn where the family was staying, Pulaski County Coroner Garland Camper told KTHV-TV in Little Rock. The child was taken to a hospital at 6 a.m., Camper said. An autopsy was being conducted. Savell’s mother, Amber Savell, 22, 1702 Martin Luther King Jr. Blvd., was in the Warren County Jail on a $20,000 bond after her arrest a week ago today at Blockbuster Video on Pemberton Square Boulevard along with Charles Johnson, 22, same address. Police said both were charged with grand larceny after two Blu-ray disc players and 30 video games were reported taken. Camper said the child didn’t have a medical history, and officials were working to rule out foul play. Published reports said the foster parents, not identified, are from Mississippi and were with their older child, who was receiving treatment at Arkansas Children’s Hospital.. Former Mississippi Gov. William Winter, noted for his work to improve race relations in the state and a member of Barbour’s museum study commission, disagreed with the suggestion that leaders aren’t interested in a museum. “The problem has not been resistance to the concept of having a civil rights museum,” Winter said. “But I do think it’s important that those who are interested get together on where it would be located.”. While the museum project languished, Barbour and law- makers approved $2.1 million for a trail of markers describing significant civil rights events. The move didn’t please everyone. “If this is the alternative to the museum, that’s horrid. That’s shameful. 
You can’t store artifacts out in the street,” said Owen Brooks, 82, a Boston native who came to Mississippi in 1965 and participated in civil rights projects. The trail project has hit a snag. State Bond Commission members said the projects weren’t presented. However, Hank Holmes, director of the state Department of Archives and History, said his agency was preparing to seek grants to fund trail projects when he learned the projects wouldn’t be on the agenda. “When we found that out we stopped working on it because there would be no money for grants,” Holmes said, adding a museum is years away. Civil Continued from Page A1. in the segregated South, and were pivotal in passing the Civil Rights Act of 1964 and the Voting Rights Act of 1965. The absence of a state museum leads some to question whether Mississippi estimate of $73 million. Tougaloo was a hub of civil � � � � � � � Bead’em to the Punch! � � � � � � � � � � For each $20 Art and Soul purchase, we punch one . After ten punched , you receive a free $25 Art and Soul Gift Certificate! � � � � � � � � � � � Ten punched gifts earn one free $25 Gift Certificate! 1312 Washington � � (601) 629-6201 � Mon.-Sat. 10a-6p � Brown Family Birthday Celebration — 9 tonight; DJ Reo; $5; Loving Place, 1622 Clay St. MXO Girls — 10:30 a.m. Saturday; ASU Vicksburg branch, 1514 Cherry St. Port Gibson Class of 1976 — 3:30 p.m. Sunday; reunion planning; Claiborne County Patient’s Choice conference room, 123 McComb Ave.; Barbara Warner, 601-618-1452; Delbra Jones, 601-437-8038. AKA Sorority, Mu Xi Omega Chapter — 4 p.m. Sunday, Greater Grove Multipurpose Building, 2715 Alcorn Drive. Exchange Club — Noon Monday; Shoney’s. Vicksburg Kiwanis — Noon Tuesday, Jacques’; Katrina Shirley, Riverfest board president, speaker. Vicksburg Packers — 6 p.m. Dec. 
3; seating begins at 5:30; James Jones, former Dallas Cowboy, speaker; Tasha Jones, 601-291-1370, or Danielle Williams, 601-218-9553, for tick tickets; Funtricity Building at Rainbow Casino. cHurcHeS Taking It Back Outreach Ministry — 9-5 today, 8 a.m.-5 p.m. Saturdays; $5 bags of clothes; newborn and free clothes; 1314 Fillmore St.; 601638-0794 or 601-831-2056. Mount Givens M.B. — Choir rehearsal, 6:30 tonight; 210 Kirkland Road. Travis Chapel A.M.E. — Family and Friends Day, 4 p.m. Saturday; the Rev. Michael Reed, speaker; the Rev. Beverly Baskin, pastor; 745 Hutson St. Sunday School Lesson and Bible Teaching — 9:30-10:30 a.m. Saturday; adults and children; the Rev. R.L. Miller, moderator; E.D. Straughter Building. PuBlic ProGramS Levi’s — A Gathering Place; 7-10 p.m. Saturday, music by Desperados; donations appreciated. Christmas Open House — 1-5 p.m. Sunday; 1:30-3:30, Santa at The Valley; 4, Holiday Express Train; downtown.. River City Mended Hearts — 5 p.m. Tuesday; Warren Doiron, vice-president of Mended Hearts, speaker; River Region Medical Center, Room C and D. this weekend Saturday • Legacy Luncheon — 11:30 a.m.; City Auditorium; honoring men role models; $30; reservations required, 601-636-1088. Sunday • Old Fashioned Christmas Open House — 1-5 p.m.; Washington Street; downtown kickoff to holidays. • Wyatt Waters calendarsigning — 1-5 p.m.; Lorelie Books on Washington. • Holiday Express — 4-8 p.m.; Levee Street station; free. • Santa at The Valley — 1:30-3:30 p.m.; Washington Street. � � � � � � � Friday, November 26, 2010 The Vicksburg Post A3 After the spill Gulf Coast towns hoping snowbirds bring in the money OrANG r rANG e BeACh, Ala. (Ap) — For all the oil spill claims and cleanup work by BP, retirees from the North may be the best survival bet for some Gulf Coast resort towns this winter. 
After a disastrous summer tourism season and a slowerthan sufa- The associa associaTed press Dick MacDonald of Prince Edward Island, Canada, takes in some sun at Orange Beach, bama-Florida state line. Recovery executive dies in plane crash DestiN, fla. (Ap) — An executive helping to guide BP’s recovery from the Gulf of Mexico oil spill, a top Texas lawyer and his mother-in-law were killed in a small plane crash in waters off northern Florida, officials said. James Patrick Black, 58, died about a mile from the Destin airport in the Florida Panhandle on Tuesday night, said BP spokeswoman Hejdi Feick. The three were said to be bound for a Thanksgiving holiday gathering in Florida. Get INSTANT Rewards, Ala., while an oil cleanup crew works behind him. in neighboring Gulf Shores, workers didn’t know whether snowbirds would be scared off by images of oil hitting beaches during the summer. Would they go elsewhere this year, perhaps to the East Coast or further if there was a problem somewhere, we could easily find a nice spot. It just didn’t look that bad, and it isn’t.” Snowbirds are big business in Florida Panhandle communities such as. E-mails: Size is anyone’s guess..” Lehr said the calculations made public represented “our best guess,” adding, “Yes, it is a guess.” $ Dillard’s Cardholders only 50 $ *or 10 SPEND more GET Get a $10 Reward Certificate instantly for every $50 or more single transaction on your Dillard’s Card.* *Excludes UGG® Australia. Subject to credit approval. To qualify for the $10 Reward Certificate, you must make a $50 or more single transaction (merchandise, less tax, adjustments, and returns) on your Dillard’s Card, November 26-27, 2010, in-store at Dillard’s only. The entire qualifying single transaction must be made on your Dillard’s Card (no split tender). This offer is not valid for purchases made on Dillards.com. The Reward Certificate will print with your receipt. 
Reward Certificate expires 11:59 PM CT December 11, 2010 and may be used towards a Dillard’s in-store purchase made with the Dillard’s Card. You will forfeit any unused part of the certificate if your purchase amount is less than d, altered or defaced. Lost or stolen Reward Certifi Certificate the amount of the certificate. Certificate may not be used as payment to an account or redeemed for cash. Certificate has no cash value. Void if copied, transferred, cate will not be replaced. $ 99 ladies’ coats • AK ANNE KLEIN • JESSICA SIMPSON • PRESTON & YORK • LONDON FOG Consignment Item Clearance Up to 50% off • Wool • Casual • Leather • Juniors A Speciality Gift & Fine Consignment Shop Tues-Sat 10-6 • 601-636-5355 • 3040 Halls Ferry Road NEW TOMS, JUST ARRIVED! CHRiSTMAS STMAS OpEN HOu uSE SuNdAy, NdAy, NOvEMbER 28 NdA 28th 1:00 - 5:00 pm Fresh Cut Trees Have Arrived! FLOWER CENTER ACCESSORIES 40%off Ladies’ Knits Reg. to $28. Preston & York hats, gloves & scarves. 3150 S. Frontage Road • 601-636-5810 Mon. - Sat. - 8am - 5:30pm • Sun. - 12:30pm - 5pm JUST ARRIVED!! Western Shirts All sizes including talls and Western Denim Jackets We Have Diabetic Socks Abraham Bros. 59 Fall 40%off Men’s & Women’s Isotoner Gloves Reg. to $56. Various styles & colors. HANDBAGS 99 $ $ Handbags Executivee Totes Orig. to $119. From some of your favorite brands! Orig. $139. Kate Landry y croco embossed totes. Red, bronze, more. e. SHOES 5999 Naturalizer Casuals “Clio” Brown, black, navy, 6-10M. “Nominate” Black, 6-10M. “NOMINATE” 99 29 Girls’ & Boys’ Coats Assorted styles & colors. Girls’ in 2-16, boys’ in 4-20. 1105 Washington St. •601-636-2622 win a 42" flat screen t.v.!!! Now through December 21st, each work order ticket will be entered for a chance to win! Winner will be announced December 22nd. certified professional dry cleaners LADIES 29 I.N. Studio $ 69 Prive $ Cashmere Sweaters Superior 2-ply yarn, hand wash. Assorted styles, colors in s-xl. 
A4 Friday, November 26, 2010 The Vicksburg Post

Spend, baby, spend. Help the economy.

OLD POST FILES

120 YEARS AGO: 1890
The Nos family gives a performance at the opera house.

110 YEARS AGO: 1900
D.J. Laughlin dies. • Mr. and Mrs. George A. Jones go to Kansas City.

100 YEARS AGO: 1910
Thomas Benton and Jennie Buchanan are married. • Sister Mary Alphonsus Hoey dies at the convent. • Mable Marshall of Greenville is visiting Miss Graves on Marshall Street.

90 YEARS AGO: 1920
Tacitus Bucci will come here from Italy to join his brothers. • Joe Hillhouse writes from California that he will again open his Vicksburg studio.

80 YEARS AGO: 1930
B.F. Nichols, county engineer, is ill. • The Business and Professional Women’s Club is organized. • Rabbi Sol Kory is in Birmingham. • Playgrounds are being constructed for the YMCA.

70 YEARS AGO: 1940
James R. McConaghie succeeds Dr. J.W. Coleman as superintendent of the Vicksburg National Military Park. • Jackson Tigers beat Carr Central 20 to 6.

60 YEARS AGO: 1950
W.W. Broome is named Man of the Year by the Junior Chamber of Commerce. • The Carr Central band goes to Greenwood for the Delta Band Festival. • Mississippi College Coach Stanley Robinson speaks at Carr Central’s football banquet.
50 YEARS AGO: 1960
The Vicksburg Y’s Men’s Club honors past presidents at the club’s 35th anniversary dinner. • Mrs. Grace Chambliss dies from injuries received in an automobile accident.

40 YEARS AGO: 1970
Walt Disney’s “Love Bug” is showing at the Joy Theatre. • Mr. and Mrs. Charles Williams and children are visiting relatives in New Orleans.

30 YEARS AGO: 1980
Warren Central’s Lady Vikes advance to the semi-finals of the WC Classic with a 68-32 victory over Murrah. • Mr. and Mrs. Van Edward Hinman are the parents of a son, Matthew Edward James, on Nov. 28.

20 YEARS AGO: 1990
A six-member panel discusses the pros and cons of riverboat gambling to about 50 people at a forum. • An extension on a bid to start building a new airport near Mound is being sought so that an updated environmental impact permit from the U.S. Army Corps of Engineers can be obtained.

10 YEARS AGO: 2000
Ted and Emily Long donate an 18-foot Christmas tree to the Old Court House Museum. • Chad Barrett and John Storey are new partners at Battlefield Discount Drugs. • Virginia Clark wins the Old Fashioned Christmas Open House drawing offered by Vicksburg Main Street.

OUR OPINION
Ethics: A knuckle-rapping wrong for Rangel

It is always a little sad to see prominent and powerful people brought low, especially when the fall is due to their own weaknesses and a certain sense of invulnerability that powerful people sometimes acquire. It is also the case that the line can be difficult to draw between business-as-usual in Congress and outright criminal corruption. … choose to ask the full House to censure Rangel, which, if approved, will require Rangel …

Pope’s condom trial balloon a welcome, necessary shift

WASHINGTON — During the 1930s, as Protestants began their swift retreat from opposition to birth control, Anglican Bishop Albert Augustus David of Liverpool, England, spoke for the holdouts. The sexual relationship, he said, “Even in marriage must be regarded as a regrettable necessity. ...
Except where children are desired, married persons should remain celibate after marriage, as before.” There is no recorded response of Mrs. David (if there was one) or of his flock, who doubtlessly nodded piously, went home and promptly ignored him. There is a difference between seeking the improvement of human behavior and declaring war on human nature. In that conflict, human nature is likely to win.

This example came to mind when Pope Benedict XVI recently said that condom use might be permissible, or at least morally understandable, under some circumstances to prevent the transmission of HIV/AIDS. In the course of an extended book interview, the pope insisted that the “sheer fixation on the condom implies a banalization of sexuality.” But he continued: “There may be a basis in the case of some individuals, as perhaps when a male prostitute uses a condom, when this can be the first step in the direction of moralization, a first assumption of responsibility.” Vatican leaders later downplayed the statement but did not retract the argument.

This condom trial balloon is a welcome and necessary shift. African Catholic leaders of my acquaintance have long understood that a complete prohibition of condom use is unrealistic. Among discordant couples — one HIV-positive, one negative — the use of condoms is a requirement. It is not reasonable, along with Bishop Albert, to expect abstinence within marriage. And the regular use of condoms by sex workers is essential to public health.

Religion deals with ideals of human behavior. Public health deals with likely human behavior — a very different category. Both should respect the role played by the other. Public health officials are paid to assume that men and women will follow their passions and to mitigate the consequences.
They put a bowl of condoms on the table, just in case. But the prevention of disease always involves some element of ethical behavior, even when it comes to condoms. Their use during high-risk sexual activity is always good for an individual, since it is about 90 percent effective in preventing HIV transmission. But the effectiveness of condoms as a social strategy is determined by the rate and consistency of their use.

Studies have found condoms to be successful in preventing the spread of HIV in brothels and among men who have sex with men. But for the general public in Africa, the consistent use of condoms has been more difficult to achieve. Progress in reducing the prevalence of HIV has often come from reductions in the number of concurrent sexual partners and from delaying the sexual debut of young people, especially girls.

No effective AIDS prevention strategy can ignore the role of condoms — or the role of behavior change that is often related to religion. Both are necessary because human beings are neither angels nor beasts, as Christian theology would attest. People need institutions that oppose the banalization of sexuality, as well as institutions that recognize and accommodate the realities of sexuality and disease.

During a visit to South Africa, I asked a very conservative Christian pastor engaged in an HIV/AIDS ministry how he views the condom issue. “When I’m dealing with 10- and 12-year-old girls, I tell them to respect themselves and delay sex. When I’m dealing with sex workers, I give them condoms, because their lives are at stake.”

The best AIDS prevention programs are idealistic about human potential and realistic about human nature. This seems where the pope is heading. Given his unquestioned standing as a theological conservative, perhaps only he could make the trip.

• Michael Gerson’s e-mail address is michaelgerson@washpost.com.
Friday, November 26, 2010 The Vicksburg Post A5

A6 Friday, November 26, 2010 The Vicksburg Post

‘Critical habitat’ set aside for Alaska polar bears

Making room: Dinosaur die-off cleared way for huge mammals

WASHINGTON (AP) — They just needed some leg room: New research shows the great dinosaur die-off made way for mammals to explode in size — some more massive than several elephants put together. The largest land mammal ever: A rhinoceros-like creature, minus the horn, that stood 18 feet tall, weighed roughly 17 tons and grazed in forests in what is now Eurasia. It makes the better known woolly mammoth seem a bit puny.
Tracking such prehistoric giants is more than a curiosity: It sheds new light on the evolution of mammals as they diversified to fill habitats left vacant by the dinosaurs. Within 25 million years of the dinosaurs’ extinction — fast, in geologic terms — overall land mammals had reached a maximum size and then leveled off, an international team of scientists reports today in the journal Science. And while different species on different continents reached their peaks at different points in time, that pattern of evolution was remarkably similar worldwide.

“Evolution can happen very quickly when ecology permits,” said paleoecologist Felisa Smith of the University of New Mexico, who led the research. “This is really coming down to ecology allowing this to happen.”

THE ASSOCIATED PRESS
This diagram shows the largest land mammals that ever lived, from left, Indricotherium and Deinotherium, that would have towered over the living African Elephant.

Anyone who frequents natural history museums knows that the end of the dinosaurs 65 million years ago ushered in the age of mammals, and that some of them were gigantic. But the new study is the first comprehensive mapping of these giants in a way that helps explain how and why their size evolved. “We didn’t have a clear idea of how the story went after the extinction of the dinosaurs,” explained Nick Pyenson, a curator at the Smithsonian Institution’s National Museum of Natural History, who wasn’t involved with the new research. Previous theories suggested that species diversity drove increases in size, but the new study didn’t find that connection. “It suggests there’s a deeper explanation of how large body size evolves in mammals,” he said.

Mammals did coexist with dinosaurs, but small ones, ranging from about the size of a mouse to a maximum of a small dog. “We were pretty much the varmints scurrying around the feet of the dinosaurs,” is how New Mexico’s Smith puts it. …
WASHINGTON (AP) — The Obama administration is setting aside 187,000 square miles in Alaska as a “critical habitat” for polar bears, an action that could add restrictions to future offshore drilling for oil and gas.

The total, which includes large areas of sea ice off the Alaska coast, is about 13,000 square miles, or 8.3 million acres, less than in a preliminary plan released last year.

Tom Strickland, assistant secretary for fish, wildlife and parks at the Interior Department, said the designation would help polar bears stave off extinction, recognizing that the greatest threat is the melting of Arctic sea ice caused by climate change. “This critical habitat designation enables us to work with federal partners to ensure their actions within its boundaries do not harm polar bear populations,” Strickland said. “We will continue to work toward comprehensive strategies for the long-term survival of this iconic species.”

Designation of critical habitat does not in itself block economic activity or other development, but requires federal officials to consider whether a proposed action would adversely affect the polar bear’s habitat and interfere with its recovery. Nearly 95 percent of the designated habitat is sea ice in the Beaufort and Chukchi seas off Alaska’s northern coast. Polar bears spend most of their lives on frozen ocean where they hunt seals, breed and travel.

Alaska Gov. Sean Parnell and the state’s oil and gas industry had complained that the preliminary plan released last year was too large and dramatically underestimated the potential economic impact. The designation could result in hundreds of millions of dollars in lost economic activity and tax revenue, they said. In response to the Obama administration’s action, Parnell said Wednesday that the state is pleased that existing manmade structures will be exempted from critical habitat considerations.

Critics: Obama lagging on endangered species

WASHINGTON (AP) — Environmental groups are criticizing the Obama administration for what they say is a continuing backlog of plants and animals in need of protection under the Endangered Species Act. The Fish and Wildlife Service says 251 species are candidates for endangered species protection, four more than a similar review last year found. Environmental groups say that shows the Obama administration has done little to improve on what they consider a dismal record on endangered species under President George W. Bush. Nearly two years after taking office, Obama has provided Endangered Species Act protection to 51 plants and animals, an average of 25 a year.

Friday, November 26, 2010 The Vicksburg Post

DeLay convicted

Ex-lawmaker faces jail time or probation

AUSTIN, Texas (AP) — Former U.S.
House Majority Leader Tom DeLay argued throughout his trial that the deck was stacked against him by a politically motivated prosecutor and a jury from the most Democratic city in one of the most Republican states. But following DeLay’s conviction Wednesday on money laundering and conspiracy charges, some legal experts say the edge might now shift to the Republican who represented a conservative Houston suburb for 22 years. Before DeLay’s …

“It is absolutely impossible he would get anywhere near life,” said Philip Hilder, a Houston criminal defense attorney and former federal prosecutor. “It would be a period of a few years, if he gets prison.” Barry Pollack, a Washington-based lawyer who represents clients in white-collar and government corruption cases, said the judge might not feel the need to throw the book at DeLay, figuring the conviction itself is severe punishment for someone who once ascended to the No. 2 post in the House of Representatives. For example, as a convicted felon, DeLay won’t be able to run again for public office or even be able to cast a vote until he completes his sentence. “I think in a lot of cases a judge wants to make an example, but I don’t see that happening here,” … corporate money can’t go directly to political campaigns.

THE ASSOCIATED PRESS
Tom DeLay leaves the courtroom Wednesday in Austin, Texas.

No link found with e-mail, Rep. Waters ethics case

WASHINGTON (AP) — A recently discovered e-mail, which forced postponement of Rep. Maxine Waters’ ethics trial, appears to bring the House ethics committee no closer to proving she tried to obtain a U.S. bailout — during the financial crisis — … grandson. It said Waters was closely watching the writing of bailout legislation that included a provision to help minority-owned banks. But OneUnited Bank — where
The money helped Republicans take control of the Texas House in 2002, and once there, they were able to push through a DeLay-engineered congressional redistricting plan that sent more Texas Republicans to Congress in 2004, strengthening DeLay’s political power. While the string of alleged events might … — including the most serious ones. Although prosecutors argued Blagojevich wanted to enrich himself by trying to sell the Senate seat that once belonged to President Barack Obama, Turner said a “corrupt motive” was tougher to prove in that case.

Murkowski seeks voice in Alaska election lawsuit

Sen. Lisa Murkowski

ANCHORAGE, Alaska (AP) — U.S. Sen. Lisa Murkowski is arguing that Alaska will be harmed if she isn’t seated. The Republican incumbent, who mounted a write-in bid after losing the primary to Joe Miller, declared victory after the ballot count showed her with a 10,328-vote lead — a total that includes 8,159 ballots contested by Miller observers. Miller sued this week in Fairbanks Superior Court, claiming that elections officials illegally accepted improperly marked write-in ballots that benefited Murkowski. Miller said a strict interpretation of state law bans any ballot that does not include a candidate’s name as it appears on a declaration of candidacy, or simply the last name of the candidate. Alaska elections officials have accepted minor misspellings on
write-in ballots. Attorneys for Murkowski said her seat will be vacant and Alaska will have only one senator if she’s not seated Jan. 3. “There are numerous critical issues facing our nation and Alaskans deserve to have full representation in the United States Senate,” attorney Scott M. Kendall wrote in a motion to intervene in the lawsuit. He warned that Murkowski would have a gap in service if she’s not seated and she would lose her seniority. “She would go from her current rank of 43rd to 100th,” he wrote. …
Rep. Maxine Waters

… Waters’ husband is a stockholder — wasn’t mentioned, even though it was among the institutions that could have benefited from the provision. A key question is whether Waters instructed Moore to get assistance for OneUnited, when her husband’s investment in the Boston-based institution was in danger of becoming worthless during the near-financial collapse of late 2008. Waters has contended she was simply trying to help all minority banks in trouble — and specifically those, like OneUnited, that were hurt by their investments in the then-collapsing mortgage giants Fannie Mae and Freddie Mac.
A8 Friday, November 26, 2010 The Vicksburg Post

Lawyer: Killing suspect not expert rescuer

Gabe Watson returning to Alabama to face charges in wife’s death

BIRMINGHAM, Ala. (AP) — An accomplished diver charged with murder in Alabama in the honeymoon death of his wife had been certified in rescues, but had little formal training, his attorney said today. Gabe Watson served 18 months in an Australian prison for not doing enough to save his wife in 2003, but now faces more serious charges in Alabama. Prosecutors believe he hatched the plan to kill his wife, Tina Watson, 26, in Alabama before the trip.

Tina Watson

Watson’s attorney said his client had only taken a short rescue certification class two years before the newlyweds’ scuba dive along the Great Barrier Reef. “It was a half-day class,” attorney Joseph Basgier told CBS’s “Early Show.” “He had never participated in a rescue dive before. He wasn’t an expert rescuer. He had never done it, and he was scared, too. This was his new wife.”

THE ASSOCIATED PRESS
Gabe Watson arrives at Brisbane Airport in Australia on Thanksgiving Day after his release from a Queensland state jail.

… Watson’s life insurance policy. A $33,000 insurance payment was made to Tina Watson’s father, not her husband. Tina Watson’s father said his daughter told him before she died that Gabe Watson wanted her to increase the value of her policy and name her husband as the beneficiary. Watson, 33, pleaded guilty to manslaughter, a punishment Alabama Attorney General Troy King said was too lenient. He arrived in Los Angeles on Thursday after he was deported from Melbourne, Australia. He is expected back in Alabama early next week after a court appearance in California, King said. … another Watson attorney, Brett Bloomston. Watson was indicted by an Alabama grand jury on capital murder in the course of kidnapping, and capital murder for pecuniary gain, prosecutors said.
The charges were sealed until Watson reached the United States, and King refused to discuss the evidence in the case in detail. He said prosecutors believe Watson came up with a plan to kill his wife while they were in Alabama, which gives the state jurisdiction over her death. “We’re obviously anxious to get him back to Alabama,” King said.

Second drug tunnel found near San Diego

SAN DIEGO (AP) — U.S. authorities … Thursday … were to release further details of the probe this … for shipping loads of illegal drugs.

Man, 78, accused of Obama threat

COLUMBIA, S.C. (AP) — A 78-year-old South Carolina man with more than a dozen weapons in his home has been arrested after federal authorities say he told a nurse he was thinking about killing President Barack Obama. Michael Stephen Bowden was being held today … weapons.

Airport protest never takes flight

CHICAGO (AP) — The big Opt-Out looked like a big bust as most of the Thanksgiving travelers selected for full-body scans and pat-down searches chose to submit to them rather than create havoc on one of the busiest flying days of the year. In fact, in some parts of the U.S., bad weather was a bigger threat. For days, activists had waged a loosely organized campaign on the Internet to encourage airline passengers to refuse full-body scans and insist on a pat-down in what was dubbed National Opt-Out Day. But on Wednesday, … scanner after a golf ball marker set off the metal detector. His wife, Marti Hancock, 58, said that ever since she was in the air on Sept. 11, 2001, and feared there was a bomb on her plane, she’s been supportive of … Speedo-style bathing suit, and others carrying signs. …

THE ASSOCIATED PRESS
A TSA screener pats down a traveler Wednesday in Orlando.

“The TSA now talks about re-evaluating everything,” said James Babb, an organizer for one of the protest groups, We Won’t Fly. “That is a tremendous victory for a grassroots movement.”
A9 Friday, November 26, 2010 The Vicksburg Post

Bargain shoppers crowd nation’s stores for deals

By The Associated Press
… The fierce battle for shoppers’ wallets promises savings for those willing and able to buy amid an economy that’s still worrying many. The good news is that retailers are heading into the season with some momentum after a solid start to November. …

Shopping
Continued from Page A1.

… “We had a lot of early birds. People were lined up before we opened.” The store saw a steady stream of customers throughout the day, she said.

That evening, as temperatures fell from 80 to 40 and a hard-blowing rain was dumped on the city, a Black Friday veteran was the first person in line awaiting opening of the Gap at the Outlets at Vicksburg at 11 p.m. “I do this every year, and I was first in line last year, too,” said Senetta Brown of Vicksburg, who arrived with a group of relatives at 10 p.m. “I’m here for the big sales. You can get a $69 sweater for half off.”

A couple of the outlet stores were open at 10 p.m., just as the rain started. Half an inch of rain had fallen through the early morning shopping hour. “We got a jump start on it,” said Ethan Walker, manager of Gymboree, a children’s clothing store. “The turnout is better than last year’s.” Walker believes customers flock to stores on Black Friday for the sizeable, time-sensitive sales that are offered for that one day, when almost all retailers turn a profit.

Stephanie Boyt, a resident of Oak Grove, La., about 125 miles west of Vicksburg, drove the extra miles to save as much as half on children’s clothing. “I spent about $20, and I saved about $30,” she said after checking out at Gymboree.
“I come here just because of the outlets.”

Jackson residents Velma Johnson and Mary Edwards made an overnight trip to Vicksburg because, they said, the deals start much earlier here. “The midnight shopping is something that Vicksburg started,” Johnson said as she pushed along a 5-foot stack of boxed kitchen appliances while waiting to check out around 5:30 this morning at JC Penney, which opened at 4 a.m. The pair had been shopping since 10 p.m. Thursday and continued into the early morning hours. “We’ve been up all night,” Johnson and Edwards said.

Along with retailers, some of Vicksburg’s hotels and motels saw increased bookings for the weekend. Holly Pendelton, front desk clerk at Jameson Inn, which is across the street from the outlets, said the motel saw reservations from customers who were in town for Black Friday shopping, as well as about 15 walk-in customers Thursday night. While the 60-room hotel was only half booked today, Pendelton said that number was higher than on any recent weekend. Theresa Clay, assistant general manager of Courtyard By Marriott, also near the outlets, said this morning that 84 out of 111 rooms were occupied. “I think that’s pretty good for this weekend, considering last year at this same time, we were only 25 percent occupied.”

Deals on power tools and appliances also yanked people out of bed a few hours earlier than usual. “We had in mind what we wanted before we came,” said James Davis, who, along with nephew Mike Davis, was among the first customers at The Home Depot this morning. The Vicksburg pair had piled into a shopping cart items such as Christmas trees, an electric skill saw and name-brand flashlights they said don’t usually go on sale any other time of the year. “That’s just a deal you can’t pass up,” Mike Davis said of the electric skill saw set that was priced at 50 percent off.
Black Friday deals will continue through the day, but some stores have said new deals will be released for “Black Saturday.” “We’ll have some more new deals on Saturday to keep customers coming back,” Home Depot store manager Jeff Woods said.

Merchants downtown did not participate in early Black Friday shopping hours, but will have two shopping events this weekend. A national initiative called Small Business Saturday is calling for the community to support small businesses, and Vicksburg Main Street, which promotes downtown businesses, was touting the event this morning. The Old Fashioned Christmas Open House special shopping event will be from 1 to 5 p.m. Sunday, and kick off extended shopping hours during the holiday season. Shops will stay open until 7 p.m. Mondays through Saturdays and 1 to 5 p.m. Sundays until Christmas.

Korea
Continued from Page A1.

… from leader Kim Jong Il to his young, inexperienced son Kim Jong Un, who is in his late 20s and is expected to … in the Yellow Sea starting Sunday. The North, which sees the drills as a major military provocation, unleashed its anger over the planned exercises in a dispatch today. “The situation on the Korean peninsula is inching closer to the brink of war,” … extends 230 miles (370 kilometers).

Theadore C. Bowman

Theadore C. Bowman died Wednesday, Nov. 24, 2010, at River Region Medical Center. He was 73. W.H. Jefferson Funeral Home has charge of arrangements.

Willie Joe Guise

CHICAGO — Willie Joe Guise, formerly of Vicksburg, died Friday, Nov. 19, 2010, in Chicago. He was 54. Mr. Guise had been employed as a heavy-equipment operator and was a member of Evergreen Baptist Church. He was preceded in death by his father, George Guise Sr.; his mother, Mary Lee Guise; three brothers, Robert Lee Guise, Jimmy Lee Guise and Tom Walker Guise; and a sister, Jessie Lee Banks. He is survived by three brothers, Lee Arthur Jackson of Yazoo City, George Guise Jr.
of Vicksburg and William McKinley Guise of Chicago; two sisters, Addie Chatman of Chicago and Angela Guise of Redwood; and nieces, nephews, cousins and other relatives. Services will be at 2 p.m. Saturday at W.H. Jefferson Funeral Home with the Rev. Melvin Bolden officiating. Burial will follow at Evergreen Cemetery. Visitation will be from 6 to 7 tonight at the funeral home.

Kathy Simmons Patty

TALLULAH — Kathy Simmons Patty died Wednesday, Nov. 24, 2010, at River Region Medical Center. She was 51. Mrs. Patty was born in Delhi and was a lifelong resident of Tallulah. She was a secretary and a member of Parkview Baptist Church. Survivors include her husband, Joe F. Patty Jr. of Tallulah; a son, Joseph Patty of Tallulah; her parents, Newtie and Nellie Simmons of Tallulah; a sister, Judy Lynn Toney of Tallulah; three grandchildren; and nieces, nephews and other relatives and friends. Services will be at 2 p.m. Saturday at Parkview Baptist Church with the Rev. Clifton Wheat officiating. Burial, directed by Crothers-Glenwood Funeral Home, will follow at Silver Cross Cemetery. Visitation will be Saturday at the church from 10 a.m. until the service.

Betty O. Wood

PORT GIBSON — Betty O. Wood died Tuesday, Nov. 23, 2010, at Jeff Anderson Hospital in Meridian. She was 57. She was preceded in death by a sister, Peggy Goff; and two brothers, Ozell and Dewain Alexander. Survivors include her husband, Mac M. Wood Sr. of Port Gibson; one son, Mac M. “Meritt” Jr. of Ellisville; and a brother, Charles Alexander of Duffee. Services will be at 3:30 p.m. Saturday at Stepping Stone Baptist Church with the Rev. Kenneth Garland officiating. Burial will follow at Wintergreen Cemetery with Milling Funeral Home of Union in charge of arrangements. Visitation will be Saturday at the church from 1 p.m. until the service. Pallbearers will be Tommy Thomas, Tony Ory, Will Thomas, Eugene Alexander, Jeremy Pinson, Curtis Lambert and Gordon Lambert. Honorary pallbearers will be deacons of Stepping Stone Baptist Church.

GLENWOOD FUNERAL HOMES
VICKSBURG • ROLLING FORK • PORT GIBSON • UTICA • TALLULAH, LA

• Port Gibson •
Mrs. Betty Diane Wood — Service 3:30 p.m. Saturday, November 27, 2010, Stepping Stone Baptist Church of Port Gibson. Interment Wintergreen Cemetery. Visitation 1 p.m. Saturday until the hour of service at the church.

• Vicksburg •
Mrs. Mary Ruth Pritchett — Service 2 p.m. Monday, November 29, 2010, Glenwood Chapel. Interment Cedar Hill Cemetery. Visitation noon Monday until the hour of service.

601-636-1414 • 45 Highway 80

WEATHER
BY CHIEF METEOROLOGIST BARBIE BASSETT

TONIGHT: 28° — Partly cloudy tonight, lows in the 20s.
SATURDAY: 60° — Partly cloudy Saturday, highs in the 60s.

LOCAL FORECAST
Saturday-Sunday: Partly cloudy, lows in the 20s, highs in the 60s.

STATE FORECAST
Tonight: Partly cloudy, lows in the 20s.
Saturday-Sunday: Partly cloudy, lows in the 20s, highs in the 60s.

This weather package is compiled from historical records and information provided by the U.S. Army Corps of Engineers, the City of Vicksburg and The Associated Press.

Almanac — Highs and Lows
High/past 24 hours............. 82º
Low/past 24 hours.............. 40º
Average temperature.......... 61º
Normal this date................. 53º
Record low.......... 24º before 1885
Record high.............. 81º in 1896

Rainfall (recorded at the Vicksburg Water Plant)
Past 24 hours.............. 0.52 inch
This month................ 3.90 inches
Total/year................ 41.28 inches
Normal/month............. 3.28 inches
Normal/year.............. 45.84 inches
RIVER DATA

Stages
Mississippi River at Vicksburg — Current: 10.8 | Change: +0.5 | Flood: 43 feet
Yazoo River at Greenwood — Current: 16.0 | Change: +1.2 | Flood: 35 feet
Yazoo River at Yazoo City — Current: 12.3 | Change: -0.7 | Flood: 29 feet
Yazoo River at Belzoni — Current: 15.0 | Change: -1.1 | Flood: 34 feet
Big Black River at West — Current: 2.7 | Change: -0.2 | Flood: 12 feet
Big Black River at Bovina — Current: 6.8 | Change: +0.1 | Flood: 28 feet

Steele Bayou
Land...................................... NA
River.................................... 57.9

Mississippi River Forecast
Cairo, Ill. — Saturday: 22.1 | Sunday: 23.6 | Monday: 25.2
Memphis — Saturday: 3.9 | Sunday: 4.5 | Monday: 5.4
Greenville — Saturday: 18.0 | Sunday: 17.8 | Monday: 17.9
Vicksburg — Saturday: 11.2 | Sunday: 11.0 | Monday: 10.9

Solunar table
Most active times for fish and wildlife
Saturday:
A.M. Active............................ 9:46
A.M. Most active..................... 3:33
P.M. Active........................... 10:12
P.M. Most active..................... 3:59

Sunrise/sunset
Sunset today.......................... 4:58
Sunset tomorrow....................... 4:58
Sunrise tomorrow...................... 6:43

deaths
The Vicksburg Post prints obituaries in news form for area residents, their family members and for former residents at no charge. Families wishing to publish additional information or to use specific wording have the option of a paid obituary.

FISHER FUNERAL HOME • VICKSBURG
“Continuing the Tradition of Quality Service with Affordable Choices”
Mr. Jack Keller, 1923-2010 — Memorial service to be announced at a later date.
Mr. Wilson H. McClain — Arrangements to be announced.
5000 Indiana Avenue, 601-629-0000 • 1830 Cherry Street, 601-636-7373

A10 Friday, November 26, 2010 | The Vicksburg Post

Teens lost in Pacific for 50 days get ‘miracle’ rescue

By The Associated Press

One of three teens rescued after drifting in the Pacific for 50 days arrives in Suva, Fiji, today. The teens were picked up Wednesday by a fishing trawler — undernourished, severely dehydrated and badly sunburned, but otherwise well. The ship’s first mate said the area they were in is way off any normal commercial route. Offered food, they devoured it, Fredricsen said. The rescue came not a moment too soon: Fredricsen said they had begun to drink sea water.

Two arrested in plot on Pakistan’s capital

ISLAMABAD (AP) — Police arrested two suicide bombers in Pakistan’s capital today who they said were planning to attack a mosque and a government building.

Al-Qaida and Taliban militants seeking to topple the U.S.-allied government have carried out scores of attacks in recent years, killing thousands. The state has responded by launching offensives in the remote northwest where the insurgents are based.

Police officer Bin Yamin said the detained men were linked to the Pakistani Taliban in the South Waziristan region, where the Pakistani army has been fighting the militants since last year. He said one of the arrested men was wearing an explosives vest and was on his way to attack an Islamabad mosque during Friday prayers when officers seized him. He did not say why the militants would target the mosque.

Most attacks have been on government, security or Western targets, though there have been seemingly indiscriminate blasts in public places presumably to spread terror and undermine confidence in the government.

Questioning of the suspects indicated that in addition to the mosque, they were also planning to hit government buildings in the capital, possibly even Parliament, Yamin added.
Interior Minister Rehman Malik confirmed the arrests, saying authorities learned about a possible suicide bombing at Parliament or nearby buildings Thursday night, after which they quickly increased security. “We took all the required measures without creating a panic,” Malik told the state-run Pakistan Television.

Earlier this month, a bomb killed 67 people at a mosque frequented by anti-Taliban elders in the northwest. Militants also penetrated a high-security area of the southern city of Karachi this month, detonating a car bomb that leveled the building, killing 15.

The last terrorist attack in Pakistan’s capital was in October last year, when a suicide bomber dressed as a security guard killed five U.N. staffers at the World Food Program’s office in Islamabad.

Meanwhile, suspected U.S. missiles hit a vehicle carrying three alleged militants in Pakistan’s northwest today, the latest in a barrage of strikes by unmanned planes on the Taliban stronghold, officials said.

WORLD
BY THE ASSOCIATED PRESS

Two killed in riot over Cairo church
CAIRO — Another Christian has died from gunshot wounds in clashes with police over construction of a Cairo church, raising the death toll to two, a security official said. Milad Malak, 24, was shot in the stomach and died after surgery today.

Yemen car bomb kills at least 2
SANAA, Yemen — A Shiite rebel group spokesman said a suicide car bomber struck Shiite mourners heading to a funeral in northern Yemen, killing at least two people today.
Mohammed Abdel-Salam said the bomber attacked the convoy that was traveling to Saada province to attend the funeral of Badr al-Hawthi, the father of the Shiite rebel group’s leader. Eight people were wounded. It was the second suicide bombing against Yemeni Shiites this week. On Wednesday, a suicide car bomb struck a convoy of Shiites on their way to a religious ceremony, killing 17.

ON TV — COLLEGE FOOTBALL
Boston College at Syracuse | 11 a.m. | ESPN
Michigan State at Penn State | 11 a.m. | ESPN2
South Florida at Miami | 11 a.m. | ESPNU
Kentucky at Tennessee | 11:21 a.m. | WJTV
Northwestern at Wisconsin | 2:30 p.m. | ABC
LSU at Arkansas | 2:30 p.m. | CBS
Oregon at Stanford | 5:30 p.m. | Versus
Mississippi State at Ole Miss | 6 p.m. | ESPNU
South Carolina at Clemson | 6 p.m. | ESPN2
Georgia Tech at Georgia | 6:45 p.m. | ESPN

THE VICKSBURG POST SPORTS
Friday, November 26, 2010 • Section B
Puzzles B7 | Classifieds B8
Steve Wilson, sports editor | E-mail: sports@vicksburgpost.com | Tel: 601.636.4545 ext. 142

Pats, Jets win — New England thrashes Detroit while the Jets scream over the Bengals / B4

SCHEDULE — PREP BASKETBALL: WC hosts NW Rankin, Saturday, 6 p.m.

COLLEGE FOOTBALL
USM faces Tulsa tonight
By The Associated Press
ON TV: 5:30 p.m., CBS College Sports — Southern Miss at Tulsa

Southern Miss still has a shot at the CUSA title game, but the Golden Eagles will need a win tonight at Tulsa and help from SMU and Memphis.

THE ASSOCIATED PRESS — Southern Miss quarterback Austin Davis is tackled by Houston defensive back Nick Saenz last week.

OKLAHOMA CITY — Entering the final week of the regular season, both Tulsa and Southern Miss still have a chance to advance to the Conference USA championship game.
However, that will change for one of the teams before the kickoff in Tulsa tonight. One team will be playing for the possibility of a title-game berth, the other simply for pride and bowl-berth improvement. Which team is which depends on what happens this afternoon when SMU visits East Carolina.

To win the West Division, Tulsa (8-3, 5-2) needs SMU to lose and needs to beat Southern Miss. Southern Miss (8-3, 5-2) needs SMU to win to keep alive the Golden Eagles’ slim hopes of capturing a share of the East Division crown and advancing to the title game. If that happens, Southern Miss still would need to beat Tulsa, then hope last-place Memphis can somehow upset current East front-runner Central Florida on Saturday.

Both Tulsa coach Todd Graham and Southern Miss coach Larry Fedora insist their focus won’t be on scoreboard-watching, but on the game at hand.

“I take the approach — and I am being sincere — I am not worried about that one bit,” Graham said. “I believe the challenge ahead of us is a big one. Southern Miss is the best team we have played in our conference. We better worry about that one and beat them. I do believe, if we do that, it’s going to take care of itself. I just believe that. I don’t think it will do any other way. We have to win.”

For his part, Fedora said “we have not even talked about the East Division lately and have not in a long time. We are more concerned with winning this next game and getting (win) number nine under our belts. That would be huge for this team.”

Tulsa is 5-0 at home this season while Southern Miss has a 4-1 road record and both teams are rolling entering the game. Since a 3-3 start that included narrow losses to East Carolina and SMU, Tulsa has won five straight. See USM, Page B3.

NFL — WHO’S HOT
BENJARVUS GREEN-ELLIS
The former Ole Miss running back scored two touchdowns in New England’s win over the Detroit Lions on Thursday.
New Orleans Saints defensive end Alex Brown celebrates a missed field goal by Dallas Cowboys placekicker David Buehler in the final minute Thursday. The Saints won 30-27.

SIDELINES
Aggies slam Longhorns
AUSTIN, Texas (AP) — After a season of miserable losses, Texas had one last chance to do something right. Beat No. 17 Texas A&M and the Longhorns could avoid their first losing season since 1997, end the regular season on a winning streak and qualify for a bowl game.

Cyrus Gray and the Aggies weren’t about to let any of that happen. Gray ran for 223 yards and two long touchdowns and Von Miller snagged a key interception in the final minutes to carry the Aggies to a 24-17 win Thursday night that left Texas with its first losing season under coach Mack Brown. Gray’s first touchdown covered 84 yards and pulled the Aggies out of an early 7-0 hole. The second, a 48-yarder, put them up 24-14 in the third.

LOTTERY — Weekly results: B2

Saints hold off Cowboys

Youth movement bears fruit
Warren Central makes drastic improvement
By Steve Wilson
swilson@vicksburgpost.com

David Jackson • The Vicksburg Post — Warren Central players cluster around coach Jesse Johnson, left, during Wednesday’s game against Crystal Springs.

In years past, it usually took Warren Central’s boys’ basketball program until near Christmas to get to three wins. But one day removed from Thanksgiving, the Vikings (3-3) have already reached the plateau and look to be making a serious move upward after dwelling in the Region 4-6A cellar for the past several years.

What’s the difference? “It’s all about having the players,” WC coach Jesse Johnson said. “We’re very young and very talented.”

And they’re doing it with youth. Out of the 20-man roster, 11 of them are sophomores or freshmen.
A big part of that is the development of sophomore do-it-all guard Kourey Davis, who scored 30 points in a 98-94 double-overtime loss to Crystal Springs on Wednesday.

David Jackson • The Vicksburg Post — Warren Central’s Eric Howard makes a layup against Crystal Springs. The Vikings lost 98-94 in double overtime.

A 6-foot-5 wing player, Davis can play all five positions, hit the 3-pointer, handle the ball in the open court, block shots, rebound and is strong finishing around the basket. He is averaging 18.4 points per game and 5.4 rebounds this season.

“That’s what we’re looking for from him,” Johnson said. “He’s going to lead us offensively and defensively. He’s doing an outstanding job of rebounding from the guard position.”

His emergence has made life easier for the returnees. Senior guard Jeremy Harper, one of the few returnees, is not having to carry the scoring load like he did last season. He’s averaging 16 points per contest on 49 percent shooting. Louis Carson, a junior, has helped solidify the Vikings at the two-guard position and on the wing, averaging 7.3 points per contest.

Another youngster helping the effort is Gerald Glass, who is 6-foot-5 despite only being a freshman. He leads the team in rebounding and gives the Vikings another long-armed defender down low. The Vikings like to run and gun.

By The Associated Press
ARLINGTON, Texas — Saints safety Malcolm Jenkins admittedly took a bad angle at the same time one of the cornerbacks slipped. That left Roy Williams wide open and sprinting down the field seemingly about to seal an incredible Thanksgiving comeback for the Dallas Cowboys.

“Honestly, it could have been a catastrophe,” Jenkins said. “A bad play turned good for us.”

Jenkins caught Williams from behind, stripping the ball away so forcefully at the 11-yard line that it wound up in the defender’s arms. Five plays later, Drew Brees threw a touchdown pass to Lance Moore that gave New Orleans a 30-27 victory.
So aware that cornerback Tracy Porter was in pursuit, Williams switched the ball from his right to left hand. But Jenkins was charging hard from that side. See WC, Page B3. See Saints, Page B3. B2 Friday, November 26, 2010 on tv BY THE ASSOCIATED PRESS COLLEGE FOOTBALL 1:30 p.m. CBS - Auburn at Alabama 2:30 p.m. ABC - Colorado at Nebraska 2:40 p.m. FSN - UCLA at Arizona St. 6 p.m. ESPN - Arizona at Oregon 9:15 p.m. ESPN - Boise St. at Nevada COLLEGE BASKETBALL 1:30 p.m. ESPN2 -Preseason NIT, Virginia Commonwealth vs. UCLA 1:30 p.m. ESPNU - Old Spice Classic, Texas A&M vs .Manhattan 4 p.m. ESPN - Preseason NIT, Tennessee at Villanova 4 p.m. ESPN2 - Old Spice Classic, Notre Dame vs. California 6:30 p.m. ESPNU - Old Spice Classic, Georgia vs. Temple NBA 6 p.m. ESPN2 - Houston at Charlotte 8:30 p.m. ESPN2 - Golden State at Memphis sidelines from staff & AP reports CYCLING Contador reiterates his innocence in doping MADRID — Alberto Contador reiterated his innocence over his failed doping test at the Tour de France and slammed the Astana team for abandoning him once the news broke. The 27-year-old Spanish cyclist is facing a two-year ban and risks losing his third Tour title after testing positive for the banned drug clenbuterol, which he claims came from contaminated meat. Contador labeled the charges “ridiculous” and felt the entire episode had discredited him. NFL Heimerdinger will coach despite cancer treatment NASHVILLE, Tenn. — Tennessee offensive coordinator Mike Heimerdinger is back at work and will call the plays Sunday when the Titans visit the Houston Texans before starting his treatment for cancer. Tennessee coach Jeff Fisher said Heimerdinger met with his doctors Wednesday night and will begin treatment Monday. Rookie quarterback Rusty Smith will be making his first NFL start against the Texans. Black Eyed Peas will perform at Super Bowl ARLINGTON, Texas — The Black Eyed Peas will be the featured halftime performer at the Super Bowl. 
The Grammy award-winning group will perform Feb. 6 at Cowboys Stadium. The announcement was made during the Dallas Cowboys’ game against the New Orleans Saints.

NBA
Lee cleared after treatment for infection
OAKLAND, Calif. — Golden State Warriors forward David Lee was medically cleared to resume light conditioning a week after undergoing a second procedure on his left elbow to treat an infection. He has had stitches and a catheter for IVs removed from his right arm, the team said.

flashback
BY THE ASSOCIATED PRESS
Nov. 26
1967 — Sonny Jurgensen of the Washington Redskins passes for 418 yards and three touchdowns in a 42-37 loss to the Cleveland Browns.
1988 — For the first time in series history, Notre Dame and Southern Cal enter the game undefeated and occupying college football’s top two spots in the nation. The top-ranked Fighting Irish win 27-10.
1997 — Charles Jones scores a school record 53 points and Long Island University beats Division III Medgar Evers 179-62, breaking the NCAA record for margin of victory. The 117-point difference eclipses the mark of 97 set by Southern University in a 154-57 victory over Patten on Nov. 26, 1993.
2005 — Marek Malik ends the NHL’s longest shootout in the 15th round, fooling goalie Olie Kolzig with a trick shot to give the New York Rangers a 3-2 victory over the Washington Capitals. With only two healthy skaters left on the Rangers’ bench, Malik, a defenseman, takes a shot with his stick between his skates and beats Kolzig for the victory.

scoreboard

NFL
AMERICAN CONFERENCE
East              W   L  T   Pct   PF   PA
New England       9   2  0  .818  334  266
N.Y. Jets         9   2  0  .818  264  187
Miami             5   5  0  .500  172  208
Buffalo           2   8  0  .200  213  276
South
Indianapolis      6   4  0  .600  268  216
Jacksonville      6   4  0  .600  220  270
Tennessee         5   5  0  .500  257  198
Houston           4   6  0  .400  244  287
North
Baltimore         7   3  0  .700  233  178
Pittsburgh        7   3  0  .700  235  165
Cleveland         3   7  0  .300  192  206
Cincinnati        2   9  0  .182  225  288
West
Kansas City       6   4  0  .600  243  207
Oakland           5   5  0  .500  238  223
San Diego         5   5  0  .500  274  211
Denver            3   7  0  .300  217  287

NATIONAL CONFERENCE
East              W   L  T   Pct   PF   PA
Philadelphia      7   3  0  .700  284  226
N.Y. Giants       6   4  0  .600  253  220
Washington        5   5  0  .500  202  245
Dallas            3   8  0  .273  256  301
South
Atlanta           8   2  0  .800  256  192
New Orleans       8   3  0  .727  265  197
Tampa Bay         7   3  0  .700  209  206
Carolina          1   9  0  .100  117  252
North
Chicago           7   3  0  .700  191  146
Green Bay         7   3  0  .700  252  146
Minnesota         3   7  0  .300  172  226
Detroit           2   9  0  .182  258  282
West
Seattle           5   5  0  .500  185  233
St. Louis         4   6  0  .400  177  198
Arizona           3   7  0  .300  188  292
San Francisco     3   7  0  .300  160  219

Thursday’s Games
New England 45, Detroit 24
New Orleans 30, Dallas 27
N.Y. Jets 26, Cincinnati 10
Sunday’s Games
Tennessee at Houston, noon
Green Bay at Atlanta, noon
Minnesota at Washington, noon
Jacksonville at N.Y. Giants, noon
Pittsburgh at Buffalo, noon
Carolina at Cleveland, noon
Kansas City at Seattle, 3:05 p.m.
Miami at Oakland, 3:05 p.m.
St. Louis at Denver, 3:15 p.m.
Philadelphia at Chicago, 3:15 p.m.
Tampa Bay at Baltimore, 3:15 p.m.
San Diego at Indianapolis, 7:20 p.m.
Monday’s Game
San Francisco at Arizona, 7:30 p.m.
Dec. 2
Houston at Philadelphia, 7:20 p.m.
Dec. 6
N.Y. Jets at New England, 7:30 p.m.

SAINTS 30, COWBOYS 27
New Orleans   17   3   3   7 — 30
Dallas         0   6  14   7 — 27

First Quarter
NO—Ivory 3 run (Hartley kick), 13:09.
NO—FG Hartley 50, 9:11.
NO—Ivory 6 run (Hartley kick), 4:33.
Second Quarter
Dal—FG Buehler 21, 5:13.
NO—FG Hartley 45, :43.
Dal—FG Buehler 53, :00.
Third Quarter
Dal—Austin 60 run (Buehler kick), 14:01.
NO—FG Hartley 28, 9:30.
Dal—Barber 1 run (Buehler kick), 4:30.
Fourth Quarter
Dal—Choice 1 run (Buehler kick), 5:51.
NO—Moore 12 pass from Brees (Hartley kick), 1:55.
A—93,985.
———
                                         NO                    Dal
First downs................................21........................24
Total Net Yards.......................414......................457
Rushes-yards.......................21-81.................32-144
Passing....................................333......................313
Punt Returns............................1-0.....................1-13
Kickoff Returns.......................1-22...................5-110
Interceptions Ret......................1-4.....................1-10
Comp-Att-Int..................... 23-39-1............... 30-42-1
Sacked-Yards Lost.................2-19.......................1-0
Punts...................................2-60.0..................2-55.0
Fumbles-Lost............................2-1.......................7-2
Penalties-Yards......................4-30.....................4-19
Time of Possession.............25:19...................34:41
———
INDIVIDUAL STATISTICS
RUSHING—New Orleans, Jones 10-45, Ivory 7-38, Bush 1-1, Brees 3-(minus 3). Dallas, Austin 1-60, Jones 13-44, Kitna 5-20, Barber 10-19, Choice 1-1, Bryant 1-0, McBriar 1-0.
PASSING—New Orleans, Brees 23-39-1-352. Dallas, Kitna 30-42-1-313.
RECEIVING—New Orleans, Colston 6-105, Moore 5-39, Henderson 4-97, Graham 3-23, Jones 3-21, Meachem 1-55, Bush 1-12. Dallas, Witten 10-99, Jones 7-69, R.Williams 5-83, Austin 3-25, Bennett 2-17, Barber 2-8, Hurd 1-12.
MISSED FIELD GOALS—Dallas, Buehler 59 (WL).

JETS 26, BENGALS 10
Cincinnati    0   7   0   3 — 10
N.Y. Jets     0   3  14   9 — 26

Second Quarter
NYJ—FG Folk 27, 9:01.
Cin—Shipley 5 pass from C.Palmer (Pettrey kick), :43.
Third Quarter
NYJ—B.Smith 53 run (Folk kick), 14:13.
NYJ—Holmes 13 pass from Sanchez (Folk kick), 4:09.
Fourth Quarter
Cin—FG Pettrey 28, 12:33.
NYJ—B.Smith 89 kickoff return (Folk kick), 12:18.
NYJ—Pryce safety, 6:52.
A—78,903.
———
                                         Cin                  NYJ
First downs................................13........................18
Total Net Yards.......................163......................319
Rushes-yards.......................20-46.................37-170
Passing....................................117......................149
Punt Returns............................5-6.....................4-10
Kickoff Returns.......................5-97...................4-129
Interceptions Ret....................1-11.....................2-11
Comp-Att-Int..................... 17-39-2............... 16-28-1
Sacked-Yards Lost.................3-18.....................2-17
Punts...................................7-41.4..................8-44.3
Fumbles-Lost............................1-1.......................0-0
Penalties-Yards......................2-25.....................8-64
Time of Possession.............26:31...................33:29
———
INDIVIDUAL STATISTICS
RUSHING—Cincinnati, Benson 18-41, C.Palmer 2-5. N.Y. Jets, Greene 18-70, B.Smith 3-55, Tomlinson 13-49, Sanchez 3-(minus 4).
PASSING—Cincinnati, C.Palmer 17-38-2-135, Ochocinco 0-1-0-0. N.Y. Jets, Sanchez 16-28-1-166.
RECEIVING—Cincinnati, Shipley 5-38, Ochocinco 4-41, Owens 3-17, Gresham 2-36, Leonard 2-3, Benson 1-0. N.Y. Jets, Holmes 5-44, Keller 4-49, Edwards 2-20, Tomlinson 2-14, B.Smith 1-23, Greene 1-11, P.Turner 1-5.
MISSED FIELD GOALS—Cincinnati, Pettrey 27 (WL). N.Y. Jets, Folk 44 (WL).

PATRIOTS 45, LIONS 24
New England   3   7  14  21 — 45
Detroit       7  10   7   0 — 24

First Quarter
NE—FG Graham 19, 5:00.
Det—C.Johnson 19 pass from Sh.Hill (Rayner kick), :00.
Second Quarter
Det—Morris 1 run (Rayner kick), 5:58.
NE—Green-Ellis 15 run (Graham kick), :45.
Det—FG Rayner 44, :00.
Third Quarter
NE—Welker 5 pass from Brady (Graham kick), 10:58.
Det—Morris 1 run (Rayner kick), 6:50.
NE—Branch 79 pass from Brady (Graham kick), 5:12.
Fourth Quarter
NE—Branch 22 pass from Brady (Graham kick), 13:45.
NE—Welker 16 pass from Brady (Graham kick), 6:42.
NE—Green-Ellis 1 run (Graham kick), 3:14.
A—60,965.
———
                                         NE                   Det
First downs................................20........................25
Total Net Yards.......................447......................406
Rushes-yards.....................25-109.................27-129
Passing....................................338......................277
Punt Returns..........................3-47.......................1-8
Kickoff Returns.......................4-69...................7-194
Interceptions Ret....................2-73.......................0-0
Comp-Att-Int..................... 21-27-0............... 27-46-2
Sacked-Yards Lost...................1-3.......................2-8
Punts...................................3-51.0..................3-47.0
Fumbles-Lost............................0-0.......................0-0
Penalties-Yards......................5-50.....................8-66
Time of Possession.............28:55...................31:05
———
INDIVIDUAL STATISTICS
RUSHING—New England, Green-Ellis 12-59, Woodhead 8-32, Tate 1-17, Brady 4-1. Detroit, Morris 9-55, A.Brown 13-36, Sh.Hill 4-23, C.Johnson 1-15.
PASSING—New England, Brady 21-27-0-341. Detroit, Sh.Hill 27-46-2-285.
RECEIVING—New England, Welker 8-90, Gronkowski 5-65, Branch 3-113, Woodhead 2-13, Crumpler 1-27, Hernandez 1-18, Morris 1-15. Detroit, Pettigrew 5-67, Morris 5-20, C.Johnson 4-81, A.Brown 4-29, Burleson 3-35, B.Johnson 2-26, Felton 2-7, Heller 1-13, D.Williams 1-7.
MISSED FIELD GOALS—Detroit, Rayner 46 (WR).
College football

SOUTHEASTERN CONFERENCE
East              Conference   All Games
                   W   L        W   L
South Carolina     5   3        8   3
Florida            4   4        7   4
Georgia            3   5        5   6
Kentucky           2   5        6   5
Tennessee          2   5        5   6
Vanderbilt         1   7        2   9
West
Auburn             7   0       11   0
LSU                6   1       10   1
Alabama            5   2        9   2
Arkansas           5   2        9   2
Mississippi St.    3   4        7   4
Ole Miss           1   6        4   7

Today’s Game
Auburn at Alabama, 1:30 p.m.
Saturday’s Games
Kentucky at Tennessee, 11:21 a.m.
LSU vs. Arkansas, at Little Rock, Ark., 2:30 p.m.
Florida at Florida St., 2:30 p.m.
Mississippi St. at Ole Miss, 6 p.m.
South Carolina at Clemson, 6 p.m.
Wake Forest at Vanderbilt, 6:30 p.m.
Georgia Tech at Georgia, 6:45 p.m.

CONFERENCE USA
East Division     Conference   All Games
                   W   L        W   L
UCF                6   1        8   3
Southern Miss      5   2        8   3
East Carolina      5   2        6   5
Marshall           3   4        4   7
UAB                3   4        4   7
Memphis            0   7        1  10
West Division
Tulsa              5   2        8   3
SMU                5   2        6   5
Houston            4   4        5   6
UTEP               3   5        6   6
Tulane             2   5        4   7
Rice               2   5        3   8

Today’s Games
SMU at East Carolina, 1 p.m.
Southern Miss at Tulsa, 5:30 p.m.
Saturday’s Games
UCF at Memphis, 11 a.m.
Tulane at Marshall, 11 a.m.
UAB at Rice, 2:30 p.m.
Houston at Texas Tech, 7 p.m.
SOUTHWESTERN ATHLETIC CONFERENCE Eastern Conference All Games W L W L Jackson St...................6 3 8 3 Alabama St....................6 3 7 3 Alcorn St......................4 5 5 6 Alabama A&M...............2 7 3 8 MVSU............................0 9 0 10 Western Conference All Games W L W L Texas Southern.............8 1 8 3 Grambling......................7 1 8 2 Prairie View...................6 3 7 4 Ark-Pine Bluff................4 5 5 6 Southern U....................1 7 2 8 Thursday’s Game Tuskegee 17, Alabama St. 10 Tank McNamara Saturday’s Game Grambling St. vs. Southern, at N. Orleans, 1 p.m. Top 25 Basketball Schedule TEXAS A&M 24, TEXAS 17 Texas A&M Texas 0 7 17 0 — 24 7 0 7 3 — 17 First Quarter Tex—Goodwin 31 pass from Gilbert (Tucker kick), 1:21. Second Quarter TAM—Gray 84 run (Bullock kick), 5:08. Third Quarter TAM—FG Bullock 50, 12:06. TAM—Fuller 2 pass from Tannehill (Bullock kick), 10:35. Tex—Gilbert 1 run (Tucker kick), 4:39. TAM—Gray 48 run (Bullock kick), 4:23. Fourth Quarter Tex—FG Tucker 24, 9:46. A—100,752. ——— TAM Tex First downs................................15........................19 Rushes-yards.....................36-238.................38-140 Passing....................................128......................219 Comp-Att-Int..................... 14-30-0............... 20-37-2 Return Yards.............................34........................13 Punts-Avg............................9-33.3..................9-39.3 Fumbles-Lost............................4-2.......................3-2 Penalties-Yards......................8-74.....................4-25 Time of Possession.............26:19...................33:41 ——— INDIVIDUAL STATISTICS RUSHING—Texas A&M, Gray 27-223, R.Swope 4-16, Tannehill 5-(minus 1). Texas, C.Johnson 14-107, Whittaker 10-35, Monroe 1-2, Goodwin 1-0, Gilbert 12-(minus 4). PASSING—Texas A&M, Tannehill 14-30-0-128. Texas, Gilbert 20-37-2-219. 
RECEIVING—Texas A&M, McNeal 6-64, Fuller 3-24, R.Swope 2-20, Nwachukwu 1-11, Prioleau 1-5, Gray 1-4. Texas, Kirkendoll 7-52, Whittaker 5-59, Chiles 2-33, Goodwin 2-32, Davis 2-14, M.Williams 1-23, G.Smith 1-6.

Prep Football
MHSAA Playoffs
Semifinals
All games today at 7 p.m.
Class 6A
South Panola (13-0) at Madison Central (12-1)
Meridian (13-0) at Oak Grove (8-4)
Class 5A
Ridgeland (13-0) at West Point (12-1)
West Jones (12-0) at Brookhaven (8-4)
Class 4A
Lafayette (14-0) at Noxubee County (13-1)
Mendenhall (10-4) at North Pike (13-1)
Class 3A
Aberdeen (13-1) at Winona (12-2)
Forest (13-0) at Tylertown (11-2)
Class 2A
West Bolivar (12-2) at Calhoun City (14-0)
Lumberton (12-1) at Taylorsville (13-0)
Class 1A
Durant (13-0) at Okolona (11-2)
Dexter (8-5) at Mount Olive (9-4)
———
MAIS Playoffs
Championship games
At Mississippi College
Class AA
Today, 5:30 p.m.
Leake Academy (11-2) vs. River Oaks (12-1)
Class A
Today, 12:30 p.m.
Tri-County (14-0) vs. Trinity (14-0)

College Basketball
Top 25 Basketball Schedule
Today’s Games
No. 3 Ohio State vs. Miami (Ohio), 3 p.m.
No. 4 Kansas State vs. Texas Southern, 7 p.m.
No. 6 Kansas vs. Ohio at Orleans Arena, Las Vegas, 7 p.m.
No. 7 Villanova vs. No. 24 Tennessee at Madison Square Garden, 4 p.m.
No. 9 Syracuse vs. Michigan at Boardwalk Hall, Atlantic City, N.J., 7 p.m.
No. 10 Purdue vs. Southern Illinois at Sears Centre Arena, Hoffman Estates, Ill., 7:30 p.m.
No. 18 San Diego State at San Diego Christian, 9 p.m.
No. 21 Temple vs. Georgia at HP Field House, Orlando, Fla., 6:30 p.m.
No. 23 BYU vs. South Florida at the South Padre Island (Texas) Convention Center, 5 p.m.
th Carolina vs. College of Charleston, 4:30 p.m.
Mississippi Schedule
Today’s Games
Mississippi Valley St. vs. Liberty, 2:30 p.m., at South Padre Island, Texas
Tougaloo vs. Freed-Hardeman, 5 p.m., at Jackson, Tenn.
Penn St. at Ole Miss, 6 p.m.
Troy at Mississippi St., 6 p.m.
Saturday’s Games
William Carey at Southern Polytechnic St., 3 p.m.
Texas Lutheran at Mississippi College, 3 p.m.
Tougaloo at Union University, 6 p.m.
Spring Hill at Southern Miss, 7:30 p.m.
Delta St. at Alabama-Huntsville, 7:30 p.m.

NBA
EASTERN CONFERENCE
Atlantic Division
                 W   L   Pct     GB
Boston          11   4  .733      —
New York         8   8  .500  3 1/2
Toronto          6   9  .400      5
New Jersey       5  10  .333      6
Philadelphia     3  12  .200      8
Southeast Division
Orlando         10   4  .714      —
Atlanta          9   7  .563      2
Miami            8   7  .533  2 1/2
Washington       5   9  .357      5
Charlotte        5  10  .333  5 1/2
Central Division
Chicago          8   5  .615      —
Indiana          7   6  .538      1
Cleveland        6   8  .429  2 1/2
Milwaukee        5   9  .357  3 1/2
Detroit          5  10  .333      4
WESTERN CONFERENCE
Southwest Division
San Antonio     13   1  .929      —
New Orleans     11   3  .786      2
Dallas          10   4  .714      3
Memphis          6   9  .400  7 1/2
Houston          4  10  .286      9
Northwest Division
Utah            11   5  .688      —
Oklahoma City   10   5  .667    1/2
Denver           8   6  .571      2
Portland         8   6  .571      2
Minnesota        4  12  .250      7
Pacific Division
L.A. Lakers     13   2  .867      —
Golden State     7   8  .467      6
Phoenix          7   8  .467      6
Sacramento       4  10  .286  8 1/2
L.A. Clippers    3  13  .188 10 1/2
Thursday’s Games
Atlanta 116, Washington 96
L.A. Clippers 100, Sacramento 82
Today’s Games
Houston at Charlotte, 6 p.m.
Cleveland at Orlando, 6 p.m.
Toronto at Boston, 6:30 p.m.
Milwaukee at Detroit, 6:30 p.m.
Philadelphia at Miami, 6:30 p.m.
Oklahoma City at Indiana, 7 p.m.
Dallas at San Antonio, 7:30 p.m.
Chicago at Denver, 8 p.m.
L.A. Clippers at Phoenix, 8 p.m.
L.A. Lakers at Utah, 8 p.m.
Golden State at Memphis, 8:30 p.m.
New Orleans at Portland, 9 p.m.
Saturday’s Games
Atlanta at New York, noon
Orlando at Washington, 6 p.m.
Memphis at Cleveland, 6:30 p.m.
New Jersey at Philadelphia, 6:30 p.m.
Golden State at Minnesota, 7 p.m.
Miami at Dallas, 7:30 p.m.
Charlotte at Milwaukee, 8 p.m.
Chicago at Sacramento, 9 p.m.

NHL
EASTERN CONFERENCE
Atlantic Division
                GP   W   L  OT  Pts  GF  GA
Philadelphia    23  15   6   2   32  84  56
Pittsburgh      23  13   8   2   28  70  59
N.Y. Rangers    23  12  10   1   25  68  65
New Jersey      22   7  13   2   16  43  66
N.Y. Islanders  21   4  12   5   13  44  72
Northeast Division
Montreal        22  14   7   1   29  57  43
Boston          20  12   6   2   26  58  39
Ottawa          22  10  11   1   21  53  69
Toronto         20   8   9   3   19  47  55
Buffalo         23   8  12   3   19  58  69
Southeast Division
Washington      23  15   6   2   32  77  66
Tampa Bay       22  13   7   2   28  70  68
Atlanta         22  10   9   3   23  70  71
Carolina        21   9  10   2   20  65  71
Florida         20   9  11   0   18  53  51
WESTERN CONFERENCE
Central Division
Detroit         19  13   4   2   28  67  53
Columbus        20  14   6   0   28  59  47
St. Louis       20  12   5   3   27  54  52
Chicago         24  11  11   2   24  73  72
Nashville       20   9   7   4   22  48  53
Northwest Division
Vancouver       21  11   7   3   25  62  58
Colorado        22  12   9   1   25  76  67
Minnesota       20  10   8   2   22  47  53
Calgary         21   8  11   2   18  60  63
Edmonton        21   6  11   4   16  52  84
Pacific Division
Phoenix         21  11   5   5   27  62  59
Los Angeles     21  13   8   0   26  62  53
San Jose        20  10   6   4   24  60  54
Dallas          20  11   8   1   23  59  58
Anaheim         23  10  10   3   23  57  69
NOTE: Two points for a win, one point for overtime loss.
Today’s Games
Carolina at Boston, 11 a.m.
New Jersey at N.Y. Islanders, noon
Calgary at Philadelphia, noon
Ottawa at Pittsburgh, noon
Nashville at Minnesota, 1 p.m.
Chicago at Anaheim, 3 p.m.
Tampa Bay at Washington, 4 p.m.
Detroit at Columbus, 6 p.m.
Toronto at Buffalo, 6:30 p.m.
Montreal at Atlanta, 6:30 p.m.
N.Y. Rangers at Florida, 6:30 p.m.
St. Louis at Dallas, 7:30 p.m.
San Jose at Vancouver, 9 p.m.
Saturday’s Games
Philadelphia at New Jersey, noon
Calgary at Pittsburgh, noon
Buffalo at Montreal, 6 p.m.
Toronto at Ottawa, 6 p.m.
Florida at Tampa Bay, 6:30 p.m.
Dallas at St. Louis, 7 p.m.
N.Y. Rangers at Nashville, 7 p.m.
Anaheim at Phoenix, 7 p.m.
Minnesota at Colorado, 9 p.m.
San Jose at Edmonton, 9 p.m.
Chicago at Los Angeles, 9:30 p.m.

LOTTERY
Sunday’s drawing
La. Pick 3: 7-4-0
La. Pick 4: 8-7-5-6
Monday’s drawing
La. Pick 3: 1-8-3
La. Pick 4: 6-3-3-5
Tuesday’s drawing
La. Pick 3: 9-8-7
La. Pick 4: 2-3-4-1
Friday’s drawing
La.
Pick 3: 6-7-8
La. Pick 4: 0-4-6-7
Saturday’s drawing
La. Pick 3: 5-5-0
La. Pick 4: 4-6-2-7
Easy 5: 4-10-16-25-26
La. Lotto: 3-5-20-35-36-37
Powerball: 10-12-38-53-57; Powerball: 1; Power play: 5

Friday, November 26, 2010 • The Vicksburg Post B3

SPORTING TIMES
2010     A. M.         P. M.         SUN TIMES      MOON            MOON
Nov    Minor  Major  Minor  Major   Rise   Sets    Rises   Sets     Up      Down  DST
21 Sun F 3:51 10:04   4:18  10:31  06:37  05:00    4:55p   6:36a  NoMoon  11:47a
22 Mon > 4:45 10:59   5:13  11:27  06:38  05:00    5:47p   7:35a  12:14a  12:42p
23 Tue > 5:44 11:58   6:12  -----  06:39  04:59    6:46p   8:32a   1:10a   1:38p
24 Wed   6:46 12:31   7:14   1:00  06:40  04:59    7:48p   9:25a   2:07a   2:35p
25 Thu   7:48  1:34   8:15   2:02  06:41  04:59    8:54p  10:12a   3:03a   3:31p
26 Fri   8:48  2:35   9:15   3:02  06:42  04:58   10:00p  10:55a   3:58a   4:25p
27 Sat   9:46  3:33  10:12   3:59  06:43  04:58   11:07p  11:33a   4:51a   5:16p
Major=2 hours/Minor=1 hour. Times are centered on the major/minor window.
F = Full Moon; N = New Moon; Q = Quarter; > = Peak Activity!
DST column will have * in it if in effect that day.
Calibrated for Time Zone: 6W. Don’t forget to renew your tables at

ON THE HUNT
The Vicksburg Post invites all hunters to submit photographs of wildlife they have killed. Please include the following: location of the hunt; what type of weapon was used; how long the shot was; and the size of the animal. If it is a buck, include information on rack length, width and points. Please submit pictures of children before they have been blooded. Pictures with an excess amount of blood will not be considered. Photos can be hand-delivered to The Vicksburg Post, 1601 North Frontage Road, Vicksburg; e-mailed to sports@vicksburgpost.com; or mailed to: Sports, P.O. Box 821668, Vicksburg, MS, 39180.

Jordan Headley, 10, bagged this 115-pound doe during youth season in northern Warren County. She was hunting with her grandfather, Robert Peters. Jordan is the daughter of Blake and Kelly Headley of Jackson.
FISHING/HUNTING TIMES
Longitude: 90.90W  Latitude: 32.32N
2010     A. M.         P. M.         SUN TIMES      MOON            MOON
Nov    Minor  Major  Minor  Major   Rise   Sets    Rises   Sets     Up      Down  DST
28 Sun Q 10:39 4:27  11:04   4:52  06:43  04:58   NoMoon  12:08p   5:41a   6:06p
29 Mon  11:28  5:16  11:58   5:41  06:44  04:58   12:12a  12:42p   6:31a   6:55p
30 Tue  -----  6:02  12:15   6:27  06:45  04:58    1:17a   1:15p   7:20a   7:45p
01 Wed  12:35  6:47   1:00   7:13  06:46  04:57    2:23a   1:51p   8:11a   8:37p
02 Thu   1:20  7:33   1:47   8:00  06:47  04:57    3:30a   2:30p   9:03a   9:31p
03 Fri   2:08  8:22   2:36   8:50  06:48  04:57    4:38a   3:14p   9:58a  10:27p
04 Sat > 3:00  9:14   3:29   9:43  06:48  04:57    5:45a   4:04p  10:56a  11:25p
Major=2 hours/Minor=1 hour. Times are centered on the major/minor window.
F = Full Moon; N = New Moon; Q = Quarter; > = Peak Activity!
DST column will have * in it if in effect that day.
Calibrated for Time Zone: 6W. Don’t forget to renew your tables at

David Jackson • The Vicksburg Post
Warren Central’s Kourey Davis dunks during Wednesday’s game against Crystal Springs.

WC
Continued from Page B1.

Nikolas Koon, 7, bagged this 10-point buck on Nov. 7. He was hunting at Ashland Hunting Club in Port Gibson and used a .243 Winchester.

Vicksburg resident Eric Douglas bagged this 185-pound, 10-point buck on the opening weekend of archery season at Ridgeway Hunting Club. It was Douglas’ first bow kill.

Hawks snap funk with win over Wizards
By The Associated Press

NBA

Josh Smith isn’t concerned that the Atlanta Hawks have yet to beat a team with a winning record. Even against a struggling Southeast Division opponent like Washington, the Hawks must start somewhere.
“We hadn’t had a game like this all season,” Smith said. “We have to make a statement every time we play a division team.
We just have to get wins at this point.”
Joe Johnson scored 21 points, Smith added 20 with 14 rebounds and Atlanta beat the Wizards 116-96 on Thursday night to snap a three-game losing streak. Al Horford finished with 15 points and 13 boards for the Hawks, who won their 11th straight over Washington.
The fourth-year center credited a team meeting with helping to straighten out some misguided principles. Before the game, coach Larry Drew indicated he was close to making some lineup changes if he didn’t see an improved effort on defense. A blowout loss at home Monday to Boston embarrassed the entire team.
“We addressed a lot of issues we had as a team,” Horford said. “I kind of wanted to go out and show it. I didn’t want to talk about it before the game. We did that tonight, and I’m happy to get this win.”
Gilbert Arenas scored 21 and Nick Young added 20 for the Wizards, who still seek their first victory against an opponent with a winning record and their first road victory, too. Washington rookie John Wall, the NBA’s No. 1 overall draft pick, finished with 10 points, missing his first seven shots from the field and failing to score until his runner made it 69-50 midway through the third quarter.
“We get paid for this,” Wall said. “This is our job, this is our dream. This is what we want to do. So we need to start acting like it and take it more serious.”
The fast start was a welcomed change for a Hawks team that trailed entering the second period in its last four games.
“The disappointing thing was our lack of competitiveness to start the game,” Wizards coach Flip Saunders said. “I couldn’t understand why. (We) basically had a day off, had an opportunity and just did not match their intensity. We’ve been getting destroyed by points in the paint. We just haven’t shown any physical presence. And we have not competed against a good team yet.”
Smith gave the Hawks a 21-point lead midway through the third with a left-handed dunk.
Drew called the play during a timeout, and Johnson responded with a perfectly placed alley-oop pass to Smith, who jumped past Wall for the jam near the right baseline.
Drew rested his starters in the fourth quarter. Saunders did the same except for leaving Arenas on the floor to work with some of Washington’s younger players. Wall, though, sat during the final period. Before returning as a reserve in an overtime win over Philadelphia on Tuesday, Wall missed four games with a sprained left foot and is also battling a sore knee.
“You’re supposed to get up for games like this,” Young said. “It seems we always get embarrassed on national TV.”
———
David Buehler then tried a game-tying 59-yard field goal — his kick had the distance, but sailed just wide left with 25 seconds left.
The Cowboys came in 2-0 under interim coach Jason Garrett, playing like the Super Bowl contenders they were supposed to be instead of the 1-7 squad they became under coach Wade Phillips. They fell behind 17-0 less than 11 minutes into the game, but showed poise and toughness by fighting back. Dallas scored touchdowns after Reggie Bush fumbled on a punt return and Gerald Sensabaugh intercepted a pass by Brees that deflected off the hands of a receiver.
The Cowboys led 27-23 on Tashard Choice’s 1-yard TD run with 5:51 left. New Orleans then went three-and-out, and Williams was streaking down the field soon after that.
“It really looked like he made a very conscious effort to lock it up, even get the other hand on it,” Garrett said. “It’s one of those things you preach and you drill all the time: Don’t turn a great play into a disastrous play.”
The Saints won their fourth straight game. The defending Super Bowl champions played on Thanksgiving for the first time, and hope to be back at Cowboys Stadium in February for another Super Bowl.
Brees finished 23-of-39 for 352 yards, getting the Saints started with a game-opening, four-play drive that ended on Chris Ivory’s 3-yard run and never had a second-down play. Defensive lineman Will Smith intercepted a screen to set up Garrett Hartley’s 50-yard field goal, then Ivory scored on a 6-yard run. New Orleans led 20-3 before Buehler kicked a 53-yard field goal as the first half ended and Miles Austin went 60 yards on an end around on the second play of the second half.
If the ending had been slightly different, Garrett would’ve been the face of two of the greatest Thanksgiving rallies in club history. In 1994, he made a rare start at quarterback in place of an injured Troy Aikman and took the Cowboys from a 17-3 deficit against Brett Favre and the Packers to a 42-31 victory. Instead, this game might go down with Leon Lett’s snowy gaffe in 1993 as another one that got away.
“I think we demonstrated again what we’ve done the last few weeks — battle and fight,” Garrett said. “There were a lot of things to be proud of.”
Clippers 100, Kings 82
Eric Gordon scored 28 points and rookie Blake Griffin had 25 points and 15 rebounds for the Clippers, who took full advantage of their new one-two punch.
———
WC
Continued from Page B1.
With long arms on defense clogging passing lanes and igniting fast breaks, where the offense is most comfortable, the Vikings are averaging 64.8 points per contest. The Vikings can also hit the 3-ball, hitting 36 percent from beyond the arc.
But as Wednesday night’s loss showed, the Vikings are far from a finished product. They have to improve on defense and learn to execute better in the halfcourt set, where playoff games are won and lost. They tend to make youth-related turnovers. But the upside is startling. Johnson can already see that the future is very bright for his young squad.
“We’ve got some players, that are young and who are full of talent and hustle,” Johnson said.
“They just love the game of basketball and they’re making it really easy.”
The Vikings return to action on Saturday at home against Northwest Rankin.
———
USM
Continued from Page B1.
games, the latter two by three points each. The Golden Hurricane defense, much maligned after giving up 51 points to East Carolina in the season opener and 65 points to Oklahoma State two weeks later, hasn’t allowed more than 28 points in Tulsa’s last eight games.
“I’m proud of how hard our guys have played and the improvement they’ve made from the back end to the front end,” Graham said. “The key stat is scoring offense and scoring defense. If you look in conference play, our guys have played really well except for week one.”
Southern Miss has won three straight games. More impressively, after losing 41-13 to South Carolina in the season opener, the two other losses have come by just one point, 44-43 to East Carolina and 50-49 in two overtimes to Alabama-Birmingham. They rolled over Houston 59-41 last Saturday in a game played six days after three of their players — Martez Smith, Deddrick Jones and Tim Green — were shot outside a Hattiesburg bar on Nov. 14 while celebrating a win over Central Florida. All three survived, although the shooting left Smith paralyzed from the waist down. Smith made an appearance before the Houston game to be honored on Senior Day.
Fedora said Jones has been released from the hospital and that all three players “are all doing well and are all in really, really good spirits. We can all learn a lesson in the way these guys have handled this situation.”
———
Saints
Continued from Page B1.
“I lost the ballgame,” Williams said. “I let my teammates down. I need to fall down. We run the clock down and win the game. I was trying to make a play and they did a good job. ... We had the momentum going our way. We were there. That was a W.”
Until Jenkins snatched the ball away.
“It is an effort play and a heart play,” Saints coach Sean Payton said.
“One of those plays that inspires everybody on the team.” After Brees’ 12-yard TD pass with 1:55 left put New Orleans (8-3) ahead, the Cowboys had one more chance after twice trailing by 17 points. With a series of short passes, Jon Kitna got the Cowboys (3-8) to the New Orleans 41 before three consecutive incompletions. 601-631-0400 1601 N. Frontage • Vicksburg, MS B4 Friday, November 26, 2010 NFL Brady, Patriots feast on Lions The associated press New York Jets wide receiver Brad Smith loses his left shoe as he runs away from Cincinnati Bengals kicker Aaron Pettrey on an 89-yard kick return for a touchdown during the fourth quarter Thursday. The Jets won 26-10. It must be the shoe Jets blast hapless Bengals, 26-10 quarterback, also had a 53-yard touchdown run. “He’s a phenomenal athlete,” coach Rex Ryan said. “Everything we ask him to do, he The Vicksburg Post does.” With or without all of his footwear. “I think all that running in the backyard with no shoes on with my brother,” Smith said, “that helped.” Hours after New England beat Detroit to improve to 9-2, New York matched the Patriots with its fourth straight victory to set up a meaty Monday night matchup Dec. 6 in Foxborough, Mass., for first place in the AFC East. “The NFL couldn’t have scripted it better,” Jets safety Jim Leonhard said. New York some- thing different in every phase of the game every week. We find ways to put ourselves in the hole. I’m out of answers.” It looked as though New York was headed for yet another frenzied finish. The Jets had consecutive road overtime victories followed by a nail-biter Sunday, when they scored the winning touchdown with 10 seconds left against Houston. But it took New York only two plays to go ahead after halftime in this one. Mark Sanchez hit Santonio Holmes for 16 yards, then Smith used superb blocks by Dustin Keller and D’Brickashaw Ferguson to speed down the left sideline untouched for a 53-yard TD run. 
The Jets’ defense followed with a three-and-out, but Sanchez gave the ball back on an interception by Rey Maualuga.
“A terrible decision,” Sanchez said.
———
DETROIT (AP) — Tom Brady looks as sharp as ever for the New England Patriots — just in time for one of the biggest games of this NFL season.
Brady threw for 341 yards and four touchdowns, two each to Deion Branch and Wes Welker, in New England’s 45-24 win over the Detroit Lions on Thursday. It was the third victory in 12 days for the Patriots, who now enjoy a long layoff before hosting the New York Jets on Dec. 6 in a Monday night matchup between the two AFC East leaders. New England and New York both won Thursday to improve to 9-2, a half-game ahead of Atlanta for the league’s best record.
“I really appreciate what these guys have done so far to be where we’re at,” Patriots coach Bill Belichick said. “It hasn’t always been perfect or good, but we have a good opportunity here ahead of us.”
Brady went 21 of 27 with no interceptions, earning a perfect quarterback rating of 158.3 for the second time in his career. The Patriots traded Randy Moss in early October, but they don’t seem to have lost any explosiveness. Branch caught touchdown passes of 79 and 22 yards Thursday, and Welker caught eight passes for 90 yards. New England scored at least 31 points for the third straight week.
Brady’s first touchdown pass to Branch — the 79-yarder — was a jaw-dropper. Branch was wide open behind Alphonso Smith, and although Smith caught up with him around the 25-yard line, Branch slipped free of the Detroit defensive back twice en route to the end zone.

The Associated Press
New England Patriots running back BenJarvus Green-Ellis, left, celebrates his 15-yard touchdown run with Sammy Morris in the second quarter Thursday.

“He was supposed to run an in-cut and the guy was sitting on it,” Brady said. “He threw his hand up and I laid it out there for him. He made a great run after catch.
Certainly it’s not how we drew it up, but it’s just a great play by a great player.”
That touchdown tied the game at 24 in the third quarter. Branch beat Smith again for a touchdown early in the fourth to make it 31-24.
“I think these last three games we’ve been preparing very well,” Branch said. “We have a big weekend ahead of us. We have a little off time, but I think mentally the guys need to focus on what we’re trying to get accomplished.”
The Lions (2-9) are headed toward another losing season after dropping their seventh straight Thanksgiving game. Detroit has actually been pretty competitive this year, and the Lions took an early 14-3 lead against New England. But they couldn’t hold off Brady, even after pressuring the quarterback early on.
“We got after him, got him a little rattled,” defensive lineman Ndamukong Suh said. “Obviously he settled back down like the veteran quarterback, got his offense under control and made plays.”
Brady became the first player to have a perfect passer rating this season with a minimum of five attempts, according to STATS LLC. His first perfect game was Oct. 21, 2007, when he threw six touchdown passes in a victory over Miami. In addition to Branch and Welker, Brady had help from former Ole Miss running back BenJarvus Green-Ellis, who ran for two touchdowns. His second one finished the scoring with 3:14 left and led to some pushing and shoving. The frustrated Lions bookended the extra point with two unsportsmanlike conduct penalties, meaning New England’s ensuing kickoff was taken from the Detroit 40.

TONIGHT ON TV
• MOVIE
“Slums of Beverly Hills” — The niece, Marisa Tomei, of a divorced man, Alan Arkin, helps raise his adolescent daughter, Natasha Lyonne, and two sons on the outskirts of Beverly Hills./7 on Reelz
• SPORTS
CBS college sports — Southern Miss still has a shot at the CUSA title game, but the Golden Eagles will need a win tonight at Tulsa./5:30 on CBS
• PRIMETIME
“20/20” — President Barack Obama and first lady Michelle Obama discuss challenges facing the commander in chief./9 on ABC
— The niece, Marisa Tomei, of a divorced man, Alan Arkin, helps raise his adolescent daughter, Natasha Lyonne, and two sons on the outskirts of Beverly Hills./7 on Reelz n SPORTS CBS college sports — Southern Miss still has a shot at the CUSA title game, but the Golden Eagles will need a win tonight at Tulsa./5:30 on CBS n PRIMETIME “20/20â€? — President Barack Marisa Tomei Obama and first lady Michelle Obama discuss challenges facing the commander in chief./9 on ABC THIS WEEK’S LINEUP n EXPANDED LISTINGS TV TIMES — Network, cable and satellite programs appear in Sunday’s TV Times magazine and online at. com MILESTONES n BIRTHDAYS Rich Little, impressionist, 72; Tina Turner, singer, 71; John McVie, rock musician, 65; Linda Davis, country singer, 48; Kristin Bauer, actress, 37; Peter Facinelli, actor, 37; Natasha Bedingfield, pop singer, 29; Lil Fizz, singer, 25; Aubrey Collins, singer, 23. n DEATH Bernard Matthews — The man whose investment in 20 eggs laid the foundation for Britain’s biggest turkey business has died at 80. Matthews’ company announced today that he had died a day earlier at his home. The company’s advertising slogan — “bootifulâ€? — stemmed from Matthews’ Norfolk dialect, and was drummed in to the nation’s consciousness through television advertising. PEOPLE Joel mending after hip replacements Billy Joel is recovering from double hip-replacement surgery. Joel spokeswoman Claire Mercuri said Wednesday that the 61-year-old pop star had both hips replaced last week to correct a congenital condition. She said Joel, the Rock and Roll Hall of Famer responsible for such hits as “Piano Man,â€?“Uptown Girlâ€? and “New York State of Mind,â€? is doing Billy Joel “extremely well.â€? Joel toured this year and was recently promoting the documentary film “The Last Play at Shea.â€? There’s no word on when he plans to perform on stage again. 
Comedian Lopez’s wife files for divorce
George Lopez and his wife of 17 years are making their breakup official with her filing for divorce. Ann Serrano Lopez filed her petition, citing irreconcilable differences, Tuesday in Los Angeles. The pair announced their breakup in September and said they would remain partners in a charitable foundation. They have a 14-year-old daughter, and Ann Lopez is seeking physical custody. The filings do not offer any additional details about the split. The pair were married in September 1993 and did not list a separation date. The 49-year-old comedian hosts the talk show “Lopez Tonight” on TBS.

Chimney fire damages Fleiss’ home
A Thanksgiving Day chimney fire has ravaged part of the Nevada home of former Hollywood madam Heidi Fleiss. The 44-year-old Fleiss said she was at the house in Pahrump, west of Las Vegas, when the fire broke out, but was unharmed. The fire started in a chimney that lacked a “spark arrestor,” a device that prevents sparks from escaping the fireplace. Fire officials couldn’t immediately be reached for comment. It’s unclear how much damage the house sustained.

AND ONE MORE
And the bands played on
Orleans musicians keep busy post-Katrina
NEW ORLEANS (AP) — More than five years after Hurricane Katrina, New Orleans’ music scene remains vibrant and lively, despite the fact that some musicians forced from their homes haven’t returned and the doors to many places where they used to entertain remain closed.
Still, soul singer Irma Thomas said most changes are so subtle they’ve mostly gone unnoticed thanks in part to national exposure through television shows like the HBO series, “Treme,” events like the annual New Orleans Jazz & Heritage Festival and charitable efforts like Habitat for Humanity’s Musicians Village.
“And, that’s a good thing,” Thomas said in an interview. “New Orleans is one of those places that doesn’t take well to extreme changes.”
But ever since Aug. 29, 2005, when Katrina struck land and broken levees caused massive flooding that wiped out entire neighborhoods, change is exactly what the city’s undergone. It’s hard to say how many musicians have returned compared with the city’s overall repopulation.
“It’s hard to tell, though, because musicians here are at so many different levels,” she said. “There are street musicians who don’t do clubs and then there are people like Irma Thomas who get the great dates in the clubs. There’s probably a good amount who have returned, but there’s also a whole lot who moved on after the storm.”

The Associated Press
Margie Perez performs in a music club in New Orleans.

Perez said Katrina took everything she had, forcing her to start over from scratch.
“For a few years after, the gigs were few and far between,” she recalled. “It was really tough going.”
She said she wouldn’t have made it without help from the Tipitina Foundation’s MusicArtist Co-Op, which helped link her with disaster aid groups, provided free recording studio time and tips on how to redesign and market her CDs.
“The co-op empowered me, gave me hope and a spirit of camaraderie to let me know I wasn’t alone,” she said.
Five years later, she said being a musician here isn’t as hard. “There’s gigs to be had, if you’re willing to look for them and work hard enough for them,” she said.
Bass guitarist Donald Ramsey, who was born and raised in New Orleans, agreed. In fact, he recalled getting a gig shortly after the storm.
“A lot of club owners on Bourbon Street didn’t suffer damage like those with businesses in the inner city. Just after Katrina, maybe 20 to 25 percent of the clubs I played were available. It’s much better now. I’d say 99 percent of them are back and running. Music wise? It’s on and poppin’.
“If you’re proficient on your instrument, then naturally you will get a lot of calls for gigs. How busy you are is all according to who knows you and how well you play,” he said.
Ramsey said before the storm he played at Tipitina’s, Sweet Lorraine’s, House of Blues, Maple Leaf and Snug Harbor to name a few. “All of those places are operating now, and there are a bunch of new spots in place, too.”
He’s been performing with pianist Allen Toussaint, and only a handful of dates are usually played in the city. “The majority of my income is from the road,” he said.
That kind of road exposure and being featured in shows like “Treme” or on late night talk shows can only help the city’s comeback, Thomas said.
“People may not be aware that the musician they’re hearing is from New Orleans or that they got their start in New Orleans,” she said. “But that kind of exposure, for them and the city, is priceless. And when we’re represented in the national spotlight, it just shows that New Orleans as a whole is a city of survivors.”

West, Kung Fu Panda star at Macy’s parade
NEW YORK (AP) — A high-kicking Kung Fu Panda and a diary-toting Wimpy Kid joined the giant balloon lineup at the Macy’s Thanksgiving Day Parade.
“We don’t have anything like this in England,” she exclaimed. “We have parades. We don’t have any sort of huge, floating beasts. It’s very cool.”
As millions more watched the live broadcast on television, revelers gathered nationwide for other parades in cities such as Detroit, Chicago and Philadelphia. The parades headline observances across the nation that also feature football and family dinners with too much food on the table.
In his weekly radio and Internet address, President Barack Obama called on Americans to help each other through tough times.
“This is not the hardest Thanksgiving America has ever faced,” Obama said. “But as long as many members of our American family are hurting, we’ve got to look out for one another.”
He later telephoned ten U.S. servicemen and women stationed around the world to thank them for their service and sacrifice. He wished them and their families a happy Thanksgiving, before joining his own for the holiday.

The Associated Press
The Kung Fu Panda Balloon floats through Times Square during the Macy’s Thanksgiving Day Parade in New York Thursday.

The Macy’s parade featured an eclectic lineup of entertainers including Kanye West, Gladys Knight and Colombian rocker Juanes. The Broadway casts of “American Idiot” and “Elf” performed, along with marching bands from across the United States.
Perched on her father’s shoulders, 16-month-old Stella Laracque wriggled and danced with excitement as SpongeBob SquarePants, Hello Kitty, Shrek and other beloved figures wafted past her. Another new balloon was Virginia O’Hanlon, the 8-year-old girl whose letter to the editor elicited the response, “Yes, Virginia, there is a Santa Claus.”
Santa Claus closed the parade as always. A cheer erupted as he passed by on his sleigh, shaking his enormous belly.
Returning balloons included the Pillsbury Doughboy and Spider-Man — the last with a new fan in Mayor Michael Bloomberg. He said that he had traditionally favored Snoopy, but after the Marvel Entertainment character was involved in a recent event promoting city services for jobseekers, “Spidey is my new favorite.”

Woman puts gun on grave to clear spirit
Some people lay flowers or notes at gravesites. A woman in South Carolina left a handgun. Police in the northwestern county of Spartanburg said a 28-year-old woman who hadn’t been feeling well consulted a spiritual adviser, who told her she needed to return something that was given to her to cleanse her soul. So the woman left a .45-caliber handgun in a box at a man’s gravesite.
No charges have been filed.

The Vicksburg Post

Friday, November 26, 2010 The Vicksburg Post B7

Mother does a slow burn picking up smokers’ trash

DEAR ABBY
ABIGAIL VAN BUREN

Dear Abby: My husband and I returned to our hometown and bought a bungalow in a cute older neighborhood. The homes are close together, separated by a single driveway. Our neighbors on both sides of us are smokers. They smoke on their front porches and flick their smoldering butts onto the driveway confrontation could result in an escalation of the problem. Should I continue gathering up the butts and keep my mouth shut? Or should I just “butt out”? — Bothered in Missouri

Dear Bothered: If you are concerned about a hostile reaction from your neighbors, do not approach them — particularly if you’re afraid that doing so could become confrontational. Instead, plant hedges or bushes between your property and theirs, and have your children play — under your supervision — in the backyard.

TOMORROW’S HOROSCOPE BY BERNICE BEDE OSOL • NEWSPAPER ENTERPRISE ASSOCIATION

If tomorrow is your birthday: Do all that you can to improve your job performance in the coming months, because when you do, it could lead to several peripheral advantages you otherwise would never receive.

Sagittarius (Nov. 23-Dec. 21) — If you have to choose between doing something acceptable for appearance’s sake and doing something that offers personal benefits, you might find it difficult to select. Choose wisely.

Capricorn (Dec. 22-Jan.
19) — When it comes to anything important, such as matters having to do with your job or family, do not rush to judgment. Aquarius (Jan. 20-Feb. 19) — Think all of your moves through carefully, and don’t be afraid to ask questions before making any kind of investment. Your financial security could be a bit fragile and uncertain. Pisces (Feb. 20-March 20) — Concentrating on problems that merely might happen instead of focusing on what is at hand now is a waste of time. Handle what is right in front of you and let tomorrow take care of itself. Aries (March 21-April 19) — People are depending upon you to be a conveyor of constructive information that won’t lead them astray, so don’t pretend to have knowledge that you don’t possess. Taurus (April 20-May 20) — It’s impossible to resolve an anguished misunderstanding with a friend until you are ready to forgive and forget. Don’t nurture anger and gloom. Gemini (May 21-June 20) — Ground you’ve already gained can be lost again if you bring in persons whose goals are not in harmony with yours. The wrong associates will only cause confusion and loss. Cancer (June 21-July 22) — When involved in an important commercial transaction, double-check all the facts and figures before signing on the dotted line. Indifference or carelessness could cost you a bundle. Leo (July 23-Aug. 22) — It could prove to be unwise to reveal your business strategy to someone who is not directly involved. This person could come in contact with your competitor and innocently reveal your game plan. Virgo (Aug. 23-Sept. 22) — Even if you can’t do anything about it, give some thought as to how you might possibly mend a relationship that is now on its last legs. If your ideas have merit, they might work. Libra (Sept. 23-Oct. 23) — There are indications that you might put your foot in your mouth today, so, when dealing with others, be mindful of this and keep yourself from saying anything that would be better left unsaid. Scorpio (Oct. 24-Nov. 
22) — Take your mind off of acquiring material desires and focus only on protecting priceless intangibles such as friendships and family. The results will be far more gratifying.

TWEEN 12 & 20 BY DR. ROBERT WALLACE • NEWSPAPER ENTERPRISE ASSOCIATION

Dr. Wallace: I’m a single parent of a 12-year-old son and an 11-year-old daughter. I love them very much. They are my life! I constantly strive to be the best parent possible, and I’m always looking for ways to improve. All suggestions will be appreciated. — Mom, Cedar Rapids, Iowa.

Mom: My top three ingredients for successful parenting are: showing love, giving compliments and listening. Careers and Colleges ran an article entitled “Are Your Parents Driving You Crazy?” Presented by teens that desire harmony at home, it offers 10 useful pointers for parents. I’m sure you will find some of them useful.

• Don’t label me. When you compare me to someone else and say I’m the musician and he’s the athlete, it makes us both feel inadequate.

• Don’t minimize my troubles. If I’m brokenhearted, don’t talk to me about puppy love and other fish in the sea. Just listen and try to understand how I’m feeling.

• Give me a compliment. I know you hate my hair — but praise me on something. Even if you’re used to my varsity letters or good grades, I still like to hear that you’re proud.

• Play fair. If you had a bad day at work, don’t take it out on me. (And if I’m nervous about a test or a date, I’ll try not to be crabby to you.)

• Don’t invade my privacy. Treat me with more respect, and I’ll do the same.

• Don’t embarrass me in front of friends. I’d rather you’d save your comments — good or bad — for when we’re alone.

• Spend time with me. Invite me to go out to breakfast or to the movies with you. I just might say yes.

• Give me information. Tell me what you know about condoms or Chlamydia or drugs, even if I roll my eyes.

• Choose your grievance.
Instead of fighting over everything (room, clothes, music), pick one thing and let’s work on getting it straightened out.

• Start letting go. Families should provide both roots and wings. And besides, you don’t want me living at home when I’m 30, do you?

• Dr. Robert Wallace writes for Copley News Service. E-mail him at rwallace@Copley News Service.

Dear Abby: My mom has three sisters, two of whom I am very close to and love dearly. The problem anything wrong. However, my mom told me later she was “hurt” because I had talked to Aunt Sandy knowing the family is upset with her. Mom said she’d appreciate it if I didn’t do it again. I tried to explain that the way she feels about her sister shouldn’t have anything to do with our relationship, but Mom refuses to understand. I want a connection with my Aunt Sandy without hurting my mom. Please help. — We’re Still Related

Dear Still Related: I wish you had told me in more detail why your mother is angry with Sandy, and why the rest of the family is cooperating in isolating her. However, you are an adult. Whom you choose to befriend is your business, not your mother’s. If you wish to pursue a relationship

Home remedies plentiful for plantar wart removal

Dear Readers: By far the most common remedy I received was iodine. The wart is first pumiced to remove the layers of dead skin and then the iodine is applied. One reader suggested Cassia bark oil applied once a day after removing the dead skin with a razor. She warned that it should be applied only to the wart because it can damage normal skin. She also recommended tea tree oil for common warts on the hands. Another reader took one 500-milligram capsule of olive leaf extract three times a day and was wart-free in three months. Another person reported success treating her boyfriend’s plantar warts with a cotton ball soaked in apple cider vinegar applied to the wart and secured with duct tape each night. After a few weeks the warts were gone.
A physician wrote in suggesting soaking the foot in hot water and gradually increasing the water temperature until the skin turns cherry red. He says that two or three treatments are usually successful in eradicating the virus, thus causing the wart to disappear. A final reader, attempting to avoid surgery to remove her son’s wart, was advised by a friend to use an herbal product known as Wart Wonder. I cannot recommend or condemn any of these approaches because I have no experience with them.

Dear Dr. Gott: I recently read your column about the person suffering from plantar warts. My son had a number of these (large and small) a few years ago. I took him to a dermatologist, who looked at his foot and told us to use over-the-counter Duofilm. He said to apply the product twice a day, and every three days either scrape or pumice the wart and start the process over again. A month later, I took my son back, and the doctor declared the process was working and to keep at it. He then proceeded to charge us $80 for the five-minute visit. The doctor didn’t even do anything! I would like to say — save your money, folks, and do the removal yourself.

ASK THE DOCTOR
DR. PETER GOTT

Dear Reader: Unfortunately, this situation is becoming more and more common. As you saw in my last column and in the above letter, many readers are frequently dissatisfied with the care they get from a doctor for common and plantar warts, not to mention how painful some of the procedures can be. Remember, readers, that warts are caused by a virus and are commonly acquired by touching other warts (such as those on the hands), or by being barefoot in public showers or pool areas.

• Write to Dr. Peter Gott in care of United Media, 200 Madison Ave., 4th fl., New York, NY 10016.

him in a long line. Are there rules of etiquette for this? I felt a little awkward essentially cutting in line after he was so chivalrous.
— Nicole in Denver Dear Nicole: There is no rule of etiquette that dictates it, but you could have offered the gentleman a chance to be in line in front of you. How- ever, if you did, he might have extended his chivalry further and refused. 01. Legals NOTICE SAMANTHA JO CUNNINGHAM The State of Tennessee, Department of Children's Services, has filed a petition against you placing the legal custody of your child, Patrick Snell, with kin. It appears that ordinary process of law cannot be served upon you because your whereabouts are unknown. You are hereby ORDERED to serve upon Maelena A. Holmes, Attorney for the Tennessee Department of Children Services, 1300 Salem Road, Cookeville, Tennessee 38506, (931) 646-3011, an Answer to the Petition to Declare Child Dependent and Neglected and for Temporary Legal Custody to Kin filed by the Tennessee Department of Children Services, within thirty (30) days of the last day of publication of this notice, which will be December 27, 2010, and pursuant to Rule 39(e)(1) of the Tenn. R. Juv. P. you must also appear in the Juvenile Court of Putnam County, Tennessee at Cookeville, Tennessee on the 9th day of December, 2010, at 9:00 a.m. for the Hearing on the aforementioned Petition filed by the State of Tennessee, Department of Children's Services If you fail to do so, a default judgment will be taken against you pursuant to Tenn. Code Ann. S 36-1-117(n) and Rule 55 of the Tenn. R. of Civ. P. for the relief demanded in the Petition. You may view and obtain a copy of the Petition and any other subsequently filed legal documents at the Juvenile Court Clerk's Office, Cookeville, Tennessee. Publish: 11/5, 11/12, 11/19, 11/26(4t) IN THE CHANCERY COURT OF WARREN COUNTY, MISSISSIPPI NINTH JUDICIAL DISTRICT IN THE MATTER OF THE ESTATE OF ESSIE RUCKER DURMAN, DECEASED CAUSE NO. 
2010-0140PR NOTICE TO CREDITORS Letters Testamentary having been granted on the 11th day of October, 2010, by the Chancery Court of Warren County, Mississippi, to the undersigned Executor upon the Estate Of Essie Rucker Durman, October, 2010. /s/ Maurice Durman MAURICE DURMAN EXECUTOR Publish: 11/19, 11/26, 12/2, 12/10(4t) SEALED INSURANCE PROPOSALS The City of Vicksburg is accepting proposals for insurance coverage prior to December 20, 2010 by 9:00 a.m.in the City Clerk's Office. Your proposal MUST include the following lines of insurance coverage. IF PROPOSAL DOES NOT INCLUDE ALL LINES OF COVERAGE, YOU MUST SPECIFY ANY DEVIATION FROM THE TYPE, AMOUNT, AND OR LIMITS SPECIFIED. Proposal packets may be picked up in the City Clerk's Office on the 2nd floor of City Hall, 1401 Walnut Street. Proposals will be received in the office of the City Clerk of the City of Vicksburg, Mississippi until 9:00 a.m., Monday, December 20, 2010 and publicly opened by the Mayor and Aldermen of the City of Vicksburg in a Regular Board Meeting at 10:00 o'clock a.m., Monday, December 20, 2010. Bidders are cautioned that the City Clerk does not receive the daily U.S. Mail on or before 9:00 a.m. Proposals will be time-stamped upon receipt according to City Clerk's time clock. Proposals should include cost for the following types of insurance: General Liability Public Officials Liability Injunctive Relief Liability Coverage Law Enforcement Liability Buildings and contents/ Personal Property Electronic Data Processing Equipment Valuable Papers Mobile/Heavy Equipment Crime Coverage Malpractice Coverage for EMS paramedics Automobile Liability Physical Damage to Vehicles including Fire Trucks Water Front Docks Boilers & Machinery Swimming Pool Liability Coverage Airport-Vicksburg Liability Premises E & O Airport-Mounds, LA. Liability Premises E & O The Mayor and Aldermen of the City of Vicksburg reserve the right to reject any and all proposals and to waive informalities. /s/ Walter W. 
Osborne, Jr. Walter W. Osborne, Jr., City Clerk. Publish: 11/26, 12/3(2t)

01. Legals

Notice of Sale has this day been mailed to the Internal Revenue Service at 1555 Poydras Street, New Orleans, Louisiana 70112. The property will be sold subject to the interest of the Internal Revenue Service by virtue of a Federal Tax Lien filed in the Real Estate records of Warren County, Mississippi on June 19, 2009. As the undersigned Substituted Trustee, I will convey only such title as is vested in me under said Deed of Trust. This 3rd day of November, 2010. Prepared by: Floyd Healy. Floyd Healy, Substituted Trustee, 1405 N. Pierce, Suite 306, Little Rock, Arkansas 72207. Publish: 11/5, 11/12, 11/19, 11/26(4t)

01. Legals

NOTICE OF SUBSTITUTED TRUSTEE'S SALE
STATE OF MISSISSIPPI, COUNTY OF WARREN
WHEREAS, on April 28, 2005, Carmine Lancellotti executed a promissory note payable to the order of Novastar Mortgage, Inc.; and WHEREAS, the aforesaid promissory note was secured by a Deed of Trust dated April 28, 2005, executed by Carmine Lancellotti and Linda Lancellotti and being recorded in Book 1529, Page 302, and as Instrument No. 221815 of the records of the Chancery Clerk of Warren County, Mississippi; and which aforesaid Instrument conveys to Alan Derivaux, Trustee and to Mortgage Electronic Registration Systems, Inc. as Nominee for Novastar Mortgage, Inc.
as Beneficiary, the hereinafter described property; and WHEREAS, said Deed of Trust was assigned to The Bank of New York Mellon, as Successor Trustee under Novastar Mortgage Funding Trust, Series 2005-2, by an Assignment filed of record on October 28, 2010, and recorded in Book 1514, Page 782, in the office of the Clerk of the Chancery Court of Warren County, Mississippi; and WHEREAS, The Bank of New York Mellon, as Successor Trustee under Novastar Mortgage Funding Trust, Series 2005-2, having executed a Substitution of Trustee to substitute Floyd Healy as trustee in the place and stead of Alan Derivaux, the same having been recorded in Book 1514, Page 783, of the records of the Chancery Clerk of Warren County, Mississippi; and WHEREAS, default having occurred under the terms and conditions of said promissory note and Deed of Trust and the holder having declared the entire balance due and payable; and WHEREAS, Floyd Healy, Substituted Trustee in said Deed of Trust will on the 29th day of November, 2010, between the hours of 11:00 a.m. and 4:00 p.m., offer for sale and will sell at public outcry to the highest bidder for cash at the Main West steps of the Warren County Courthouse in Vicksburg, Mississippi, the following described property located and situated in Warren County, Mississippi, to wit:

PARCEL ONE: Part of Section 43, Township 14 North, Range 3 East, Warren County, Mississippi, more particularly described as follows: Commencing at the northwest corner of Section 43, Township 14 North, Range 3 East, Warren County, Mississippi, being an iron bolt; thence South, 3148 feet, more or less to a 4 inch boiler tube; thence S 83-30 E, 2199.22 feet to a point on the north right-of-way of Dogwood Road; thence North, 473.77 feet; thence N 46-00-00 E, 1076.66 feet to an existing steel shaft, being the point of beginning of the herein described parcel; thence N 57-00-00 E, 483.57 feet to the West right-of-way of Hankinson Road; thence with the West right-of-way of Hankinson Road, S 24-30-11 E, 239.99 feet to the North right-of-way of Dogwood Road; thence with the North right-of-way of Dogwood Road, S 56-29-48 W, 438.06 feet; thence leaving said right-of-way, N 35-23-25 W, 241.41 feet to the point of beginning, containing 2.5 acres, more or less.

PARCEL TWO: Part of Section 43, Township 14 North, Range 3 East, Warren County, Mississippi, more particularly described as follows: Commencing at the Northwest corner of Section 43, Township 14 North, Range 3 East, Warren County, Mississippi, being an iron bolt; thence South 3148 feet, more or less to a 4 inch boiler tube; thence South 83-30 East, 2199.22 feet to a point in the North right-of-way of Dogwood Road; thence North, 473.77 feet; thence North 46-00 East, 250.00 feet to the point of beginning of the herein described parcel; thence North 46-00-00 East, 608.63 feet to an existing iron rod; thence North 46-00-00 East, 218.03 feet to an existing steel shaft; thence South 35-23-25 East, 241.41 feet to the North right-of-way of Dogwood road; thence with the North right-of-way of Dogwood Road, South 42-49-54 West, 141.47 feet; thence with the North right-of-way of Dogwood Road, South 42-49-54 West, 202.37 feet; thence with the North right-of-way of

SEALED BIDS
The Warren County Board of Supervisors will receive SEALED BIDS until 10:00 a.m. on Monday, December 20, 2010 for Term Contracts for RIP RAP & LIMESTONE PRODUCTS for the Warren County Highway Department. The Bid File number is 11152010. Complete specifications and instructions for bidding may be obtained from the Warren County Chancery Clerk's Office, 1009 Cherry Street, Vicksburg, MS 39183. The phone number is 601-636-4415. The Warren County Board of Supervisors reserves the right to determine responsible bidders, responsive bids, the lowest and best bids, award to multiple bidders, reject any and all bids, waive any informalities in the bids or bidding process and to award to the bidder(s) believed most advantageous to Warren County. Published pursuant to Board Order dated this the 15th day of November 2010. Warren County Board of Supervisors. By: Dot McGee, Chancery Clerk. Publish: 11/26, 12/3(2t)

IN THE CHANCERY COURT OF WARREN COUNTY, MISSISSIPPI
IN RE: ESTATE OF LUDY PERRY YOUNG, SR., DECEASED
PROBATE NO. 2010-149 PR
NOTICE TO CREDITORS
LUDY PERRY YOUNG, SR. Letters Testamentary on the Estate of the above decedent having been granted on the 9th day of November, 2010 by the Chancery Court of Warren County, Mississippi to the undersigned Executor of the Estate of Ludy Perry Young, Sr., deceased, notice is hereby given to all persons having claims against said estate to present said claims to the Clerk of this Court for probate and registration according to law, within ninety (90) days from the first publication of this notice or said claims will be forever barred. THIS the 9th day of November, 2010. LUDY PERRY YOUNG, JR., Executor. James R. Sherard, 1010 Monroe Street, Vicksburg, MS 39183. Publish: 11/12, 11/19, 11/26 (3t)

Teachers, college students, stay-at-home parents, nurses . . . they're all delivering the newspaper in their spare time and earning extra income!
It's easy - and it's a great way to earn extra cash. To join The Vicksburg Post newspaper team you must be dependable, have insurance, reliable transportation, and be available to deliver

11. Business Opportunities

…cry, offer for sale and will sell, at the west front door of the Warren County Courthouse at Vicksburg, Mississippi, for cash to the highest bidder, the following described land and property situated in Warren County, Mississippi, to-wit: All of Lot Nine (9) in that certain survey in said City of Vicksburg known as King and Dyer's No. 2 Addition to Lane's Survey as shown by Plat Duly recorded in Book 116 at Page 66 of the land records in the office of the Clerk of the Chancery Court of Warren County, Mississippi. I will only convey such title as is vested in me as Substitute Trustee. WITNESS MY SIGNATURE, this 15th day of November, 2010. Emily Kaye Courteau, Substitute Trustee, 2309 Oliver Road, Monroe, LA 71201, (318) 330-9020. sbl/F08-3402. Publish: 11/19, 11/26, 12/3 (3t)

01. Legals

TRUSTEE'S NOTICE OF SALE
WHEREAS, on July 1, 2006 Julie Patton executed a deed of trust to James R. Sherard, Trustee for the benefit of M. Deloris Terrell, which deed of trust is recorded in Deed Book 1611 at Page 409, in the office of the Chancery Clerk of Warren County, Mississippi; and WHEREAS, default having been made in the terms and conditions of said deed of trust and the entire debt secured thereby having been declared to be due and payable in accordance with the terms of said deed of trust, and the legal holder of the indebtedness, N. Deloris Terrell, having requested the undersigned Trustee to execute the trust and sell said land and property in accordance with the terms of said deed of trust for the purpose of raising the sums due thereunder, together with attorney's fees, trustee's fees and expenses of sale; NOW, THEREFORE, I, James R. Sherard, Trustee in said deed of trust, will on Monday, the 29th day of November, 2010 offer for sale and will sell at public outcry, to the highest bidder for cash, within the legal hours (being between 11:00 a.m. and 4:00 p.m.) at the west or front door of the County courthouse, Warren County, Mississippi, the following described property situated and lying in the City of Vicksburg, Warren County, Mississippi, to-wit: All of Lot Five (5) in Square Twenty-Nine (29) of that certain survey known as the Vicksburg Wharf and Land Company's Resurvey, as shown by plat of record in Deed Book 69 at Page 140 of the Warren County, Mississippi Land Records and being the same property conveyed to Miss Lois M. Bori and O.J. Bori by warranty deed dated October 1, 1952 and recorded in Deed Book 298 at Page 201 of the Warren County, Mississippi Land Records. I will convey only such title as is vested in me as Trustee. WITNESS my signature this the 1st day of November, 2010. /s/ James R. Sherard, JAMES R. SHERARD. Publish: 11/5, 11/12, 11/19, 11/26(4t)

Substitute Trustee's Notice of Sale
STATE OF MISSISSIPPI, COUNTY OF Warren
WHEREAS, on the 13th day of March, 2007, and acknowledged on the 13th day of March, 2007, Willie C. Thomas, executed and delivered a certain Deed of Trust unto William H.
Glover, Jr., Trustee for Wells Fargo Bank, N.A., Beneficiary, to secure an indebtedness therein described, which Deed of Trust is recorded in the office of the Chancery Clerk of Warren County, Mississippi in Book 1645 at Page 567 #243939; and WHEREAS, on the 19th day of November, 2008, the Holder of said Deed of Trust substituted and appointed Emily Kaye Courteau as Trustee in said Deed of Trust, by instrument recorded in the office of the aforesaid Chancery Clerk in Book 1488 at Page 14 #263173; and WHEREAS, default having been made in the payments of the indebtedness secured by the said Deed of Trust, and the holder of said Deed of Trust having requested the undersigned so to do, I will, on the 10th day of December, 2010, offer for sale and will sell, at the west front door of the Warren County Courthouse at Vicksburg, Mississippi, for cash to the highest bidder, the following described land and property situated in Warren County, Mississippi, to-wit: All of Lot Nine (9) in that certain survey in said City of Vicksburg known as King and Dyer's No. 2 Addition to Lane's Survey as shown by Plat Duly recorded in Book 116 at Page 66 of the land records in the office of the Clerk of the Chancery Court of Warren County, Mississippi. I will only convey such title as is vested in me as Substitute Trustee. WITNESS MY SIGNATURE, this 15th day of November, 2010. Emily Kaye Courteau, Substitute Trustee, 2309 Oliver Road, Monroe, LA 71201, (318) 330-9020. sbl/F08-3402. Publish: 11/19, 11/26, 12/3 (3t)

11. Business Opportunities

No Wonder Everybody's Doing It. Your Hometown Newspaper! Openings Available in: afternoons Monday-Friday and early mornings Saturday and Sunday. Utica, Vicksburg & Delta, Louisiana areas. 601-636-4545 ext. 181

01. Legals

ADVERTISEMENT FOR BIDS
The Vicksburg Warren School District will receive SEALED BIDS, marked 10-11-13 until 9:00 A.M. on December 10, 2010 for Surplus Property. Specifications may be obtained from the Office of Purchasing at 1500 Mission 66, Vicksburg, Mississippi 39180. The Board of Trustees reserves the right to accept or reject any and all bids and to waive informalities. Dr. Elizabeth Swinford, Superintendent. Publish: 11/19, 11/26, 12/3 (3t)

02. Public Service

FREE TO GOOD home.
5 female 5 male puppies 601-529-3761. No calls before 10am.

KEEP UP WITH all the local news and sales... Subscribe to The Vicksburg Post TODAY!! Call 601-636-4545, Circulation.

Discover a new world of opportunity with The Vicksburg Post Classifieds.

05. Notices

ATTEND COLLEGE ONLINE from home. *Medical, *Business, *Paralegal, *Allied Health. Job placement assistance. Computer available. Financial aid if qualified. SCHEV certified. Call 888-210-5162.

WANTED: INFORMATION ON Frank Longmyre, who was born around 1826 in Mississippi. Allen D. Green, P.O. Box 165457, Little Rock AR 72216.

06. Lost & Found

FOUND! SMALL WHITE MALE dog. He is very friendly, found in the Culkin Road area. 601-529-3041.

CALL 601-636-SELL AND PLACE YOUR CLASSIFIED AD TODAY.

LOST A DOG? Found a cat? Let The Vicksburg Post help! Run a FREE 3 day ad! 601-636-SELL or e-mail classifieds@vicksburgpost.com

LOST MALE CAT! Dark gray with black stripes. No collar. Goes by Jinx. Willow Creek Subdivision/ Bovina. Reward if found. 601-529-7611, 601-529-4040.

07. Help Wanted

“ACE” Truck Driver Training With a Difference. Job Placement Asst. Day, Night & Refresher Classes. Get on the Road NOW! Call 1-888-430-4223. MS Prop. Lic. 77#C124

LOOKING FOR A Federal or Postal Job? What looks like the ticket to a secure job might be a scam. For information call The Federal Trade Commission, toll free 1-877-FTC-HELP, or visit. A message from The Vicksburg Post and the FTC.

OUTPATIENT MENTAL HEALTH Facility now seeking licensed individual to serve as program director for Outpatient Mental Health Rehabilitation. Interested applicants please fax resumes to the attention of: Mrs. Melissa Williams at 318-574-8646.

PART TIME ON-SITE apartment manager needed for small local apartment complex. Must be honest, dependable, work well with public, must have good clerical skills, experience a plus.
Serious inquiries only, fax resume to: 318-352-1929.

QUALITY TRANSPORT INC. Regional drivers needed for bulk petroleum products. Must have Class A with X end. Good driving record required. Company paid health insurance, 401K, and other benefits. SIGN ON BONUS. New equipment. Call 800-734-6570 ext 10.

12. Schools & Instruction

ATTEND COLLEGE ONLINE from home. *Medical, *Business, *Paralegal, *Allied Health. Job placement assistance. Computer available. Financial aid if qualified. SCHEV certified. Call 888-210-5162.

14. Pets & Livestock

AKC/ CKC REGISTERED Yorkies, Poodles and Schnauzers $400 and up! 601-218-5533.

FOR SALE SOLID White Bulldog Puppies 601-529-9957.

VICKSBURG WARREN HUMANE SOCIETY Highway 61 South 601-636-6631 Currently has 30 puppies & dogs, 39 cats & kittens available for adoption. Call the Shelter for more information. Please adopt today! Foster a Homeless Pet!

15. Auction

LOOKING FOR A great value? Subscribe to The Vicksburg Post, 601-636-4545, ask for Circulation.

17. Wanted To Buy

I PAY TOP dollar for junk vehicles. Call 601-218-0038.

WE HAUL OFF old appliances, lawn mowers, hot water heaters, junk and abandoned cars, trucks, vans, etcetera. 601-940-5075, if no answer, please leave message.

Don't send that lamp to the curb! Find a new home for it through the Classifieds. Area buyers and sellers use the Classifieds every day. Besides, someone out there needs to see the light.

24. Business Services

TO BUY OR SELL AVON CALL 601-636-7535. $10 START UP KIT. WE ACCEPT MOST MAJOR CREDIT CARDS

19. Garage & Yard Sales

GARAGE SALE OVER? River City Rescue Mission will pickup donated left over items. 601-636-6602.

28. Furnished Apartments

$600 MONTHLY STUDIO. $900 1 bedroom townhouse. Utilities/ Cable/ Laundry. Weekly cleaning 601-661-9747.

29. Unfurnished Apartments

1, 2 AND 3 BEDROOM APARTMENTS, downtown. $400 to $650 monthly, deposit required. 601-638-1746.

31. Mobile Homes For Rent
2 BED, 2 BATH, Grange Hall Road. Application, deposit required. Call 601-831-4833.

1 BEDROOM. FURNISHED, with utilities, washer/ dryer, wireless internet, cable, garage. $200 weekly. 601-638-1746.

2228-C GROVE STREET. 3 bedrooms, 2 baths. Refrigerator, stove, dishwasher. Water, sewer, trash included. $550 monthly with $400 deposit. Section 8 welcome. 662-312-3894.

MEADOWBROOK PROPERTIES. 2 or 3 bedroom mobile homes, south county. Deposit required. 601-619-9789.

STILL HAVE STUFF after your Garage Sale? Donate your items to The Salvation Army, we pick-up! Call 601-636-2706.

THIS IS WHERE the Black Friday and Saturday deals are. Furniture, toys, TV'S. 208 Katherine Drive. 7am- until.

Completely furnished 1 bedroom and Studio Apartments. All utilities paid including cable and internet. Enclosed courtyard, Laundry room. Great location. $750 - $900 month. 601-415-9027, 601-638-4386.

Commodore Apartments 1, 2 & 3 Bedrooms

18. Miscellaneous For Sale

For Results You Can Measure, Classified Is The Answer.

Best Bargain Basement FINDER'S KEEPER'S FLEA MARKET opens this Friday! Searching for Vendors! Make extra CHRISTMAS CASH! CALL TODAY! 601-661-8990

1950'S LESTER PIANO. Good condition, upright, light wood. $400. 601-397-9384.

42 INCH HD T.V. Acoustic/ electric Esteban Guitar. Sharp carousel microwave. $125 each. 601-529-9765.

CAPTAIN JACK'S SHRIMP Special! Frozen, headless, 5 pounds $24.99. Also Froglegs, Alligator, Crawfish Tails. Thursday, Friday, Saturday. 601-638-7001.

CLASSROOM STUDENT DESKS $20, wood/ metal. Discount Furniture Barn, 601-638-7191.

•Rent Office Space By The Square Foot

FOR LESS THAN 45 cents per day, have The Vicksburg Post delivered to your home. Only $14 per month, 7 day delivery. Call 601-636-4545, Circulation Department.

MOBILE HOME REPAIR and service. Over 35 years experience. For estimate, 601-218-2582.

NEW MATTRESS SETS. Twin set, $175, Full set, $219. Discount Furniture Barn, 600 Jackson Street.
THE PET SHOP “Vicksburg’s Pet Boutique” •Find An Exercise Bike And Lose INCHES Tuesday- 11/23, Wednesday- 11/24, Friday- 11/26, Saturday- 11/27, 10am-5pm. Antiques, jewelry and gift items. A VARIETY OF SIZES, STYLES & COLORS! COME IN FOR A FITTING! 1415 Washington Street Call 601-638-5943. 19. Garage & Yard Sales BLACK FRIDAY GARAGE SALE. 1403 South Frontage Road, by Saxtons. 9amuntil. Come shop with us for Christmas gifts. Furniture, picnic table, stand-up basketball goal and lots more. •Buy A House With A Great YARD What's going on in Vicksburg this weekend? Read The Vicksburg Post! For convenient home delivery, call 601-636-4545, ask for circulation. 24. Business Services MARSHALL APARTMENTS 821 Speed Street Newly remodeled apartment with 2 bedrooms, 1 bath, large living room, dining room, kitchen with breakfast bar $425 monthly (water included) 601-619-6800 Call Today for Details 601-638-0102 MOVING SPECIALS!! 1, 2 and 3 bedroom. Call for information 601-636-0447. TAKING APPLICATIONS ON 2, 3 and 4 bedroom. $200 deposit on each. Refrigerator and stove furnished. 601-634-8290. VAN GUARD APARTMENTS, 2 BEDROOM TOWNHOUSES with washer and dryer hookup, $500 monthly, $300 deposit, $30 application fee. 601-631-0805. TREY GORDON ROOFING & RESTORATION 209 SMOKEY LANE 2 bedroom, 1 bath, , $475 monthly, deposit,references required, quiet neighborhood. 662-719-8901. •Roof & Home Repair (all types!) •30 yrs exp •1,000’s of ref Licensed • Insured 601-618-0367 DIRT AND GRAVEL hauled. 8 yard truck. 601638-6740. Great Expectations Remodeling and Flooring 769-203-9023 BEAUTIFUL LAKESIDE LIVING Voted #1 Apartments in the 2009 Reader’s Choice • 1, 2 & 3 Bedroom Apts. • Beautifully Landscaped OLD FASHION CONSTRUCTION 601-634-6320 601-529-4040 PURVIS UPHOLSTERY. ANTIQUES to four wheelers. We do it all. Call 601-634-6073. River City Lawn Care You grow it - we mow it! Affordable and professional. Lawn and landscape maintenance. Cut, bag, trim, edge. 601-529-6168. 
Classifieds Really Work! 29. Unfurnished Apartments to Fine Restaurants, Shops, Churches, Banks & Casinos Secure High-Rise Building • Off Street Parking • 9 1/2 Foot Ceilings • Beautiful River Views • Senior Discounts • • Pool • Fireplace • Spacious Floor Plans 601-629-6300 501 Fairways Drive Vicksburg 3 BEDROOM 1 Bath, $600 on Oak Street; 2 bedroom 1 bath $450 off Cain Ridge Road; 601-991-1976. 601-636-8193 VicksburgRealEstate.com Licensed in MS and LA Jones & Upchurch Real Estate Agency 1803 Clay Street Judy Uzzle-Ashley... 601-636-6490 34. Houses For Sale Beautiful 3 BR, 2 BA home has 2183 sq. ft. and sits back on 7.1 acres. Completely remodeled. Must see!! REDUCED TO $185,000! Debra Grayson Ask Us. McMillin Real Estate 601-831-1386 Candy Francisco FHA & VA Mortgage Originator Conventional ! Construction Mortgage ! First-time Loans Homebuyers ! ! 601.630.8209 Member FDIC 2150 South Frontage Road bkbank.com REALTOR®•BUILDER•APPRAISER Open Hours: Mon-Fri 8:30am-5:30pm 2170 S. I-20 Frontage Rd. LOS COLINAS. SMALL 2 Bedroom, 2 Bath Cottage. Close in, nice. $795 monthly. 601-831-4506. 2 BEDROOMS, 2 bath on 1 acre in Tallulah area. 24x20 shed, 31x19 shop. 318-5372118, 318-381-2779. 601-636-0502 PUT THE CLASSIFIEDS TO WORK FOR YOU! Check our listings to find the help you need... • Contractors • Electricians • Roofers • Plumbers • Landscapers Classifieds Really Work! 29. Unfurnished Apartments. 40. Cars & Trucks Big River Realty Rely on 20 years of experience in Real Estate. DAVID A. BREWER 601-631-0065 Bigriverhomes.com 35. Lots For Sale 475 Mallet Road BARGAIN!! PRIME OFFICE space, $450 monthly. Call 601629-7305 or 601-291-1148. Rental including Corporate Apartments Available Bradford Ridge Apartments Live in a Quality Built Apartment for LESS! All brick, concrete floors and double walls provide excellent soundproofing, security, and safety. 601-638-1102 • 601-415-3333 33. 
Commercial Property 3 BEDROOMS, 2 BATHS, split plan, brick, beautiful landscaping, Openwood Plantation! $1,150 monthly. Call 601-831-0066. 801 Clay Street • Vicksburg George Mayer R/E Management •Get Better MILEAGE With A New Car. McMillin Real Estate Broker, GRI 601-634-8928 . • Lake Surrounds Community I CLEAN HOUSES! 35 years experience, days only. Call 601-831-6052 days or 601-631-2482, nights. KEEP UP WITH ALL THE LOCAL NEWS AND SALES... SUBSCRIBE TO THE VICKSBURG POST TODAY! CALL 601-636-4545, ASK FOR CIRCULATION. 30. Houses For Rent FREE ESTIMATES Downtown Convenience • 601-630-2921 $263 MOVE-IN SPECIAL 32. Mobile Homes For Sale 601-638-2231 DOWNTOWN, BRICK, Marie Apartments. Total electric, central air/ heat, stove, refrigerator. $500, water furnished. 601-6367107, trip@msubulldogs.org 34. Houses For Sale RV FOR RENT, 1 or 2 people. No pets, utilities furnished. Deposit required. 601-301-0285. 605 Cain Ridge Rd. Vicksburg, MS 39180 780 Hwy 61 North • Bankruptcy Chapter 7 and 13 • Social Seurity Disability • No-fault Divorce Utilities Paid • • 1 Bedroom/ 1 Bath 2 Bedrooms/ 2 Bath Studios & Efficiencies Confederate Ridge Toni Walker Terrett Attorney At Law 601-636-1109 No Utility Deposit Required Classic Elegance in Modern Surroundings $550 MONTHLY, GATED. Has it all. 2 bedroom, washer/ dryer included. 1115 First North, 512-787-7840. 21. Boats, Fishing Supplies • Painting done on homes & businesses • Repair work • Power washing USED TIRES! LIGHT trucks and SUV's, 16's, 17's, 18's, 19's, 20's. A few matching sets! Call TD's, 601-638-3252. $100 OFF OF First month rent. Eastover Drive Apartments. 3 bedrooms $525 monthly, $300 deposit. Management 601-631-0805. Courtney's Neuvo Image, 3508 South Washington Street DOGGIE SWEATERS ARE HERE! B9 40. Cars & Trucks BOVINA AREA- LAKE front, cul-de-sac, approximately 1.5 acres. Reduced to $16,000. 601-831-0302. 36. Farms & Acreage LAND LIQUIDATION* 20 acres, $0 down, $99/month. Only $12,900 near growing El Paso, Texas. 
Guaranteed owner financing. NO CREDIT CHECKS! Money back guarantee. FREE map and pictures. 866-383-8306. 40. Cars & Trucks 1996 CHEVROLET BLAZER LE. V6, loaded, leather, like new. $3500 or best offer. 601-279-6456, 601-631-1185. 2001 PONTIAC GRAND AM. V6, automatic, air, sunroof. Runs good, looks good. $2200. 601-397-9384. 2002 FORD EXPLORER Sport Trac truck, 125,000 miles, well maintained, $7,900. 601-636-7268, 601573-0253. 2005 Lincoln LS. Silver, black custom top, sunroof. Must see! Beautiful car! Call Bobby, 601-218-9654 days, 601-636-0658 nights. Dealer. ALL CREDIT APPROVED Easy Financing for Everyone. Just bring your paystub! Down payments from $800 Gary’s Cars -Hwy 61S 601-883-9995 Get pre-approved @ B10 Friday, November 26, 2010 The Vicksburg Post George Carr Truck & SUV FALL SELL OFF! 1995 Jeep Grand Cherokee 1997 GMC Yukon GT 4x4 2008 Chevy Trailblazer 2007 GMC Canyon SLE 2008 Jeep Liberty As Is Special 2-Door, Loaded Local Trade In, Clean Extra Cab Loaded, Leather #41436A #P9338 #1939A #41426A #30057C 3,495 $ 2007 Toyota Tundra 7,495 12,995 12,995 14,995 $ 2004 Ford F-150 Lariat 4x4 $ $ $ 2009 Jeep Wrangler 2008 Chevy 1500 2008 Chevy Silverado LT Ext. Cab Extra Cab Extra Cab Automatic, Soft Top Extra Cab, White Black, Tool Box #41445A #41370A #P9488 #P9244 #P9503 14,995 16,995 18,495 18,995 18,495 $ $ $ $ $ 2008 Ford F-250 Crew Cab 2006 Honda Ridgeline 2010 Chrysler Town and Country 2009 Chevy 2500 Reg. Cab 4x4 2010 Saturn VUE Gas Engine #P9412A Clean, Silver Truck Sto & Go! Only 18,000 Miles 100,000 Miles, Powertrain Warranty #41401A #P9323 #41498A #P9431 18,995 19,495 19,495 20,495 20,995 $ $ $ $ $ 2010 Chevy Colorado LT Crew 2010 GMC Terrain 2010 Ford F-150 Crew 2008 Chevy 4x4 Extra Cab 2010 Saturn Outlook Only 15,000 Miles Enterprise Special #P9308 #P9493 Only 22,000 miles XLT Low Miles, One Owner. Red, Extra Clean #P9363 #41359A #P9434 20 Read Beauty, Leather Leather, Only 22,000 Miles One Owner, Local Trade-In Diesel, Loaded, Not A Farm Truck! 
Only 2,300 Miles. #P9437 #P9243A #41497A #P9408A #41490A 24,995 25,995 26,995 26,995 28,995 $ $ $ $ $ 2008 Buick Enclave 2008 GMC Yukon XL 2009 Chevy Crew 4x4 LTZ 2010 GMC Acadia 2008 Chevy 2500 Crew 4x4 LTZ Black Beauty, Fully Loaded, SLT Loaded #P9326 Only 12,000 Miles Loaded, SLT Duramax Diesel #P9207A #P9242 #P9479 #41385A 29,995 $32,995 $34,995 $34,995 $36,995 $ 2008 GMC Yukon Denali XL Nav. System, Entertainment, Sunroof 2009 Lincoln Navigator 2008 GMC Yukon Denali 2010 Chevy Suburban LTZ 4x4 2010 Chevy Duramax Crew 4x4 Oct.
https://issuu.com/vicksburgpost/docs/112610
I'd like to know the same thing. I want the user to be able to kick off an automatic layout on demand, but only when they demand it. So when a user adds a vertex or an edge, nothing else changes except that the new vertex or edge shows up. The user would drag it where they want it, and the user's layout would be respected until they initiated a re-layout. Is that possible?

"Is that possible?" I guess not.

I need the same thing. The removed vertex should disappear but the others should maintain their positions.

The way I did this is by using the CanLayout property of the GraphLayout class. It is protected and only has a get accessor, so I overrode it in my custom graph layout class so that it returns a settable value:

public class PocGraphLayout : GraphLayout<PocVertex, PocEdge, PocGraph>
{
    private bool shouldLayout = true;

    protected override bool CanLayout
    {
        get { return shouldLayout; }
    }

    public bool ShouldLayout
    {
        get { return shouldLayout; }
        set { shouldLayout = value; }
    }

    ......
}

Now I can set the ShouldLayout property to false before I delete an item and the graph won't re-layout. Unfortunately, setting it back to true right after you delete the item will cause it to re-layout anyway. So I had to set it to true at all the places that require the graph to re-layout - when you change the layout type, when you load a file, when you create a new graph, when you add new vertices or edges (unless you want it to not re-layout when these are added, in which case you still have to set it to false) - and possibly some more places I'm forgetting. Hope that helps someone.
http://graphsharp.codeplex.com/discussions/217158
First of all, an introduction to the PID files under the Linux /var/run/ directory. The details are as follows:

A *.pid file in the /var/run/ directory of a Linux system is a text file with only one line of content, namely the PID of a process. The purpose of a PID file is to prevent a process from starting multiple copies. Only the process that obtains the write lock (F_WRLCK) on the specific PID file (fixed path and file name) can start normally and write its own PID into the file. Any redundant copy of the same program exits automatically.

Programming implementation: call the fcntl() system call to place an F_WRLCK lock on the specified PID file. If the lock succeeds, write the PID of the current process and continue executing; if the lock fails, a copy of the same program is already running and the current process exits.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>

#define PID_FILE "/var/run/xxxx.pid"

int lock_file(int fd)
{
    struct flock fl;

    fl.l_type = F_WRLCK;
    fl.l_start = 0;
    fl.l_whence = SEEK_SET;
    fl.l_len = 0;
    return (fcntl(fd, F_SETLK, &fl));
}

int alone_running(void)
{
    int fd;
    char buf[16];

    fd = open(PID_FILE, O_RDWR | O_CREAT, 0666);
    if (fd < 0) {
        perror("open");
        exit(1);
    }
    if (lock_file(fd) < 0) {
        if (errno == EACCES || errno == EAGAIN) {
            close(fd);
            printf("already running\n");
            return -1;
        }
        printf("can't lock %s: %s\n", PID_FILE, strerror(errno));
        exit(1);
    }
    ftruncate(fd, 0); /* Set file size to 0 */
    sprintf(buf, "%ld", (long)getpid());
    write(fd, buf, strlen(buf) + 1);
    return 0;
}

Points to note:
1. The locks held by a process are released automatically after the process exits.
2. When the process closes the file descriptor fd, the lock is released. (So the fd cannot be closed throughout the lifecycle of the process.)
3. The lock state is not inherited by child processes. Once the locking process exits, the lock is gone regardless of whether a child is still running.
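The three cautions above are easy to verify empirically. As a sketch, here is the same pattern translated to Python's fcntl module (fcntl.lockf wraps the F_SETLK/F_SETLKW machinery; the temporary file name and helper names below are mine, not from the article). The forked child plays the role of a second copy of the program: it fails to take the lock while the parent's fd is still open.

```python
import fcntl
import os
import tempfile

def try_lock(fd):
    # Non-blocking exclusive lock: the fcntl F_SETLK / F_WRLCK pattern.
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return True
    except OSError:
        return False

def single_instance(pid_path):
    # Return the locked fd if we are the only instance, else None.
    # The fd must stay open for the whole life of the process (note 2).
    fd = os.open(pid_path, os.O_RDWR | os.O_CREAT, 0o666)
    if not try_lock(fd):
        os.close(fd)
        return None
    os.ftruncate(fd, 0)
    os.write(fd, str(os.getpid()).encode())
    return fd

pid_file = tempfile.NamedTemporaryFile(delete=False).name  # stand-in for /var/run/xxxx.pid
owner_fd = single_instance(pid_file)   # the first copy takes the lock

child = os.fork()
if child == 0:
    # A second process sees the lock held (EACCES/EAGAIN) and gives up.
    os._exit(0 if single_instance(pid_file) is None else 1)

_, status = os.waitpid(child, 0)
second_copy_blocked = (os.WEXITSTATUS(status) == 0)
```

Running this on Linux, the child is refused the lock exactly as the second copy of a daemon would be, and the PID file holds the parent's PID.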
Here's how PID files work in the /var/run directory under Linux

Under the Linux system directory /var/run we usually see a lot of *.pid files, and a newly installed program will often generate its own PID file under /var/run when it runs. So what is the use of these PID files? What do they contain?

(1) The content of a PID file: a PID file is a text file with only one line of content, recording the ID of a process. You can see it with the cat command.

(2) The role of PID files: to prevent a process from starting multiple copies. Only by obtaining the write lock (F_WRLCK) on the PID file (fixed path, fixed file name) can the process start normally and write its own PID into the file. Redundant copies of the same program exit automatically.

(3) Programming technique: call fcntl to place an F_SETLK lock on the PID file, with the lock type flag set to F_WRLCK. If the lock succeeds, the PID of the current process is written and the process continues to execute. If the lock fails, it means the same program is already running and the current process terminates and exits.

lock.l_type = F_WRLCK;
lock.l_whence = SEEK_SET;
if (fcntl(fd, F_SETLK, &lock) < 0) {
    // Locking failed, quit...
}
sprintf(buf, "%d\n", (int)pid);
pidsize = strlen(buf);
if ((tmp = write(fd, buf, pidsize)) != (int)pidsize) {
    // Write failed, quit...
}

(4) Some points to note:
i) If the process exits, the locks it holds are released automatically.
ii) If the process closes the file descriptor fd, the lock is released. (This file descriptor cannot be closed during the entire run of the process.)
iii) The lock state is not inherited by child processes. Once the locking process exits, the lock is gone regardless of whether a child is still running.

Summary

The above is a detailed description of the PID files in the Linux /var/run/ directory and the role of PID files. I hope it will be helpful to you.
If you have any questions, please leave a message for me, and the editor will reply to you in time. Thank you very much for your support of DevelopPaper.
https://developpaper.com/detailed-description-of-pid-files-in-var-run-directory-under-linux-and-the-role-of-pid-files/
Learning a new programming language, tool or platform will bring you face to face with similar, unique or completely different types of error messages while developing or building a product. It is always frustrating to spend countless hours, and sometimes days, trying to figure out what is causing an error in your code and how to fix it. You can get help online or through a mentor, but one way to make sure you remember the solution if you face the same problem again in the future is to document it. These solutions will serve as a reference point or reminder in the future if you have forgotten how you arrived at them. Since I started learning React Native and sharing my journey with you, I decided to create this article where I will be documenting, from the smallest to the biggest, the issues and errors I have encountered while working with React Native. As much as I know it might help other people, I would also like to encourage you to use the comment box to share the errors you have faced while developing React Native applications and how you solved them. You can send me the information directly through my contact form if you would like me to include it in this article. The advantage of having this post is that finding solutions to our problems immediately will help us learn faster and also cut down development time considerably. Below are some of the issues. This is a work in progress and I will keep updating it.

React Native mismatch error: JavaScript version: 0.50.3 Native version: 0.49.5

The error message in this situation is easy to understand. This normally happens when you update your React Native version, or when you use a framework or module that contains a lower JavaScript version than your React Native installation. 1. One option is to downgrade your React Native version to one that is compatible with the native version. 2.
You can upgrade all your dependencies.

React Native error: cannot find module './assets/empty-module.js'

The React Native error "cannot find module './assets/empty-module.js'" usually occurs when the server is not running. You can solve this issue by starting the server using the command: react-native start

Packager can't listen on port 8081 - $ react-native start --port=8088

I experienced the error "Packager can't listen on port 8081" because I had an existing connection with a command line interface and was trying to connect again using another CLI. Starting the packager on a different port works around it: react-native start --port=8088. This can also happen if another process or piece of software is using that port.

Could not delete path

I can't really remember why I was getting this error. Android Studio was complaining that it could not delete some files in the build folder. This can occur when you open the project in Android Studio and a JavaScript IDE at the same time. I usually close the JavaScript IDE, then clean and rebuild my project. Another option is to delete the build folder and recompile it again.

React Native could not find com.android.tools.build:gradle:3.0.0

I could not understand why React Native was complaining that it could not find com.android.tools.build:gradle:3.0.0. When I looked at my Gradle configuration everything looked fine, but I kept getting this issue. The way I solved it was to add google() to the project-level Gradle file under buildscript, like below:

buildscript {
    repositories {
        jcenter()
        google()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.0.0'

        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

allprojects {
    repositories {
        mavenLocal()
        jcenter()
        maven {
            // All of React Native (JS, Obj-C sources, Android binaries) is installed from npm
            url "$rootDir/../node_modules/react-native/android"
        }
    }
}

UnableToResolveError: Unable to resolve module 'AccessibilityInfo'

I am yet to find a solution to this issue.
Raw text cannot be used outside of a <Text> tag. Not rendering string: ""

This is another error you might come across when developing a React Native application. It is a very simple error and the description points to what the issue might be. In my case, I wrote the code below and it threw this error:

render() {
  return (
    <View style={styles.container}>
      <View style={styles.firstrow}>
      </View>
    </View>
  );
}

The issue, if you look critically, is the whitespace between the inner View tags. When I removed the space, the code worked perfectly:

render() {
  return (
    <View style={styles.container}>
      <View style={styles.firstrow}></View>
    </View>
  );
}
https://inducesmile.com/facebook-react-native/common-react-native-errors-i-have-faced-while-learning-react-native-and-how-i-solved-it/
Maximum sum of segments among all segments formed in array after Q queries

Introduction

The efficiency of a data structure is determined by how well it handles a problem statement's queries. A smart data structure choice can cut down execution time, which is helpful in real-world problems. One such data structure is the Disjoint Set Union (DSU), also known as Union-Find. This blog will discuss a problem based on the Disjoint-Set Union.

Problem Statement

Ninja, you have been given an array A[ ] of size N and N queries in query[ ]. For each query you have to perform the following:
1. Remove the element A[query[i]] from its segment, breaking the segment that contains it into two separate segments.
2. Find the maximum sum among the sums of all segments.

INPUT
N = 4
A[] = [1, 3, 2, 5];
Query[] = [3, 4, 1, 2];

OUTPUT
5 4 3 0

Explanation
Query1: Delete the 3rd element (A[3] = 2), breaking the array into segments {1, 3}, {5}. Among all segments the maximum sum is 5 ( {5} ).
Query2: Delete the 4th element (A[4] = 5), leaving segments {1, 3}, {}. Among all segments the maximum sum is 4 ( {1, 3} ).
Query3: Delete the 1st element (A[1] = 1), leaving segments {3}, {}. Among all segments the maximum sum is 3 ( {3} ).
Query4: Delete the 2nd element (A[2] = 3), leaving segments {}, {}. Among all segments the maximum sum is 0 ( {} ).

INPUT
N = 5
A[] = [1, 2, 3, 4, 5];
Query[] = [4, 2, 3, 5, 1];

OUTPUT
6 5 5 1 0

Explanation
Query1: Delete the 4th element (A[4] = 4), breaking the array into segments {1, 2, 3}, {5}. Among all segments the maximum sum is 6 ( {1, 2, 3} ).
Query2: Delete the 2nd element (A[2] = 2), leaving segments {1}, {3}, {5}. Among all segments the maximum sum is 5 ( {5} ).
Query3: Delete the 3rd element (A[3] = 3), leaving segments {1}, {5}. Among all segments the maximum sum is 5 ( {5} ).
Query4: Delete the 5th element (A[5] = 5), leaving segments {1}, {}.
Among all segments the maximum sum is 1 ( {1} ).
Query5: Delete the 1st element (A[1] = 1), leaving segments {}, {}. Among all segments the maximum sum is 0 ( {} ).

Approach

We can solve the given problem using Disjoint-Set Union. In the given problem, after every query we break the array into many small sets and then calculate the maximum sum over all segments. When we break a set into two smaller sets, the total sum is also divided. We can see that a query divides a set into at most two further sets, and their total sum is divided accordingly. For example, let's say there is a set S = {a1, a2, q, b1, b2} and we currently need to process a query 'query[i] = q'. After this query, set S will be divided into two sets, S1 = {a1, a2} and S2 = {b1, b2}.

The idea is to simulate this procedure in reverse order (i.e., instead of breaking the array into further sets after every query, we start from the last query and merge elements of the array). Initially, we put every element into a different set. Then we process the queries in reverse order. For each query, we union the current element with the elements to its left and right in the array (i.e., with the left and right sets), and simultaneously keep track of the sum of each set (using Disjoint-Set Union).

Algorithm

1. Declare vectors setSum(N+1, 0) and finalAns, and initialize a DSU class object of size N.
2. Initialize setSum[i] = a[i-1] (i.e., we are putting each element in a different set).
3. Insert '0' into finalAns because, after the last query, all elements will have been removed, and hence the answer for the last query will be zero.
4. Now, iterate over the queries in reverse order (using variable 'i') and do the following.
a) If p[query[i]] == -1, then set it to query[i] (i.e., if the current element is not part of any set, make it an independent set).
b) If (query[i]-1 >= 0) and (p[query[i]-1] != -1) then call Union(query[i], query[i]-1).
(i.e., if the element to the left of the current query element exists and is part of some set S, then combine the current query element with set S)
c) If (query[i]+1 <= N) and (p[query[i]+1] != -1) then call Union(query[i], query[i]+1).
(i.e., if the element to the right of the current query element exists and is part of some set S, then combine the current query element with set S)
d) Update maxSegSum as max(maxSegSum, setSum[Find(query[i])]) and push it into finalAns.
5. Finally, reverse finalAns and print it.

Program

#include <bits/stdc++.h>
using namespace std;
#define vi vector<int>

class DSU{
public:
    int N;
    // 1 indexed
    vi p;  // p[i] stores the parent of node i.
    vi sz; // sz[i] stores the size of the subtree of node i.

    DSU(int inputN){
        N = inputN;
        // Initialize all elements as individual sets.
        p.resize(N+1, -1);
        // Initially the size of each set is 1.
        sz.resize(N+1, 1);
    }

    // This function performs the 'FIND' operation of the Disjoint-Set Union data structure.
    int Find(int i){
        if(p[i] == i) return i;
        return p[i] = Find(p[i]);
    }

    // This function performs the 'UNION' operation of the Disjoint-Set Union data structure.
    void Union(int pu, int pv, vi& setSum){
        // Finding the roots of the respective sets.
        pu = Find(pu);
        pv = Find(pv);

        // Return if both elements belong
        // to the same set.
        if(pu == pv) return;

        if(sz[pu] < sz[pv]) swap(pu, pv);
        // Now pu is the bigger component.

        // Update the parent of the smaller set.
        p[pv] = pu;

        // Increment the size of the bigger set as we
        // are adding the smaller set to it.
        sz[pu] += sz[pv];

        // Update the sum of the new set.
        setSum[pu] += setSum[pv];
    }
};

void solve(int N, vi& a, vi& query){
    // DSU object (has Union & Find methods).
    DSU dsu(N);

    // This vector stores the sums of the segments.
    vi setSum(N+1, 0);

    // This vector stores the maximum sum for each query.
    vi finalAns;

    // Initially every individual element is a set.
    for(int i=1; i<=N; i++) {
        setSum[i] = a[i-1];
    }

    // After processing all queries only empty sets will remain,
    // i.e. the maximum sum will be zero.
    finalAns.push_back(0);

    // For storing the maximum segment sum up to the current query.
    int maxSegSum = INT_MIN;

    for(int i=N-1; i>0; i--){
        /* If the current element isn't in any set, set its parent
           to the current element of the query
           (i.e., make it an independent set). */
        if(dsu.p[query[i]] == -1){
            dsu.p[query[i]] = query[i];
        }

        /* If the element left of query[i] in array a[] has been added
           to some set or is a set itself, Union it with the current set. */
        if(query[i]-1 >= 0 && dsu.p[query[i]-1] != -1){
            dsu.Union(query[i], query[i]-1, setSum);
        }

        /* If the element right of query[i] in array a[] has been added
           to some set or is a set itself, Union it with the current set. */
        if(query[i]+1 <= N && dsu.p[query[i]+1] != -1){
            dsu.Union(query[i], query[i]+1, setSum);
        }

        // Updating maxSegSum.
        maxSegSum = max(maxSegSum, setSum[dsu.Find(query[i])]);

        // Push the answer for query[i-1].
        finalAns.push_back(maxSegSum);
    }

    // Reverse finalAns because we processed the queries in reverse order.
    reverse(finalAns.begin(), finalAns.end());

    // Print the final answers.
    for(int x: finalAns) cout << x << " ";
    cout << endl;
}

// Driver function.
int main(){
    int tt;
    cin >> tt;
    while(tt--){
        int N;
        cin >> N;
        vi a(N), query(N);
        for(int i=0; i<N; i++) cin >> a[i];
        for(int i=0; i<N; i++) cin >> query[i];
        solve(N, a, query);
    }
    return 0;
}

INPUT
2
4
1 3 2 5
3 4 1 2
5
1 2 3 4 5
4 2 3 5 1

OUTPUT
5 4 3 0
6 5 5 1 0

Time Complexity

The overall time complexity of this approach is O(N * log(N)).

Space Complexity

The auxiliary space complexity of the program is O(N).

FAQs

- What are the applications of the Disjoint-Set Union data structure (also known as Union-Find)?
It has many applications. It is used for cycle detection, keeping track of connected components in an undirected graph, and as a sub-routine in Kruskal's algorithm to find a minimum spanning tree.

- What is the worst-case time complexity of union operations when size and path compression optimizations are used in DSU?
The worst-case time complexity of union operations, when done using size & path compression, is O(M * log N), where N is the number of nodes and M is the number of operations.

- In which operation is path compression optimization applied (in the implementation of DSU)?
Path compression is used in the Find operation. It does not affect union operations in any way.

Key Takeaways

Cheers if you reached here!! This article discussed an intriguing problem using the Disjoint-Set Union data structure, also known as Union-Find.
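To sanity-check the two sample cases, the same reverse-order union idea can be condensed into Python (my own sketch, not code from the article; it mirrors the C++ program, including skipping queries[0] in the merge loop):

```python
def max_segment_sums(a, queries):
    # Process the queries in reverse, merging instead of splitting.
    n = len(a)
    parent = [-1] * (n + 1)          # -1 means "not added back yet"
    size = [1] * (n + 1)
    set_sum = [0] + list(a)          # 1-indexed sums of the singleton sets

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(u, v):
        u, v = find(u), find(v)
        if u == v:
            return
        if size[u] < size[v]:        # union by size
            u, v = v, u
        parent[v] = u
        size[u] += size[v]
        set_sum[u] += set_sum[v]

    ans = [0]                        # after the last query everything is gone
    best = float("-inf")
    for q in reversed(queries[1:]):  # add elements back, newest removal first
        parent[q] = q
        if q - 1 >= 1 and parent[q - 1] != -1:
            union(q, q - 1)
        if q + 1 <= n and parent[q + 1] != -1:
            union(q, q + 1)
        best = max(best, set_sum[find(q)])
        ans.append(best)
    return ans[::-1]
```

Running it on the two inputs above reproduces the expected outputs 5 4 3 0 and 6 5 5 1 0.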
https://www.codingninjas.com/codestudio/library/maximum-sum-of-segments-among-all-segments-formed-in-array-after-q-queries
Details
- Type: New Feature
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: Java-SCA-M2
- Fix Version/s: Java-SDO-Next
- Component/s: Java SDO Implementation
- Labels: None

Description

Add an SDO lifecycle metadata Tuscany extension to unregister all the SDO Types in a namespace. The suggested method is SDOUtil.unregisterTypes(TypeHelper typeHelper, String namespace).

Activity

I'll look into this too as my first work on this project, since it's something I'd like for what I'm working on.

This requirement could be reviewed in the light of the introduction of HelperContext. It's not clear what we might break by implementing this explicit removal, but if the compartmentalization that HelperContext provides solves the use case that drove the requirement, then we might elect not to do this.

Kelvin, please elaborate on your previous comment in light of the following: say I have a TypeHelper with numerous large and small namespaces registered, and I need to "unregister" one of the small namespaces. I suspect it will be much more efficient to "unregister" the small namespace in the existing TypeHelper than to create a new TypeHelper and re-register all the namespaces (minus 1) in the new TypeHelper.

Ron, for the scenario you describe, this feature would be simpler. One concern with having an unregister method is what would happen if a user unregisters a namespace which other packages (that are not unregistered) have a dependency on. Would you expect this feature to automatically unregister those packages as well, or does it just leave it up to the users to make sure that they don't put the registry into a corrupt state?

Frank, I would not expect the runtime to manage dependencies in this case. Rather, it would be up to users to make sure they don't corrupt the registry.
As Kelvin mentioned earlier, the introduction of HelperContext now provides a safe alternative to this "unregistration" in case users want to be sure they don't corrupt the registry.

There is some discussion in the Feb 4, 5 and 6, 2008 IRC logs; here is the summary: what is the memory cost of having to init a number of HelperContexts? Each one has at construction an instance of all the Helpers that need to manage their own state (i.e., not CopyHelper or EqualityHelper, but all the others). The state of each of these helpers is not large until you start registering types, so I think there would only be a significant cost if there were duplication in the storage of metadata across TypeHelpers; so the problem with multiple HelperContexts would be when loading XML.

Also, please let me know if the issue mentioned in the JIRA is still a requirement.

Yes, this is still a requirement. Again, my concern is the overhead required to register types in the typeHelper every time I need to remove a single namespace from the typeHelper. The memory cost is not significant to me, it is the CPU cost of re-registering types in a new typeHelper.

Is anyone working on this? If not, I can give it a try.
https://issues.apache.org/jira/browse/TUSCANY-761
a side-by-side reference sheet grammar and invocation | variables and expression | arithmetic and logic | strings | regexes | dates and time | arrays | dictionaries | tables | relational algebra | aggregation | functions | execution control | files | libraries and namespaces | reflection General versions used The versions used for testing code in the reference sheet. show version How to get the version. Grammar and Invocation interpreter How to run the interpreter on a script. repl How to invoke the REPL. statement separator The statement separator. block delimiters The delimiters used for blocks. end-of-line comment How to create a comment that ends at the next newline. multiple line comment How to comment out multiple lines. Variables and Expressions case sensitive? Are identifiers which differ only by case treated as distinct identifiers? quoted identifier How to quote an identifier. Quoting an identifier is a way to include characters which aren't normally permitted in an identifier. In SQL quoting is also a way to refer to an identifier that would otherwise be interpreted as a reserved word null The null literal. pig: PigStorage, the default function for loading and persisting relations, represents a null value with an empty string. Null is distinct from an empty string since null == '' evaluates as false. Thus PigStorage cannot load or store an empty string. null test How to test whether an expression is null. sql: The expression null = null evaluates as null, which is a ternary boolean value distinct from true and false. Expressions built up from arithmetic operators or comparison operators which contain a null evaluate as null. When logical operators are involved, null behaves like the unknown value of Kleene logic. coalesce How to use the value of an expression, replacing it with an alternate value if it is null. nullif How to use the value of an expression, replacing a specific value with null. 
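The null-handling rows above (null test, coalesce, nullif, and null propagation through operators) can be spot-checked with SQLite via Python's sqlite3 module. SQLite follows the standard three-valued logic here, though its dialect differs from PostgreSQL and MySQL elsewhere, so treat this as a sketch rather than a portability guarantee:

```python
import sqlite3

db = sqlite3.connect(":memory:")
val = lambda sql: db.execute("select " + sql).fetchone()[0]

print(val("null is null"))          # null test: 1 (true)
print(val("null = null"))           # comparison with null yields null, not true
print(val("1 + null"))              # nulls propagate through arithmetic
print(val("coalesce(null, 'x')"))   # coalesce: first non-null argument
print(val("nullif('a', 'a')"))      # nullif: null when the two values are equal
print(val("nullif('a', 'b')"))      # ... otherwise the first value
```

In the Python API a SQL null comes back as None, which is how the "null = null evaluates as null" rule shows up above.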
conditional expression The syntax for a conditional expression. Arithmetic and Logic true and false Literals for true and false. falsehoods Values which evaluate as false in a boolean context. logical operators The logical operators. Logical operators impose a boolean context on their arguments and return a boolean value. relational operators The comparison operators, also known as the relational operators. integer type Integer types. sql: Datatypes are database specific, but the mentioned types are provided by both PostgreSQL and MySQL. awk: Variables are untyped and implicit conversions are performed between numeric and string types. The numeric literal for zero, 0, evaluates as false, but the string "0" evaluates as true. Hence we can infer that awk has at least two distinct data types. float type Floating point decimal types. sql: Datatypes are database specific, but the mentioned types are provided by both PostgreSQL and MySQL. fixed type Fixed precision decimal types. sql: Datatypes are database specific, but the mentioned types are provided by both PostgreSQL and MySQL. arithmetic operations The arithmetic operators: addition, subtraction, multiplication, division, modulus, and exponentiation. integer division How to compute the quotient of two numbers. The quotient is always an integer. integer division by zero What happens when an integer is divided by zero. pig: Division by zero evaluates to null. Recall that PigStorage stores nulls in files as empty strings. float division How to perform floating point division, even if the operands are integers. float division by zero The result of dividing a float by zero. pig: Division by zero evaluates to null. Recall that PigStorage stores nulls in files as empty strings. power How to raise a number to a power. sqrt How to get the square root of a number. sqrt -1 The result of taking the square root of negative one. transcendental functions The standard transcendental functions of mathematics. 
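The arithmetic entries above are easy to check from a shell, assuming any POSIX awk is installed; note that awk's / operator always performs floating point division, and int() truncates toward zero:

```shell
awk 'BEGIN {
    print 7 / 2        # 3.5   -- / is always floating point division
    print int(7 / 2)   # 3     -- int() truncates toward zero
    print int(-7 / 2)  # -3    -- also toward zero for negatives
    print 7 % 2        # 1     -- modulus
    print 2 ^ 10       # 1024  -- exponentiation
}'
```

Dividing by zero, by contrast, is a fatal runtime error in awk rather than a null as in Pig.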
float truncation How to truncate floats to integers. The functions (1) round towards zero, (2) round to the nearest integer, (3) round towards positive infinity, and (4) round towards negative infinity. How to get the absolute value of a number is also illustrated. absolute value The absolute value of a number. random number How to create a unit random float. Strings types The available string types. pig: A chararray is a string of Unicode characters. Like in Java the characters are UTF-16 encoded. A bytearray is a string of bytes. Data imported into Pig is of type bytearray unless declared otherwise. literal The syntax for string literals. sql: MySQL also has double quoted string literals. PostgreSQL and most other databases use double quotes for identifiers. In a MySQL double quoted string double quote characters must be escaped with reduplication but single quote characters do not need to be escaped. length How to get the length of a string. escapes Escape sequences which are available in string literals. sql: Here is a portable way to include a newline character in a SQL string: select 'foo' || chr(10) || 'bar'; MySQL double and single quoted strings support C-style backslash escapes. Backslash escapes are not part of the SQL standard. Their interpretation can be disabled at the session level with SET sql_mode='NO_BACKSLASH_ESCAPES'; concatenation How to concatenate strings. split How to split a string into an array of substrings. sql: How to split a string into multiple rows of data:
=> create temp table foo ( bar text );
CREATE TABLE
=> insert into foo select regexp_split_to_table('do re mi', ' ');
INSERT 0 3
case manipulation How to uppercase a string; how to lower case a string; how to capitalize the first character. strip How to remove whitespace from the edges of a string. index of substring How to get the leftmost index of a substring in a string.
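Several of the string operations just listed (length, case manipulation, strip, index of substring, split) look like this in awk — a sketch assuming a POSIX awk such as gawk or mawk:

```shell
awk 'BEGIN {
    s = "  do re mi  "
    gsub(/^[ \t]+|[ \t]+$/, "", s)   # strip: trim whitespace from both edges
    print length(s)                  # 8
    print toupper(s)                 # DO RE MI
    print substr(s, 4, 2)            # re  (1-based start, length 2)
    print index(s, "mi")             # 7   (leftmost position, 0 if absent)
    n = split(s, parts, " ")         # " " means split on runs of whitespace
    print n, parts[1], parts[3]      # 3 do mi
}'
```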
extract substring How to extract a substring from a string. sprintf How to create a string from a format. Regular Expressions match pig: Pig does not directly support a regex match test. The technique illustrated is to extract a subgroup and see whether the resulting tuple has anything in it. This can be done in the by clause of a filter statement, but not as the first operand of a conditional expression. substitute How to perform substitution on a string. awk: sub and the global variant gsub return the number of substitutions performed. extract subgroup Date and Time Arrays literal The syntax for an array literal. sql: The syntax for arrays is specific to PostgreSQL. MySQL does not support arrays. Defining a column to be an array violates first normal form. pig: Pig tuples can be used to store a sequence of data in a field. Pig tuples are heterogeneous; the components do not need to be of the same type. size How to get the number of elements in an array. lookup How to get the value in an array by index. update How to change a value in an array. iteration How to iterate through the values of an array. Dictionaries literal pig: Dictionaries are called maps in Pig. The keys must be character arrays, but the values can be any type. Tables order by How to sort the rows in a table using the values in one of the columns. order by multiple columns How to sort the rows in a table using the values in multiple columns. If the values in the first column are the same, the values in the second column are used as a tie breaker. limit offset Relational Algebra In a mapping operation the output relation has the same number of rows as the input relation. A mapping operation can be specified with a function which accepts an input record and returns an output record. In a filtering operation the output relation has at most as many rows as the input relation. A filtering operation can be specified with a function which accepts an input record and returns a boolean value.
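The substitution and subgroup-extraction entries can be exercised in awk directly: gsub returns the number of substitutions performed, and since POSIX awk has no capture groups, match() with the RSTART and RLENGTH variables is the usual workaround for extracting matched text (a sketch, assuming a POSIX awk):

```shell
awk 'BEGIN {
    s = "do re mi mi"
    n = gsub(/mi/, "ma", s)               # gsub returns the substitution count
    print n, s                            # 2 do re ma ma
    if (match(s, /re [a-z]+/))            # match sets RSTART and RLENGTH
        print substr(s, RSTART, RLENGTH)  # re ma
}'
```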
input data format The data formats the language can operate on. set field delimiter For languages which can operate on field and record delimited files, how to set the field delimiter. sql: The PostgreSQL copy command requires superuser privilege unless the input source is stdin. Here is an example of how to use the copy command without superuser privilege: $ ( echo "copy pwt from stdin with delimiter ':';"; cat /tmp/pw ) | psql The copy command is not part of the SQL standard. MySQL uses the following: load data infile '/etc/passwd' into table pwt fields terminated by ':'; Both PostgreSQL and MySQL will use tab characters if a field separator is not specified. MySQL permits the record terminator to be changed from the default newline, but PostgreSQL does not. select column by name How to select fields by name. select column by position select all columns rename columns filter rows split rows An aggregation operation is similar to a filtering operation in that it accepts an input relation and produces an output relation with at most as many rows. An aggregation is defined by two functions: a partitioning function which accepts a record and produces a partition value, and a reduction function which accepts a set of records which share a partition value and produces an output record. select distinct How to remove duplicate rows from the output set. Removing duplicates can be accomplished with an aggregation operation in which the partition value is the entire row and a reduction function which returns the first row in the set of rows sharing the partition value. inner join In an inner join, only tuples from the input relations which satisfy a join predicate are used in the output relation. A special but common case is when the join predicate consists of an equality test or a conjunction of two or more equality tests. Such a join is called an equi-join. awk: If awk is available at the command line then chances are good that join is also available.
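As the awk note says, the command-line join utility covers the equi-join case. A sketch using the same customers/orders data that appears later in this sheet; join(1) requires each input to be sorted on its join field:

```shell
printf '1:John\n2:Mary\n3:Jane\n' > customers.txt
printf '1:2:12.99\n2:3:5.99\n3:3:12.99\n' > orders.txt

# join(1) needs each input sorted on its join field
sort -t: -k1,1 customers.txt > c.sorted
sort -t: -k2,2 orders.txt   > o.sorted

# inner join: field 1 of customers against field 2 of orders;
# output is the join field, then the rest of line 1, then the rest of line 2
join -t: -1 1 -2 2 c.sorted o.sorted
```

Adding -a 1 would also emit unmatched customers rows, turning this into a left outer join.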
null treatment in joins How rows which have a null value for the join column are handled. Both SQL and Pig do not include such rows in the output relation unless an outer join (i.e. a left, right, or full join) is specified. Even in the case of an outer join the rows with a null join column value are not joined with any rows from the other relations, even if there are also rows in the other relation with null join column values. Instead the columns that derive from the other input relation will have null values in the output relation. self join A self join is when a relation is joined with itself. If a relation contained a list of people and their parents, then a self join could be used to find a person's grandparents. pig: An alias cannot be used in a JOIN statement more than once. Thus to join a relation with itself it must first be copied with a FOREACH statement. left join How to include rows from the input relation listed on the left (i.e. listed first) which have values in the join column which don't match any rows from the input relation listed on the right (i.e. listed second). The term is short for left outer join. As an example, a left join between customers and orders would have a row for every order placed and the customer who placed it. In addition it would have rows for customers who haven't placed any orders. Such rows would have null values for the order information.
sql: Here is a complete example with the schemas and data used in the left join: create table customers ( id int, name text ); insert into customers values ( 1, 'John' ), ( 2, 'Mary' ), (3, 'Jane'); create table orders ( id int, customer_id int, amount numeric(9, 2)); insert into orders values ( 1, 2, 12.99 ); insert into orders values ( 2, 3, 5.99 ); insert into orders values ( 3, 3, 12.99 ); select * from customers c left join orders o on c.id = o.customer_id; id | name | id | customer_id | amount ----+------+----+-------------+-------- 1 | John | | | 2 | Mary | 1 | 2 | 12.99 3 | Jane | 2 | 3 | 5.99 3 | Jane | 3 | 3 | 12.99 (4 rows) left outer join is synonymous with left join. The following query is identical to the one above: select * from customers c left outer join orders o on c.id = o.customer_id; pig: For a complete example assume the following data is in /tmp/customers.txt: 1:John 2:Mary 3:Jane and /tmp/orders.txt: 1:2:12.99 2:3:5.99 3:3:12.99 Here is the Pig session: customers = LOAD '/tmp/customers.txt' USING PigStorage(':') AS (id:int, name:chararray); orders = LOAD '/tmp/orders.txt' USING PigStorage(':') AS (id:int, customer_id:int, amount:float); j = join customers by id left, orders by customer_id; dump j; Here is the output: (1,John,,,) (2,Mary,1,2,12.99) (3,Jane,2,3,5.99) (3,Jane,3,3,12.99) full join A full join is a join in which rows with null values for the join condition from both input relations are included in the output relation. Left joins, right joins, and full joins are collectively called outer joins. 
sql: We illustrate a full join by using the schema and data from the left join example and adding an order with a null customer_id: insert into orders values ( 4, null, 7.99); select * from customers c full join orders o on c.id = o.customer_id; id | name | id | customer_id | amount ----+------+----+-------------+-------- 1 | John | | | 2 | Mary | 1 | 2 | 12.99 3 | Jane | 2 | 3 | 5.99 3 | Jane | 3 | 3 | 12.99 | | 4 | | 7.99 (5 rows) pig: For a complete example assume the following data is in /tmp/customers.txt: 1:John 2:Mary 3:Jane and /tmp/orders.txt: 1:2:12.99 2:3:5.99 3:3:12.99 4::7.99 Here is the Pig session: customers = LOAD '/tmp/customers.txt' USING PigStorage(':') AS (id:int, name:chararray); orders = LOAD '/tmp/orders.txt' USING PigStorage(':') AS (id:int, customer_id:int, amount:float); j = join customers by id full, orders by customer_id; dump j; Here is the output: (1,John,,,) (2,Mary,1,2,12.99) (3,Jane,2,3,5.99) (3,Jane,3,3,12.99) (,,4,,7.99) cross join A cross join is a join with no join predicate. It is also called a Cartesian product. If the input relations have N1, N2, …, Nm rows respectively, then the output relation has $$\prod_{i=1}^{m} N_i$$ rows. Aggregation group by sql: The columns in the select clause of a select statement with a group by clause must be expressions built up of columns listed in the group by clause and aggregation functions. Aggregation functions can contain expressions containing columns not in the group by clause as arguments. pig: The output relation of a GROUP BY operation is always a relation with two fields. The first is the partition value, and the second is a bag containing all the tuples in the input relation which have the partition value. group by multiple columns How to group by multiple columns. The output relation will have a row for each distinct tuple of column values. pig: Tuples must be used to group on multiple fields. 
Tuple syntax is used in the GROUP BY statement and the first field in the output relation will be a tuple. The FLATTEN function can be used to replace the tuple field with multiple fields, one for each component of the tuple. aggregation functions The aggregation functions. sql: Rows for which the expression given as an argument of an aggregation function is null are excluded from the result. In particular, COUNT(foo) is the number of rows for which the foo column is not null. COUNT(*) always returns the number of rows in the input set, including even rows with columns that are all null. pig: The Pig aggregation functions operate on bags. The GROUP BY operator produces a relation of tuples in which the first field is the partition value and the second is a bag of all the tuples which have the partition value. Since $1 references the second component of a tuple, it is often the argument to the aggregation functions. A join is an operation on m input relations. If the input relations have n1, n2, …, nm columns respectively, then the output relation has $$\sum_{i=1}^{m} n_i$$ columns. Functions define function How to define a function. sql: To be able to write PL/pgSQL functions on a PostgreSQL database, someone with superuser privilege must run the following command: create language plpgsql; invoke function How to invoke a function. drop function How to remove a function. sql: PL/pgSQL permits functions with the same name and different parameter types. Resolution happens at invocation using the types of the arguments. When dropping the function the parameter types must be specified. There is no statement for dropping multiple functions with a common name. Execution Control if How to execute code conditionally. while How to implement a while loop. for How to implement a C-style for loop.
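The sheet gives SQL and Pig grouping; in awk the same partition-and-reduce pattern is written with associative arrays. A sketch, assuming a POSIX awk:

```shell
# Equivalent of: SELECT kind, COUNT(*), SUM(qty) FROM t GROUP BY kind
printf 'fruit:3\nveg:1\nfruit:2\nveg:5\n' |
awk -F: '
    { cnt[$1]++; sum[$1] += $2 }   # reduce per partition value
    END { for (k in cnt) printf "%s %d %d\n", k, cnt[k], sum[k] }
' | sort    # for-in iteration order is unspecified, so sort for a stable result
```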
Files Library and Namespaces Reflection SQL PostgreSQL 9.1: The SQL Language MySQL 5.6 Reference Manual SQL has been the leading query language for relational databases since the early 1980s. It received its first ISO standardization in 1986. SQL statements are classified into three types: data manipulation language (DML), data definition language (DDL), and data control language (DCL). DDL defines and alters the database schema. DCL controls the privileges of database users. DML queries and modifies the data in the tables. Awk awk - pattern-directed scanning and processing language POSIX specification for awk POSIX specification for join POSIX specification for sort Awk has been included on all Unix systems since 7th Edition Unix in 1979. It provides a concise language for performing transformations on files. An entire program can be provided to the awk interpreter as the first argument on the command line. Because awk string literals use double quotes, single quotes are usually used to quote the awk program for the benefit of the shell. Here's an example which prints the default shell used by root: awk 'BEGIN{FS=":"} $1=="root" {print $7}' /etc/passwd An awk script is a sequence of pattern-action pairs. Awk will iterate through the lines of standard input or the lines of the specified input files, testing each pattern against the line and executing the corresponding action if the pattern matches. Patterns are usually slash delimited regular expressions, e.g. /lorem/, and logical expressions built up from them using the logical operators &&, ||, and !. If an action is provided without an accompanying pattern, the action is executed once for every line of input. The keywords BEGIN and END are special patterns which cause the following action to be executed once at the start and end of execution, respectively. Pig Apache Pig docs piggybank.jar piggybank-0.3-amzn.jar Pig is a language for specifying Hadoop map reduce jobs.
Pig scripts are shorter than equivalent Java source code, especially if joins are required. There are products such as Hive which can convert an SQL statement to a map reduce job, but Pig has an advantage over Hive in that it can handle a greater variety of data formats in the input files. Although Pig is intended to be used with a Hadoop grid, Hadoop is not required to run a Pig job. Running a Pig job locally is a convenient way to test a Pig job before running it on a grid. In addition to some numeric and string data types, Pig provides three compound data types: bag, tuple, and map. A bag is an array of tuples. It is equivalent to a database table; it is the data type which Pig uses to hold data which it reads in from files. Pig has a limited type of variable called an alias. The only data type which can be stored in an alias is a bag. When a bag is stored in an alias it is called a relation or an outer bag. A bag can also be stored in the field of a tuple, in which case it is called an inner bag. pig relational operators: Pig provides 15 operators for manipulating relations. Most of these operators create a new relation from existing relations. Exceptions are LOAD and MAPREDUCE which create relations from external files, STORE which writes a relation to a file, and SPLIT which can create more than one relation. piggybank UDFs: It is easy to write user defined functions (UDFs) in Java and make them available to Pig. piggybank.jar and piggybank-0.3-amzn.jar are two publicly available libraries of UDFs.
If the Piggybank jar is in the home directory when the Pig script is run, the functions can be made available with the following code at the top of the Pig script: REGISTER /PATH/TO/piggybank.jar; REGISTER /PATH/TO/piggybank-0.3-amzn.jar; DEFINE DATE_TIME org.apache.pig.piggybank.evaluation.datetime.DATE_TIME(); DEFINE EXTRACT org.apache.pig.piggybank.evaluation.string.EXTRACT(); DEFINE FORMAT org.apache.pig.piggybank.evaluation.string.FORMAT(); DEFINE FORMAT_DT org.apache.pig.piggybank.evaluation.datetime.FORMAT_DT(); DEFINE REPLACE org.apache.pig.piggybank.evaluation.string.REPLACE();
http://hyperpolyglot.org/data
We have created a JavaScript bridge for Android using V8 and in iOS using JavaScriptCore. This bridge allows you to load JavaScript and create Xamarin.Forms controls on the fly. With XF pages written in TypeScript, it is possible to keep all the code in an npm-style repository and hot reload your application without having to go through the app store approval process. XAML is great, no doubt, but TSX in VS Code is even more powerful. Take a look at the following code,
import Bind from "@web-atoms/core/dist/core/Bind";
import XNode from "@web-atoms/core/dist/core/XNode";
import { AtomXFControl } from "@web-atoms/core/dist/xf/controls/AtomXFControl";
// Existing Xamarin.Forms Definitions
import XF from "@web-atoms/xf-controls/dist/clr/XF";
import AtomXFContentPage from "@web-atoms/xf-controls/dist/pages/AtomXFContentPage";
import ListViewModel from "./ListViewModel";
export default class List extends AtomXFContentPage {
    public viewModel: ListViewModel;
    public create() {
        this.viewModel = this.resolve(ListViewModel);
        this.render(
            <XF.ContentPage>
                <XF.ListView itemsSource={Bind.oneWay(() => this.viewModel.movies.value)}>
                    <XF.ListView.itemTemplate>
                        <XF.DataTemplate>
                            <XF.ViewCell>
                                <XF.Label text={Bind.oneWay((x) => x.data.name)}/>
                            </XF.ViewCell>
                        </XF.DataTemplate>
                    </XF.ListView.itemTemplate>
                </XF.ListView>
            </XF.ContentPage>
        );
    }
}
With NuGet + NPM, you can use the best of both worlds. It is very easy to expose services from C# to JavaScript. You can easily create a new component in JavaScript (TypeScript), and you can also write a similar component in C# to improve performance. C# code is a little painful to debug in production as line numbers are missing. JavaScript stack traces, on the other hand, retain line numbers in production with source maps, making life a lot easier. This is by far the biggest benefit: in a typical production application, you cannot change the version immediately, but with Web Atoms, you can dynamically change the version and even allow a single user to change the version.
This allows you to investigate bugs and platform related issues easily. Of course, JavaScript engine execution and the heavy data transfer between JavaScript, the CLR, and the native platform are expensive, so it is a little slower than pure C# code. But at the same time, you can always tweak your code easily. We first roll out a beta in JavaScript, let it stabilize, and then move that part of the code to C#. Only when you make changes to the C# code do you have to republish the app to the store. Since your app only contains the JavaScript engine code, your app becomes smaller; all your Views/View Models/Services stay on the web server. We have organized the documentation and we are still in the process of making more documentation easily available. Integration requires a little effort, but once the application is set up, it is extremely easy to build/test and deploy. Here is a link to the Web Atoms website; check out more XF Samples to learn more.
https://forums.xamarin.com/discussion/comment/404268
Back to: ASP.NET Web API Tutorials For Beginners and Professionals
HTTP Client Message Handler with Real-Time Examples
In this article, I am going to discuss the HTTP Client Message Handler with real-time examples. As we already discussed in the HTTP Message Handler article, a Message Handler is a class that receives an HTTP request and returns an HTTP response. A message handler is derived from the abstract HttpMessageHandler class. There are two types of HTTP Message Handlers as follows
- The Server Side HTTP Message Handlers – we already discussed
- Client Side HTTP Message Handlers – will discuss in this article
HTTP Client Message Handlers
The HttpClient class uses a message handler to process the requests on the client side. The default handler provided by the .NET Framework is HttpClientHandler. This HTTP Client Message Handler sends the request over the network and also gets the response from the server. As a developer, if you want, you can also create your own custom message handlers and insert them into the client-side pipeline.
Creating a Custom HTTP Client Message Handler:
Let us discuss how to create a custom HTTP Client Message Handler. To create a custom HTTP Client message handler, we need to create a custom class that derives from the System.Net.Http.DelegatingHandler class. Then the class should override the SendAsync method. The signature of the SendAsync method is as follows:
protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
The SendAsync method takes an HttpRequestMessage as input and asynchronously returns an HttpResponseMessage. A typical implementation does the following:
- Process the request message.
- Call the base.SendAsync method to send the request to the inner handler.
- The inner handler returns a response message. (This step is asynchronous.)
- Process the response message and return the response to the caller.
Creating a Custom Message Handler:
The following example shows the creation of a custom message handler which adds a custom header to the outgoing request:
using System.Net.Http;
using System.Threading.Tasks;
namespace ClientSideMessageHandler.Models
{
    class MessageHandler1 : DelegatingHandler
    {
        private int _count = 0;
        protected override Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
        {
            System.Threading.Interlocked.Increment(ref _count);
            request.Headers.Add("X-Custom-Header", _count.ToString());
            return base.SendAsync(request, cancellationToken);
        }
    }
}
The call to the base.SendAsync method is asynchronous. If your handler is going to do some work after this call, then use the await keyword to resume execution after the method completes. The following example shows a handler that logs error codes. The example shows how to get at the response inside the handler.
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
namespace ClientSideMessageHandler.Models
{
    class LoggingHandler : DelegatingHandler
    {
        private readonly StreamWriter _writer;
        public LoggingHandler(Stream stream)
        {
            _writer = new StreamWriter(stream);
        }
        protected override async Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
        {
            HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
            if (!response.IsSuccessStatusCode)
            {
                _writer.WriteLine("{0}\t{1}\t{2}", request.RequestUri,
                    (int)response.StatusCode, response.ReasonPhrase);
                _writer.Flush();
            }
            return response;
        }
    }
}
To add custom message handlers to the HttpClient pipeline, we need to use the HttpClientFactory.Create method as shown below.
HttpClient client = HttpClientFactory.Create(new MessageHandler1(), new MessageHandler2());
The message handlers are called in the order that we pass them into the Create method of the HttpClientFactory class. Because the handlers are nested, the response message travels in the other direction. That is, the last handler is the first to get the response message.
In the next article, I will discuss How to Implement Token Based Authentication in ASP.NET Web API.
SUMMARY
In this article, I try to explain the HTTP Client Message Handler with some examples. I hope this article will help you with your need. I would like to have your feedback. Please post your feedback, question, or comments about this article.
https://dotnettutorials.net/lesson/http-client-message-handler/
Tutorial: Set Up a Development Endpoint and Notebook to Author ETL Scripts Interactively The goal of this tutorial is to create an environment in which you can create ETL (extract, transform, and load) scripts that can easily be ported to run as AWS Glue jobs. AWS Glue lets you create a development endpoint, spin up an Amazon EC2 cluster to run Apache Zeppelin notebooks, and create and test AWS Glue scripts. In this scenario, you query publicly available airline flight data. The following concepts can help you understand the steps in this tutorial. A development endpoint is set up similar to the AWS Glue serverless environment. When you use a development endpoint, you can develop ETL scripts that can be ported to run using AWS Glue. One use of this endpoint is to create a notebook. An Apache Zeppelin notebook is a web-based notebook that enables interactive data analytics. The Zeppelin notebook is provisioned on an Amazon EC2 instance with access to AWS Glue libraries. Charges for using Amazon EC2 are separate from AWS Glue. You can view your Amazon EC2 instances in the Amazon EC2 console. An AWS CloudFormation stack is used to create the environment for the notebook. You can view the AWS CloudFormation stack in the AWS CloudFormation console. In this example, you create a development endpoint that can be used to query flight data that is stored in Amazon Simple Storage Service (Amazon S3). Prerequisites Set up your environment to use development endpoints and notebook servers. For more information, see Setting Up Your Environment for Development Endpoints. Sign in to the AWS Management Console and open the AWS Glue console. Run a crawler to catalog the flights public data set located at s3://athena-examples/flight/parquet/. For more information about creating crawlers, see the Add crawler tutorial in the AWS Glue console. Configure the crawler to create tables in a database named flightsdb. Also define the table name prefix as flights.
When the crawler run completes, verify that the flightsparquet table is available in your AWS Glue Data Catalog. Note The flight table data comes from Flights data provided by the U.S. Department of Transportation, Bureau of Transportation Statistics. Step 1: To Create a Development Endpoint In the AWS Glue console, navigate to the development endpoints list. Choose Add endpoint. Specify an endpoint name; for example, demo-endpoint. Choose an IAM role with permissions similar to the IAM role that you use to run AWS Glue ETL jobs. For more information, see Step 2: Create an IAM Role for AWS Glue. Specify an Amazon VPC, a subnet, and security groups. This information is used to create a development endpoint to securely connect to your data resources and issue Apache Spark commands. Consider the following suggestions when filling in the properties of your endpoint: If you already set up a connection to your data stores, you can use the same connection to determine the Amazon VPC, subnet, and security groups for your development endpoint. Otherwise, specify these parameters individually. Ensure that your Amazon VPC has Edit DNS hostnames set to yes. This parameter can be set in the Amazon VPC console. For more information, see Setting Up DNS in Your VPC. For this tutorial, ensure that the Amazon VPC you select has an Amazon S3 VPC endpoint. For information about how to create an Amazon S3 VPC endpoint, see Amazon VPC Endpoints for Amazon S3. Select a public subnet for your development endpoint. You can make a subnet a public subnet by adding a route to an internet gateway. For IPv4 traffic, create a route with Destination 0.0.0.0/0 and Target the internet gateway ID. Your subnet’s route table should be associated with an internet gateway, not a NAT gateway. This information can be set in the Amazon VPC console. For more information, see Route tables for Internet Gateways.
For information about how to create an internet gateway, see Internet Gateways. Ensure that you choose a security group that has an inbound self-reference rule. This information can be set in the Amazon VPC console. For more information about how to set up your subnet, see Setting Up Your Environment for Development Endpoints. The public SSH key that you use for your development endpoint should not be an Amazon EC2 key pair. Generate the key with ssh-keygen, which typically can be found in a bash shell on a Mac or in Git for Windows. The key is a 2048-bit SSH-2 RSA key. Choose Create. After the development endpoint is created, wait for its provisioning status to move to Ready. Then proceed to the next step. Step 2: To Create an Apache Zeppelin Notebook Server To perform this procedure, you must have permission to create resources in AWS CloudFormation, Amazon EC2, and other services. For more information about required user permissions, see Step 3: Attach a Policy to IAM Users That Access AWS Glue. In the AWS Glue console, navigate to the development endpoints list. Choose Actions, Create notebook server. To create the notebook, an Amazon EC2 instance is created using an AWS CloudFormation stack on your development endpoint. A Zeppelin notebook HTTP server is started on port 443. Specify the AWS CloudFormation stack server name, for example demo-cf. Choose an IAM role with a trust relationship to Amazon EC2. For more information, see Step 5: Create an IAM Role for Notebooks. Create or use an existing Amazon EC2 key pair with the Amazon EC2 console. Remember where your private key is downloaded. This key is different from the SSH key you used when creating your development endpoint. The keys that Amazon EC2 uses are 2048-bit SSH-2 RSA keys. For more information about Amazon EC2 keys, see Amazon EC2 Key Pairs. Choose a user name and password to access your Apache Zeppelin notebook. Choose an Amazon S3 path for your notebook state to be stored in.
Choose Create. You can view the status of the AWS CloudFormation stack on the AWS CloudFormation console Events tab. You can view the Amazon EC2 instances created by AWS CloudFormation in the Amazon EC2 console. Search for instances that are tagged with the key aws-glue-dev-endpoint and a value of the name of the development endpoint. After the notebook is created, its status changes to CREATE_COMPLETE in the Amazon EC2 console. Details about your notebook also appear on the development endpoint details page. When it's complete, go to the next step. Step 3: To Connect to Your Apache Zeppelin Notebook In the AWS Glue console, navigate to the development endpoints list. Choose the development endpoint name to open its details page. Details about your notebook server are also described on this page. You use these details to connect to your Apache Zeppelin notebook from your web browser. On your local computer, open a terminal window. Leave the terminal window open while you use the notebook. Navigate to the folder where you downloaded your Amazon EC2 private key. To protect your Amazon EC2 private key from accidental overwriting, type the following:
chmod 400 private-key
For example:
chmod 400 my-name.pem
Open a web browser, and type the Notebook URL in the browser address bar to access the notebook using HTTPS on port 443. The Zeppelin notebook opens in your web browser. Log in to the notebook using the user name and password you provided when you created the notebook server. Create a new note and name it demo note. For the Default Interpreter, choose Spark. Verify that your notebook is set up correctly by typing the statement spark.version and running it. It returns the version of Apache Spark that is running on your notebook server. Type the following script into your notebook and run it. This script reads the schema from the flightsparquet table and displays the same.
It also displays data from the table.

    from pyspark.context import SparkContext
    from awsglue.context import GlueContext
    from awsglue.transforms import SelectFields

    glueContext = GlueContext(spark.sparkContext)
    datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "flightsdb",
        table_name = "flightsparquet", transformation_ctx = "datasource0")
    datasource0.printSchema()
    df = datasource0.toDF()
    df.show()

This script returns output similar to the following example:

    2.1.0
    +-----------------+-------+-------+----------+---------------+----------+-----------------------+--------------------+
    |      geolocation|classid|topicid|questionid|datavaluetypeid|locationid|stratificationcategory1|     stratification1| ...
    +-----------------+-------+-------+----------+---------------+----------+-----------------------+--------------------+
    |.8405711220004...|    OWS|   OWS1|      Q036|          VALUE|         1|                 Income|   Data not reported| ...
    |.8405711220004...|    OWS|   OWS1|      Q037|          VALUE|         1|            Age (years)|             55 - 64| ...
    |.8405711220004...|     FV|    FV1|      Q018|          VALUE|         1|              Education|    College graduate| ...
    |.8405711220004...|     FV|    FV1|      Q018|          VALUE|         1|              Education|Less than high sc...| ...
    |.8405711220004...|     FV|    FV1|      Q019|          VALUE|         1|                 Income|   $25,000 - $34,999| ...

When a notebook server is run, Apache Zeppelin does not emit error messages on failure. To debug issues with your notebook, you can view the Zeppelin logs:

1. In a terminal window, navigate to the folder where you downloaded your Amazon EC2 private key.
2. To access the Zeppelin logs, type the SSH to EC2 server command found on the details page. For example:

    ssh -i private-key.pem ec2-user@ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com

   Then navigate to zeppelin/logs for your user.

When you're finished, close your web browser and any open terminal windows.
http://docs.aws.amazon.com/glue/latest/dg/tutorial-development-endpoint-notebook.html
Your message dated Wed, 6 Jun 2007 11:03:22 +0200 with message-id <20070606090322.GA7341@.intersec.eu> and subject line Bug#427722: pthread_kill() declaration disappears when compiling with -ansi

--- Begin Message ---
- Subject: pthread_kill() declaration disappears when compiling with -ansi
- From: "Daniel F. Smith" <dfsmith@almaden.ibm.com>
- Date: Tue, 5 Jun 2007 19:04:27 -0700
- Message-id: <[🔎] 20070606020427.GA14395@porter.almaden.ibm.com>
- Reply-to: dfsmith@almaden.ibm.com

Package: libc6-dev
Version: 2.5-9+b1

When compiling with the -ansi flag in gcc, pthread_kill() is implicitly defined. The old behavior worked with -ansi. See this example.

cat <<EOF >test.c
#include <pthread.h>
#include <signal.h>
void *start(void *arg) {return NULL;}
void test(void)
{
    pthread_t t;
    (void)pthread_create(&t,NULL,start,NULL);
    pthread_kill(t,SIGKILL);
}
EOF

$ gcc -Wall -c test.c
(no warnings)
$ gcc -ansi -Wall -c test.c
test.c: In function 'test':
test.c:9: warning: implicit declaration of function 'pthread_kill'

$ uname -a
Linux porter 2.6.18-4-686 #1 SMP Mon Mar 26 17:17:36 UTC 2007 i686 GNU/Linux
$ ls -l /lib/libc.so.6
lrwxrwxrwx 1 root root 11 Jun 4 13:26 /lib/libc.so.6 -> libc-2.5.so
$ dpkg -s libc6 | grep Version
Version: 2.5-9+b1
$ ls -l /usr/include/signal.h
-rw-r--r-- 1 root root 13312 May 30 03:04 /usr/include/signal.h
$ ls -l /usr/include/bits/pthreadtypes.h
-rw-r--r-- 1 root root 4395 May 30 03:04 /usr/include/bits/pthreadtypes.h
$ tail -20 /usr/include/signal.h | head -6
#if defined __USE_POSIX199506 || defined __USE_UNIX98
/* Some of the functions for handling signals in threaded programs must
   be defined here.
*/
# include <bits/pthreadtypes.h>
# include <bits/sigthread.h>
#endif /* use Unix98 */

--- End Message ---

--- Begin Message ---
- To: dfsmith@almaden.ibm.com, 427722-done@bugs.debian.org
- Subject: Re: Bug#427722: pthread_kill() declaration disappears when compiling with -ansi
- From: Pierre Habouzit <madcoder@debian.org>
- Date: Wed, 6 Jun 2007 11:03:22 +0200
- Message-id: <20070606090322.GA7341@.intersec.eu>
- In-reply-to: <[🔎] 20070606020427.GA14395@porter.almaden.ibm.com>
- References: <[🔎] 20070606020427.GA14395@porter.almaden.ibm.com>

On Tue, Jun 05, 2007 at 07:04:27PM -0700, Daniel F. Smith wrote:
> Package: libc6-dev
> Version: 2.5-9+b1
>
> When compiling with the -ansi flag in gcc, pthread_kill() is implicitly
> defined. The old behavior worked with -ansi. See this example.

Old behaviour is wrong. pthread_kill is defined in IEEE Std 1003.1 with threads extensions. Using -ansi, you use plain old C, with no POSIX extensions, hence need to "define" some features; see feature_test_macros(7). To have pthread_kill you need to -D_POSIX_C_SOURCE=199506 or -D_XOPEN_SOURCE=500 at the strict minimum. (see man page and/or <features.h> for explanations).

--
·O· Pierre Habouzit
··O madcoder@debian.org
OOO

Attachment: pgpNi34wp3Gcd.pgp
Description: PGP signature

--- End Message ---
https://lists.debian.org/debian-glibc/2007/06/msg00104.html
Tcl Programming/Examples

Most of these example scripts first appeared in the Tclers' Wiki. The author (Richard Suchenwirth) declares them to be fully in the public domain. The following scripts are plain Tcl; they don't use the Tk GUI toolkit (there's a separate chapter for those).

Sets

Tcl's lists are well suited to represent sets. Here are typical set operations. If you use the tiny testing framework explained earlier, the e.g. lines make the self-test; otherwise they just illustrate how the operations should work.

 proc set'contains {set el} {expr {[lsearch -exact $set $el]>=0}}

 e.g. {set'contains {A B C} A} -> 1
 e.g. {set'contains {A B C} D} -> 0

 proc set'add {_set args} {
    upvar 1 $_set set
    foreach el $args {
        if {![set'contains $set $el]} {lappend set $el}
    }
    set set
 }

 set example {1 2 3}
 e.g. {set'add example 4} -> {1 2 3 4}
 e.g. {set'add example 4} -> {1 2 3 4}

 proc set'remove {_set args} {
    upvar 1 $_set set
    foreach el $args {
        set pos [lsearch -exact $set $el]
        set set [lreplace $set $pos $pos]
    }
    set set
 }

 e.g. {set'remove example 3} -> {1 2 4}

 proc set'intersection {a b} {
    foreach el $a {set arr($el) ""}
    set res {}
    foreach el $b {if {[info exists arr($el)]} {lappend res $el}}
    set res
 }

 e.g. {set'intersection {1 2 3 4} {2 4 6 8}} -> {2 4}

 proc set'union {a b} {
    foreach el $a {set arr($el) ""}
    foreach el $b {set arr($el) ""}
    lsort [array names arr]
 }

 e.g. {set'union {1 3 5 7} {2 4 6 8}} -> {1 2 3 4 5 6 7 8}

 proc set'difference {a b} {
    eval set'remove a $b
 }

 e.g. {set'difference {1 2 3 4 5} {2 4 6}} -> {1 3 5}

Hex-dumping a file

The following example code opens a file, configures it to binary translation (i.e. line-ends \r\n are not standardized to \n as usual in C), and prints as many lines as needed, each containing 16 bytes in hexadecimal notation, plus, where possible, the ASCII character.
 proc file'hexdump filename {
    set fp [open $filename]
    fconfigure $fp -translation binary
    set n 0
    while {![eof $fp]} {
        set bytes [read $fp 16]
        regsub -all {[^\x20-\xfe]} $bytes . ascii
        puts [format "%04X %-48s %-16s" $n [hexdump $bytes] $ascii]
        incr n 16
    }
    close $fp
 }

 proc hexdump string {
    binary scan $string H* hex
    regexp -all -inline .. $hex
 }

The "main routine" is a single line that dumps all files given on the command line:

 foreach file $argv {file'hexdump $file}

Sample output, the script applied to itself:

 ...> tclsh hexdump.tcl hexdump.tcl
 0000 0d 0a 20 70 72 6f 63 20 66 69 6c 65 27 68 65 78    .. proc file'hex
 0010 64 75 6d 70 20 66 69 6c 65 6e 61 6d 65 20 7b 0d    dump filename {.
 0020 0a 20 20 20 20 73 65 74 20 66 70 20 5b 6f 70 65    . set fp [ope
 0030 6e 20 24 66 69 6c 65 6e 61 6d 65 5d 0d 0a 20 20    n $filename]..
 ...

Roman numerals

Roman numerals are an additive (and partially subtractive) system with the following letter values:

 I=1 V=5 X=10 L=50 C=100 D=500 M=1000; MCMXCIX = 1999

Here's some Tcl routines for dealing with Roman numerals.
Sorting roman numerals: I, V, X already come in the right order; for the others we have to introduce temporary collation transformations, which we'll undo right after sorting. (The helper lrevert used below is not defined in this chapter; it reverses the order of a list's elements, so the map substitutions are undone in reverse.)

 proc roman:sort list {
    set map {IX VIIII L Y XC YXXXX C Z D {\^} ZM {\^ZZZZ} M _}
    foreach {from to} $map {
        regsub -all $from $list $to list
    }
    set list [lsort $list]
    foreach {from to} [lrevert $map] {
        regsub -all $from $list $to list
    }
    set list
 }

Roman numerals from integer:

 proc roman:numeral {i} {
    set res ""
    foreach {value roman} {
        1000 M 900 CM 500 D 400 CD 100 C 90 XC 50 L 40 XL 10 X 9 IX 5 V 4 IV 1 I} {
        while {$i>=$value} {
            append res $roman
            incr i -$value
        }
    }
    set res
 }

Roman numerals parsed into integer:

 proc roman:get {s} {
    array set r_v {M 1000 D 500 C 100 L 50 X 10 V 5 I 1}
    set last 99999; set res 0
    foreach i [split [string toupper $s] ""] {
        if [catch {set val $r_v($i)}] {
            error "un-Roman digit $i in $s"
        }
        incr res $val
        if {$val>$last} {incr res [expr -2*$last]}
        set last $val
    }
    set res
 }

Custom control structures

As "control structures" are really nothing special in Tcl, just a set of commands, it is easier than in most other languages to create one's own. For instance, if you would like to simplify the for loop

 for {set i 0} {$i < $max} {incr i} {...}

for the typical simple cases so you can write instead

 loop i 0 $max {...}

here is an implementation that even returns a list of the results of each iteration:

 proc loop {_var from to body} {
    upvar 1 $_var var
    set res {}
    for {set var $from} {$var < $to} {incr var} {lappend res [uplevel 1 $body]}
    return $res
 }

Using this, a string reverse function can be had as a one-liner:

 proc sreverse {str} {
    join [loop i 0 [string length $str] {string index $str end-$i}] ""
 }

Range-aware switch

Another example is the following range-aware switch variation. A range (numeric or strings) can be given as from..to, and the associated scriptlet gets executed if the tested value lies inside that range.
Like in switch, fall-through collapsing of several cases is indicated by "-", and "default" as final condition fires if none else did. Different from switch, numbers are compared by numeric value, no matter whether given as decimal, octal or hex.

 proc rswitch {value body} {
    set go 0
    foreach {cond script} $body {
        if {[regexp {(.+)\.\.(.+)} $cond -> from to]} {
            if {$value >= $from && $value <= $to} {incr go}
        } else {
            if {$value == $cond} {incr go}
        }
        if {$go && $script ne "-"} { ;#(2)
            uplevel 1 $script
            break
        }
    }
    if {$cond eq "default" && !$go} {uplevel 1 $script} ;#(1)
 }

Testing:

 % foreach i {A K c z 0 7} {
    puts $i
    rswitch $i {
        A..Z {puts upper}
        a..z {puts lower}
        0..9 {puts digit}
    }
 }
 A
 upper
 K
 upper
 c
 lower
 z
 lower
 0
 digit
 7
 digit

 % rswitch 0x2A {42 {puts magic} default {puts df}}
 magic

The K combinator

A very simple control structure (one might also call it a result dispatcher) is the K combinator, which is almost terribly simple:

 proc K {a b} {return $a}

It can be used in all situations where you want to deliver a result that is not the last. For instance, reading a file in one go:

 proc readfile filename {
    set f [open $filename]
    set data [read $f]
    close $f
    return $data
 }

can be simplified, without need for the data variable, to:

 proc readfile filename {
    K [read [set f [open $filename]]] [close $f]
 }

Another example, popping a stack:

 proc pop _stack {
    upvar 1 $_stack stack
    K [lindex $stack end] [set stack [lrange $stack 0 end-1]]
 }

This is in some ways similar to LISP's PROG1 construct: evaluate the contained expressions, and return the result of the first one.

Rational numbers

Rational numbers, a.k.a. fractions, can be thought of as pairs of integers {numerator denominator}, such that their "real" numerical value is numerator/denominator (and not in integer nor "double" division!).
They can be more precise than any "float" or "double" numbers on computers, as those can't exactly represent any fractions whose denominator isn't a power of 2 — consider 1⁄3, which can not at any precision be exactly represented as a floating-point number to base 2, nor as a decimal fraction (base 10), even if bignum.

An obvious string representation of a rational is of course "n/d". The following "constructor" does that, plus it normalizes the signs, reduces to lowest terms, and returns just the integer n if d==1:

 proc rat {n d} {
    if {!$d} {error "denominator can't be 0"}
    if {$d<0} {set n [- $n]; set d [- $d]}
    set g [gcd $n $d]
    set n [/ $n $g]
    set d [/ $d $g]
    expr {$d==1? $n: "$n/$d" }
 }

Conversely, this "deconstructor" splits zero or more rational or integer strings into num and den variables, such that [ratsplit 1/3 a b] assigns 1 to a and 3 to b:

 proc ratsplit args {
    foreach {r _n _d} $args {
        upvar 1 $_n n $_d d
        foreach {n d} [split $r /] break
        if {$d eq ""} {set d 1}
    }
 }

 #-- Four-species math on "rats":
 proc rat+ {r s} {
    ratsplit $r a b $s c d
    rat [+ [* $a $d] [* $c $b]] [* $b $d]
 }
 proc rat- {r s} {
    ratsplit $r a b $s c d
    rat [- [* $a $d] [* $c $b]] [* $b $d]
 }
 proc rat* {r s} {
    ratsplit $r a b $s c d
    rat [* $a $c] [* $b $d]
 }
 proc rat/ {r s} {
    ratsplit $r a b $s c d
    rat [* $a $d] [* $b $c]
 }

Arithmetical helper functions can be wrapped with func if they only consist of one call of expr:

 proc func {name argl body} {proc $name $argl [list expr $body]}

 #-- Greatest common denominator:
 func gcd {u v} {$u? [gcd [% $v $u] $u]: abs($v)}

 #-- Binary expr operators exported:
 foreach op {+ * / %} {func $op {a b} \$a$op\$b}

 #-- "-" can have 1 or 2 operands:
 func - {a {b ""}} {$b eq ""? -$a: $a-$b}

 #-- a little tester reports the unexpected:
 proc ? {cmd expected} {
    catch {uplevel 1 $cmd} res
    if {$res ne $expected} {puts "$cmd -> $res, expected $expected"}
 }

 #-- The test suite should silently pass when this file is sourced:
 ? {rat 42 6} 7
 ? {rat 1 -2} -1/2
 ? {rat -1 -2} 1/2
 ? \
 {rat 1 0} "denominator can't be 0"
 ? {rat+ 1/3 1/3} 2/3
 ? {rat+ 1/2 1/2} 1
 ? {rat+ 1/2 1/3} 5/6
 ? {rat+ 1 1/2} 3/2
 ? {rat- 1/2 1/8} 3/8
 ? {rat- 1/2 1/-8} 5/8
 ? {rat- 1/7 1/7} 0
 ? {rat* 1/2 1/2} 1/4
 ? {rat/ 1/4 1/4} 1
 ? {rat/ 4 -6} -2/3

Docstrings

Languages like Lisp and Python have the docstring feature, where a string in the beginning of a function can be retrieved for on-line (or printed) documentation. Tcl doesn't have this mechanism built-in (and it would be hard to do it exactly the same way, because everything is a string), but a similar mechanism can easily be adopted, and it doesn't look bad in comparison:

 - Common Lisp: (documentation 'foo 'function)
 - Python: foo.__doc__
 - Tcl: docstring foo

If the docstring is written in comments at the top of a proc body, it is easy to parse it out. In addition, for all procs, even without docstring, you get the "signature" (proc name and arguments with defaults). The code below also serves as usage example:

 proc docstring procname {
    # reports a proc's args and leading comments.
    # Multiple documentation lines are allowed.
    set res "{usage: $procname [uplevel 1 [list info args $procname]]}"
    # This comment should not appear in the docstring
    foreach line [split [uplevel 1 [list info body $procname]] \n] {
        if {[string trim $line] eq ""} continue
        if ![regexp {\s*#(.+)} $line -> line] break
        lappend res [string trim $line]
    }
    join $res \n
 }

 proc args procname {
    # Signature of a proc: arguments with defaults
    set res ""
    foreach a [info args $procname] {
        if [info default $procname $a default] {
            lappend a $default
        }
        lappend res $a
    }
    set res
 }

Testing:

 % docstring docstring
 usage: docstring procname
 reports a proc's args and leading comments.
 Multiple documentation lines are allowed.

 % docstring args
 usage: args procname
 Signature of a proc: arguments with defaults

Factorial

Factorial (n!) is a popular function with super-exponential growth. Mathematically put,

 0! = 1
 n! = n (n-1)!
if n > 0, else undefined.

In Tcl, we can have it pretty similarly:

 proc fact n {expr {$n<2? 1: $n * [fact [incr n -1]]}}

But this very soon crosses the limits of integers, giving wrong results. A math book showed me the Stirling approximation to n! for large n (at Tcl's precisions, "large" is > 20 ...), so I built that in:

 proc fact n {expr {
    $n<2? 1:
    $n>20? pow($n,$n)*exp(-$n)*sqrt(2*acos(-1)*$n):
    wide($n)*[fact [incr n -1]]}
 }

Just in case somebody needs approximated large factorials... But for n>143 we reach the domain limit of floating point numbers. In fact, the float limit is at n>170, so an intermediate result in the Stirling formula must have busted at 144. For such few values it is most efficient to just look them up in a pre-built table, as Tcllib's math::factorial does.

How big is A4?

Letter and Legal paper formats are popular in the US and other places. In Europe and elsewhere, the most widely used paper format is called A4. To find out how big a paper format is, one can measure an instance with a ruler, or look up appropriate documentation. The A formats can also be deduced from the following axioms:

 - A0 has an area of one square meter
 - A(n) has half the area of A(n-1)
 - The ratio between the longer and the shorter side of an A format is constant

How much this ratio is can easily be computed if we consider that A(n) is produced from A(n-1) by halving it parallel to the shorter side, so

 2a : b = b : a,  hence  2 a^2 = b^2,  hence  b = sqrt(2) a,  hence  b : a = sqrt(2) : 1

So here is my Tcl implementation, which returns a list of height and width in centimeters (10000 cm^2 = 1 m^2) with two fractional digits, which delivers a sufficient precision of 1/10 mm:

 proc paperA n {
    set w [expr {sqrt(10000/(pow(2,$n) * sqrt(2)))}]
    set h [expr {$w * sqrt(2)}]
    format "%.2f %.2f" $h $w
 }

 % paperA 4
 29.73 21.02

Bit vectors

Here is a routine for querying or setting single bits in vectors, where bits are addressed by non-negative integers.
Implementation is as a "little-endian" list of integers, where bits 0..31 are in the first list element, 32..63 in the second, etc.

Usage: bit varName position ?bitval?

If bitval is given, sets the bit at numeric position position to 1 if bitval != 0, else to 0; in any case returns the bit value at the specified position. If variable varName does not exist in the caller's scope, it will be created; if it is not long enough, it will be extended to hold at least $position+1 bits, e.g. bit foo 32 will turn foo into a list of two integers, if it was only one before. All bits are initialized to 0.

 proc bit {varName pos {bitval {}}} {
    upvar 1 $varName var
    if {![info exist var]} {set var 0}
    set element [expr {$pos/32}]
    while {$element >= [llength $var]} {lappend var 0}
    set bitpos [expr {1 << $pos%32}]
    set word [lindex $var $element]
    if {$bitval != ""} {
        if {$bitval} {
            set word [expr {$word | $bitpos}]
        } else {
            set word [expr {$word & ~$bitpos}]
        }
        lset var $element $word
    }
    expr {($word & $bitpos) != 0}
 }

 #---------------------- now testing...
 if {[file tail [info script]] == [file tail $argv0]} {
    foreach {test expected} {
        {bit foo 5 1} 1
        {set foo} 32
        {bit foo 32 1} {32 1}
    } {
        catch {eval $test} res
        puts $test:$res/$expected
    }
 }

This may be used for Boolean properties of numerically indexed sets of items. Example: An existence map of ZIP codes between 00000 and 99999 can be kept in a list of 3125 integers (where each element requires about 15 bytes overall), while implementing the map as an array would take 100000 * 42 bytes in the worst case — but still more than a bit vector if the population isn't extremely sparse. In that case, a list of 1-bit positions, retrieved with lsearch, might be more efficient in memory usage. Runtime of bit vector accesses is constant, except when a vector has to be extended to much larger length.
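To make the ZIP-code idea above concrete, here is a small usage sketch of the bit routine from this section; the specific ZIP codes are made up for the example, not taken from the original text:

 # Mark a few (hypothetical) ZIP codes as existing; bit index = numeric code:
 foreach zip {10001 60601 94103} {bit zipmap $zip 1}

 # Membership test: 1 if the code was marked, else 0
 bit zipmap 60601   ;# -> 1
 bit zipmap 12345   ;# -> 0

Since each bit costs only 1/32 of a list element, the whole 00000..99999 range fits in the 3125-integer list mentioned above, however many codes are actually marked.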
Bit vectors can also be used to indicate set membership (set operations would run faster if processing 32 bits in one go with bitwise operators (&, |, ~, ^)) — or pixels in a binary image, where each row could be implemented by a bitvector.

Here's a routine that returns the numeric indices of all set bits in a bit vector:

 proc bits bitvec {
    set res {}
    set pos 0
    foreach word $bitvec {
        for {set i 0} {$i<32} {incr i} {
            if {$word & 1<<$i} {lappend res $pos}
            incr pos
        }
    }
    set res
 }

 % bit foo 47 1
 1
 % bit foo 11 1
 1
 % set foo
 2048 32768
 % bits $foo
 11 47

Sieve of Eratosthenes: The following procedure exercises the bit vector functions by letting bits represent integers, and unsetting all that are divisible. The numbers of the bits finally still set are supposed to be primes, and returned:

 proc sieve max {
    set maxroot [expr {sqrt($max)}]
    set primes [string repeat " 0xFFFFFFFF" [expr {($max+31)/32}]]
    bit primes 0 0; bit primes 1 0
    for {set i [expr $max+1]} {$i<=(($max+31)/32)*32} {incr i} {
        bit primes $i 0 ;# mask out excess bits
    }
    for {set i 2} {$i<=$maxroot} {incr i} {
        if {[bit primes $i]} {
            for {set j [expr $i<<1]} {$j<=$max} {incr j $i} {
                bit primes $j 0
            }
        }
    }
    bits $primes
 }

 % time {set res [sieve 10000]}
 797000 microseconds per iteration

Here's code to count the number of 1-bits in a bit vector, represented as an integer list.
It does so by adding the values of the hex digits:

 proc bitcount intlist {
    array set bits {
        0 0  1 1  2 1  3 2  4 1  5 2  6 2  7 3
        8 1  9 2  a 2  b 3  c 2  d 3  e 3  f 4
    }
    set sum 0
    foreach int $intlist {
        foreach nybble [split [format %x $int] ""] {
            incr sum $bits($nybble)
        }
    }
    set sum
 }

Stacks and queues

Stacks and queues are containers for data objects with typical access methods:

 - push: add one object to the container
 - pop: retrieve and remove one object from the container

In Tcl it is easiest to implement stacks and queues with lists, and the push method is most naturally lappend, so we only have to code a single generic line for all stacks and queues:

 interp alias {} push {} lappend

It is pop operations in which stacks, queues, and priority queues differ:

 - in a stack, the most recently pushed object is retrieved and removed (last in first out, LIFO)
 - in a (normal) queue, it is the least recently pushed object (first in first out, FIFO)
 - in a priority queue, the object with the highest priority comes first.

Priority (a number) has to be assigned at pushing time — by pushing a list of two elements, the item itself and the priority, e.g.:

 push toDo [list "go shopping" 2]
 push toDo {"answer mail" 3}
 push toDo {"Tcl coding" 1} ;# most important thing to do

In a frequent parlance, priority 1 is the "highest", and the number increases for "lower" priorities — but you could push in an item with 0 for "ultrahigh" ;-)

Popping a stack can be done like this:

 proc pop name {
    upvar 1 $name stack
    set res [lindex $stack end]
    set stack [lrange $stack 0 end-1]
    set res
 }

Popping a queue is similarly structured, but with so different details that I found no convenient way to factor out things:

 proc qpop name {
    upvar 1 $name queue
    set res [lindex $queue 0]
    set queue [lrange $queue 1 end]
    set res
 }

Popping a priority queue requires sorting out which item has highest priority.
Sorting can be done when pushing, or when popping, and since our push is so nicely generic I prefer the second choice (as the number of pushes and pops should be about equal, it does not really matter). Tcl's lsort is stable, so items with equal priority will remain in the order in which they were queued:

 proc pqpop name {
    upvar 1 $name queue
    set queue [lsort -real -index 1 $queue]
    qpop queue ;# fall back to standard queue, now that it's sorted
 }

A practical application is e.g. in state space searching, where the kind of container of the to-do list determines the strategy:

 - stack is depth-first
 - (normal) queue is breadth-first
 - priority queue is any of the more clever ways: A*, Greedy, ...

Recent-use lists: A variation that can be used both in a stack or queue fashion is a list of values in order of their last use (which may come in handy in an editor to display the last edited files, for instance). Here, pushing has to be done by dedicated code because a previous instance would have to be removed:

 proc rupush {listName value} {
    upvar 1 $listName list
    if {![info exist list]} {set list {}}
    set pos [lsearch $list $value]
    set list [lreplace $list $pos $pos]
    lappend list $value
 }

 % rupush tmp hello
 hello
 % rupush tmp world
 hello world
 % rupush tmp again
 hello world again
 % rupush tmp world
 hello again world

The first element is the least recently, the last the most recently used. Elements are not removed by the popping, but (if necessary) when re-pushing. (One might truncate the list at front if it gets too long.)

Functions

Single-expression functions can be defined with the little func helper shown earlier in the Rational numbers section:

 proc func {name argl body} {proc $name $argl [list expr $body]}

Exposing expr's binary arithmetic operators as Tcl commands goes quite easy too:

 foreach op {+ * / %} {func $op {a b} "\$a $op \$b"}

For "-", we distinguish unary and binary form:

 func - {a {b ""}} {$b eq ""? -$a: $a-$b}

Having the modulo operator exposed, gcd now looks nicer:

 func gcd {u v} {$u? \
[gcd [% $v $u] $u]: abs($v)}

For unary not I prefer that name to "!", as it might also stand for factorial — and see the shortest function body I ever wrote :^) :

 func not x {!$x}

Experiments with Boolean functions

"NAND is not AND." Here are some Tcl codelets to demonstrate how all Boolean operations can be expressed in terms of the single NAND operator, which returns true if not both its two inputs are true (NOR would have done equally well). We have Boolean operators in expr, so here goes:

 proc nand {A B} {expr {!($A && $B)}}

The only unary operator NOT can be written in terms of nand:

 proc not {A} {nand $A $A}

.. and everything else can be built from them too:

 proc and {A B} {not [nand $A $B]}
 proc or  {A B} {nand [not $A] [not $B]}
 proc nor {A B} {not [or $A $B]}
 proc eq  {A B} {or [and $A $B] [nor $A $B]}
 proc ne  {A B} {nor [and $A $B] [nor $A $B]}

Here's some testing tools — to see whether an implementation is correct, look at its truth table, here done as the four results for A,B combinations 0,0 0,1 1,0 1,1 — side note: observe how easily functions can be passed in as arguments:

 proc truthtable f {
    set res {}
    foreach A {0 1} {
        foreach B {0 1} {
            lappend res [$f $A $B]
        }
    }
    set res
 }

 % truthtable and
 0 0 0 1
 % truthtable nand
 1 1 1 0
 % truthtable or
 0 1 1 1
 % truthtable nor
 1 0 0 0
 % truthtable eq
 1 0 0 1

To see how efficient the implementation is (in terms of NAND units used), try this, which relies on the fact that Boolean functions contain no lowercase letters apart from the operator names:

 proc nandcount f {
    regsub -all {[^a-z]} [info body $f] " " list
    set nums [string map {nand 1 not 1 and 2 nor 4 or 3 eq 6} $list]
    expr [join $nums +]
 }

As a very different idea, having nothing to do with NAND as elementary function, the following generic code "implements" Boolean functions very intuitively, by just giving their truth table for look-up at runtime:

 proc booleanFunction {truthtable a b} {
    lindex $truthtable [expr {!!$a+!!$a+!!$b}]
 }

 interp alias {} \
and {} booleanFunction {0 0 0 1}
 interp alias {} or   {} booleanFunction {0 1 1 1}
 interp alias {} nand {} booleanFunction {1 1 1 0}

Solving cryptarithms

Cryptarithms are puzzles where digits are represented by letters, and the task is to find out which. The following "General Problem Solver" (for small values of General) uses heavy metaprogramming. It

 - builds up a nest of foreachs suiting the problem,
 - quick-kills (with continue) to force unique values for the variables, and
 - returns the first solution found, or else an empty string:

 proc solve {problem {domain0 {0 1 2 3 4 5 6 7 8 9}}} {
    set vars [lsort -u [split [regsub -all {[^A-Z]} $problem ""] ""]]
    set map {= ==}
    set outers {}
    set initials [regexp -all -inline {[^A-Z]([A-Z])} /$problem]
    set pos [lsearch $domain0 0]
    set domain1 [lreplace $domain0 $pos $pos]
    foreach var $vars {
        append body "foreach $var \$domain[expr [lsearch $initials $var]>=0] \{\n"
        lappend map $var $$var
        foreach outer $outers {
            append body "if {$$var eq $$outer} continue\n"
        }
        lappend outers $var
        append epilog \}
    }
    set test [string map $map $problem]
    append body "if {\[expr $test\]} {return \[subst $test\]}" $epilog
    if 1 $body
 }

This works fine on some well-known cryptarithms:

 % solve SEND+MORE=MONEY
 9567+1085==10652
 % solve SAVE+MORE=MONEY
 9386+1076==10462
 % solve YELLOW+YELLOW+RED=ORANGE
 143329+143329+846==287504

Database experiments

A simple array-based database

There are lots of complex databases around. Here I want to explore how a database can be implemented in the Tcl spirit of simplicity, and how far that approach takes us. Consider the following model:

 - A database is a set of records
 - A record is a nonempty set of fields with a unique ID
 - A field is a pair of tag and nonempty value, both being strings

Fields can be added to a record at will. Note that, as we never specified what fields a record shall contain, we can add whatever we see fit.
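To make the model concrete, here is a minimal sketch of one record stored as array entries sharing an ID prefix; the ID, field names, and values are illustrative assumptions, not from the original text:

 # One record: array entries keyed "id,tag"; field names are made up.
 set id 1
 set db($id,author) "Stephen King"
 set db($id,title)  "It"

Each field is exactly a tag/value pair, and the comma-joined key gives every record its own namespace within the one array.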
For easier handling, it's a good idea to classify records somehow (we'll want to store more than books), so we add

 set db($id,isa) book

Retrieving: Real records may have empty fields, which we don't want to store. Retrieving fields that may not physically exist needs a tolerant access function:

 proc db'get {_db id field} {
    upvar $_db db
    if {[array names db $id,$field]=="$id,$field"} {
        return $db($id,$field)
    } else {return ""}
 }

Persistence: Databases are supposed to exist between sessions, so here's how to save a database to a file:

 set fp [open Library.db w]
 puts $fp [list array set db [array get db]]
 close $fp

and loading a database is even easier (on re-loading, better unset the array before):

 source Library.db

Iterating: You can walk through all records with array startsearch, array anymore, and array nextelement. But neither can you filter the keys you will get with a glob pattern, nor may you add or delete array elements in the loop — the search will be immediately terminated.

Tables as lists of lists

Tables are understood here as rectangular (matrix) arrangements of data in rows (one row per "item"/"record") and columns (one column per "field"/"element"). They are for instance the building blocks of relational databases and spreadsheets. In Tcl, a sensible implementation for compact data storage would be as a list of lists. This way, they are "pure values" and can be passed e.g. through functions that take a table and return a table. No con-/destructors are needed, in contrast to the heavierweight matrix in Tcllib. I know there are many table implementations in Tcl, but like so often I wanted to build one "with my bare hands" and as simple as possible. As you see below, many functionalities can be "implemented" by just using Tcl's list functions.

A nice table also has a header line that specifies the field names. So to create such a table with a defined field structure, but no contents yet, one just assigns the header list:

 set tbl { {firstname lastname phone}}

Note the double bracing, which makes sure tbl is a 1-element list.
Adding "records" to the table is as easy as

 lappend tbl {John Smith (123)456-7890}

Make sure the fields (cells) match those in the header. Here single bracing is correct. If a field content contains spaces, it must be quoted or braced too:

 lappend tbl {{George W} Bush 234-5678}

Sorting a table can be done with lsort -index, taking care that the header line stays on top:

 proc tsort args {
    set table [lindex $args end]
    set header [lindex $table 0]
    set res [eval lsort [lrange $args 0 end-1] [list [lrange $table 1 end]]]
    linsert $res 0 $header
 }

Removing a row (or contiguous sequence of rows) by numeric index is a job for lreplace:

 set tbl [lreplace $tbl $from $to]

Simple printing of such a table, a row per line, is easy with

 puts [join $tbl \n]

Accessing fields in a table is more fun with the field names than the numeric indexes, which is made easy by the fact that the field names are in the first row:

 proc t@ {tbl field} {lsearch [lindex $tbl 0] $field}

 % t@ $tbl phone
 2

You can then access cells:

 puts [lindex $tbl $rownumber [t@ $tbl lastname]]

and replace cell contents like this:

 lset tbl $rownumber [t@ $tbl phone] (222)333-4567

Here is how to filter a table by giving pairs of field name and glob-style expression — in addition to the header line, all rows that satisfy at least one of those come through (you can force AND behavior by just nesting such calls):

 proc trows {tbl args} {
    set conditions {}
    foreach {field condition} $args {
        lappend conditions [t@ $tbl $field] $condition
    }
    set res [list [lindex $tbl 0]]
    foreach row [lrange $tbl 1 end] {
        foreach {index condition} $conditions {
            if [string match $condition [lindex $row $index]] {
                lappend res $row
                break ;# one hit is sufficient
            }
        }
    }
    set res
 }

 % trows $tbl lastname Sm*
 {firstname lastname phone} {John Smith (123)456-7890}

This filters (and, if wanted, rearranges) columns, sort of what is called a "view":

 proc tcols {tbl args} {
    set indices {}
    foreach field $args {lappend indices [t@ $tbl $field]}
    set res {}
foreach row $tbl { set newrow {} foreach index $indices {lappend newrow [lindex $row $index]} lappend res $newrow } set res } Programming Languages Laboratory[edit] In the following few chapters you'll see how easy it is to emulate or explore other programming languages with Tcl. GOTO: a little state machine[edit] The GOTO "jumping" instruction is considered harmful in programming for many years now, but still it might be interesting to experiment with. Tcl has no goto command, but it can easily be created. The following code was created in the Tcl chatroom, instigated by the quote: "A computer is a state machine. Threads are for people who can't program state machines." So here is one model of a state machine in ten lines of code. The "machine" itself takes a list of alternating labels and state code; if a state code does not end in a goto or break, the same state will be repeated as long as not left, with goto or break (implicit endless loop). The goto command is defined "locally", and deleted after leaving the state machine — it is not meaningfully used outside of it. Execution starts at the first of the states. proc statemachine states { array set S $states proc goto label { uplevel 1 set this $label return -code continue } set this [lindex $states 0] while 1 {eval $S($this)} rename goto {} } Testing: a tiny state machine that greets you as often as you wish, and ends if you only hit Return on the "how often?" question: statemachine { 1 { puts "how often?" gets stdin nmax if {$nmax eq ""} {goto 3} set n 0 goto 2 } 2 { if {[incr n] > $nmax} {goto 1} puts "hello" } 3 {puts "Thank you!"; break} } Playing Assembler[edit] In this weekend fun project to emulate machine language, I picked those parts of Intel 8080A/8085 Assembler (because I had a detailed reference handy) that are easily implemented and still somehow educational (or nostalgic ;-). Of course this is no real assembler. 
The memory model is constant-size instructions (strings in array elements), which are implemented as Tcl procs. So an "assembler" program in this plaything will run even slower than in pure Tcl, and consume more memory — while normally you associate speed and conciseness with "real" assembler code. But it looks halfway like the real thing: you get sort of an assembly listing with symbol table, and can run it — I'd hardly start writing an assembler in C, but in Tcl it's fun for a sunny Sunday afternoon... } namespace eval asm { proc asm body { variable mem catch {unset mem} ;# good for repeated sourcing foreach line [split $body \n] { foreach i {label op args} {set $i ""} regexp {([^;]*);} $line -> line ;# strip off comments regexp {^ *(([A-Z0-9]+):)? *([A-Z]*) +(.*)} [string toupper $line]\ -> - label op args puts label=$label,op=$op,args=$args if {$label!=""} {set sym($label) $PC} if {$op==""} continue if {$op=="DB"} {set mem($PC) [convertHex $args]; incr PC; continue} if {$op=="EQU"} {set sym($label) [convertHex $args]; continue} if {$op=="ORG"} {set PC [convertHex $args]; continue} regsub -all ", *" $args " " args ;# normalize commas set mem($PC) "$op $args" incr PC } substituteSymbols sym dump sym } proc convertHex s { if [regexp {^([0-9A-F]+)H$} [string trim $s] -> s] {set s [expr 0x$s]} set s } proc substituteSymbols {_sym} { variable mem upvar $_sym sym foreach i [array names mem] { set tmp [lindex $mem($i) 0] foreach j [lrange $mem($i) 1 end] { if {[array names sym $j] eq $j} {set j $sym($j)} lappend tmp $j } set mem($i) $tmp } } proc dump {_sym} { variable mem upvar $_sym sym foreach i [lsort -integer [array names mem]] { puts [format "%04d %s" $i $mem($i)] } foreach i [lsort [array names sym]] { puts [format "%-10s: %04x" $i $sym($i)] } } proc run { {pc 255}} { variable mem foreach i {A B C D E Z} {set ::$i 0} while {$pc>=0} { incr pc #puts "$mem($pc)\tA:$::A B:$::B C:$::C D:$::D E:$::E Z:$::Z" eval $mem($pc) } } #----------------- "machine opcodes" 
implemented as procs proc ADD {reg reg2} {set ::Z [incr ::$reg [set ::$reg2]]} proc ADI {reg value} {set ::Z [incr ::$reg $value]} proc CALL {name} {[string tolower $name] $::A} proc DCR {reg} {set ::Z [incr ::$reg -1]} proc INR {reg} {set ::Z [incr ::$reg]} proc JMP where {uplevel 1 set pc [expr $where-1]} proc JNZ where {if $::Z {uplevel 1 JMP $where}} proc JZ where {if !$::Z {uplevel 1 JMP $where}} proc MOV {reg adr} {variable mem; set ::$reg $mem($adr)} proc MVI {reg value} {set ::$reg $value} } Now testing: asm::asm { org 100 ; the canonical start address in CP/M jmp START ; idiomatic: get over the initial variable(s) DONE: equ 0 ; warm start in CP/M ;-) MAX: equ 5 INCR: db 2 ; a variable (though we won't vary it) ;; here we go... START: mvi c,MAX ; set count limit mvi a,0 ; initial value mov b,INCR LOOP: call puts ; for now, fall back to Tcl for I/O inr a add a,b ; just to make adding 1 more complicated dcr c ; counting down.. jnz LOOP ; jump on non-zero to LOOP jmp DONE ; end of program end } The mov b,INCR part is an oversimplification. For a real 8080, one would have to say LXI H,INCR ; load double registers H+L with the address INCR MOV B,M ; load byte to register B from the address pointed to in HL Since the pseudo-register M can also be used for writing back, it cannot be implemented by simply copying the value. Rather, one could use read and write traces on variable M, causing it to load from, or store to, mem($HL). Maybe another weekend... Functional programming (Backus 1977)[edit] John Backus turned 80 these days. For creating FORTRAN and the BNF style of language description, he received the ACM Turing Award in 1977. In his Turing Award lecture, Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs. (Comm. ACM 21.8, Aug. 1978, 613-641) he developed an amazing framework for functional programming, from theoretical foundations to implementation hints, e.g. 
for installation, user privileges, and system self-protection. In a nutshell, his FP system comprises a small fixed set of objects, functions, and functional forms that combine functions into new ones; fertile ground for experiments, especially on weekends. I started with Backus' first Functional Program example, Def Innerproduct = (Insert +) o (ApplyToAll x) o Transpose, which is shorter and simpler, but meddles more directly with the stack. An important functional form is the conditional, which at Backus looks like p1 -> f; p2 -> g; h meaning: if p1 applies, use f; else if p2 applies, use g; otherwise use h. Reusable functional components[edit] Say you want to make a multiplication table for an elementary school kid near you. Easily done in a few lines of Tcl code: proc multable {rows cols} { set res "" for {set i 1} {$i <= $rows} {incr i} { for {set j 1} {$j <= $cols} {incr j} { append res [format %4d [expr {$i*$j}]] } append res \n } set res } The code does not puts its results directly, but returns them as a string — you might want to do other things with it, e.g. save it to a file for printing. Testing:
% multable 3 10
   1   2   3   4   5   6   7   8   9  10
   2   4   6   8  10  12  14  16  18  20
   3   6   9  12  15  18  21  24  27  30
Or print the result directly from wish: catch {console show} puts "[multable 3 10]" Here's a different way to do it à la functional programming: proc multable2 {rows cols} { formatMatrix %4d [outProd * [iota 1 $rows] [iota 1 $cols]] } The body is nice and short, but consists of all unfamiliar commands. They are however better reusable than the multable proc above.
The first formats a matrix (a list of lists to Tcl) with newlines and aligned columns for better display: proc formatMatrix {fm matrix} { join [lmap row $matrix {join [lmap i $row {format $fm $i}] ""}] \n } Short again, and slightly cryptic, as is the "outer product" routine, which takes a function f and two vectors, and produces a matrix where f was applied to every pair of a x b — in APL they had special compound operators for this job, in this case "°.x": proc outProd {f a b} { lmap i $a {lmap j $b {$f $i $j}} } Again, lmap (the collecting foreach) figures prominently, so here it is in all its simplicity: proc lmap {_var list body} { upvar 1 $_var var set res {} foreach var $list {lappend res [uplevel 1 $body]} set res } #-- We need multiplication from expr exposed as a function: proc * {a b} {expr {$a * $b}} #-- And finally, iota is an integer range generator: proc iota {from to} { set res {} while {$from <= $to} {lappend res $from; incr from} set res } With these parts in place, we can see that multable2 works as we want: % multable2 3 10 1 2 3 4 5 6 7 8 9 10 2 4 6 8 10 12 14 16 18 20 3 6 9 12 15 18 21 24 27 30 So why write six procedures, where one did the job already? A matter of style and taste, in a way — multable is 10 LOC and depends on nothing but Tcl, which is good; multable2 describes quite concisely what it does, and builds on a few other procs that are highly reusable. Should you need a unit matrix (where the main diagonal is 1, and the rest is 0), just call outProd with a different function (equality, ==): % outProd == [iota 1 5] [iota 1 5] {1 0 0 0 0} {0 1 0 0 0} {0 0 1 0 0} {0 0 0 1 0} {0 0 0 0 1} which just requires expr's equality to be exposed too: proc == {a b} {expr {$a == $b}} One of the fascinations of functional programming is that one can do the job in a simple and clear way (typically a one-liner), while using a collection of reusable building-blocks like lmap and iota. 
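The reuse goes further: with a different operator exposed, the very same machinery yields an addition table. Note that + as a proc is an assumption made here for the sketch (the text above only exposes * and ==):

```tcl
#-- Exposing expr's + as a function, analogous to * above:
proc + {a b} {expr {$a + $b}}

#-- outProd, iota and formatMatrix are reused unchanged:
puts [formatMatrix %4d [outProd + [iota 1 3] [iota 1 5]]]
```

Only the operator argument changed; none of the building blocks had to be touched.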
And formatMatrix and outProd are so general that one might include them in some library, while the task of producing a multiplication table may not come up any more for a long time... Modelling an RPN language[edit] Tcl follows strictly the Polish notation, where an operator or function always precedes its arguments. It is however easy to build an interpreter for a language in Reverse Polish Notation (RPN) like Forth, Postscript, or Joy, and experiment with it. The "runtime engine" is just called "r" (not to be confused with the R language), and it boils down to a three-way switch done for each word, in eleven lines of code:
- "tcl" evaluates the top of stack as a Tcl script
- known words in the ::C array are recursively evaluated in "r"
- other words are just pushed
Joy's rich quoting for types ([list], {set}, "string", 'char) conflicts with the Tcl parser, so lists in "r" are {braced} if their length isn't 1, and (parenthesized) if it is — but the word shall not be evaluated now. This looks better to me than /slashing as in Postscript. As everything is a string, and to Tcl "a" is {a} is a, Joy's polymorphy has to be made explicit. I added converters between characters and integers, and between strings and lists (see the dictionary below). For Joy's sets I haven't bothered yet — they are restricted to the domain 0..31, probably implemented with bits in a 32-bit word. Far as this is from Joy, it was mostly triggered by the examples in Manfred von Thun's papers, so I tongue-in-cheek still call it "Pocket Joy" — it was for me, at last, on the iPaq... The test suite at end should give many examples of what one can do in "r". } proc r args { foreach a $args { dputs [info level]:$::S//$a if {$a eq "tcl"} { eval [pop] } elseif [info exists ::C($a)] { eval r $::C($a) } else {push [string trim $a ()]} } set ::S } # That's it.
Stack (list) and Command array are global variables: set S {}; unset C #-- A tiny switchable debugger: proc d+ {} {proc dputs s {puts $s}} proc d- {} {proc dputs args {}} d- ;#-- initially, debug mode off Definitions are in Forth style — ":" as initial word, as they look much more compact than Joy's DEFINE n == args; proc : {n args} {set ::C($n) $args} expr functionality is exposed for binary operators and one-arg functions: proc 2op op { set t [pop] push [expr {[pop]} $op {$t}] } foreach op {+ - * / > >= != <= <} {: $op [list 2op $op] tcl} : = {2op ==} tcl proc 1f f {push [expr $f\([pop])]} foreach f {abs double exp int sqrt sin cos acos tan} {: $f [list 1f $f] tcl} interp alias {} pn {} puts -nonewline #----- The dictionary has all one-liners: : . {pn "[pop] "} tcl : .s {puts $::S} tcl : ' {push [scan [pop] %c]} tcl ;# char -> int : ` {push [format %c [pop]]} tcl ;# int -> char : and {2op &&} tcl : at 1 - swap {push [lindex [pop] [pop]]} tcl : c {set ::S {}} tcl ;# clear stack : choice {choice [pop] [pop] [pop]} tcl : cleave {cleave [pop] [pop] [pop]} tcl : cons {push [linsert [pop] 0 [pop]]} tcl : dup {push [set x [pop]] $x} tcl : dupd {push [lindex $::S end-1]} tcl : emit {pn [format %c [pop]]} tcl : even odd not : explode {push [split [pop] ""]} tcl ;# string -> char list : fact 1 (*) primrec : filter split swap pop : first {push [lindex [pop] 0]} tcl : fold {rfold [pop] [pop] [pop]} tcl : gcd swap {0 >} {swap dupd rem swap gcd} (pop) ifte : has swap in : i {eval r [pop]} tcl : ifte {rifte [pop] [pop] [pop]} tcl : implode {push [join [pop] ""]} tcl ;# char list -> string : in {push [lsearch [pop] [pop]]} tcl 0 >= : map {rmap [pop] [pop]} tcl : max {push [max [pop] [pop]]} tcl : min {push [min [pop] [pop]]} tcl : newstack c : not {1f !} tcl : odd 2 rem : of swap at : or {2op ||} tcl : pop (pop) tcl : pred 1 - : primrec {primrec [pop] [pop] [pop]} tcl : product 1 (*) fold : qsort (lsort) tcl : qsort1 {lsort -index 0} tcl : rem {2op %} tcl : rest {push [lrange 
[pop] 1 end]} tcl : reverse {} swap (swons) step : set {set ::[pop] [pop]} tcl : $ {push [set ::[pop]]} tcl : sign {0 >} {0 <} cleave - : size {push [llength [pop]]} tcl : split {rsplit [pop] [pop]} tcl : step {step [pop] [pop]} tcl : succ 1 + : sum 0 (+) fold : swap {push [pop] [pop]} tcl : swons swap cons : xor != Helper functions written in Tcl: proc rifte {else then cond} { eval r dup $cond eval r [expr {[pop]? $then: $else}] } proc choice {z y x} { push [expr {$x? $y: $z}] } proc cleave { g f x} { eval [list r $x] $f [list $x] $g } proc max {x y} {expr {$x>$y?$x:$y}} proc min {x y} {expr {$x<$y? $x:$y}} proc rmap {f list} { set res {} foreach e $list { eval [list r $e] $f lappend res [pop] } push $res } proc step {f list} { foreach e $list {eval [list r ($e)] $f} } proc rsplit {f list} { foreach i {0 1} {set $i {}} foreach e $list { eval [list r $e] $f lappend [expr {!![pop]}] $e } push $0 $1 } proc primrec {f init n} { if {$n>0} { push $n while {$n>1} { eval [list r [incr n -1]] $f } } else {push $init} } proc rfold {f init list} { push $init foreach e $list {eval [list r $e] $f} } #------------------ Stack routines proc push args { foreach a $args {lappend ::S $a} } proc pop {} { if [llength $::S] { K [lindex $::S end] \ [set ::S [lrange $::S 0 end-1]] } else {error "stack underflow"} } proc K {a b} {set a} #------------------------ The test suite: proc ? {cmd expected} { catch {uplevel 1 $cmd} res if {$res ne $expected} {puts "$cmd->$res, not $expected"} } ? {r 2 3 +} 5 ? {r 2 *} 10 ? {r c 5 dup *} 25 : sqr dup * : hypot sqr swap sqr + sqrt ? {r c 3 4 hypot} 5.0 ? {r c {1 2 3} {dup *} map} { {1 4 9}} ? {r size} 3 ? {r c {2 5 3} 0 (+) fold} 10 ? {r c {3 4 5} product} 60 ? {r c {2 5 3} 0 {dup * +} fold} 38 ? {r c {1 2 3 4} dup sum swap size double /} 2.5 ? {r c {1 2 3 4} (sum) {size double} cleave /} 2.5 : if0 {1000 >} {2 /} {3 *} ifte ? {r c 1200 if0} 600 ? {r c 600 if0} 1800 ? {r c 42 sign} 1 ? {r c 0 sign} 0 ? {r c -42 sign} -1 ? {r c 5 fact} 120 ? 
{r c 1 0 and} 0 ? {r c 1 0 or} 1 ? {r c 1 0 and not} 1 ? {r c 3 {2 1} cons} { {3 2 1}} ? {r c {2 1} 3 swons} { {3 2 1}} ? {r c {1 2 3} first} 1 ? {r c {1 2 3} rest} { {2 3}} ? {r c {6 1 5 2 4 3} {3 >} filter} { {6 5 4}} ? {r c 1 2 {+ 20 * 10 4 -} i} {60 6} ? {r c 42 succ} 43 ? {r c 42 pred} 41 ? {r c {a b c d} 2 at} b ? {r c 2 {a b c d} of} b ? {r c 1 2 pop} 1 ? {r c A ' 32 + succ succ `} c ? {r c {a b c d} reverse} { {d c b a}} ? {r c 1 2 dupd} {1 2 1} ? {r c 6 9 gcd} 3 ? {r c true yes no choice} yes ? {r c false yes no choice} no ? {r c {1 2 3 4} (odd) split} { {2 4} {1 3}} ? {r c a {a b c} in} 1 ? {r c d {a b c} in} 0 ? {r c {a b c} b has} 1 ? {r c {a b c} e has} 0 ? {r c 3 4 max} 4 ? {r c 3 4 min} 3 ? {r c hello explode reverse implode} olleh : palindrome dup explode reverse implode = ? {r c hello palindrome} 0 ? {r c otto palindrome} 1 #-- reading (varname $) and setting (varname set) global Tcl vars set tv 42 ? {r c (tv) $ 1 + dup (tv) set} 43 ? {expr $tv==43} 1 Tacit programming[edit] The J programming language is the "blessed successor" to APL, where "every function is an infix or prefix operator", x?y (dyadic) or ?y (monadic), for ? being any pre- or user-defined function). "Tacit programming" (tacit: implied; indicated by necessary connotation though not expressed directly) is one of the styles possible in J, and means coding by combining functions, without reference to argument names. This idea may have been first brought up in Functional programming (Backus 1977), if not in Forth and Joy, and it's an interesting simplification compared to the lambda calculus. For instance, here's a breathtakingly short J program to compute the mean of a list of numbers: mean=.+/%# Let's chew this, byte by byte :) =. 
is assignment to a local variable ("mean") which can be called +/%# is the "function body" + (dyadic) is addition / folds the operator on its left over the list on its right +/ hence being the sum of a list % (dyadic) is division, going double on integer arguments when needed # (monadic) is tally, like Tcl's [llength] resp. [string length] Only implicitly present is a powerful function combinator called "fork". When J parses three operators in a row, gfh, where f is dyadic and g and h are monadic, they are combined like the following Tcl version does: proc fork {f g h x} {$f [$g $x] [$h $x]} In other words, f is applied to the results of applying g and h to the single argument. Note that +/ is considered one operator, which applies the "adverb" folding to the "verb" addition (one might well call it "sum"). When two operands occur together, the "hook" pattern is implied, which might in Tcl be written as: proc hook {f g x} {$f $x [$g $x]} As KBK pointed out in the Tcl chatroom, the "hook" pattern corresponds to Schönfinkel/Curry's S combinator (see Hot Curry and Combinator Engine), while "fork" is called S' there. Unlike in earlier years when I was playing APL, this time my aim was not to parse and emulate J in Tcl — I expected hard work for a dubitable gain, and this is a weekend fun project after all. I rather wanted to explore some of these concepts and how to use them in Tcl, so that in slightly more verbose words I could code (and call) Def mean = fork /. sum llength following Backus' FP language with the "Def" command. So let's get the pieces together. My "Def" creates an interp alias, which is a good and simple Tcl way to compose partial scripts (the definition, here) with one or more arguments, also known as "currying": proc Def {name = args} {eval [list interp alias {} $name {}] $args} The second parameter, "=", is for better looks only and evidently never used. 
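To see the hook pattern act on a concrete value before moving on, here is a minimal sketch; the helper procs - and floor1 are assumptions defined only for this example (the text introduces its own operator procs a bit later, in slightly different form):

```tcl
proc hook {f g x} {$f $x [$g $x]}

#-- helpers, for this sketch only:
proc - {a b} {expr {$a - $b}}
proc floor1 x {expr {floor($x)}}

#-- f x (g x): x minus its floor is the fractional part
puts [hook - floor1 3.75]   ;# prints 0.75
```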
Testing early and often is a virtue, as is documentation — to make the following code snippets clearer, I tuned my little tester for better looks, so that the test cases in the source code also serve as well-readable examples — they look like comments but are code! The cute name "e.g." was instigated by the fact that in J, "NB." is used as comment indicator, both being well known Latin abbreviations: proc e.g. {cmd -> expected} { catch {uplevel 1 $cmd} res if {$res != $expected} {puts "$cmd -> $res, not $expected"} } Again, the " ->" argument is for eye-candy only — but it feels better to me at least. See the examples soon to come. For recursive functions and other arithmetic, func makes better reading, by accepting expr language in the body: proc func {name argl body} {proc $name $argl [list expr $body]} We'll use this to turn expr's infix operators into dyadic functions, plus the "slashdot" operator that makes division always return a real number, hence the dot: foreach op {+ - * /} {func $op {a b} "\$a $op \$b"} e.g. {+ 1 2} -> 3 e.g. {/ 1 2} -> 0 ;# integer division func /. {a b} {double($a)/$b} e.g. {/. 1 2} -> 0.5 ;# "real" division #-- Two abbreviations for frequently used list operations: proc head list {lindex $list 0} e.g. {head {a b c}} -> a proc tail list {lrange $list 1 end} e.g. {tail {a b c}} -> {b c} For "fold", this time I devised a recursive version: func fold {neutral op list} { $list eq [] ? $neutral : [$op [head $list] [fold $neutral $op [tail $list]]] } e.g. {fold 0 + {1 2 3 4}} -> 10 #-- A "Def" alias does the same job: Def sum = fold 0 + e.g. {sum {1 2 3 4}} -> 10 #-- So let's try to implement "mean" in tacit Tcl! Def mean = fork /. sum llength e.g. {mean {1 2 3 40}} -> 11.5 Tacit enough (one might have picked fancier names like +/ for "sum" and # as alias for llength), but in principle it is equivalent to the J version, and doesn't name a single argument.
Also, the use of llength demonstrates that any good old Tcl command can go in here, not just the artificial Tacit world that I'm just creating... In the next step, I want to reimplement the "median" function, which for a sorted list returns the central element if its length is odd, or the mean of the two elements adjacent to the (virtual) center for even length. In J, it looks like this: median=.(mean@:{~medind@#)@sortu medind=.((<.,>.)@half) ` half @.(2&|) half=.-:@<: NB. halve one less than rt. argument sortu=.{~/: NB. sort upwards which may better explain why I wouldn't want to code in J :^) J has ASCIIfied the zoo of APL strange character operators, at the cost of using braces and brackets as operators too, without regard for balancing, and extending them with dots and colons, so e.g.
- monadic: negate; dyadic: minus
-. monadic: not
-: monadic: halve
J code sometimes really looks like an accident in a keyboard factory... I won't go into all details of the above code, just some:
@ ("atop") is strong linkage, sort of functional composition
<. (monadic) is floor()
>. (monadic) is ceil()
(<.,>.) is building a list of the floor and the ceiling of its single argument, the comma being the concatenation operator here, comparable to Backus' "construction" or Joy's cleave.
The pattern a ` b @. c is a kind of conditional in J, which could in Tcl be written if {[$c $x]} {$a $x} else {$b $x} but my variant of the median algorithm doesn't need a conditional — for lists of odd length it just uses the central index twice, which is idempotent for "mean", even if a tad slower. J's "from" operator { takes zero or more elements from a list, possibly repeatedly. For porting this, lmap is a good helper, even though not strictly functional: proc lmap {_v list body} { upvar 1 $_v v set res {} foreach v $list {lappend res [uplevel 1 $body]} set res } e.g.
{lmap i {1 2 3 4} {* $i $i}} -> {1 4 9 16} #-- So here's my 'from': proc from {indices list} {lmap i $indices {lindex $list $i}} e.g. {from {1 0 0 2} {a b c}} -> {b a a c} We furthermore borrow some more content from expr: func ceil x {int(ceil($x))} func floor x {int(floor($x))} e.g. {ceil 1.5} -> 2 e.g. {floor 1.5} -> 1 e.g. {fork list floor ceil 1.5} -> {1 2} We'll need functional composition, and here's a recursive de-luxe version that takes zero or more functions, hence the name o*: func o* {functions x} { $functions eq []? $x : [[head $functions] [o* [tail $functions] $x]] } e.g. {o* {} hello,world} -> hello,world Evidently, identity, as could be written proc I x {set x}, is the neutral element of variadic functional composition, when called with no functions at all. If composite functions like 'fork' are arguments to o*, we'd better let unknown know that we want auto-expansion of first word: proc know what {proc unknown args $what\n[info body unknown]} know { set cmd [head $args] if {[llength $cmd]>1} {return [eval $cmd [tail $args]]} } Also, we need a numeric sort that's good for integers as well as reals ("Def" serves for all kinds of aliases, not just combinations of functions): Def sort = lsort -real e.g. {sort {2.718 10 1}} -> {1 2.718 10} e.g. {lsort {2.718 10 1}} -> {1 10 2.718} ;# lexicographic #-- And now for the median test: Def median = o* {mean {fork from center sort}} Def center = o* {{fork list floor ceil} {* 0.5} -1 llength} func -1 x {$x - 1} e.g. {-1 5} -> 4 ;# predecessor function, at least for integers #-- Trying the whole thing out: e.g. {median {1 2 3 4 5}} -> 3 e.g. {median {1 2 3 4}} -> 2.5 As this file gets tacitly sourced, I am pretty confident that I've reached my goal for this weekend — even though my median doesn't remotely look like the J version: it is as "wordy" as Tcl usually is.
But the admittedly still very trivial challenge was met in truly function-level style, concerning the definitions of median, center and mean — no variable left behind. And that is one, and not the worst, Tcl way of Tacit programming... vector (all of same dimensions, element-wise) Here's experiments how to do this in Tcl. First lmap is a collecting foreach — it maps the specified body over a list: proc lmap {_var list body} { upvar 1 $_var var set res {} foreach var $list {lappend res [uplevel 1 $body]} set res } #-- We need basic scalar operators from expr factored out: foreach op {+ - * / % ==} {proc $op {a b} "expr {\$a $op \$b}"} The following generic wrapper takes one binary operator (could be any suitable function) and two arguments, which may be scalars, vectors, or even matrices (lists of lists), as it recurses as often as needed. Note that as my lmap above only takes one list, the two-list case had to be made explicit with foreach. proc vec {op a b} { if {[llength $a] == 1 && [llength $b] == 1} { $op $a $b } elseif {[llength $a]==1} { lmap i $b {vec $op $a $i} } elseif {[llength $b]==1} { lmap i $a {vec $op $i $b} } elseif {[llength $a] == [llength $b]} { set res {} foreach i $a j $b {lappend res [vec $op $i $j]} set res } else {error "length mismatch [llength $a] != [llength $b]"} } Tests are done with this minimal "framework": proc e.g. {cmd -> expected} { catch $cmd res if {$res ne $expected} {puts "$cmd -> $res, not $expected"} } Scalar + Scalar e.g. {vec + 1 2} -> 3 Scalar + Vector e.g. {vec + 1 {1 2 3 4}} -> {2 3 4 5} Vector / Scalar e.g. {vec / {1 2 3 4} 2.} -> {0.5 1.0 1.5 2.0} Vector + Vector e.g. {vec + {1 2 3} {4 5 6}} -> {5 7 9} Matrix * Scalar e.g. {vec * {{1 2 3} {4 5 6}} 2} -> {{2 4 6} {8 10 12}} Multiplying a 3x3 matrix with another: e.g. {vec * {{1 2 3} {4 5 6} {7 8 9}} {{1 0 0} {0 1 0} {0 0 1}}} -> \ {{1 0 0} {0 5 0} {0 0 9}} The dot product of two vectors is a scalar. 
That's easily had too, given a sum function: proc sum list {expr [join $list +]+0} sum [vec * {1 2} {3 4}] should result in 11 (= (1*3)+(2*4)). Here's a little application for this: a vector factorizer that produces the list of divisors for a given integer. For this we again need a 1-based integer range generator: proc iota1 x { set res {} for {set i 1} {$i<=$x} {incr i} {lappend res $i} set res } e.g. {iota1 7} -> {1 2 3 4 5 6 7} #-- We can compute the modulo of a number by its index vector: e.g. {vec % 7 [iota1 7]} -> {0 1 1 3 2 1 0} #-- and turn all elements where the remainder is 0 to 1, else 0: e.g. {vec == 0 [vec % 7 [iota1 7]]} -> {1 0 0 0 0 0 1} At this point, a number is prime if the sum of the latest vector is 2. But we can also multiply out the 1s with the divisors from the index vector: e.g. {vec * [iota1 7] [vec == 0 [vec % 7 [iota1 7]]]} -> {1 0 0 0 0 0 7} #-- Hence, 7 is only divisible by 1 and itself, so it is a prime. e.g. {vec * [iota1 6] [vec == 0 [vec % 6 [iota1 6]]]} -> {1 2 3 0 0 6} So 6 is divisible by 2 and 3; non-zero elements in (lrange $divisors 1 end-1) gives the "proper" divisors. And three nested calls to vec are sufficient to produce the divisors list :) Just for comparison, here's how it looks in J: iota1=.>:@i. iota1 7 1 2 3 4 5 6 7 f3=.iota1*(0&=@|~iota1) f3 7 1 0 0 0 0 0 7 f3 6 1 2 3 0 0 6 Integers as Boolean functions[edit] Boolean functions, in which arguments and result are in the domain {true, false}, or {1, 0} as expr has it, and operators are e.g. {AND, OR, NOT} resp. {&&, ||, !}, can be represented by their truth table, which for example for {$a && $b} looks like:
a b a&&b
0 0 0
1 0 0
0 1 0
1 1 1
As all but the last column just enumerate all possible combinations of the arguments, first column least-significant, the full representation of a&&b is the last column, a sequence of 0s and 1s which can be seen as binary integer, reading from bottom up: 1 0 0 0 == 8.
So 8 is the associated integer of a&&b, but not only of this — we get the same integer for !(!a || !b), but then again, these functions are equivalent. To try this in Tcl, here's a truth table generator that I borrowed from a little proving engine, but without the lsort used there — the order of cases delivered makes best sense when the first bit is least significant: } proc truthtable n { # make a list of 2**n lists, each with n truth values 0|1 set res {} for {set i 0} {$i < (1<<$n)} {incr i} { set case {} for {set j 0} {$j <$n} {incr j } { lappend case [expr {($i & (1<<$j)) != 0}] } lappend res $case } set res } Now we can write n(f), which, given a Boolean function of one or more arguments, returns its characteristic number, by iterating over all cases in the truth table, and setting a bit where appropriate: proc n(f) expression { set vars [lsort -unique [regsub -all {[^a-zA-Z]} $expression " "]] set res 0 set bit 1 foreach case [truthtable [llength $vars]] { foreach $vars $case break set res [expr $res | ((($expression)!=0)*$bit)] incr bit $bit ;#-- <<1, or *2 } set res } Experimenting: % n(f) {$a && !$a} ;#-- contradiction is always false 0 % n(f) {$a || !$a} ;#-- tautology is always true 3 % n(f) {$a} ;#-- identity is boring 2 % n(f) {!$a} ;#-- NOT 1 % n(f) {$a && $b} ;#-- AND 8 % n(f) {$a || $b} ;#-- OR 14 % n(f) {!($a && $b)} ;#-- de Morgan's laws: 7 % n(f) {!$a || !$b} ;#-- same value = equivalent 7 So the characteristic integer is not the same as the Goedel number of a function, which would encode the structure of operators used there. % n(f) {!($a || $b)} ;#-- interesting: same as unary NOT 1 % n(f) {!$a && !$b} 1 Getting more daring, let's try a distributive law: % n(f) {$p && ($q || $r)} 168 % n(f) {($p && $q) || ($p && $r)} 168 Daring more: what if we postulate the equivalence? 
% n(f) {(($p && $q) || ($p && $r)) == ($p && ($q || $r))} 255 Without proof, I just claim that every function of n arguments whose characteristic integer is 2^(2^n) - 1 is a tautology (or a true statement — all bits are 1). Conversely, postulating non-equivalence turns out to be false in all cases, hence a contradiction: % n(f) {(($p && $q) || ($p && $r)) != ($p && ($q || $r))} 0 So again, we have a little proving engine, and simpler than last time. In the opposite direction, we can call a Boolean function by its number and provide one or more arguments — if we give more than the function can make sense of, non-false excess arguments lead to constant falsity, as the integer can be considered zero-extended: proc f(n) {n args} { set row 0 set bit 1 foreach arg $args { set row [expr {$row | ($arg != 0)*$bit}] incr bit $bit } expr !!($n &(1<<$row)) } Trying again, starting at OR (14): % f(n) 14 0 0 0 % f(n) 14 0 1 1 % f(n) 14 1 0 1 % f(n) 14 1 1 1 So f(n) 14 indeed behaves like the OR function — little surprise, as its truth table (the results of the four calls), read bottom-up, 1110, is decimal 14 (8 + 4 + 2). Another test, inequality: % n(f) {$a != $b} 6 % f(n) 6 0 0 0 % f(n) 6 0 1 1 % f(n) 6 1 0 1 % f(n) 6 1 1 0 Trying to call 14 (OR) with more than two args: % f(n) 14 0 0 1 0 % f(n) 14 0 1 1 0 % f(n) 14 1 1 1 0 The constant 0 result is a subtle indication that we did something wrong :) Implication (if a then b, a -> b) can in expr be expressed as $a <= $b — just note that the "arrow" seems to point the wrong way. Let's try to prove "Modus Barbara" — "if a implies b and b implies c, then a implies c": % n(f) {(($a <= $b) && ($b <= $c)) <= ($a <= $c)} 255 With less abstract variable names, one might as well write % n(f) {(($Socrates <= $human) && ($human <= $mortal)) <= ($Socrates <= $mortal)} 255 But this has been verified long ago, by Socrates' death :^) Let unknown know[edit] To extend Tcl, i.e.
to make it understand and do things that before raised an error, the easiest way is to write a proc. Any proc must however be called in compliance with Tcl's fundamental syntax: first word is the command name, then the arguments separated by whitespace. Deeper changes are possible with the unknown command, which is called if a command name is, well, unknown, and in the standard version tries to call executables, to auto-load scripts, or do other helpful things (see the file init.tcl). One could edit that file (not recommended), or rename unknown to something else and provide one's own unknown handler, that falls through to the original proc if unsuccessful, as shown in Radical language modification. Here is a simpler way that allows extending unknown "in place" and incrementally: We let unknown "know" what action it shall take under what conditions. The know command is called with a condition that should result in an integer when given to expr, and a body that will be executed if cond results in nonzero, returning the last result if not terminated with an explicit return. In both cond and body you may use the variable args that holds the problem command unknown was invoked with. proc know what { if ![info complete $what] {error "incomplete command(s) $what"} proc unknown args $what\n[info body unknown] } ;# RS The extending code what is prepended to the previous unknown body. This means that subsequent calls to know stack up, last condition being tried first, so if you have several conditions that fire on the same input, let them be "known" from generic to specific. Here's a little debugging helper, to find out why "know" conditions don't fire: proc know? {} {puts [string range [info body unknown] 0 511]} Now testing what new magic this handful of code allows us to do. This simple example invokes expr if the "command" is digestible for it: % know {if {![catch {expr $args} res]} {return $res}} % 3+4 7 If we had no if[edit] Even if a fundamental command like if were missing, Tcl can bring it to life...
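Indeed, even a conditional can be rebuilt from more elementary commands. The following iff is only a sketch (its name and exact formulation are assumptions): expr reduces the condition to 0 or 1, which then selects one of two scripts by list index:

```tcl
proc iff {cond then {else {}}} {
    #-- ($cond)!=0 yields 0 or 1, indexing into the list {else then}:
    uplevel 1 [lindex [list $else $then] \
                      [uplevel 1 [list expr ($cond)!=0]]]
}

iff {3 > 2} {puts yes} {puts no}   ;# prints yes
```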
Brute force meets Goedel

Never afraid of anything (as long as everything is a string), a discussion in the Tcl chatroom brought me to try the following: let the computer write ("discover") its own software, only given specifications of input and output. In truly brute force, up to half a million programs are automatically written and (a suitable subset of them) tested to find the one that passes the tests.

To make things easier, this flavor of "software" is in a very simple RPN language similar to, but much smaller than, the one presented in Playing bytecode: stack-oriented like Forth, each operation being one byte (ASCII char) wide, so we don't even need whitespace in between. Arguments are pushed on the stack, and the result of the "software", the stack at end, is returned. For example, in

ebc ++ 1 2 3

execution of the script "++" should sum its three arguments (1+(2+3)), and return 6.

Here's the "bytecode engine" (ebc: execute byte code), which retrieves the implementations of bytecodes from the global array cmd:

proc ebc {code argl} {
    set ::S $argl
    foreach opcode [split $code ""] {
        eval $::cmd($opcode)
    }
    set ::S
}

Let's now populate the bytecode collection. The set of all defined bytecodes will be the alphabet of this little RPN language. It may be interesting to note that this language has truly minimal syntax — the only rule is: each script ("word") composed of any number of bytecodes is well-formed. It just remains to check whether it does what we want.
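For comparison, the same minimal engine is easy to restate in Python. This sketch is the editor's, not from the original; a dispatch dictionary stands in for the global cmd array, but the one-character opcodes are kept:

```python
def ebc(code, args):
    """Execute a string of one-character RPN opcodes on an argument list."""
    stack = list(args)
    binops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for opcode in code:
        if opcode in binops:
            b, a = stack.pop(), stack.pop()      # consume two, push one
            stack.append(binops[opcode](a, b))
        elif opcode == "d":                      # dup
            stack.append(stack[-1])
        elif opcode == "s":                      # swap
            stack[-1], stack[-2] = stack[-2], stack[-1]
        else:
            raise ValueError(f"unknown opcode {opcode!r}")
    return stack

print(ebc("++", [1, 2, 3]))  # [6]
```

As in the Tcl version, "++" on 1 2 3 folds the stack down to the single value 6.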
Binary expr operators can be treated generically:

foreach op {+ - * /} {
    set cmd($op) [string map "@ $op" {swap; push [expr {[pop] @ [pop]}]}]
}

#-- And here's some more hand-crafted bytecode implementations
set cmd(d) {push [lindex $::S end]} ;# dup
set cmd(q) {push [expr {sqrt([pop])}]}
set cmd(^) {push [swap; expr {pow([pop],[pop])}]}
set cmd(s) swap

#-- The stack routines imply a global stack ::S, for simplicity
interp alias {} push {} lappend ::S
proc pop {} {K [lindex $::S end] [set ::S [lrange $::S 0 end-1]]}
proc K {a b} {set a}
proc swap {} {push [pop] [pop]}

Instead of enumerating all possible bytecode combinations beforehand (which grows exponentially by alphabet and word length), I use this code from Mapping words to integers to step over their sequence, uniquely indexed by an increasing integer. This is something like the Goedel number of the corresponding code. Note that with this mapping, all valid programs (bytecode sequences) correspond to one unique non-negative integer, and longer programs have higher integers associated:

proc int2word {int alphabet} {
    set word ""
    set la [llength $alphabet]
    while {$int > 0} {
        incr int -1
        set word [lindex $alphabet [expr {$int % $la}]]$word
        set int [expr {$int/$la}]
    }
    set word
}

Now out for discovery! The toplevel proc takes a paired list of inputs and expected output. It tries in brute force all programs up to the specified maximum Goedel number and returns the first one that complies with all tests:

proc discover0 args {
    set alphabet [lsort [array names ::cmd]]
    for {set i 1} {$i<10000} {incr i} {
        set code [int2word $i $alphabet]
        set failed 0
        foreach {inputs output} $args {
            catch {ebc $code $inputs} res
            if {$res != $output} {incr failed; break}
        }
        if {!$failed} {return $code}
    }
}

But iterating over many words is still pretty slow, at least on my 200 MHz box, and many useless "programs" are tried. For instance, if the test has two inputs and wants one output, the stack balance is -1 (one less out than in).
This is provided e.g. by one of the binary operators +-*/. But the program "dd" (which just duplicates the top of stack twice) has a stack balance of +2, and hence can never pass the example test. So, on a morning dogwalk, I thought out this strategy:

- measure the stack balance for each bytecode
- iterate once over very many possible programs, computing their stack balance
- partition them (put into distinct subsets) by stack balance
- perform each 'discovery' call only on programs of matching stack balance

Here's this version. Single bytecodes are executed, only to measure their effect on the stack. The balance of longer programs can be computed by just adding the balances of their individual bytecodes:

proc bc'stack'balance bc {
    set stack {1 2} ;# a bytecode will consume at most two elements
    expr {[llength [ebc $bc $stack]]-[llength $stack]}
}
proc stack'balance code {
    set res 0
    foreach bc [split $code ""] {incr res $::balance($bc)}
    set res
}

The partitioning will run for some seconds (depending on nmax — I tried with several ten thousand), but it's needed only once. The size of partitions is further reduced by excluding programs which contain redundant code that will have no effect, like swapping the stack twice, or swapping before an addition or multiplication. A program without such extravaganzas is shorter and yet does the same job, so it will have been tested earlier anyway.
proc partition'programs nmax {
    global cmd partitions balance
    #-- make a table of bytecode stack balances
    set alphabet [array names cmd]
    foreach bc $alphabet {
        set balance($bc) [bc'stack'balance $bc]
    }
    array unset partitions ;# for repeated sourcing
    for {set i 1} {$i<=$nmax} {incr i} {
        set program [int2word $i $alphabet]
        #-- "peephole optimizer" - suppress code with redundancies
        set ok 1
        foreach sequence {ss s+ s*} {
            if {[string first $sequence $program]>=0} {set ok 0}
        }
        if {$ok} {
            lappend partitions([stack'balance $program]) $program
        }
    }
    set program ;# see how far we got
}

The discoverer, Second Edition, determines the stack balance of the first test, and tests only those programs of the same partition:

proc discover args {
    global partitions
    foreach {in out} $args break
    set balance [expr {[llength $out]-[llength $in]}]
    foreach code $partitions($balance) {
        set failed 0
        foreach {input output} $args {
            catch {ebc $code $input} res
            if {$res != $output} {incr failed; break}
        }
        if {!$failed} {return $code}
    }
}

But now for the trying. The partitioning helps very much in reducing the number of candidates. For the 1000 programs with Goedel numbers 1..1000, it retains only a fraction for each stack balance:

-2: 75
-1: 155 (this and 0 will be the most frequently used)
 0: 241
 1: 274
 2: 155
 3: 100

Simple starter — discover the successor function (add one):

% discover 5 6 7 8
dd/+

Not bad: duplicate the number twice, divide by itself to get the constant 1, and add that to the original number. However, it fails to work if we add the successor of 0 as another test case:

% discover 5 6 7 8 0 1

Nothing coming — because zero division made the last test fail. If we give only this test, another solution is found:

% discover 0 1
d^

"Take x to the x-th power" — pow(0,0) gives indeed 1, but that's not the generic successor function.

More experiments to discover the hypot() function:

% discover {4 3} 5
d/+

Hm — the 3 is duplicated, divided by itself (=1), which is added to 4.
Try to swap the inputs:

% discover {3 4} 5
q+

Another dirty trick: get the square root of 4, add to 3 — presto, 5. The correct hypot() function would be d*sd*+q but my program set (nmax=30000) ends at 5-byte codes, so even by giving another test to force discovery of the real thing, it would never reach a 7-byte code. OK, I bite the bullet, set nmax to 500000, wait 5 minutes for the partitioning, and then:

% discover {3 4} 5 {11 60} 61
sd/+

Hm.. cheap trick again — it was discovered that the solution is just the successor of the second argument. Like in real life, test cases have to be carefully chosen. So I tried with another a^2+b^2=c^2 set, and HEUREKA! (after 286 seconds):

% discover {3 4} 5 {8 15} 17
d*sd*+q

After partitioning, 54005 programs had the -1 stack balance, and the correct result was on position 48393 in that list... And finally, with the half-million set of programs, here's a solution for the successor function too:

% discover 0 1 4711 4712
ddd-^+

"d-" subtracts top of stack from itself, pushing 0; the second duplicate to the 0-th power gives 1, which is added to the original argument. After some head-scratching, I find it plausible, and possibly it is even the simplest possible solution, given the poorness of this RPN language.

Lessons learned:

- Brute force is simple, but may demand very much patience (or faster hardware)
- The sky, not the skull, is the limit on what we can do with Tcl :)

Object orientation

OO (Object Orientation) is a style in programming languages popular since Smalltalk, and especially C++, Java, etc. For Tcl, there have been several OO extensions/frameworks (incr Tcl, XOTcl, stooop, Snit to name a few) in different flavors, but none can be considered a standard followed by a majority of users.
However, most of these share these features:

- classes can be defined, with variables and methods
- objects are created as instances of a class
- objects are called with messages to perform a method

Of course, there are some who say: "Advocating object-orientated programming is like advocating pants-oriented clothing: it covers your behind, but often doesn't fit best" ...

Bare-bones OO

Quite a bunch of what is called OO can be done in pure Tcl without a "framework", only that the code might look clumsy and distracting. Just choose how to implement instance variables:

- in global variables or namespaces
- or just as parts of a transparent value, with TOOT

The task of frameworks, be they written in Tcl or C, is just to hide away the gory details of the implementation — in other words, sugar it :) On the other hand, one understands a clockwork best when it's outside the clock, and all parts are visible — so to get a good understanding of OO, it might be most instructive to look at a simple implementation.
As an example, here's a Stack class with push and pop methods, and an instance variable s — a list that holds the stack's contents:

namespace eval Stack {set n 0}

proc Stack::Stack {} { #-- constructor
    variable n
    set instance [namespace current]::[incr n]
    namespace eval $instance {variable s {}}
    interp alias {} $instance {} ::Stack::do $instance
}

The interp alias makes sure that calling the object's name, like

::Stack::1 push hello

is understood and rerouted as a call to the dispatcher below:

::Stack::do ::Stack::1 push hello

The dispatcher imports the object's variables (only s here) into local scope, and then switches on the method name:

proc Stack::do {self method args} { #-- Dispatcher with methods
    upvar #0 ${self}::s s
    switch -- $method {
        push {eval lappend s $args}
        pop {
            if ![llength $s] {error "stack underflow"}
            K [lindex $s end] [set s [lrange $s 0 end-1]]
        }
        default {error "unknown method $method"}
    }
}
proc K {a b} {set a}

A framework would just have to make sure that the above code is functionally equivalent to, e.g. (in a fantasy OO style):

class Stack {
    variable s {}
    method push args {eval lappend s $args}
    ...
}

Now testing in an interactive tclsh:

% set s [Stack::Stack] ;#-- constructor
::Stack::1 ;#-- returns the generated instance name
% $s push hello
hello
% $s push world
hello world
% $s pop
world
% $s pop
hello
% $s pop
stack underflow ;#-- clear enough error message
% namespace delete $s ;#-- "destructor"

TOOT: transparent OO for Tcl

Transparent OO for Tcl, or TOOT for short, is a very amazing combination of Tcl's concept of transparent values, and the power of OO concepts. In TOOT, the values of objects are represented as a list of length 3: the class name (so much for "runtime type information" :-), a "|" as separator and indicator, and the values of the object, e.g.

{class | {values of the object}}

Here's my little take on toot in a nutshell. Classes in C++ started out as structs, so I take a minimal struct as example, with generic get and set methods.
We will export the get and set methods:

namespace eval toot {namespace export get set}

proc toot::struct {name members} {
    namespace eval $name {namespace import -force ::toot::*}
    #-- membership information is kept in an alias:
    interp alias {} ${name}::@ {} lsearch $members
}

The two generic accessor functions will be inherited by "struct"s:

proc toot::get {class value member} {
    lindex $value [${class}::@ $member]
}

The set method does not change the instance (it couldn't, as it sees it only "by value") — it just returns the new composite toot object, for the caller to do with it what he wants:

proc toot::set {class value member newval} {
    ::set pos [${class}::@ $member]
    list $class | [lreplace $value $pos $pos $newval]
}

For the whole thing to work, here's a simple overloading of unknown — see "Let unknown know". It augments the current unknown code, at the top, with a handler for

{class | values} method args

patterns, which converts it to the form

::toot::(class)::(method) (class) (values) (args)

and returns the result of calling that form:

proc know what {proc unknown args $what\n[info body unknown]}

Now to use it (I admit the code is no easy reading):

know {
    set first [lindex $args 0]
    if {[llength $first]==3 && [lindex $first 1] eq "|"} {
        set class [lindex $first 0]
        return [eval ::toot::${class}::[lindex $args 1] \
            $class [list [lindex $first 2]] [lrange $args 2 end]]
    }
}

Testing: we define a "struct" named foo, with two obvious members:

toot::struct foo {bar grill}

Create an instance as pure string value:

set x {foo | {hello world}}
puts [$x get bar] ;# -> hello (value of the "bar" member)

Modify part of the foo, and assign it to another variable:

set y [$x set grill again]
puts $y ;# -> foo | {hello again}

Struct-specific methods can be just procs in the right namespace. The first and second arguments are the class (disregarded here, as the dash shows) and the value, the rest is up to the coder.
This silly example demonstrates member access and some string manipulation:

proc toot::foo::upcase {- values which string} {
    string toupper [lindex $values [@ $which]]$string
}
puts [$y upcase grill !] ;# -> AGAIN!

A little deterministic Turing machine

At university, I never learned much about Turing machines. Only decades later, a hint in the Tcl chatroom pointed me to an assignment to implement a Deterministic Turing Machine (i.e. one with at most one rule per state and input character), which gives clear instructions and two test cases for input and output, so I decided to try my hand in Tcl.

Rules in this little challenge are of the form a bcD e, where

- a is the state in which they can be applied
- b is the character that must be read from tape if this rule is to apply
- c is the character to write to the tape
- D is the direction to move the tape after writing (R(ight) or L(eft))
- e is the state to transition to after the rule was applied

Here's my naive implementation, which takes the tape just as the string it initially is. I only had to take care that when moving beyond its ends, I had to attach a space (written as _) on that end, and adjust the position pointer when at the beginning. Rules are also taken as strings, whose parts can easily be extracted with string index — as it's used so often here, I alias it to @.

proc dtm {rules tape} {
    set state 1
    set pos 0
    while 1 {
        set char [@ $tape $pos]
        foreach rule $rules {
            if {[@ $rule 0] eq $state && [@ $rule 2] eq $char} {
                #puts rule:$rule,tape:$tape,pos:$pos,char:$char
                #-- Rewrite tape at head position.
                set tape [string replace $tape $pos $pos [@ $rule 3]]
                #-- Move tape Left or Right as specified in rule.
                incr pos [expr {[@ $rule 4] eq "L"? -1: 1}]
                if {$pos == -1} {
                    set pos 0
                    set tape _$tape
                } elseif {$pos == [string length $tape]} {
                    append tape _
                }
                set state [@ $rule 6]
                break
            }
        }
        if {$state == 0} break
    }
    #-- Highlight the head position on the tape.
    string trim [string replace $tape $pos $pos \[[@ $tape $pos]\]] _
}
interp alias {} @ {} string index

Test data from the assignment:

set rules {
    {1 00R 1}
    {2 01L 0}
    {1 __L 2}
    {2 10L 2}
    {2 _1L 0}
    {1 11R 1}
}
set tapes {
    0
    10011
    1111
}
set rules2 {
    {3 _1L 2}
    {1 _1R 2}
    {1 11L 3}
    {2 11R 2}
    {3 11R 0}
    {2 _1L 1}
}
set tapes2 _

Testing:

foreach tape $tapes {puts [dtm $rules $tape]}
puts *
puts [dtm $rules2 $tapes2]

reports the results as wanted in the paper, on stdout:

>tclsh turing.tcl
[_]1
1[0]100
[_]10000
*
1111[1]1

Streams

Reading a file in Tcl is usually done in one of two ways:

- read $fp returns the whole contents, which then can be processed;
- while {[gets $fp line]>-1} {...} reads line by line, interleaved with processing

The second construct may be less efficient, but is robust for gigabyte-sized files. A simpler example is pipes in Unix/DOS (use TYPE for cat there). A stream can be built from open/gets/close: a generator proc that delivers one more line per call, and closes the file when it runs out:

proc readline {filename {fp ""}} {
    ;# proc name reconstructed; the original name was lost in extraction
    if {$fp==""} {
        remember fp [set fp [open $filename]]
    }
    if {[gets $fp res]<0} {
        remember fp [close $fp] ;# which returns an empty string ;-)
    } elseif {$res==""} {set res " "} ;# not end of stream!
    set res
}
proc remember {argn value} {
    # - rewrite a proc's default arg with given value
    ...
}

[Hit Enter after every line, and q Enter to quit.]

Playing with Laws of Form

After many years, I re-read

G. Spencer-Brown, "Laws of Form". New York: E.P. Dutton 1979

which is sort of a mathematical thriller, if you will. Bertrand Russell commented that the author "has revealed a new calculus, of great power and simplicity" (somehow sounds like Tcl ;^). In a very radical simplification, a whole world is built up by two operators, juxtaposition without visible symbol (which could be likened to or) and a overbar-hook (with the meaning of not) that I can't type here — it's a horizontal stroke over zero or more operands, continued at right by a vertical stroke going down to the baseline. In these Tcl experiments, I use "" for "" and angle-brackets <> for the overbar-hook (with zero or more operands in between).
One point that was new for me is that the distinction between operators and operands is not cast in stone. Especially constants (like "true" and "false" in Boolean algebras) can be equally well expressed as neutral elements of operators, if these are considered variadic, and having zero arguments. This makes sense, even in Tcl, where one might implement them as

proc and args {
    foreach arg $args {if {![uplevel 1 expr $arg]} {return 0}}
    return 1
}
proc or args {
    foreach arg $args {if {[uplevel 1 expr $arg]} {return 1}}
    return 0
}

which, when called with no arguments, return 1 or 0, respectively. So [or] == 0 and [and] == 1. In Spencer-Brown's terms, [] (which is "", the empty string with no arguments) is false ("nil" in LISP), and [<>] is the negation of "", i.e. true. His two axioms are:

<><> == <>    "to recall is to call -- (1 || 1) == 1"
<<>> ==       "to recross is not to cross -- !!0 == 0"

and these can be implemented by a string map that is repeated as long as it makes any difference (sort of a trampoline) to simplify any expression consisting only of operators and constants (which are operators with zero arguments):

proc lf'simplify expression {
    while 1 {
        set res [string map {<><> <> <<>> ""} $expression]
        if {$res eq $expression} {return $res}
        set expression $res
    }
}

Testing:

% lf'simplify <<><>><>
<>

which maps <><> to <>, <<>> to "", and returns <> for "true".

% lf'simplify <a>a
<a>a

In the algebra introduced here, with a variable "a", no further simplification was so far possible. Let's change that — "a" can have only two values, "" or <>, so we might try to solve the expression by assuming all possible values for a, and see if they differ.
If they don't, we have found a fact that isn't dependent on the variable's value, and the resulting constant is returned, otherwise the unsolved expression:

proc lf'solve {expression var} {
    set results {}
    foreach value {"" <>} {
        set res [lf'simplify [string map [list $var $value] $expression]]
        if {![in $results $res]} {lappend results $res}
        if {[llength $results] > 1} {return $expression}
    }
    set results
}

with a helper function in that reports containment of an element in a list:

proc in {list element} {expr {[lsearch -exact $list $element] >= 0}}

Testing:

% lf'solve <a>a a
<>

which means, in expr terms, {(!$a || $a) == 1}, for all values of a. In other words, a tautology.

All of Boole's algebra can be expressed in this calculus:

* (1) not a       == !$a       == <a>
* (2) a or b      == $a || $b  == ab
* (3) a and b     == $a && $b  == <<a><b>>
* (4) a implies b == $a <= $b  == <a>b

We can test it with the classic "ex contradictione quodlibet" (ECQ) example — "if p and not p, then q" for any q:

% lf'solve <<p><<p>>>q p
q

So formally, q is true, whatever it is :)

If this sounds overly theoretic, here's a tricky practical example in puzzle solving, Lewis Carroll's last sorites (pp. 123f.).
The task is to conclude something from the following premises:

- The only animals in this house are cats
- Every animal is suitable for a pet, that loves to gaze at the moon
- When I detest an animal, I avoid it
- No animals are carnivorous, unless they prowl at night
- No cat fails to kill mice
- No animals ever take to me, except what are in this house
- Kangaroos are not suitable for pets
- None but carnivora kill mice
- I detest animals that do not take to me
- Animals that prowl at night always love to gaze at the moon

These are encoded to the following one-letter predicates:

- a - avoided by me
- c - cat
- d - detested by me
- h - house, in this
- k - kill mice
- m - moon, love to gaze at
- n - night, prowl at
- p - pet, suitable for
- r - (kanga)roo
- t - take to me
- v - (carni)vorous

So the problem set can be restated, in Spencer-Brown's terms, as

<h>c <m>p <d>a <v>n <c>k <t>h <r><p> <k>v td <n>m

I first don't understand why all premises can be just written in a row, which amounts to implicit "or", but it seems to work out well. As we've seen that <x>x is true for any x, we can cancel out such tautologies. For this, we reformat the expression to a list of values of type x or !x, that is in turn dumped into a local array for existence checking. And when both x and !x exist, they are removed from the expression:

proc lf'cancel expression {
    set e2 [string map {"< " ! "> " ""} [split $expression ""]]
    foreach term $e2 {if {$term ne ""} {set a($term) ""}}
    foreach var [array names a ?] {
        if [info exists a(!$var)] {
            set expression [string map [list <$var> "" $var ""] $expression]
        }
    }
    set expression
}

puts [lf'cancel {<h>c <m>p <d>a <v>n <c>k <t>h <r><p> <k>v td <n>m}]

which results in:

a <r>

translated back: "I avoid it, or it's not a kangaroo", or, reordered, "<r> a" which by (4) means, "All kangaroos are avoided by me".
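The fixed-point rewriting that drives lf'simplify, applying the two axioms until nothing changes, carries over directly to other languages. Here is a small Python restatement (an editor's sketch; the function name is not from the original):

```python
def lf_simplify(expression):
    """Apply the two Laws of Form axioms until a fixed point is reached:
    '<><>' -> '<>'  (to recall is to call)
    '<<>>' -> ''    (to recross is not to cross)
    """
    while True:
        res = expression.replace("<><>", "<>").replace("<<>>", "")
        if res == expression:   # fixed point: no axiom applies any more
            return res
        expression = res

print(lf_simplify("<<><>><>"))  # <>
```

The while-until-unchanged shape is exactly the "trampoline" the Tcl version builds with string map.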
A little IRC chat bot

Here is a simple example of a "chat bot" — a program that listens on an IRC chatroom, and sometimes also says something, according to its programming. The following script

- connects to channel #tcl on IRC
- listens to what is said
- if someone mentions its name (minibot), tries to parse the message and answer.

#!/usr/bin/env tclsh
set ::server irc.freenode.org
set ::chan   #tcl
set ::me     minibot

proc recv {} {
    gets $::fd line
    puts $line
    # handle PING messages from server
    if {[lindex [split $line] 0] eq "PING"} {
        send "PONG [info hostname] [lindex [split $line] 1]"; return
    }
    if {[regexp {:([^!]*)![^ ].* +PRIVMSG ([^ :]+) +(.*[Mm]inibot)(.+)} $line -> \
            nick target msg cmd]} {
        if {$nick eq "ijchain"} {regexp {<([^>]+)>(.+)} $msg -> nick msg}
        set hit 0
        foreach pattern [array names ::patterns] {
            if [string match "*$pattern*" $cmd] {
                set cmd [string trim $cmd {.,:? }]
                if [catch {mini eval $::patterns($pattern) $cmd} res] {
                    set res $::errorInfo
                }
                foreach line [split $res \n] {
                    send "PRIVMSG $::chan :$line"
                }
                incr hit
                break
            }
        }
        if !$hit {send "PRIVMSG $::chan :Sorry, no idea."}
    }
}

#----------- Patterns for response:
set patterns(time) {clock format [clock sec] ;#}
set patterns(expr) safeexpr
proc safeexpr args {expr [string map {\[ ( \] ) expr ""} $args]}
set patterns(eggdrop) {set _ "Please check" ;#}
set patterns(toupper) string
set patterns(Windows) {set _ "I'd prefer not to discuss Windows..." ;#}
set {patterns(translate "good" to Russian)} {set _ \u0425\u043E\u0440\u043E\u0448\u043E ;#}
set patterns(Beijing) {set _ \u5317\u4EAC ;#}
set patterns(Tokyo) {set _ \u4E1C\u4EAC ;#}
set {patterns(your Wiki page)} {set _ ;#}
set patterns(zzz) {set _ "zzz well!" ;#}
set patterns(man) safeman
proc safeman args {return[lindex $args 1].htm}
set {patterns(where can I read about)} gotowiki
proc gotowiki args {return "Try[lindex $args end]"}
set patterns(thank) {set _ "You're welcome." ;#}
set patterns(worry) worry
proc worry args {
    return "Why do [string map {I you my your your my you me} $args]?"
}

#-- let the show begin... :^)
interp create -safe mini
foreach i {safeexpr safeman gotowiki worry} {
    interp alias mini $i {} $i
}
proc in {list element} {expr {[lsearch -exact $list $element]>=0}}
proc send str {puts $::fd $str; flush $::fd}

set ::fd [socket $::server 6667]
fconfigure $fd -encoding utf-8
send "NICK minibot"
send "USER $::me 0 * :PicoIRC user"
send "JOIN $::chan"
fileevent $::fd readable recv
vwait forever

Examples from the chat:

suchenwi minibot, which is your Wiki page?
<minibot>
suchenwi ah, thanks
suchenwi minibot expr 6*7
<minibot> 42
suchenwi minibot, what's your local time?
<minibot> Sun Oct 21 01:26:59 (MEZ) - Mitteleurop. Sommerzeit 2007
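The bot's core, scanning a pattern table and running the first handler whose pattern matches the message, is a dispatch idiom worth isolating. This Python sketch is the editor's, with made-up handlers, and uses fnmatch in place of Tcl's [string match]:

```python
from fnmatch import fnmatchcase

# Hypothetical handlers -- the real bot maps patterns to Tcl snippets
# evaluated in a safe interpreter.
patterns = {
    "expr": lambda cmd: str(eval(cmd.split("expr", 1)[1])),  # unsafe outside a sandbox!
    "thank": lambda cmd: "You're welcome.",
}

def respond(cmd):
    for pattern, handler in patterns.items():
        if fnmatchcase(cmd, f"*{pattern}*"):   # like [string match *$pattern*]
            return handler(cmd)
    return "Sorry, no idea."

print(respond("minibot expr 6*7"))    # 42
print(respond("minibot sing a song")) # Sorry, no idea.
```

As in the Tcl script, the fall-through case produces the "Sorry, no idea." reply; a real deployment would sandbox the expr handler, just as the bot uses interp create -safe.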
http://en.wikibooks.org/wiki/Programming:Tcl_Tcl_examples
NAME
     posix_openpt -- open a pseudo-terminal device

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <stdlib.h>
     #include <fcntl.h>

     int
     posix_openpt(int oflag);

DESCRIPTION
     The posix_openpt() function allocates a new pseudo-terminal and
     establishes a connection with its master device.  A slave device shall
     be created in /dev/pts.  After the pseudo-terminal has been allocated,
     the slave device should have the proper permissions before it can be
     used (see grantpt(3)).  The name of the slave device can be determined
     by calling ptsname(3).

     The oflag argument can contain a combination of the following flags:

     O_RDWR    Open for reading and writing.

     O_NOCTTY  If set, posix_openpt() shall not cause the terminal device to
               become the controlling terminal for the process.

     The posix_openpt() function shall fail when oflag contains other values.

RETURN VALUES
     Upon successful completion, the posix_openpt() function shall allocate a
     new pseudo-terminal device and return a non-negative integer
     representing a file descriptor, which is connected to its master
     device.  Otherwise, -1 shall be returned and errno set to indicate the
     error.

ERRORS
     The posix_openpt() function shall fail if:

     [ENFILE]  The system file table is full.

     [EINVAL]  The value of oflag is not valid.

     [EAGAIN]  Out of pseudo-terminal resources.

SEE ALSO
     pts(4), ptsname(3), tty(4)

STANDARDS
     The posix_openpt() function conforms to IEEE Std 1003.1-2001
     (``POSIX.1'').

HISTORY
     The posix_openpt() function appeared in FreeBSD 5.0.  In FreeBSD 8.0,
     this function was changed to a system call.

NOTES
     The flag O_NOCTTY is included for compatibility; in FreeBSD, opening a
     terminal does not cause it to become a process's controlling terminal.

AUTHORS
     Ed Schouten <ed@FreeBSD.org>
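The allocation sequence the page describes (master via posix_openpt(), slave appearing under /dev/pts, permissions handled by grantpt()) is wrapped by Python's standard pty module, which gives a quick way to watch it in action. This example is an editor's addition, not part of the manual page:

```python
import os
import pty

# pty.openpty() performs the posix_openpt()/grantpt()/unlockpt() sequence
# internally and returns file descriptors for the master and slave ends.
master_fd, slave_fd = pty.openpty()
slave_name = os.ttyname(slave_fd)
print(slave_name)             # e.g. /dev/pts/3 on Linux

# Bytes written to the master show up as terminal input on the slave.
os.write(master_fd, b"hello\n")
data = os.read(slave_fd, 5)
print(data)                   # b'hello'

os.close(master_fd)
os.close(slave_fd)
```

A C program would call posix_openpt(O_RDWR), grantpt(), unlockpt() and then open the ptsname() path itself; the Python wrapper only hides those steps.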
http://manpages.ubuntu.com/manpages/precise/man2/posix_openpt.2freebsd.html
Hi. Just checked the R21 documentation, and it introduces a new import maxon line. Does this mean import c4d will be deprecated in the future and slowly replaced by import maxon?

import maxon
import c4d

hello @bentraje
Don't forget to mark your thread as a question, and mark it as solved afterwards.
The core of Cinema 4D has been rewritten (started some years ago now), and more and more of the old code is replaced by the new API. So you have what we call the "classic API", that is c4d and the modules below it, and the Maxon API, that is the new one.

import c4d   -> Classic API
import maxon -> Maxon API

That's the same with C++: the Maxon API will replace the Classic API in the future, but we don't know how much time it will take and how much time the classic API will remain before being removed. For sure, you should use, as much as possible, the Maxon API.
Cheers
Manuel

Gotcha. Thanks for the confirmation. How usable is the Maxon API (import maxon) at the moment? Can it select an object and modify its parameter?

I'm sorry, I can't find any Maxon API example code. Most of the example scripts on GitHub are still in the Classic API (import c4d)?

the new maxon module is new to me too, but here are some of my impressions of where maxon and C++ overlap so far: volumes, CustomdataTags\MeshAttributes, VariableTags.
Cheers
zipit

I'm not going to enter into details, but the Maxon API will also help us bring the C++ functionality over really fast, faster than before. So the Python API will be closer to what we have in C++. MeshAttributes allow you to store any kind of information on points; before, you were limited to c4d datatypes.
It's not only about replacing old stuff; it's giving the developers the same tools we are using, and a lot more possibilities. There are not many examples in Python for now because we didn't write them yet. Keep asking questions; we will write examples, improve documentation, etc.

@bentraje It looks obvious, but in the documentation the Maxon API is what's on the right. So not too much for now, but, as with the C++ questions, we will more and more answer with Maxon API code.
Cheers
Manuel

@zipit It seems like so (i.e. you won't need the maxon module unless using the volume framework)

@m_magalhaes I checked the documentation previously, but I have problems looking for stuff. For instance, I want the equivalent of doc.SearchObject() in the Maxon API, or modifying parameters:

Cube[c4d.ID_BASEOBJECT_REL_POSITION, c4d.VECTOR_X] = 10

Is this possible in the Maxon API currently?

@m_magalhaes said in Import C4D vs Import Maxon:

Hi, think there was a misunderstanding. I did not (want to) say that the new API is 'just a concept'. I am at least somewhat aware of its role on the C++ side of things. I was just trying to convey that it is a more fundamental change than just adding the new functionality X, and that what is now exposed to us in the maxon module mostly deals with these more fundamental paradigm changes. I was (trying to) directly answer one of the questions of the OP: if he has to worry about the new API on day-to-day tasks like selecting an object. My answer was: he doesn't.

Sorry for my misunderstanding @zipit (my french side, you know ^^)

@bentraje don't try too much to search for equivalents for now. I'm not talking about Python but about the Classic/Maxon API: for example, BaseContainer is used almost everywhere in Cinema 4D.
It's not something you can migrate to DataDictionary like nothing ^^ But, in some cases, you can use DataDictionary for parts of your code, while you still need BaseContainer to access an object's parameters. Some parts of the classic API already use parts of the Maxon API: GetImageSettingsDictionary returns a maxon.DataDictionary, and the same goes for GetAutoWeightDictionary.

We are moving toward the Maxon API; it's going to take time. Don't be surprised, and if you have questions, just ask.
Cheers,
Manuel

@zipit @m_magalhaes Yes, zipit was correct on my concern:

If he has to worry about the new API on day-to-day tasks like selecting an object

Guess, I don't have to for now. Thanks for the clarification.
https://plugincafe.maxon.net/topic/11766/import-c4d-vs-import-maxon
> Hello there, I am a beginner, and i am trying to make my first app for mobile with unity: a basic calculator. I am getting the input value of a number, which i store in a List of integers. Each time a new digit is entered or removed, i display the number in a gameObject button with a text. Before it is displayed, i convert it to an int and format it with commas so that it is more readable. Problem: with big numbers (over 10 digits i believe), these 2 last functions seem to fail because the value is too large. I don't know about a function that would allow me to bypass this limitation, and was hoping that some of you had better knowledge of this than me !

using UnityEngine;
using System;
using System.Collections;
using System.Collections.Generic;
using System.Globalization;
using UnityEngine.UI;

public class NumberBox : MonoBehaviour {
    public Text textDisplayNumber;
    int convertedNumber;
    public List<int> numberListRaw = new List<int>();

    void Update () {
        CollectingNumberInput ();
    }

    void CollectingNumberInput(){
        if (Input.GetKeyDown(KeyCode.Keypad1)){
            numberListRaw.Add (1);
            DisplayNumber ();
        }
        if (Input.GetKeyDown(KeyCode.Keypad2)){
            numberListRaw.Add (2);
            DisplayNumber ();
        }
        if (Input.GetKeyDown(KeyCode.Keypad3)){
            numberListRaw.Add (3);
            DisplayNumber ();
        }
        if (Input.GetKeyDown(KeyCode.Backspace)){
            numberListRaw.RemoveAt (numberListRaw.Count - 1);
            DisplayNumber ();
        }
    }

    void DisplayNumber(){
        textDisplayNumber.text = "";
        for (int i = 0; i < numberListRaw.Count; i++){
            textDisplayNumber.text += numberListRaw [i];
        }
        convertedNumber = Int32.Parse (textDisplayNumber.text);
        textDisplayNumber.text = String.Format("{0:n0}", convertedNumber);
    }
}

Answer by hexagonius · Jan 03, 2017 at 10:27 AM

Use an Int64 (long) instead of the Int32.

That works, thanks ! With this, i am getting int numbers up to 19 digits. I am willing this app to be played on Iphone, so does it mean that i am going to be dependent on whether IOS is 32 or 64 bits for it to work ?

no problem here.
the larger values are just placed across multiple registers in RAM. Ah, thanks a ton for your kind knowledge ! All the. \n Not picked up when reading string from file's name 1 Answer Parse RestAPI in Unity WebGL build not working on Google Chrome 1 Answer How to parse googlesheet json 0 Answers parsing a double to bigint also getting playerprefs bigint 0 Answers Monodevelop .Tostring() 0 Answers
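The 10- and 19-digit limits mentioned in the thread follow directly from the widths of the two integer types. A quick check of the bounds (shown in Python here; the values are identical to C#'s Int32.MaxValue and Int64.MaxValue):

```python
# C# Int32 and Int64 bounds, computed from the bit widths.
INT32_MAX = 2**31 - 1   # == Int32.MaxValue
INT64_MAX = 2**63 - 1   # == Int64.MaxValue (C# long)

print(INT32_MAX)             # 2147483647
print(INT64_MAX)             # 9223372036854775807
print(len(str(INT32_MAX)))   # 10 digits
print(len(str(INT64_MAX)))   # 19 digits
```

So a 10-digit entry above 2,147,483,647 already overflows Int32.Parse, while a long handles values up to 19 digits. A C# long is 64 bits on every platform, regardless of whether the device's OS is 32- or 64-bit; on 32-bit hardware the runtime performs the 64-bit arithmetic in software.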
https://answers.unity.com/questions/1293479/overflowexception-value-is-too-large.html
How to import an Excel file in Odoo 9

I developed a custom import option to update a one2many table. For example, I have a binary field for selecting an xls or csv file, and a button for importing; this function needs to retrieve the csv or xls data to create the one2many records. In this code, .csv works fine, but I can't import an .xls file. How do I import an .xls file?

    def records_import(self, cr, uid, ids, context=None):
        supplier_obj = self.pool.get('res.partner')
        for price in self.browse(cr, uid, ids, context=context):
            fileName, fileExtension = os.path.splitext(price.file_to_import_fname)
            if fileExtension != '.csv':
                raise osv.except_osv(_("Warning !"), _("Change the file type as CSV"))
            price_file = unicode(base64.decodestring(price.import_payment), 'windows-1252', 'strict').split('\n')
            line_count = 0
            ss = len(price_file)
            for line in price_file:
                line_count += 1
                if line_count > 1:
                    if line_count < ss - 1:
                        line_contents = line.split(',')
                        if not line_contents:
                            break
                        Name = line_contents[0].strip()
                        Amount = line_contents[1].strip()
                        supplier_id = supplier_obj.search(cr, uid, [('name', '=', Name)])
                        if supplier_id == []:
                            supplier = {'name': Name}
                            new_supplier = supplier_obj.create(cr, uid, supplier)
                        else:
                            for supp in supplier_obj.browse(cr, uid, supplier_id):
                                new_supplier = supp.id
                        price_line_exist = self.pool.get('sample.lines').search(cr, uid, [('name', '=', new_supplier), ('payment_id', '=', price.id), ('amount', '=', Amount)])
                        if price_line_exist == []:
                            price_upload_dict = {
                                'name': new_supplier,
                                'amount': Amount,
                            }
                            self.pool.get('sample.lines').create(cr, uid, price_upload_dict)
            self.write(cr, uid, price.id, {'upload': True}, context=context)
        return {}

You need to use an xls/xlsx library to read those file formats; xlrd is a good choice because it reads both.
Install this package. You need to know from where (rows and cols) you will read the data.

Hi Axel, I am a beginner, so can you help me with how I could install this library on our local v9 instance? Any how-to link would do, so I could learn how to install libraries. Thanks in advance, Peter

Hi Péter Mikó, in Python the most common way to install libraries is pip, which installs them from PyPI. If the library is not available from PyPI, then you need to check the library's docs about how to install it. To install anything from PyPI, issue the command pip install library_name; for example, pip install xlrd. There are a lot of resources out there on how to install libraries in Python. Also, if you already have a running Odoo v9 instance, you have already installed some libraries for it.
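A minimal sketch of how the import function could branch on the file extension. The helper name parse_upload and the two-column name/amount layout are assumptions chosen to mirror the CSV code above; in Odoo the binary field arrives base64-encoded, so raw_bytes would come from decoding the record's binary field first. The xls branch uses xlrd's open_workbook/sheet_by_index/cell_value API.

```python
import os

def parse_upload(filename, raw_bytes):
    """Return (name, amount) rows from a CSV or XLS upload.
    Sketch only: assumes the first row is a header."""
    ext = os.path.splitext(filename)[1].lower()
    if ext == '.csv':
        lines = raw_bytes.decode('windows-1252').splitlines()
        return [tuple(cell.strip() for cell in line.split(','))
                for line in lines[1:] if line.strip()]
    elif ext == '.xls':
        import xlrd  # pip install xlrd
        book = xlrd.open_workbook(file_contents=raw_bytes)
        sheet = book.sheet_by_index(0)  # pick the sheet you need
        return [(sheet.cell_value(r, 0), sheet.cell_value(r, 1))
                for r in range(1, sheet.nrows)]  # skip the header row
    raise ValueError("Unsupported file type: %s" % ext)

rows = parse_upload('prices.csv', b'name,amount\nAcme,10\nGlobex,20\n')
print(rows)  # [('Acme', '10'), ('Globex', '20')]
```

From here, each returned (name, amount) pair can be fed into the same res.partner search/create logic as in the CSV version above.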
https://www.odoo.com/forum/help-1/question/how-to-import-excel-file-in-odoo-9-92826
crashpad::CodeViewRecordPDB70

#include "util/misc/pdb_structures.h"

A CodeView record linking to a .pdb 7.0 file. This format provides an indirect link to debugging data by referencing an external .pdb file by its name, UUID, and age. This structure may be pointed to by MINIDUMP_MODULE::CvRecord. For more information about this structure and format, see Matching Debug Information, PDB Files.

age: The revision of the .pdb file. A .pdb file's age indicates incremental changes to it. When a .pdb file is created, it has age 1, and subsequent updates increase this value.

pdb_name: The path or file name of the .pdb file associated with the module. This is a NUL-terminated string. On Windows, it will be encoded in the code page of the system that linked the module. On other operating systems, UTF-8 may be used.
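For illustration, a CodeView PDB 7.0 record's on-disk layout (a 4-byte 'RSDS' signature, the 16-byte UUID in Windows little-endian GUID field order, a little-endian 32-bit age, then the NUL-terminated path) can be parsed in a few lines. This is a sketch and not Crashpad's code; the UTF-8 decode and the UUID byte order are assumptions that hold for Windows-produced records.

```python
import struct
import uuid

def parse_codeview_pdb70(data):
    """Parse a CodeView PDB 7.0 record: 'RSDS' signature,
    16-byte GUID (little-endian field order), uint32 age,
    then a NUL-terminated .pdb path."""
    if data[:4] != b'RSDS':
        raise ValueError('not a PDB 7.0 CodeView record')
    pdb_uuid = uuid.UUID(bytes_le=bytes(data[4:20]))
    (age,) = struct.unpack_from('<I', data, 20)
    path = data[24:data.index(b'\x00', 24)].decode('utf-8')
    return pdb_uuid, age, path

# Building a record by hand and parsing it back:
guid = uuid.UUID('01234567-89ab-cdef-0123-456789abcdef')
record = b'RSDS' + guid.bytes_le + struct.pack('<I', 2) + b'module.pdb\x00'
print(parse_codeview_pdb70(record))  # (UUID('01234567-...'), 2, 'module.pdb')
```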
https://crashpad.chromium.org/doxygen/structcrashpad_1_1CodeViewRecordPDB70.html
Time to slice and dice Most of the time, the data you work with won’t be perfectly prepared for training models. In this section we’ll explore the various features that 🤗 Datasets provides to clean up your datasets. Slicing and dicing our data Similar to Pandas, 🤗 Datasets provides several functions to manipulate the contents of Dataset and DatasetDict objects. We already encountered the Dataset.map() method in Chapter 3, and in this section we’ll explore some of the other functions at our disposal. For this example we’ll use the Drug Review Dataset that’s hosted on the UC Irvine Machine Learning Repository, which contains patient reviews on various drugs, along with the condition being treated and a 10-star rating of the patient’s satisfaction. First we need to download and extract the data, which can be done with the wget and unzip commands: !wget "" !unzip drugsCom_raw.zip Since TSV is just a variant of CSV that uses tabs instead of commas as the separator, we can load these files by using the csv loading script and specifying the delimiter argument in the load_dataset() function as follows: from datasets import load_dataset data_files = {"train": "drugsComTrain_raw.tsv", "test": "drugsComTest_raw.tsv"} # \t is the tab character in Python drug_dataset = load_dataset("csv", data_files=data_files, delimiter="\t") A good practice when doing any sort of data analysis is to grab a small random sample to get a quick feel for the type of data you’re working with. 
In 🤗 Datasets, we can create a random sample by chaining the Dataset.shuffle() and Dataset.select() functions together: drug_sample = drug_dataset["train"].shuffle(seed=42).select(range(1000)) # Peek at the first few examples drug_sample[:3] {'Unnamed: 0': [87571, 178045, 80482], 'drugName': ['Naproxen', 'Duloxetine', 'Mobic'], 'condition': ['Gout, Acute', 'ibromyalgia', 'Inflammatory Conditions'], 'review': ['"like the previous person mention, I'm a strong believer of aleve, it works faster for my gout than the prescription meds I take. No more going to the doctor for refills.....Aleve works!"', '"I have taken Cymbalta for about a year and a half for fibromyalgia pain. It is great\r\nas a pain reducer and an anti-depressant, however, the side effects outweighed \r\nany benefit I got from it. I had trouble with restlessness, being tired constantly,\r\ndizziness, dry mouth, numbness and tingling in my feet, and horrible sweating. I am\r\nbeing weaned off of it now. Went from 60 mg to 30mg and now to 15 mg. I will be\r\noff completely in about a week. The fibro pain is coming back, but I would rather deal with it than the side effects."', '"I have been taking Mobic for over a year with no side effects other than an elevated blood pressure. I had severe knee and ankle pain which completely went away after taking Mobic. I attempted to stop the medication however pain returned after a few days."'], 'rating': [9.0, 3.0, 10.0], 'date': ['September 2, 2015', 'November 7, 2011', 'June 5, 2013'], 'usefulCount': [36, 13, 128]} Note that we’ve fixed the seed in Dataset.shuffle() for reproducibility purposes. Dataset.select() expects an iterable of indices, so we’ve passed range(1000) to grab the first 1,000 examples from the shuffled dataset. From this sample we can already see a few quirks in our dataset: - The Unnamed: 0column looks suspiciously like an anonymized ID for each patient. - The conditioncolumn includes a mix of uppercase and lowercase labels. 
- The reviews are of varying length and contain a mix of Python line separators ( \r\n) as well as HTML character codes like &\#039;. Let’s see how we can use 🤗 Datasets to deal with each of these issues. To test the patient ID hypothesis for the Unnamed: 0 column, we can use the Dataset.unique() function to verify that the number of IDs matches the number of rows in each split: for split in drug_dataset.keys(): assert len(drug_dataset[split]) == len(drug_dataset[split].unique("Unnamed: 0")) This seems to confirm our hypothesis, so let’s clean up the dataset a bit by renaming the Unnamed: 0 column to something a bit more interpretable. We can use the DatasetDict.rename_column() function to rename the column across both splits in one go: drug_dataset = drug_dataset.rename_column( original_column_name="Unnamed: 0", new_column_name="patient_id" ) drug_dataset DatasetDict({ train: Dataset({ features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount'], num_rows: 161297 }) test: Dataset({ features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount'], num_rows: 53766 }) }) ✏️ Try it out! Use the Dataset.unique() function to find the number of unique drugs and conditions in the training and test sets. Next, let’s normalize all the condition labels using Dataset.map(). As we did with tokenization in Chapter 3, we can define a simple function that can be applied across all the rows of each split in drug_dataset: def lowercase_condition(example): return {"condition": example["condition"].lower()} drug_dataset.map(lowercase_condition) AttributeError: 'NoneType' object has no attribute 'lower' Oh no, we’ve run into a problem with our map function! From the error we can infer that some of the entries in the condition column are None, which cannot be lowercased as they’re not strings. 
Let’s drop these rows using Dataset.filter(), which works in a similar way to Dataset.map() and expects a function that receives a single example of the dataset. Instead of writing an explicit function like: def filter_nones(x): return x["condition"] is not None and then running drug_dataset.filter(filter_nones), we can do this in one line using a lambda function. In Python, lambda functions are small functions that you can define without explicitly naming them. They take the general form: lambda <arguments> : <expression> where lambda is one of Python’s special keywords, <arguments> is a list/set of comma-separated values that define the inputs to the function, and <expression> represents the operations you wish to execute. For example, we can define a simple lambda function that squares a number as follows: lambda x : x * x To apply this function to an input, we need to wrap it and the input in parentheses: (lambda x: x * x)(3) 9 Similarly, we can define lambda functions with multiple arguments by separating them with commas. For example, we can compute the area of a triangle as follows: (lambda base, height: 0.5 * base * height)(4, 8) 16.0 Lambda functions are handy when you want to define small, single-use functions (for more information about them, we recommend reading the excellent Real Python tutorial by Andre Burgaud). In the 🤗 Datasets context, we can use lambda functions to define simple map and filter operations, so let’s use this trick to eliminate the None entries in our dataset: drug_dataset = drug_dataset.filter(lambda x: x["condition"] is not None) With the None entries removed, we can normalize our condition column: drug_dataset = drug_dataset.map(lowercase_condition) # Check that lowercasing worked drug_dataset["train"]["condition"][:3] ['left ventricular dysfunction', 'adhd', 'birth control'] It works! Now that we’ve cleaned up the labels, let’s take a look at cleaning up the reviews themselves. 
Creating new columns Whenever you’re dealing with customer reviews, a good practice is to check the number of words in each review. A review might be just a single word like “Great!” or a full-blown essay with thousands of words, and depending on the use case you’ll need to handle these extremes differently. To compute the number of words in each review, we’ll use a rough heuristic based on splitting each text by whitespace. Let’s define a simple function that counts the number of words in each review: def compute_review_length(example): return {"review_length": len(example["review"].split())} Unlike our lowercase_condition() function, compute_review_length() returns a dictionary whose key does not correspond to one of the column names in the dataset. In this case, when compute_review_length() is passed to Dataset.map(), it will be applied to all the rows in the dataset to create a new review_length column: drug_dataset = drug_dataset.map(compute_review_length) # Inspect the first training example drug_dataset["train"][0] {'patient_id': 206461, 'drugName': 'Valsartan', 'condition': 'left ventricular dysfunction', 'review': '"It has no side effect, I take it in combination of Bystolic 5 Mg and Fish Oil"', 'rating': 9.0, 'date': 'May 20, 2012', 'usefulCount': 27, 'review_length': 17} As expected, we can see a review_length column has been added to our training set. 
We can sort this new column with Dataset.sort() to see what the extreme values look like: drug_dataset["train"].sort("review_length")[:3] {'patient_id': [103488, 23627, 20558], 'drugName': ['Loestrin 21 1 / 20', 'Chlorzoxazone', 'Nucynta'], 'condition': ['birth control', 'muscle spasm', 'pain'], 'review': ['"Excellent."', '"useless"', '"ok"'], 'rating': [10.0, 1.0, 6.0], 'date': ['November 4, 2008', 'March 24, 2017', 'August 20, 2016'], 'usefulCount': [5, 2, 10], 'review_length': [1, 1, 1]} As we suspected, some reviews contain just a single word, which, although it may be okay for sentiment analysis, would not be informative if we want to predict the condition. 🙋 An alternative way to add new columns to a dataset is with the Dataset.add_column() function. This allows you to provide the column as a Python list or NumPy array and can be handy in situations where Dataset.map() is not well suited for your analysis. Let’s use the Dataset.filter() function to remove reviews that contain fewer than 30 words. Similarly to what we did with the condition column, we can filter out the very short reviews by requiring that the reviews have a length above this threshold: drug_dataset = drug_dataset.filter(lambda x: x["review_length"] > 30) print(drug_dataset.num_rows) {'train': 138514, 'test': 46108} As you can see, this has removed around 15% of the reviews from our original training and test sets. ✏️ Try it out! Use the Dataset.sort() function to inspect the reviews with the largest numbers of words. See the documentation to see which argument you need to use sort the reviews by length in descending order. The last thing we need to deal with is the presence of HTML character codes in our reviews. 
We can use Python’s html module to unescape these characters, like so:

import html

text = "I&#039;m a transformer called BERT"
html.unescape(text)

"I'm a transformer called BERT"

We’ll use Dataset.map() to unescape all the HTML characters in our corpus:

drug_dataset = drug_dataset.map(lambda x: {"review": html.unescape(x["review"])})

As you can see, the Dataset.map() method is quite useful for processing data — and we haven’t even scratched the surface of everything it can do!

The map() method's superpowers

The Dataset.map() method takes a batched argument that, if set to True, causes it to send a batch of examples to the map function at once (the batch size is configurable but defaults to 1,000). For instance, the previous map function that unescaped all the HTML took a bit of time to run (you can read the time taken from the progress bars). We can speed this up by processing several elements at the same time using a list comprehension. When you specify batched=True the function receives a dictionary with the fields of the dataset, but each value is now a list of values, and not just a single value. The return value of Dataset.map() should be the same: a dictionary with the fields we want to update or add to our dataset, and a list of values. For example, here is another way to unescape all HTML characters, but using batched=True:

new_drug_dataset = drug_dataset.map(
    lambda x: {"review": [html.unescape(o) for o in x["review"]]}, batched=True
)

If you’re running this code in a notebook, you’ll see that this command executes way faster than the previous one. And it’s not because our reviews have already been HTML-unescaped — if you re-execute the instruction from the previous section (without batched=True), it will take the same amount of time as before. This is because list comprehensions are usually faster than executing the same code in a for loop, and we also gain some performance by accessing lots of elements at the same time instead of one by one.
Using Dataset.map() with batched=True will be essential to unlock the speed of the “fast” tokenizers that we’ll encounter in Chapter 6, which can quickly tokenize big lists of texts. For instance, to tokenize all the drug reviews with a fast tokenizer, we could use a function like this: from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") def tokenize_function(examples): return tokenizer(examples["review"], truncation=True) As you saw in Chapter 3, we can pass one or several examples to the tokenizer, so we can use this function with or without batched=True. Let’s take this opportunity to compare the performance of the different options. In a notebook, you can time a one-line instruction by adding %time before the line of code you wish to measure: %time tokenized_dataset = drug_dataset.map(tokenize_function, batched=True) You can also time a whole cell by putting %%time at the beginning of the cell. On the hardware we executed this on, it showed 10.8s for this instruction (it’s the number written after “Wall time”). ✏️ Try it out! Execute the same instruction with and without batched=True, then try it with a slow tokenizer (add use_fast=False in the AutoTokenizer.from_pretrained() method) so you can see what numbers you get on your hardware. Here are the results we obtained with and without batching, with a fast and a slow tokenizer: This means that using a fast tokenizer with the batched=True option is 30 times faster than its slow counterpart with no batching — this is truly amazing! That’s the main reason why fast tokenizers are the default when using AutoTokenizer (and why they are called “fast”). They’re able to achieve such a speedup because behind the scenes the tokenization code is executed in Rust, which is a language that makes it easy to parallelize code execution. 
Parallelization is also the reason for the nearly 6x speedup the fast tokenizer achieves with batching: you can’t parallelize a single tokenization operation, but when you want to tokenize lots of texts at the same time you can just split the execution across several processes, each responsible for its own texts. Dataset.map() also has some parallelization capabilities of its own. Since they are not backed by Rust, they won’t let a slow tokenizer catch up with a fast one, but they can still be helpful (especially if you’re using a tokenizer that doesn’t have a fast version). To enable multiprocessing, use the num_proc argument and specify the number of processes to use in your call to Dataset.map(): slow_tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False) def slow_tokenize_function(examples): return slow_tokenizer(examples["review"], truncation=True) tokenized_dataset = drug_dataset.map(slow_tokenize_function, batched=True, num_proc=8) You can experiment a little with timing to determine the optimal number of processes to use; in our case 8 seemed to produce the best speed gain. Here are the numbers we got with and without multiprocessing: Those are much more reasonable results for the slow tokenizer, but the performance of the fast tokenizer was also substantially improved. Note, however, that won’t always be the case — for values of num_proc other than 8, our tests showed that it was faster to use batched=True without that option. In general, we don’t recommend using Python multiprocessing for fast tokenizers with batched=True. Using num_proc to speed up your processing is usually a great idea, as long as the function you are using is not already doing some kind of multiprocessing of its own. All of this functionality condensed into a single method is already pretty amazing, but there’s more! With Dataset.map() and batched=True you can change the number of elements in your dataset. 
This is super useful in many situations where you want to create several training features from one example, and we will need to do this as part of the preprocessing for several of the NLP tasks we’ll undertake in Chapter 7. 💡 In machine learning, an example is usually defined as the set of features that we feed to the model. In some contexts, these features will be the set of columns in a Dataset, but in others (like here and for question answering), multiple features can be extracted from a single example and belong to a single column. Let’s have a look at how it works! Here we will tokenize our examples and truncate them to a maximum length of 128, but we will ask the tokenizer to return all the chunks of the texts instead of just the first one. This can be done with return_overflowing_tokens=True: def tokenize_and_split(examples): return tokenizer( examples["review"], truncation=True, max_length=128, return_overflowing_tokens=True, ) Let’s test this on one example before using Dataset.map() on the whole dataset: result = tokenize_and_split(drug_dataset["train"][0]) [len(inp) for inp in result["input_ids"]] [128, 49] So, our first example in the training set became two features because it was tokenized to more than the maximum number of tokens we specified: the first one of length 128 and the second one of length 49. Now let’s do this for all elements of the dataset! tokenized_dataset = drug_dataset.map(tokenize_and_split, batched=True) ArrowInvalid: Column 1 named condition expected length 1463 but got length 1000 Oh no! That didn’t work! Why not? Looking at the error message will give us a clue: there is a mismatch in the lengths of one of the columns, one being of length 1,463 and the other of length 1,000. If you’ve looked at the Dataset.map() documentation, you may recall that it’s the number of samples passed to the function that we are mapping; here those 1,000 examples gave 1,463 new features, resulting in a shape error. 
The problem is that we’re trying to mix two different datasets of different sizes: the drug_dataset columns will have a certain number of examples (the 1,000 in our error), but the tokenized_dataset we are building will have more (the 1,463 in the error message). That doesn’t work for a Dataset, so we need to either remove the columns from the old dataset or make them the same size as they are in the new dataset. We can do the former with the remove_columns argument: tokenized_dataset = drug_dataset.map( tokenize_and_split, batched=True, remove_columns=drug_dataset["train"].column_names ) Now this works without error. We can check that our new dataset has many more elements than the original dataset by comparing the lengths: len(tokenized_dataset["train"]), len(drug_dataset["train"]) (206772, 138514) We mentioned that we can also deal with the mismatched length problem by making the old columns the same size as the new ones. To do this, we will need the overflow_to_sample_mapping field the tokenizer returns when we set return_overflowing_tokens=True. It gives us a mapping from a new feature index to the index of the sample it originated from. 
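The bookkeeping behind overflow_to_sample_mapping can be illustrated in plain Python, with fixed-size character chunks standing in for token chunks (the function name here is ours, not part of the library):

```python
def chunk_with_mapping(texts, max_len):
    """Split each text into max_len-sized chunks and record, for every
    chunk, the index of the sample it came from."""
    chunks, sample_map = [], []
    for i, text in enumerate(texts):
        for start in range(0, len(text), max_len):
            chunks.append(text[start:start + max_len])
            sample_map.append(i)
    return chunks, sample_map

chunks, sample_map = chunk_with_mapping(["a short one", "x" * 300], 128)
print(len(chunks))  # 4: one chunk for the first text, three for the second
print(sample_map)   # [0, 1, 1, 1]

# Repeating the values of the other columns is then just an indexed lookup:
ratings = [9.0, 3.0]
print([ratings[i] for i in sample_map])  # [9.0, 3.0, 3.0, 3.0]
```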
Using this, we can associate each key present in our original dataset with a list of values of the right size by repeating the values of each example as many times as it generates new features: def tokenize_and_split(examples): result = tokenizer( examples["review"], truncation=True, max_length=128, return_overflowing_tokens=True, ) # Extract mapping between new and old indices sample_map = result.pop("overflow_to_sample_mapping") for key, values in examples.items(): result[key] = [values[i] for i in sample_map] return result We can see it works with Dataset.map() without us needing to remove the old columns: tokenized_dataset = drug_dataset.map(tokenize_and_split, batched=True) tokenized_dataset DatasetDict({ train: Dataset({ features: ['attention_mask', 'condition', 'date', 'drugName', 'input_ids', 'patient_id', 'rating', 'review', 'review_length', 'token_type_ids', 'usefulCount'], num_rows: 206772 }) test: Dataset({ features: ['attention_mask', 'condition', 'date', 'drugName', 'input_ids', 'patient_id', 'rating', 'review', 'review_length', 'token_type_ids', 'usefulCount'], num_rows: 68876 }) }) We get the same number of training features as before, but here we’ve kept all the old fields. If you need them for some post-processing after applying your model, you might want to use this approach. You’ve now seen how 🤗 Datasets can be used to preprocess a dataset in various ways. Although the processing functions of 🤗 Datasets will cover most of your model training needs, there may be times when you’ll need to switch to Pandas to access more powerful features, like DataFrame.groupby() or high-level APIs for visualization. Fortunately, 🤗 Datasets is designed to be interoperable with libraries such as Pandas, NumPy, PyTorch, TensorFlow, and JAX. Let’s take a look at how this works. From Datasets to DataFrames and back To enable the conversion between various third-party libraries, 🤗 Datasets provides a Dataset.set_format() function. 
This function only changes the output format of the dataset, so you can easily switch to another format without affecting the underlying data format, which is Apache Arrow. The formatting is done in place. To demonstrate, let’s convert our dataset to Pandas: drug_dataset.set_format("pandas") Now when we access elements of the dataset we get a pandas.DataFrame instead of a dictionary: drug_dataset["train"][:3] Let’s create a pandas.DataFrame for the whole training set by selecting all the elements of drug_dataset["train"]: train_df = drug_dataset["train"][:] 🚨 Under the hood, Dataset.set_format() changes the return format for the dataset’s __getitem__() dunder method. This means that when we want to create a new object like train_df from a Dataset in the "pandas" format, we need to slice the whole dataset to obtain a pandas.DataFrame. You can verify for yourself that the type of drug_dataset["train"] is Dataset, irrespective of the output format. From here we can use all the Pandas functionality that we want. For example, we can do fancy chaining to compute the class distribution among the condition entries: frequencies = ( train_df["condition"] .value_counts() .to_frame() .reset_index() .rename(columns={"index": "condition", "condition": "frequency"}) ) frequencies.head() And once we’re done with our Pandas analysis, we can always create a new Dataset object by using the Dataset.from_pandas() function as follows: from datasets import Dataset freq_dataset = Dataset.from_pandas(frequencies) freq_dataset Dataset({ features: ['condition', 'frequency'], num_rows: 819 }) ✏️ Try it out! Compute the average rating per drug and store the result in a new Dataset. This wraps up our tour of the various preprocessing techniques available in 🤗 Datasets. To round out the section, let’s create a validation set to prepare the dataset for training a classifier on. 
Before doing so, we’ll reset the output format of drug_dataset from "pandas" to "arrow": drug_dataset.reset_format() Creating a validation set Although we have a test set we could use for evaluation, it’s a good practice to leave the test set untouched and create a separate validation set during development. Once you are happy with the performance of your models on the validation set, you can do a final sanity check on the test set. This process helps mitigate the risk that you’ll overfit to the test set and deploy a model that fails on real-world data. 🤗 Datasets provides a Dataset.train_test_split() function that is based on the famous functionality from scikit-learn. Let’s use it to split our training set into train and validation splits (we set the seed argument for reproducibility): drug_dataset_clean = drug_dataset["train"].train_test_split(train_size=0.8, seed=42) # Rename the default "test" split to "validation" drug_dataset_clean["validation"] = drug_dataset_clean.pop("test") # Add the "test" set to our `DatasetDict` drug_dataset_clean["test"] = drug_dataset["test"] drug_dataset_clean DatasetDict({ train: Dataset({ features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount', 'review_length', 'review_clean'], num_rows: 110811 }) validation: Dataset({ features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount', 'review_length', 'review_clean'], num_rows: 27703 }) test: Dataset({ features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount', 'review_length', 'review_clean'], num_rows: 46108 }) }) Great, we’ve now prepared a dataset that’s ready for training some models on! In section 5 we’ll show you how to upload datasets to the Hugging Face Hub, but for now let’s cap off our analysis by looking at a few ways you can save datasets on your local machine. 
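For intuition, here is what a seeded 80/20 split looks like in plain Python. This is a sketch of the idea only; Dataset.train_test_split() itself shuffles internally and returns Dataset objects rather than index lists.

```python
import random

def train_test_split_indices(n, train_size=0.8, seed=42):
    """Shuffle the indices 0..n-1 with a fixed seed, then cut at train_size."""
    indices = list(range(n))
    random.Random(seed).shuffle(indices)  # seeded for reproducibility
    cut = int(n * train_size)
    return indices[:cut], indices[cut:]

train_idx, valid_idx = train_test_split_indices(10, train_size=0.8, seed=42)
print(len(train_idx), len(valid_idx))                     # 8 2
print(sorted(train_idx + valid_idx) == list(range(10)))   # True
```

Fixing the seed means rerunning the split reproduces the exact same partition, which is what makes the validation set stable across experiments.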
Saving a dataset Although 🤗 Datasets will cache every downloaded dataset and the operations performed on it, there are times when you’ll want to save a dataset to disk (e.g., in case the cache gets deleted). As shown in the table below, 🤗 Datasets provides three main functions to save your dataset in different formats: For example, let’s save our cleaned dataset in the Arrow format: drug_dataset_clean.save_to_disk("drug-reviews") This will create a directory with the following structure: drug-reviews/ ├── dataset_dict.json ├── test │ ├── dataset.arrow │ ├── dataset_info.json │ └── state.json ├── train │ ├── dataset.arrow │ ├── dataset_info.json │ ├── indices.arrow │ └── state.json └── validation ├── dataset.arrow ├── dataset_info.json ├── indices.arrow └── state.json where we can see that each split is associated with its own dataset.arrow table, and some metadata in dataset_info.json and state.json. You can think of the Arrow format as a fancy table of columns and rows that is optimized for building high-performance applications that process and transport large datasets. Once the dataset is saved, we can load it by using the load_from_disk() function as follows: from datasets import load_from_disk drug_dataset_reloaded = load_from_disk("drug-reviews") drug_dataset_reloaded DatasetDict({ train: Dataset({ features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount', 'review_length'], num_rows: 110811 }) validation: Dataset({ features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount', 'review_length'], num_rows: 27703 }) test: Dataset({ features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount', 'review_length'], num_rows: 46108 }) }) For the CSV and JSON formats, we have to store each split as a separate file. 
One way to do this is by iterating over the keys and values in the DatasetDict object: for split, dataset in drug_dataset_clean.items(): dataset.to_json(f"drug-reviews-{split}.jsonl") This saves each split in JSON Lines format, where each row in the dataset is stored as a single line of JSON. Here’s what the first example looks like: !head -n 1 drug-reviews-train.jsonl {"patient_id":141780,"drugName":"Escitalopram","condition":"depression","review":"\"I seemed to experience the regular side effects of LEXAPRO, insomnia, low sex drive, sleepiness during the day. I am taking it at night because my doctor said if it made me tired to take it at night. I assumed it would and started out taking it at night. Strange dreams, some pleasant. I was diagnosed with fibromyalgia. Seems to be helping with the pain. Have had anxiety and depression in my family, and have tried quite a few other medications that haven't worked. Only have been on it for two weeks but feel more positive in my mind, want to accomplish more in my life. Hopefully the side effects will dwindle away, worth it to stick with it from hearing others responses. Great medication.\"","rating":9.0,"date":"May 29, 2011","usefulCount":10,"review_length":125} We can then use the techniques from section 2 to load the JSON files as follows: data_files = { "train": "drug-reviews-train.jsonl", "validation": "drug-reviews-validation.jsonl", "test": "drug-reviews-test.jsonl", } drug_dataset_reloaded = load_dataset("json", data_files=data_files) And that’s it for our excursion into data wrangling with 🤗 Datasets! Now that we have a cleaned dataset for training a model on, here are a few ideas that you could try out: - Use the techniques from Chapter 3 to train a classifier that can predict the patient condition based on the drug review. - Use the summarizationpipeline from Chapter 1 to generate summaries of the reviews. 
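As a closing aside, the JSON Lines round trip used above needs nothing beyond the standard library. Here is a sketch of the format itself; the two sample rows are made up to match the shape of the dataset:

```python
import json
import os
import tempfile

rows = [
    {"patient_id": 206461, "condition": "left ventricular dysfunction", "rating": 9.0},
    {"patient_id": 95260, "condition": "adhd", "rating": 8.0},
]

path = os.path.join(tempfile.gettempdir(), "drug-reviews-sample.jsonl")

# Write: one JSON object per line, newline-separated.
with open(path, "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# Read it back line by line.
with open(path) as f:
    reloaded = [json.loads(line) for line in f]

print(reloaded == rows)  # True
```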
Next, we’ll take a look at how 🤗 Datasets can enable you to work with huge datasets without blowing up your laptop!
https://huggingface.co/course/chapter5/3
smoking. It is easier than you might think to fool yourself with data. It is quantified, so there is less bias, right? This series of videos shows you an analysis using pandas that demonstrates why this might not be true.

Notes

This dataset is listed in the datasets portion of our website. You can also download it directly here. We assume that it is saved in the Downloads folder on a Mac. Double check that you change the code if you save it somewhere else.

import pandas as pd
import matplotlib.pylab as plt

df = pd.read_csv("~/Downloads/smoking.csv")

Feedback? See an issue? Something unclear? Feel free to mention it here. If you want to be kept up to date, consider getting the newsletter.
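To make the "less bias, right?" point concrete before the videos do, here is a toy illustration of how a naive aggregate can mislead. The data below is invented for this sketch (it is not the smoking dataset, and the column names are made up); it shows the classic confounding effect where smokers skew young:

```python
import pandas as pd

# Invented toy data: (age_band, smoker flag, died flag).
rows = []
rows += [("young", 1, d) for d in [1, 0, 0, 0, 0, 0]]  # 6 young smokers, 1 died
rows += [("young", 0, d) for d in [0, 0]]              # 2 young non-smokers, none died
rows += [("old",   1, d) for d in [1, 1]]              # 2 old smokers, both died
rows += [("old",   0, d) for d in [1, 1, 1, 1, 0, 0]]  # 6 old non-smokers, 4 died
df = pd.DataFrame(rows, columns=["age_band", "smoker", "died"])

overall = df.groupby("smoker")["died"].mean()
by_band = df.groupby(["age_band", "smoker"])["died"].mean()

print(overall)  # naive view: smokers appear to die less overall
print(by_band)  # within each age band, smokers die more
```

The aggregate and the per-band views disagree because age confounds the comparison; this is the kind of trap the series walks through on the real data.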
https://calmcode.io/smoking/the-dataset.html
What is ADL?

I was all set to write a blog post about hidden friends, and then I realized that it was turning into a blog post largely ranting about ADL. So I figure I should rant about ADL first, and then talk about hidden friends in Part 2.

First things first. ADL stands for “argument-dependent [name] lookup.” There. Now, on with the story!

In the beginning Bjarne created namespaces

In C, there are no namespaces. If I write a function named get_next (and I don’t mark it static to the current translation unit), then you cannot write a function named get_next anywhere in your part of the code — our two pieces of code won’t link together, because we’ll have multiple definitions of the get_next function. In C, we work around this by manually prefixing our function names: I’ll agree to name all of my functions ajo_<whatever>, and you’ll call your functions sak_<whatever>, and then the linker will be happy because ajo_get_next and sak_get_next are different names.

C++ took this existing convention and baked it into the language. In C++, I can place all of my stuff under namespace ajo, and you can place all of your stuff under namespace sak. And then the linker will be happy because ajo::get_next and sak::get_next are different names. (That is, the name of the namespace becomes part of the entity’s name-mangling.)

This is really great for scalability, because it means you don’t have to worry about what I write. We can each develop whatever we want in our own namespaces. Any unqualified get_next I use in my own code (inside namespace ajo) will naturally refer to ajo::get_next.
// SakUtils.h
namespace sak {
    struct bignum {};
    int get_next(int);
}

// AjoUtils.h
namespace ajo {
    struct bignum {};
    int get_next(int);
}

// AjoExtra.cc
#include "AjoUtils.h"
#include "SakUtils.h"

namespace ajo {
    void foo(int& x) {
        bignum b;         // refers to ajo::bignum
        x = get_next(x);  // calls ajo::get_next
    }
}

And then whenever I need to refer to an entity in a foreign namespace, such as the get_next which is a member of namespace sak, I simply namespace-qualify its name:

namespace ajo {
    void bar(int& x) {
        sak::bignum b;         // refers to sak::bignum
        x = sak::get_next(x);  // calls sak::get_next
    }
}

But what do I do when the function I want to call doesn’t have a name?

ADL arises to resolve conflict between namespaces and operators

For this section, C++ experts will have to put themselves in “alternate-history mode.” Our code samples assume that ADL doesn’t yet exist.

The problem with namespaces was recognized early on. See, C++ had also added operator overloading. So you could write things like this:

// SakBigNum.h
namespace sak {
    struct bignum {
        bignum operator++();
    };
    std::ostream& operator<<(std::ostream&, bignum);
}

// AjoBigNum.h
namespace ajo {
    struct bignum {
        bignum operator++();
    };
    std::ostream& operator<<(std::ostream&, bignum);
}

// AjoExtra.cc
namespace ajo {
    void bar(int& x) {
        sak::bignum b;   // refers to sak::bignum
        ++b;             // calls sak::bignum::operator++()
        std::cout << b;  // UH-OH!
    }
}

Explicit namespace-qualification works fine for accessing sak::bignum and sak::get_next. And we don’t need any special rules to deal with the meaning of ++b: it “obviously” should call b’s member function operator++(). But what about operator<<? Sure, I could write the call above as

sak::operator<<(std::cout, b);

But if that’s the recommended solution, then I should just write sak::print(b) and stop using overloaded operators altogether! Notice that this is not a problem for the “standard” stream insertion operators, such as the ones for primitive types.
When you call std::cout << 42, you’re not using ADL; you’re just calling the member function std::ostream::operator<<(int).

“So we can blame iostreams for ADL?” Yeah, I won’t stop you from blaming iostreams. But, to be fair, the problem crops up anywhere you have an operator that can’t be a member because its arguments are in the “wrong” order. For example, std::operator+(const char *, const std::string&).

The solution is argument-dependent lookup

To solve the problem of sak::operator<<, the original C++98 standard grew a feature known as “Koenig lookup.” It was named after Andrew Koenig — although he says he did not invent it. Eventually, as the feature continued to evolve, Koenig’s name was dissociated from it; today it is known simply as “argument-dependent lookup” (ADL).

For a glimpse into the wild and woolly pre-ADL world, see Bjarne Stroustrup’s P0262 “Name Space Management in C++ (revised)” (1993), particularly Appendix D. A rationale for the feature can be found in Koenig’s N0645 “Reconciling overloaded operators with namespaces” (January 1995). In September 1996, the draft standard (for what ultimately became C++98) gained a section with the stable-name [basic.lookup.koenig]. By October 2005, that section had been renamed to [basic.lookup.argdep].

Under ADL, whenever we see an unqualified call to a possibly overloaded operator — such as std::cout << b — we’ll look up the name of that operator not only in our current namespace (namespace ajo), but also in all the namespaces associated with the types of the arguments to the operator (namely, namespace std and namespace sak). This allows lookup on std::cout << b to find sak::operator<<, and get it into the candidate set, whereupon overload resolution chooses sak::operator<<(std::ostream&, sak::bignum) as the best-matching candidate for this particular set of arguments.

Koenig’s original proposal applied only to overloaded operators.
But then in 1996 it was decided “to extend Koenig lookup to function names” — that is, to extend ADL to cover non-operator functions such as swap and get_next.

// SakBigNum.h
namespace sak {
    struct bignum {
        bignum operator++();
    };
    std::ostream& operator<<(std::ostream&, bignum);
    bignum get_next(bignum);
}

// AjoBigNum.h
namespace ajo {
    struct bignum {
        bignum operator++();
    };
    std::ostream& operator<<(std::ostream&, bignum);
    bignum get_next(bignum);
}

// AjoExtra.cc
namespace ajo {
    void foo(int& x) {
        bignum b;        // refers to ajo::bignum
        ++b;             // calls ajo::bignum::operator++()
        std::cout << b;  // calls ajo::operator<<(ostream&, bignum)
        get_next(b);     // calls ajo::get_next(bignum)
    }
    void bar(int& x) {
        sak::bignum b;   // refers to sak::bignum
        ++b;             // calls sak::bignum::operator++()
        std::cout << b;  // calls sak::operator<<(ostream&, bignum)
        get_next(b);     // calls sak::get_next(bignum)
    }
}

This makes sense if you think of free functions (like swap) as being part of the interface of a class. I can write myApple.eat() without redundant qualification; it seems reasonable that I should also be able to write eat(myApple) instead of having to type out my::eat(myApple). Sometimes, free functionality — whether it’s spelled operator<< or eat or swap — is intrinsically entangled with the class’s own interface. ADL reinforces that entanglement, for better and worse.

When does ADL kick in?

The compiler applies ADL whenever it’s doing name lookup (building a candidate set) for an unqualified function call. If the name of the thing-being-called has any ::-qualification at all, then ADL won’t kick in. Godbolt:

namespace A {
    struct A { operator int(); };
    void f(A);
}
namespace B {
    void f(int);
    void test() {
        A::A a;
        f(a);     // ADL, calls A::f(A)
        B::f(a);  // no ADL, calls B::f(int)
    }
}

Also, if the thing is not “a function call,” then ADL won’t kick in. (That is, we don’t try to apply Argument-Dependent Lookup to names that don’t have arguments.) ADL is defined in terms of the unqualified-id grammar production, which means that ADL does not apply to a redundantly parenthesized call such as (f)(a), because (f) is a primary-expression, not an unqualified-id.
Several other rules in C++ are defined in terms of the more nebulous English term “name.” For example, return (x); still triggers copy elision when (x) is the “name” of a local variable; and (f)(a) will still treat f as the “name” of an overload set. However, because (f) is grammatically not an unqualified-id, that overload set will be constructed using regular unqualified lookup, not argument-dependent lookup.

namespace A {
    struct A { operator int(); };
    void f(A);
}
namespace B {
    void f(int);
    void f(double);
    void test() {
        A::A a;
        void (*fp)(int) = f;   // OK, no ADL
        void (*gp)(A::A) = f;  // ERROR, no ADL, fails to find A::f
        f(a);                  // ADL, calls A::f(A)
        (&f)(a);               // no ADL, calls B::f(int)
        (f)(a);                // no ADL, calls B::f(int)
    }
}

Finally, and perhaps most importantly, ADL won’t kick in if the thing being called is not a function! That is, before we do ADL for a call to f, we’ll do an ordinary unqualified lookup of f, which means we’ll look in our current scope and all enclosing scopes. If this ordinary unqualified lookup finds something called f, and that f is not a function (or a function template), then we’ll just use that f; we won’t let ADL drag in any other namespaces. It’s only if we find a function (or function template) named f, or if we don’t find anything at all, that we’ll move on to argument-dependent lookup. Godbolt:

namespace A {
    struct A { operator int(); };
    void f(A);
    void g(A);
    void h(A);
    int i(A);
    int j(A);
}
namespace B {
    void f(int);
    auto h = [](int) {};
    using i = int;
    void test() {
        A::A a;
        f(a);           // ADL, calls A::f(A)
        g(a);           // ADL, calls A::g(A)
        h(a);           // no ADL: lookup found B::h which is not a function
        int ia = i(a);  // no ADL: lookup found B::i which is not a function
        int j = j(a);   // no ADL, and ERROR! lookup found local variable j
    }
}

How does an ADL lookup behave?

The first thing to know is that ADL looks only at the types of the arguments! (Assuming they have types at all. There are a couple of poorly-supported exceptions for untyped arguments.
In this post, we will ignore those exceptions.) Every bit of information about the arguments, other than their types, is thrown away and never considered.

namespace A { struct A {}; }
namespace B { using T = A::A; }
namespace C { B::T c; }
namespace C {
    void test() {
        f(C::c);  // HERE
    }
}

Here we invoke f with a value of type A::A. That’s all that matters. Sure, the value comes from evaluating a variable that was defined in namespace C, but that doesn’t matter. Sure, the variable was originally declared using a type alias B::T, but that doesn’t matter. All that ADL cares about is that the function argument, after evaluation, after looking through all the type aliases, is some value of type A::A.

Also, ADL considers only function arguments, not template arguments. Godbolt:

namespace A {
    struct A { operator int(); };
    struct X {};
    template<class T> void f(int);
}
namespace B {
    template<class T> void f();
    void test() {
        A::A a;
        f<A::X>();    // OK, ADL doesn't consider A::f, calls B::f
        f<A::X>(a);   // OK, ADL considers A::f because of A::A, calls A::f
        f<A::X>(42);  // ERROR: ADL doesn't consider A::f
    }
}

If the call has multiple function arguments, then ADL will consider all of them. (In no particular order. Nothing in this algorithm will depend on the order.) From the set of argument types in the call, we break each type down further. Each argument type produces zero or more associated types and associated namespaces, via a complicated ad-hoc process. For the simplest cases, you can think of it as essentially “write down the name of the type as unambiguously as possible and then extract all the class-names and all the innermost namespace-names from that string.” For example,

- An argument of type int (or any primitive type) doesn’t give us any associated types.
- An argument of type NS::SomeClass (or NS::SomeClass* or NS::SomeClass&) gives us one associated type — NS::SomeClass — and one associated namespace — NS.
- An argument of type NN::NS::SomeClass gives us one associated type — NN::NS::SomeClass — and one associated namespace — NN::NS. Notice that it does not produce NN as an associated namespace.
- An argument of type SomeClass::NestedClassOrEnum gives us two associated types: SomeClass::NestedClassOrEnum itself, and the class SomeClass of which it is a member.
- An argument of type NA::A (*)(NB::B, NC::C) — that is, “pointer to function taking B and C and returning A” — gives us three associated types (NA::A, NB::B, and NC::C) and three associated namespaces (NA, NB, and NC).
- An argument of type NS::SomeTemplate<NA::A, NB::B> gives us three associated types (itself, NA::A, and NB::B) and three associated namespaces (NS, NA, and NB).
- An argument of type NS::SomeClass::SomeNestedTemplate<NA::A> gives us three associated types (itself, NA::A, and NS::SomeClass) and two associated namespaces (NS and NA). (Godbolt.)
- An argument of type NA::A, where NA::A inherits (even privately!) from NB::B, gives us two associated types (NA::A and NB::B) and two associated namespaces (NA and NB).

This list of rules is not exhaustive; and not every rule is applied recursively. For example, although class A::B::C has associated type A::B and class A::B has associated type A, that doesn’t imply that A::B::C must have associated type A — in fact it doesn’t! (Godbolt.) (See this blog post for more on that case.)

Having created sets of associated namespaces and associated types for each argument, we merge them all together (and add our current namespace and all its parents, too, of course) and do a lookup for declarations of the name f in any of these namespaces. Our overload resolution for this call will consider all the function declarations that we found in any of those places.

Wait, what does it mean to do a lookup “in an associated type”?

When ADL performs lookup in an associated class type, what it’s considering are the (namespace-scope) friends of that class.
It won’t consider the member functions of that class — not even the static member functions. Godbolt:

namespace N {
    struct A {
        enum E { E0 };
        friend void f(E);
        static void g(E);
    };
}
namespace My {
    void f(int);
    void g(int);
    void test() {
        N::A::E e;
        f(e);  // ADL considers N::f (friend of N::A)
        g(e);  // ADL does not consider N::A::g
    }
}

The friend functions that are found by ADL might have been declared in the namespace enclosing the associated type, or they might be declared nowhere else (the so-called “hidden friend” idiom, about which I hope to write more later). However, when the associated type declares a friend function using explicit namespace-qualification (as in friend void NS::f(int)), ADL will ignore that declaration. So even though it is technically possible to befriend functions from other namespaces, those functions will not thereby become ADL candidates. Godbolt:

namespace Unrelated {
    void f(int);
}
namespace NN {
    void f(int);
    namespace NA {
        struct A {
            enum E : int { E0 };
            friend void f(int);
            friend void NN::f(int);
            friend void Unrelated::f(int);
        };
    }
}
namespace B {
    void test() {
        NN::NA::A::E e;
        f(e);  // OK: ADL considers NA::f which is an unqualified
               // ("namespace-scope") friend of NA::A, but not
               // the other two friends
    }
}

One last thing. I wrote:

    Having created sets of associated namespaces and associated types for each argument, we merge them all together (and add our current namespace and all its parents, too, of course) and do a lookup for declarations of the name f in any of these namespaces. Our overload resolution for this call will consider all the function declarations that we found in any of those places.

ADL will consider only function declarations (and, as usual, function templates). If our lookup in some associated namespace finds a non-function declaration of f, we’ll simply ignore that declaration. And remember: if our initial unqualified lookup found a non-function, then we won’t do ADL at all!
Godbolt:

namespace A {
    struct A { operator int() const; };
    auto f = [](A, int) {};
    void g(A, int);
    void h(A, int);
}
namespace B {
    struct B { operator int() const; };
    void f(int, B);
    using g = int;
    void h(int, B);
}
namespace C {
    void f(int, int);
    void g(int, int);
    auto h = [](int, int) {};
    void test() {
        A::A a;
        B::B b;
        f(a, b);  // OK: ADL ignores the non-function A::f
        g(a, b);  // OK: ADL ignores the non-function B::g
        h(a, b);  // OK: no ADL
    }
}

Conclusion

There you go — now you know (almost) everything there is to know about argument-dependent lookup! The parts I consciously neglected in this blog post are:

- The exact rules by which associated classes and associated namespaces are produced
- The special cases for arguments-with-no-type (see here)
- The role of using-directives and using-declarations
- The ways ADL is used behind the scenes by ranged-for in C++11 and structured binding in C++17
- Idioms that rely on ADL, such as the std::swap two-step (2020-07-11), hidden friends, and niebloids (I hope to write more on each of these in future posts, and will link them from here when I do)

For more information on ADL, see these resources:
https://quuxplusone.github.io/blog/2019/04/26/what-is-adl/
It’s cool working for an international company with an open philosophy, but our decentralised setup can cause some real headaches for sysadmins. One of these is giving fast access to the source-code repository to our developers and support staff spread over 3 continents, all working on a common code base.

Subversion is the existing version control system here, primarily for the tool support and well-understood workflow. But it’s not without its problems, not least that its chatty-on-the-wire nature causes problems when latency is introduced. And when your developers are in Sydney and your servers in St. Louis, that’s about as high-latency as you’re going to get on the internet.

While Subversion 1.5 introduced the concept of a write-through proxy, the devil is in the details. The documentation of how to do this is sparse, and developing a robust method of replication is “left as an exercise to the reader”. This post documents some of the considerations that need to be taken into account and the method we are using at Atlassian to get reliable high-speed Subversion servers in a distributed environment.

The basic replication architecture is straight-forward: the slave server serves up checkouts and meta-data from a local cache but transparently proxies checkins to the master server. The concept of how checkins are replicated to the slaves is also simple enough:

- A user checks out a working copy from the slave and makes changes
- The user issues ‘svn commit’, which pushes the changes to the slave
- The slave transparently pushes the commit to the master
- The master completes the commit and invokes its post-commit hook
- The post-commit hook contains code to push the update to all the known slaves

However the devil is in the details. The exact method of push-to-slave operation is poorly documented; there is a brief suggested method in the readme file that is unfortunately highly synchronous.
As already mentioned, an alternative method using svnsync is “left as an exercise to the reader”. We need a method of doing this that minimises commit time while keeping all slaves up to date.

The problem

The problem with the documented SSH + dump/restore method is that it will tie up the committing client and the server for the entire time it takes to upload and import the incremental dump. But if that slave is unavailable for any reason it will hang until the TCP session times out. Furthermore that slave will then be out of sync with the master repository and future commits will fail. What we need is a method where the slaves are updated asynchronously and will compensate for missed commits.

The solution

Enter svnsync. This allows mirroring of a subversion repository in a transaction-aware manner, only pulling down revisions it does not currently have. It performs its own local locking on the mirrored repository so collisions are not an issue. However there is still the question of when to run the updates. We could just poll the repository with a cron script, but this creates a window where the slaves are out of sync unless the sync is run constantly, which would be wasteful. A purely event-driven system suffers from some of the same problems as the SSH dump/restore system above: if an update is missed the slave is out of sync until the next update is received. Furthermore if the event is implemented synchronously the post-commit script is tied up.

In the end I opted for a hybrid solution where each slave runs a server that accepts a single UDP packet to trigger an update (allowing the post-commit script to fire-and-forget), with intermittent scheduled updates to compensate for missed events.

Setting up the mirror

The first step is to initialise the svnsync mirror. This requires setting up a new repository and then initialising it from the master.
To ensure repository integrity, only a special svnsync user can write to the repository:

sudo su - svnsync
svnadmin create /opt/svn/repositories/atlassian/private-mirror

Before synchronisation, revision-property changes must be enabled on the mirror. Again, only the special user can perform this action. Create the file /opt/svn/repositories/atlassian.com/private-mirror/hooks/pre-revprop-change and add the following:

#!/bin/sh
USER="$3"
if [ "$USER" = "svnsync" ]; then
  # Allow
  exit 0;
fi
echo "Only the svnsync user can change revprops" >&2
exit 1

Then convert the repository to a synchronisable one by setting the remote source, and perform the initial sync-up:

svnsync init
svnsync sync

This copies the entire history of the master to the slave, so depending on your repository size it may take some time. Once this is done, the following will update the mirror to the latest master revision:

svnsync sync

One problem you are likely to hit with this setup is that because we created a new repository from scratch it has a different UUID from the master. This is fine for checkouts but will fail on commits. However we can manually copy the UUID across from the master:

cd /opt/svn/repositories/atlassian.com/private-mirror/db/
scp svn.atlassian.com:/opt/svn/repositories/atlassian.com/private-mirror/db/uuid .

You should now have a working mirror which can be made available via the SVN 1.5 proxy in Apache (authentication is ignored in this example):

DAV svn
SVNPath /opt/svn/repositories/atlassian.com/private-mirror
SVNMasterURI

The next step is to keep the mirror up to date.

The Update Event Server

So we need a server that will accept UDP packets, fork off and monitor sub-processes, and trigger time-based events.
We could probably monkey-up something with inetd and cron, but I like to keep all the variables in one place, so I implemented my own server that handles all the tasks in the same place. Of course, reinventing the wheel sucks, so I turned to the Python Twisted framework which supplies all of the necessary pieces …

import sys, re
from twisted.internet.protocol import DatagramProtocol, ProcessProtocol
from twisted.internet import reactor, task

cmdline = ['svnsync', 'sync', '']
lockmsg = "Failed to get lock"
_debug = False

def debug(msg):
    if _debug:
        print >> sys.stderr, msg

def error(msg):
    print >> sys.stderr, msg

def log(msg):
    print >> sys.stdout, msg

class SyncProcess(ProcessProtocol):
    def __init__(self):
        self.running = False

    def connectionMade(self):
        self.running = True
        log("SVN sync process started")

    def outReceived(self, data):
        log("stdout> %s" % data)
        if data.find(lockmsg) > -1:
            error("ERROR: The mirror repo has a lock on it")

    def errReceived(self, data):
        log("stderr> %s" % data)

    def inConnectionLost(self):
        debug("inConnectionLost! stdin is closed! (we probably did it)")

    def outConnectionLost(self):
        debug("outConnectionLost! The child closed their stdout!")

    def errConnectionLost(self):
        debug("errConnectionLost! The child closed their stderr.")

    def processEnded(self, status):
        self.running = False
        log("Sync process ended, status %d" % status.value.exitCode)

class SyncListener(DatagramProtocol):
    def __init__(self):
        self.prochandler = SyncProcess()
        self.timeout = task.LoopingCall(self.runsync)

    def startProtocol(self):
        print "Starting UDP server and timeout"
        self.timeout.start(120, now=False)

    def datagramReceived(self, data, (host, port)):
        log("Received packet from %s:%d" % (host, port))
        self.runsync()

    def runsync(self):
        if self.prochandler.running:
            log("Not running sync as another process is present")
        else:
            reactor.spawnProcess(self.prochandler, cmdline[0], cmdline, {})

reactor.listenUDP(9999, SyncListener())
reactor.run()

This server runs constantly on the slave server listening on port 9999. On receiving a packet it forks off an svnsync process (unless one is already running). Additionally, every two minutes it runs a sync regardless. The server is started via daemontools, which ensures that if the server quits for any reason it is restarted.

Triggering updates

When the master receives a commit it triggers an update on each slave by sending a UDP packet to them. This is done in the post-commit script using the netcat network tool:

echo 1 | nc -w1 -u svn.sydney.atlassian.com 9999

And that’s it, with the exception of some caveats …

Locking

It’s not clear how locking interacts with replication; however distributed locking is not something that should be taken lightly. For this reason I’ve disabled locking on both the master and slave repositories. This is just a matter of putting the following in the pre-lock hook:

#!/bin/sh
# Disable locking as we are doing replication and it's not clear how
# they will interact.
echo "Locking is disabled due to replication" >&2
exit -1

This will return a meaningful error message if someone attempts to lock a file.
Client version issue

There is a known issue with some versions of Subversion clients when adding files to replicated slaves. The list of clients I’ve tested is below:

Distributed VCS

The elephant in the room here is that none of this should really be necessary. There are now a number of version-control systems, commercial and open-source, that are distributed in nature and so don’t need this special treatment. With these systems commits are two-phase, with a local checkin followed (optionally) by a merge to a remote repository (or a pull, depending on your development model). This is undoubtedly the way of the future and there has already been discussion about trialling them internally at Atlassian. However there are two short-term issues that prevent an immediate migration:

- Tool support. Fisheye, Crucible, Maven, IDEA; until these parts of our tool-chain have native support for these next-gen systems our workflow would have to be severely modified.
- Developer process. Because a local commit does not automatically propagate changes to the master repository, more discipline is required from developers. In practice this would probably require creating the role of a merge-master on each team who would make sure all working trees are regularly merged and conflicts resolved.

Neither of these problems is insurmountable though, and I expect that in time distributed source-control will become the norm rather than the niche it currently is.
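As a footnote to the “Triggering updates” step above: on hosts without netcat, the same fire-and-forget UDP poke can be sent from Python instead. A sketch using only the standard library (the hostname and port match the earlier example; the payload is arbitrary, the server only cares that a packet arrived):

```python
import socket

def trigger_sync(host, port=9999):
    """Fire-and-forget UDP packet telling a slave to run svnsync."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        # A UDP send never waits for the receiver, so a down slave
        # cannot stall the master's post-commit hook.
        sock.sendto(b"1", (host, port))

# e.g. trigger_sync("svn.sydney.atlassian.com")
```

Because the send is connectionless, calling this once per slave in the post-commit hook keeps commit latency flat no matter how many slaves exist or how many are unreachable.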
https://www.atlassian.com/blog/archives/subversion_replication_at_atla?_ga=2.217251286.1933127788.1517539727-1159165484.1517539727
Hi there, I extracted my problems down to this short program "test.cc":

#include <iostream>

int main() {
    double d=10.0;
    cout << d << endl;
    return 0;
}

Compiling it with

g++ -g -o test.o -c test.cc
g++ test.o -o test

and testing it, everything works out fine. But compiling it with

g++ -g -o test.o -c test.cc
g++ test.o -lpthread -o test

only invoking "./test" from within the shell gives the desired output "10". Running it with ddd (or gdb), the variable d has value nan. My question is: what is wrong with linking against libpthread? I just intentionally linked against it and don't need this library, but I would like to know if this is a fault or a feature.

Thanks in advance
Sven
http://www.verycomputer.com/181_43468004c208aae4_1.htm
12 May 2009 15:03 [Source: ICIS news]

TORONTO (ICIS news)--Bayer stands by its carbon monoxide (CO) pipeline project to connect two production sites in its home state of North Rhine-Westphalia, in northwest Germany, CEO Werner Wenning said on Tuesday.

The pipeline’s safety concept exceeded existing standards and legal requirements, Wenning told shareholders in a speech during the company’s annual meeting.

An administrative court in

The 67km pipeline would connect Bayer’s chemical production sites in

Wenning, in his speech, also welcomed renewed support for the project by North Rhine-Westphalia's state parliament last month.

“Now especially, at this time of severe economic crisis, we must take advantage of the opportunities and the strengths that locations such as North Rhine-Westphalia have to offer,” Wenning said.

He said safeguarding jobs would continue to require more investment, particularly in production facilities, and that the competition between different locations for that investment would become even more intense.

Wenning also said that Bayer planned to spend €2.9bn ($3.9bn) on research and development in 2009, the highest R&D budget in the company’s history.

“We are optimistic that we will emerge from this crisis even stronger than before, and we believe the [Bayer] group is on track for long-term success, thanks to the potential our portfolio holds for innovation and growth,” Wenning said.

Bayer has proposed to increase its dividend by 3.7% to €1.40/share, for a total payout to shareholders of €1.07bn, it said.

($1 = €0.74)

For more on Bayer visit ICIS company intelligence
http://www.icis.com/Articles/2009/05/12/9215533/bayer-stands-by-germany-co-pipeline-project-ceo.html
How to use PowerShell or VBScript scripting with Hyper-V

Microsoft's Hyper-V includes scripting options to manage virtualization environments. See examples of how to automate and configure in PowerShell or VBScript.

To effectively manage any virtualization platform, you need a strong arsenal of scripting tools, which allows various levels of automation and can shorten configuration time. Microsoft's hypervisor includes scripting options that can use VBScript as well as the robust PowerShell environment. In this tip, I outline how Hyper-V scripting works and show a few examples of how to get started.

Hyper-V's scripting environment is exposed through Windows Management Instrumentation (WMI), and we will focus on the virtualization namespace. With a WMI interface, administrators can script a wide range of tasks with Hyper-V. And the options for admins get even better with the PowerShell Management Library. Deciding what script to use depends largely on what is required and what is available. I will focus on Hyper-V servers that are not centrally managed with System Center Virtual Machine Manager (SCVMM), because Microsoft provides additional scripting options for SCVMM implementations. Check out this TechNet page for an overview of SCVMM scripting options.
Below is a list of some of the best online resources to get started with Hyper-V scripting:

- Virtual PC Guy's WebLog: Ben Armstrong's Microsoft Developer Network (MSDN) blog is a great source for sample scripts from all Microsoft virtualization systems.
- Taylor Brown's Blog: This is another strong offering from the Microsoft virtualization camp. Brown covers several use cases, including one for SCVMM.
- The Microsoft virtualization team blog site: This is a good catchall for everything related to Hyper-V, as well as other virtualization segments.
- PowerGUI.org's Hyper-V PowerPack: This is roughly equivalent to VMware's VI Toolkit, and its functionality from PowerGUI is a definite must-have. Be sure to check out this TechTarget tip by Eric Seibert on this powerful toolkit.

Get VM information example

Now that I've given an overview of the scripting options, let's jump into an example. Nearly every resource for PowerShell scripting with Hyper-V begins with the query script, especially if the script involves deterministic handling. This is a simple one-liner that interacts with the WMI virtualization namespace of Hyper-V and gives information on a VM as it exists in the namespace. Many scripts will want to query this to get current status on elements such as the OperationalStatus value, which indicates a VM's run state. Here is a sample command to get this information for a VM named "TESTVM1":

Get-WmiObject -Namespace root\virtualization -Query "Select * From Msvm_ComputerSystem Where ElementName='TESTVM1'"

The VM name is the only value to change if you want to run it in your own environment. When executed in PowerShell, the result displays, as shown in Figure 1. Note that the VM's OperationalStatus value is displayed as "2", meaning it is running.

Stop/start a VM with VBScript example

The basic tasks of starting and stopping a VM are good ways to get started with scripting.
Besides PowerShell, Hyper-V can also be scripted with VBScript, which gets full access to the WMI virtualization namespace. This example script performs a startup of a VM that is powered off:

Option Explicit
Dim CallWMI
Dim InventoryVMs
Dim YourVM
YourVM = "TESTVM1"
Set CallWMI = GetObject("winmgmts:\\.\root\virtualization")
Set InventoryVMs = CallWMI.ExecQuery("SELECT * FROM Msvm_ComputerSystem WHERE ElementName='" & YourVM & "'")
InventoryVMs.ItemIndex(0).RequestStateChange(2)

Like the PowerShell example, the only piece that needs to change in this script is the inline value for "YourVM", which is set with the VBScript variable as "TESTVM1". Saving this text as a .VBS file on the local file system will allow it to execute locally.

Let's now twist this a bit and perform a shutdown on a remote Hyper-V server. The prior example was for running a script locally on a server that has the Hyper-V role and the designated VM running. Remote execution can be beneficial in environments with multiple Hyper-V servers where SCVMM is not implemented. This script will shut down (force power off) the TESTVM1 VM remotely on Server55:

Option Explicit
Dim CallWMI
Dim InventoryVMs
Dim YourVM
YourVM = "TESTVM1"
Set CallWMI = GetObject("winmgmts:\\SERVER55\root\virtualization")
Set InventoryVMs = CallWMI.ExecQuery("SELECT * FROM Msvm_ComputerSystem WHERE ElementName='" & YourVM & "'")
InventoryVMs.ItemIndex(0).RequestStateChange(3)

Note that line 6 enters the name of Server55, which is the Hyper-V server that holds TESTVM1. This can run remotely from systems that are aware of the WMI virtualization namespace, such as another Windows Server 2008 server. The forced shutdown is sent with the code 3 in the last line of the above example. Other popular VM state codes include the following:

- Reboot (10): This code performs a hard reset on a VM.
- Pause (32768): This code pauses the VM.
Complete information on the RequestStateChange method of the WMI virtualization namespace can be found online at the MSDN website.

Testing Hyper-V scripts

Hyper-V scripting should be done in a test environment. PowerShell is, as advertised, an extremely powerful shell environment. Scripts written in VBScript present the same risk, because they will do exactly what you tell them to do. And when commands are passed, there is no Cancel button or backing out of commands. Simply put, the commands assume that you know what you are doing. Also, consider permissions, which could be a concern with remote Hyper-V servers. Here is a link to a TechTarget tip on the permissions model of Hyper-V.

On with scripting

There are plenty of options for administrators who want to automate elements of their Hyper-V environments. With some practice, care and patience, you can get Hyper-V tuned to your tastes with PowerShell or VBScripts that you create. And check out our Server Virtualization blog.
http://searchservervirtualization.techtarget.com/tip/How-to-use-PowerShell-or-VBScript-scripting-with-Hyper-V
Given that .NET 1.x is entering legacy status before the end of the year, I thought it might be fun to explore the best and worst of what .NET developers have lived through for the past 5 years. First: the best.

1) Metadata

Metadata is the lifeblood of the common language runtime. Just think of the number of features made possible (or made better) by the presence of metadata: garbage collection, form designers, code access security, and verification, to name a few. The fact that metadata is extensible through custom attributes opens up a world of possibilities. Sure, we might have gotten tools like NUnit and Reflector without metadata, but they might have really sucked.

2) Visual Studio

The multilingual IDE does web, windows, and mobile development, too. If you face being stranded on a desert island with a Windows machine, AC power, and broadband access, but can install only one piece of software on top - take Visual Studio. Given enough time, you can write the rest. (What piece of software would you write first?). Again, extensibility plays a huge role in the success of Visual Studio. If you haven't worked with one of the many great VS.NET add-ins in Scott Hanselman's Ultimate List, you just haven't lived.

3) Community

When people like Chris Brumme spend their time waiting at the dentist writing deep technical blog posts like "TransparentProxy", then you know times are changing. When I'm at the dentist I usually hide behind large, potted vegetation reading National Geographic, pretending I'm somewhere else, and hoping they forget I'm there, but everyone handles their phobias differently. Besides blogs, we have an explosion of webcasts, chats, user groups, code camps, and geek dinners. There is no time to shower or pay bills - submerse yourself now in the world that Scoble built.

4) .NET Class Libraries

Every non-trivial framework has the occasional bump in the road, but let's not talk about the System.DirectoryServices namespace while we are in a good mood, ok?
The usability, intuitiveness, and discoverability of the libraries have played a large role in the adoption of .NET and the productivity of .NET programmers.

5) The Common Type System (CTS)

The CTS lays the foundation for not only C#, C++, and Visual Basic to work together, but a slew of other languages (see former Bon Jovi look-alike Jason Bock's .NET languages list). No small feat, the CTS. It's a rewarding experience being able to jump into a new language with foreign syntax but still have some bearing as to what is happening underneath.

Next up: the worst. Don't miss this one. What would you include in the "best of" list?
http://odetocode.com/blogs/scott/archive/2005/03/21/the-best-of-the-net-1-x-years.aspx
Socket Programming in C# - Part 2

Contents: Introduction - Getting Started - Multiple Sockets - Server Side - Conclusion

Introduction

This is the second part of the previous article about socket programming. In the earlier article we created a client, but that client made blocking IO calls (Receive) to read data at regular intervals (via clicking the Rx button). As I said in that article, this model does not work very well in a real-world application. Also, since Windows is an event-based system, the application (client) should get a notification of some kind whenever data is received, so that it can read the data rather than continuously polling for it. Well, that is possible with a little effort. If you read the first part of this article, you already know that the Socket class in the System.Net.Sockets namespace has several methods like Receive and Send, which are blocking calls. Besides these, there are also functions like BeginReceive and BeginSend, which are meant for asynchronous IO. There are at least two problems with the blocking Receive:

- When you call the Receive function and no data is present, the call blocks until some data arrives.
- Even if there is data when you make the Receive call, you don't know when to call next time. You need to do polling, which is not an efficient approach.

I'll also post the whole code for the Client and Server classes. It may be useful for someone. I spent several hours researching and testing to come up with this... I know we are in a C# area, but the code is very easy to translate.
See the complete solution below:

_____________________________________________________________________________

Imports System.Net
Imports System.Net.Sockets
Imports System.Text
Imports System.Threading

Public Delegate Sub StringReceivedHandlerDelegate(ByVal sRemoteAddress As String)

Public Class Server

    Private _PortNumber As Integer
    Private DataReceived As StringReceivedHandlerDelegate
    Private listener As Socket

    Sub New(ByVal PortNumber As Integer)
        _PortNumber = PortNumber
    End Sub

    Public Sub StartServer()
        Listen()
    End Sub

    Public Sub StopServer()
        If Not listener Is Nothing Then
            listener.Close()
        End If
    End Sub

    Public Class StateObject
        'Client socket.
        Public workSocket As Socket = Nothing
        'Size of receive buffer.
        Public Const BufferSize As Integer = 8192
        'Receive buffer.
        Public buffer() As Byte = New Byte(BufferSize - 1) {}
        'Received data string.
        Public sb As New StringBuilder
    End Class

    'ManualResetEvent instances signal completion.
    Private Shared connectDone As New ManualResetEvent(False)
    Private Shared sendDone As New ManualResetEvent(False)
    Private Shared receiveDone As New ManualResetEvent(False)

    Private Sub Listen()
        Try
            Dim remoteEP As New IPEndPoint(IPAddress.Any, _PortNumber)
            listener = New Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)
            listener.Bind(remoteEP)
            listener.Listen(10)
            listener.BeginAccept(New AsyncCallback(AddressOf ConnectCallback), listener)
        Catch ex As Exception
            Console.WriteLine(ex.Message)
        End Try
    End Sub

    Private Sub ConnectCallback(ByVal ar As IAsyncResult)
        Try
            'Retrieve the socket from the state object.
            Dim client As Socket = CType(ar.AsyncState, Socket)
            'Complete the connection.
            client = client.EndAccept(ar)
            Console.WriteLine("Socket connected to {0}", client.RemoteEndPoint.ToString())
            'Start Receiving
            Receive(client)
            'Signal that the connection has been made.
            connectDone.Set()
            listener.BeginAccept(New AsyncCallback(AddressOf ConnectCallback), listener)
        Catch ex As Exception
            Console.WriteLine(ex.Message)
        End Try
    End Sub

    Private Sub Receive(ByVal client As Socket)
        Try
            'Create the state object.
            Dim state As New StateObject
            state.workSocket = client
            'Begin receiving the data from the remote device.
            client.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0, New AsyncCallback(AddressOf ReceiveCallback), state)
        Catch ex As Exception
            Console.WriteLine(ex.Message)
        End Try
    End Sub

    Private Sub Send(ByVal client As Socket, ByVal data As String)
        Try
            'Convert the string data to byte data using ASCII encoding.
            Dim byteData As Byte() = Encoding.ASCII.GetBytes(data)
            'Begin sending the data to the remote device.
            client.BeginSend(byteData, 0, byteData.Length, 0, New AsyncCallback(AddressOf SendCallback), client)
        Catch ex As Exception
            Console.WriteLine(ex.Message)
        End Try
    End Sub

    Private Sub SendCallback(ByVal ar As IAsyncResult)
        Try
            'Retrieve the socket from the state object.
            Dim client As Socket = CType(ar.AsyncState, Socket)
            'Complete sending the data to the remote device.
            Dim bytesSent As Integer = client.EndSend(ar)
            Console.WriteLine("Sent {0} bytes to server.", bytesSent)
            'Signal that all bytes have been sent.
            sendDone.Set()
        Catch ex As Exception
            Console.WriteLine(ex.Message)
        End Try
    End Sub

    Public Sub SetStringInputHandler(ByVal pMethod As StringReceivedHandlerDelegate)
        Try
            Monitor.Enter(Me)
            If DataReceived Is Nothing Then
                DataReceived = pMethod
            End If
            Monitor.Exit(Me)
        Catch ex As Exception
            Console.WriteLine(ex.Message)
        End Try
    End Sub

    Private Sub InvokeDelegate(ByVal sData As String)
        Try
            DataReceived.Invoke(sData)
        Catch ex As Exception
            Console.WriteLine(ex.Message)
        End Try
    End Sub

End Class

____________________________________________________________________________

Public Class Client

    Private _RemoteHost As String
    Private _RemotePort As Integer
    Private _NetworkStream As NetworkStream
    Dim _TCPClient As TcpClient

    Sub New(ByVal RemoteHost As String, ByVal RemotePort As Integer)
        _RemoteHost = RemoteHost
        _RemotePort = RemotePort
    End Sub

    Public Function SendStringMessage(ByVal Message As String) As String
        Try
            _TCPClient = New TcpClient(_RemoteHost, _RemotePort)
            _NetworkStream = _TCPClient.GetStream()
            '_NetworkStream.WriteTimeout = 10000
            '_NetworkStream.ReadTimeout = 10000
            Dim strResponse As String
            ' Send a string (newline terminated) to the server.
            Dim writer As New System.IO.StreamWriter(_NetworkStream)
            Dim reader As New System.IO.StreamReader(_NetworkStream)
            writer.Write(Message)
            writer.Flush()
            ' Read server response (up to a newline).
            Try
                strResponse = reader.ReadLine
            Catch ex As Exception
                strResponse = Nothing
            End Try
            'Close
            writer.Close()
            reader.Close()
            _NetworkStream.Close()
            Return strResponse
        Catch ex As Exception
            Return Nothing
        Finally
            If Not _TCPClient Is Nothing Then
                _TCPClient.Close()
                _TCPClient = Nothing
            End If
        End Try
    End Function

End Class

Hope it helped. Regards, Afas.

Hi, how can I acknowledge a sent message in an asynchronous socket program? Thank you.

I'm currently working on both the server and multiple clients with TCP port connections. A port has been opened at the server side for listening. I have tried multiple client connections, up to 120 clients.
But at certain times, new client connections to the server fail, while the existing client connections still work fine if they're still connected; once they disconnect, they can't reconnect. That is to say, the server side doesn't respond to any new client TCP connection after some time. No specific error message could be found. Do you have any idea about this? Any possibilities for why this happens? Looking forward to your help. Thanks in advance. Cheers.

Hi, I am new to this forum but I am trying to do exactly what you may have achieved! I am trying to connect multiple clients to the server! Any help is appreciated. AG

I've been developing a small network app and this article was very helpful, but I have problems implementing the communication part. I can connect, receive and send data; the problem arises when I try to send several messages or objects (via serialization). I found that I get half of the object on the receiving stream, making deserialization of the object impossible. I don't know if there is a workaround for this or if I need to implement tokens in the message to know when to deserialize the object. Any help or ideas will be appreciated :) I'd also like to know if there is a way to detect when the remote host has been disconnected after calling the Socket.BeginReceive() method. Sorry for the bad English... Duke

The following source code example has some thread-safety issues, within the public method OnDataReceived(IAsyncResult asyn): txtDataRx.Text = txtDataRx.Text + szData;

Hi, I am trying to build a network sniffer in .NET Framework 2 and use socketname.BeginReceive(buffer, 0, bufferLength, ...) like a raw socket, but when I convert the value in the buffer into a string, it is meaningless stuff. Can you help me please?

That's great! But can you show me the way to transact between two computers over the Internet — especially in this example? Can you modify the code for me to have a connection between server and client over the Internet? :( Also, I have some questions I'd like your help with for design: :) First: online games! Tell me the way to solve this kind of programming, server - client! Second: mobile — sending data between PC and mobile, like Yahoo! Please help me; I will explain my question in detail! :rolleyes: Thank you so much for reading!

Hi, I am doing a socket application and facing some problems with it. The issue is that the remote host is sending some 10 messages, but the client machine is able to capture only 3 to 4 messages. I need help terribly. Anyone, please help. Thanks in advance.

Hello - Using your code, I sometimes come across an issue. During the operation of the application, the CPU will max out at 99%. It stays like that until I end the app (of course). I'm not sure where the code is maxing out. Has anyone come across this issue before? Thanks, tony

In your topic, suppose that I don't want to send characters, but I want to send a binary file. To do this, I read an image from the hard disk and then transfer it to a binary array. When I send the image binary array, I get a problem: how do I know when the server has finished receiving the data? Do you have any solution?

Rekcut- You're not very good at the hacking game, mate! We can find you wherever you go.

private void cmdListen_Click(object sender, System.EventArgs e)
{
    try
    {
        //create the listening socket...
        m_socListener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        IPEndPoint ipLocal = new IPEndPoint(IPAddress.Any, 8221);
        //bind to local IP Address...
        m_socListener.Bind(ipLocal);
        //start listening...
        m_socListener.Listen(4);
        // create the call back for any client connections...
        m_socListener.BeginAccept(new AsyncCallback(OnClientConnect), null);
        cmdListen.Enabled = false;
    }
    catch (SocketException se)
    {
        MessageBox.Show(se.Message);
    }
}

See the IPEndPoint line above: whenever you put a socket in the listening state, you have to bind that socket to an IP address and port number, so specify the IP address and port before starting the listening process.

If you want to know:

1) If the server sends data and many different clients should get that data at the same time, instead of sending data to each client one by one, then use a UDP (Universal Datagram Protocol) socket instead of a TCP socket. Read or search about "connectionless UDP socket connections".
2) If the server should receive data from more than one client at the same time, then yes! This can be done using threads or multitasking.
3) If the server should receive data from different clients on different ports, then again yes! You have to set another socket to the listening state if you want the server to get data from a different port from different users.

Hi Ken, I am having the same problem... Did you get it to work? Thanks, Andre

Hi! Is it possible for the server to manage more accesses in parallel? I'm sorry for my bad English.

/* declare a public variable */
// Records whether the "cmdListen" button currently reads "Start Listening" or "Stop Listening".
private bool isListening = false;

private void cmdListen_Click(object sender, System.EventArgs e)
{
    try
    {
        if (isListening)
        {
            /* closing m_socListener */
            m_socListener.Close();
            cmdListen.Text = "Start Listening";
            btnSend.Enabled = false;
            isListening = false;
        }
        else
        {
            /* constructing m_socListener */
            //create the listening socket...
            m_socListener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            IPEndPoint ipLocal = new IPEndPoint(IPAddress.Any, 8221);
            //bind to local IP Address...
            m_socListener.Bind(ipLocal);
            //start listening...
            m_socListener.Listen(4);
            // create the call back for any client connections...
            m_socListener.BeginAccept(new AsyncCallback(OnClientConnect), null);
            cmdListen.Text = "Stop Listening";
            btnSend.Enabled = true;
            isListening = true;
        }
    }
    catch (SocketException se)
    {
        MessageBox.Show(se.Message);
    }
}

My client is a Windows machine. From this I create a socket connection to an IP and a port on a server machine. On the client machine I can manually telnet to the server and send/receive messages like this:

Send: telnet IP Port
Receive: Connected to IP...
Send: Operation=TotalRecords
Receive: TotalRecords=1000

The fact that I can connect/send/receive suggests there is no access issue (the server is a Linux machine). Now here is my socket code. I can connect and send ("Operation=TotalRecords"), but Receive keeps waiting and never gets the message "TotalRecords=1000". How do I code to receive the reply? Here is my code in C#:

IPAddress remoteIPAddress = IPAddress.Parse(LinuxIPAddress);
EndPoint ep = new IPEndPoint(remoteIPAddress, 4321);
Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
string query = "Operation=TotalRecords";
sock.Connect(ep);
Encoding ASCII = Encoding.ASCII;
Byte[] ByteGet = ASCII.GetBytes(query);
Byte[] RecvBytes = new Byte[256];
int iTx = sock.Send(ByteGet, ByteGet.Length, 0); //the code worked to this point
Int32 bytes = sock.Receive(RecvBytes, RecvBytes.Length, 0); //this will wait forever...

Thank you in advance. Ken
If I run two clients and try to connect to the server I only see messages from the first client to connect. In fact, if I run one client and connect, then disconnect, and then reconnect, I get no connection refused error, but I do not get any messages comming across to the server. Is there some socket clean up that is not happening? I like this scheme of using async sockets but if only one person can connect, well thats a big problem, please enlighten me. I can run the server application successfully with port 8221 but I get an error when I changed the port number to 80, 8100 or others. Following is the exception: Only one usage of each socket address (protocol/network address/port) is normally permitted. Does anyone have any idea of how I can resolve this problem?? Managed to figure out the problem with the help of Zane on the microsoft.public.dotnet.languages.csharp newsgroup. Basically it comes down to ping being a UDP packet that does not have a socket connection and so have to use .BeginReceiveFrom rather than BeginReceive. Ciao Rekcut Hi all, I am currently trying to use the client code to simultaneously send icmp packets to several IPs from different threads. i.e. I have the client code as a class and I instantiate a new instance for each different IP. Within the class I implement an asynchronous delegate to run the client code, so that in the mean time I can return to the main thread and implement a new instance of the class and send a different icmp packet. Thus at any one time I have several instances of the client code running on different threads, waiting for icmp packets to be received on the OnDataReceived callback. My problem is that I am finding icmp reply packets from one thread in the receive databuffer of another thread. i.e. the CSocketPacket object returned as a reference in "IAsyncResult asyn" is not coherent. In other words the data in theSockId.dataBuffer does not correspond to the theSockId.thisSocket socket. 
Is this happening because this code is not threadsafe?? or does anyone have any idea of how I can resolve this problem?? Regards Rekkie regards, tuco I get an error when the client closes the connection, and then no new connections can be made. Does anybody have a solution Also. Did anybody get it to work with multiple clients? Did any of you get the program to work with multiple clients? If so, how did you do First, let me thank you for the great article - it has proved really valuable for me in my current project. I am currently trying to extend the example to enable transfer of XML-data (or any data). However, I am not sure how to manipulate the recieved data as a complete string to parse using another function. Your example revieces a byte at a time and appends this to the content of a text box. Your example uses an "iterative" function to recieve each byte and then calls WaitForData() to wait for the next byte. But to be able to manipulate this data I need to somehow collect it in a string and when the data/command from the client has been received - do something like sending a answer back. The setup could be: Client sends command: "GetAmountInAccount" Server recieves this, checks the and replyes: "You are broke" but how do I detect when the entire command from the client has been received? Thanks, Firstly I would just like to say thanks for the great example code. Very helpful. I am now trying to have more than 1 client listen/communicate with the server. I am getting some very strange results though. Can anybody please give me some advice. I have altered the code on the server side to go back to "listening" after the first client connects. This works, but when the second client connects the first client gets kicked out (after a few seconds?) somehow. Look forward to any information. Thanks Jason Grettings! This thread is for discussions of Socket Programming in C# - Part 2.
http://www.developerfusion.com/article/3997/socket-programming-in-c-part-2/
Snowplow Python Tracker 0.4.0 released

We are happy to announce the release of the Snowplow Python Tracker version 0.4.0. This version introduces the Subject class, which lets you keep track of multiple users at once, and several Emitter classes, which let you send events asynchronously, pass them to a Celery worker, or even send them to a Redis database. We have added support for sending batches of events in POST requests, although the Snowplow collectors do not yet support POST requests. We have also made changes to the format of unstructured events and custom contexts, to support our new work around self-describing JSON Schemas. In the rest of the post we will cover:

- The Subject class
- The Emitter classes
- Tracker method return values
- Logging
- Pycontracts
- The RedisWorker class
- Self-describing JSONs
- Upgrading
- Support

1. The Subject class

An instance of the Subject class represents a user who is performing an event in the Subject-Verb-Direct Object model proposed in our Snowplow event grammar. Although you can create a Tracker instance without a Subject, you won't be able to add information such as user ID and timezone to your events without one. If you are tracking more than one user at once, create a separate Subject instance for each.

An example:

from snowplow_tracker import Subject, Emitter, Tracker

# Create a simple Emitter which will log events to d3rkrsqld9gmqf.cloudfront.net
e = Emitter("d3rkrsqld9gmqf.cloudfront.net")

# Create a Tracker instance
t = Tracker(emitter=e, namespace="cf", app_id="CF63A")

# Create a Subject corresponding to a pc user
s1 = Subject()

# Set some data for that user
s1.set_platform("pc")
s1.set_user_id("0a78f2867de")

# Set s1 as the Tracker's subject
# All events fired will have the information we set about s1 attached
t.set_subject(s1)

# Track user s1 viewing a page
t.track_page_view("")

# Create another Subject instance corresponding to a mobile user
s2 = Subject()

# All methods of the Subject class return the Subject instance so methods can be chained:
s2.set_platform("mob").set_user_id("0b08f8be3f1")

# Change the tracker subject from s1 to s2
# All events fired will instead have the information we set about s2 attached
t.set_subject(s2)

# Track user s2 viewing a page
t.track_page_view("")

It is also possible to set the subject during Tracker initialization:

t = Tracker(emitter=e, subject=s1, namespace="cf", app_id="CF63A")

2. The Emitter classes

Trackers must be initialized with an Emitter. This is the signature of the constructor for the base Emitter class:

def __init__(self, endpoint, protocol="http", port=None, method="get", buffer_size=None, on_success=None, on_failure=None):

The only field which must be set is the endpoint, which is the collector to which the emitter logs events. port is the port to connect to, protocol is either "http" or "https", and method is either "get" or "post".

When the emitter receives an event, it adds it to a buffer. When the buffer is full, all events in the buffer get sent to the collector. The buffer_size argument allows you to customize the buffer size. By default, it is 1 for GET requests and 10 for POST requests.
Custom emitters You can create your own custom emitter class, either from scratch or by subclassing one of the existing classes. The only requirement for compatibility is that is must have an input method which accepts a Python dictionary of name-value pairs. 3. Tracker method return values If you are using the synchronous Emitter and call a tracker method which causes the emitter to send a request, that tracker method will return the status code for the request: e = Emitter("d3rkrsqld9gmqf.cloudfront.net") t = Tracker(e) print(t.track_page_view("")) # Prints 200 This is useful for initial testing. Otherwise, the tracker method will return the tracker instance, allowing tracker methods to be chained: e = AsyncEmitter("d3rkrsqld9gmqf.cloudfront.net") t = Tracker(e) t.track_page_view("").track_screen_view("title screen") The set_subject method will always return the Tracker instance. 4. Logging The emitters.py module has Python logging turned on. The logger prints messages about what emitters are doing. By default, only messages with priority “INFO” or higher will be logged. To change this: from snowplow_tracker import logger # Log all messages, even DEBUG messages logger.setLevel(10) # Log only messages with priority WARN or higher logger.setLevel(30) # Turn off all logging logger.setLevel(60) 5. Pycontracts The Snowplow Python Tracker uses the Pycontracts module for type checking. The option to turn type checking off has been moved out of Tracker construction: from snowplow_tracker import disable_contracts disable_contracts() Switch off Pycontracts to improve performance in production. 6. The RedisWorker class The tracker comes with a RedisWorker class which sends Snowplow events from Redis to an emitter. 
The RedisWorker constructor is similar to the RedisEmitter constructor:

def __init__(self, _consumer, key=None, dbr=None):

This is how it is used:

from snowplow_tracker import AsyncEmitter
from snowplow_tracker.redis_worker import RedisWorker

e = AsyncEmitter("d3rkrsqld9gmqf.cloudfront.net")
r = RedisWorker(e, key="snowplow_redis_key")
r.run()

This will set up a worker which will run indefinitely, taking events from the Redis list with key "snowplow_redis_key" and inputting them to an AsyncEmitter, which will send them to a Collector. If the process receives a SIGINT signal (for example, due to a Ctrl-C keyboard interrupt), cleanup will occur before exiting to ensure no events are lost.

7. Self-describing JSONs

Snowplow unstructured events and custom contexts are now defined using JSON schema, and should be passed to the Tracker using self-describing JSONs. Here is an example of the new format for unstructured events:

t.track_unstruct_event({
    "schema": "iglu:com.acme/viewed_product/jsonschema/2-1-0",
    "data": {
        "product_id": "ASO01043",
        "price": 49.95
    }
})

The data field contains the actual properties of the event and the schema field points to the JSON schema against which the contents of the data field should be validated. The data field should be flat, rather than nested.

Custom contexts work similarly. Since an event can have multiple contexts attached, the contexts argument of each trackXXX method must (if provided) be a non-empty array:

t.track_page_view("localhost", None, "", [{
    "schema": "iglu:com.example_company/page/jsonschema/1-2-1",
    "data": {
        "pageType": "test",
        "lastUpdated": "2014-02-26"
    }
}, {
    "schema": "iglu:com.example_company/user/jsonschema/2-0-0",
    "data": {
        "userType": "tester"
    }
}])

The above example shows a page view event with two custom contexts attached: one describing the page and another describing the user. As part of this change we have also removed type hint suffixes from unstructured events and custom contexts.
Now that JSON schemas are responsible for type checking, there is no need to include types as part of field names.

8. Upgrading

The release version of this tracker is 0.4.0.

9. Support

Please get in touch if you need help setting up the Snowplow Python Tracker or want to suggest a new feature. The Snowplow Python Tracker is still young, so of course do raise an issue if you find any bugs.

For more details on this release, please check out the 0.4.0 Release Notes on GitHub.
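As a footnote to section 7: the schema/data envelope of a self-describing JSON can be sanity-checked before tracking with a small helper. This is a hypothetical sketch, not part of the tracker — it checks only the envelope's shape, whereas real validation happens against the JSON schema the schema field points to.

```python
def is_self_describing(event):
    """Check the schema/data envelope shape described in section 7
    (shape only, not actual JSON-schema validation)."""
    return (
        isinstance(event, dict)
        and set(event) == {"schema", "data"}
        and isinstance(event["schema"], str)
        and event["schema"].startswith("iglu:")
        and isinstance(event["data"], dict)
    )


good = {
    "schema": "iglu:com.acme/viewed_product/jsonschema/2-1-0",
    "data": {"product_id": "ASO01043", "price": 49.95},
}
bad = {"product_id": "ASO01043", "price": 49.95}  # bare properties, no envelope
print(is_self_describing(good), is_self_describing(bad))  # True False
```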
https://snowplowanalytics.com/blog/2014/06/10/snowplow-python-tracker-0.4.0-released/
User talk:Verdy p/Archive 2015

Hello!

Just a friendly notice that you can edit wiki content more efficiently and robustly using semi-automatic wiki editors. But be careful and check your edits for correctness. Sometimes I'm using:

1. Options-preferences (CTRL+P): Project: custom http://wiki.openstreetmap.org/w/
2. File-login (CTRL+L) Add Username: Verdy p

You may also store your wiki password, but this is insecure/up to you. Xxzme (talk) 00:09, 4 May 2015 (UTC)

- I have tried it in the past on other wikis, and really don't like this tool (too insecure and very error-prone). Thanks anyway for the suggestion.
- On the other hand, I sometimes use PyWiki scripts on some wikis. — Verdy_p (talk) 00:10, 4 May 2015 (UTC)

Please save your time doing edits about Category:Administrative and its variants

It was massive and useless for any purpose/task. Currently it contains 1/3 unsorted information. For example, there is nothing administrative in Category:Quality_Assurance. It is about tools/services we use.
Xxzme (talk) 02:28, 4 May 2015 (UTC)

- There's a cleanup requested at the top of the page, and grouping pages by language helps maintain them (notably because their actual page names are translated and their language navboxes are not always working correctly due to missing links).
- This is not a lot of edits to do, even if it takes some time; these edits are simple to do. — Verdy_p (talk) 02:30, 4 May 2015 (UTC)
- Yes, this was a really massive cat and I placed a label. Okay then, but don't forget to {{hiddencat}} language versions and copy the {{Cleanup}} request into language versions too, or at least refer to changes in Category:Administrative. Xxzme (talk) 02:41, 4 May 2015 (UTC)
- OK, I'll add the template in those few categories.
- Sorting pages by language is needed, as well as filling the many missing categories and solving interlanguage links spread everywhere.
- This category is just the first one where I experimented with performing cleanup (in addition to the top-level one that is now correctly structured).
- This is not overcategorisation: the intent is to use the same categories as are used in English and other languages, and they reproduce the same existing structure.
- But it still keeps the English categories clean and more easily maintainable (this is difficult when there are various translated page names filling them and we don't really know how they are categorized without following various links). I just consider what exists now: it is also easier to perform maintenance and recategorisation by topic when titles use a single language in a category, and easier to see some naming conventions (this still does not prohibit renaming pages to show an alternate localized title: language navboxes keep working even if the listed links are redirects to a translated name). — Verdy_p (talk) 02:48, 4 May 2015 (UTC)

Language templates

Hello, I've noticed that you added the name of the page as a parameter to several instances of Template:Languages.
Isn't that only necessary if the page name is different from the canonical (English) name? --Tordanik 09:42, 8 May 2015 (UTC)

- That's to make sure it is kept before the page is renamed, to preserve interlanguage links. Many translators also forget to include the original name before renaming their pages (and then have interlanguage links that no longer work: you made the same error while renaming some English pages!). In fact such use has long been documented in the translation guide (linked from the Languages box). — Verdy_p (talk) 09:44, 8 May 2015 (UTC)
- Surely if the English page is moved, the other languages' pages should be changed to avoid the redirect? But ok, I understand that it makes problems less likely. --Tordanik 13:18, 8 May 2015 (UTC)
- This concerns all renamings: either when renaming a translated page, or renaming the English page: the Languages bar will still continue working even if you forget it or leave the redirect on purpose (for example, pages in different languages do not have the same title; there's a common English name that will redirect to them from any page in the Languages bar).
- In summary, it's best to include it even in the English version, directly at the beginning, so that translations will be created by copying the name, and links will continue to work (otherwise we don't know exactly what title the new name or the old names were referring to; we have to look in page histories...). — Verdy_p (talk) 13:23, 8 May 2015 (UTC)

I have a suggestion that we should utilize data from Template2 when we sort software

I'm not sure why it wasn't done before: Template:Software2. Probably not many editors can edit the template so that it places categories (Category:Software by supported platform) after you split the |platform= value by ';'. I suggest not using for-loops in the template (I don't think there are more than 8 platforms per program). Can we use explode 4 or 8 times per each Template2 - what is your opinion on this? Will it work?
Xxzme (talk) 12:31, 8 May 2015 (UTC)

- I also don't like "for-loop" emulations in templates; they are extremely costly/inefficient/slow. It is best to expand them when the number is known. This is a very old-fashioned way of doing things that does not really help.
- However, when unrolling loops, we can still use a subtemplate to simplify, but we should avoid excessive expansions.
- I have not looked at "Software2" to see how this could be done, though. There are strange uses of the language code (this is also true for the template used to describe tags in multiple languages, as they generate categories without prefixes, which are not easily navigable).
- — Verdy_p (talk) 12:35, 8 May 2015 (UTC)
- Actually I don't have concerns about performance; probably MW is fast enough nowadays. I will take a little break from the wiki today; if you have interest in this you can sort everything by platform here: Category:Software by supported platform.
- [1] [2]. But our Special:Version says there are no loops enabled. Xxzme (talk) 12:53, 8 May 2015 (UTC)
- I was speaking about the for-loop emulation that still exists in Commons or in some cases in English Wikipedia (which is being phased out due to its huge cost and its use of real tricks of the MW syntax).
- This wiki server is still outdated (an old version) and not performant enough to use loops (and we still don't have the Lua extension deployed here to make this easier). — Verdy_p (talk) 12:56, 8 May 2015 (UTC)

weeklyOSM

Hey, are you mad? We are currently building up a wiki site and category for our news blog, why do you delete this?? --Ziltoidium (talk) 15:19, 9 May 2015 (UTC)

- I am not "MAD" (why be insulting)?
- Because there's another category using capitals.... See "Category:WeeklyOSM", not "Weeklyosm" with lowercase (which has NO content, has been blanked, is not categorized, and has no links to it).
- I have NOT asked for the deletion of the correctly named category which has contents, has links, is categorized...
- So the all-lowercase-named category is empty and is a clear duplicate (that is why it was blanked, when it should have been request-deleted).
- This is standard cleanup: blanking pages is NOT enough, we ask for deletions. — Verdy_p (talk) 15:26, 9 May 2015 (UTC)

Hi! Sorry, I didn't mean to offend you. Really sorry for that. I now see and understand what you meant. weeklyosm vs weeklyOSM. In this case you are right! It's just that I got this email that someone deleted the categories we are currently working on. I didn't know there was one wrongly misspelled category. That's ok, this one can really be deleted. I simply thought someone was destroying our current work. Sorry! --Ziltoidium (talk) 15:42, 9 May 2015 (UTC)

I simply prefer to write on the discussion page and don't edit without asking. My way.. --Ziltoidium (talk) 15:45, 9 May 2015 (UTC)

Breaking changes

Hello. In future, before making significant changes to templates used across the entire site, can you please check with wiki admins? You can easily find some in the IRC channel. You have a history of making sweeping changes that break a good chunk of the wiki's content, including the most recent edit to the Tags template. Thank you. --Dee Earley (talk) 16:25, 11 May 2015 (UTC)

- Can you point to a page where there's a problem? I used many tests and previews before submitting and I made sure that it would work across languages, but maybe there's something I've not seen. — Verdy_p (talk) 16:47, 11 May 2015 (UTC)
- Oh, I forgot to remove some final test code AFTER the template (this "find" code was part of a final preview test and I should have removed it before submitting; after previewing it, as it was below the doc part, I did not see it).
- My goal was only to unify the translated versions of this template and make them point to it (I already made the French version use the generic version) because we don't need translated versions that have to be maintained separately or that generate inconsistent categorization.
— Verdy_p (talk) 16:55, 11 May 2015 (UTC)

- I understand your intention but that doesn't change the fact that you are making sweeping changes to templates that are used in over 50% of the 30K content pages on the wiki. My request still stands: please check with other admins or on IRC before making such sweeping changes. Thank you --Dee Earley (talk) 19:31, 11 May 2015 (UTC)
- These are not "sweeping changes": I have not removed any existing working functionality and the appearance of the site is not changed. Linking on the site is also not changed, and the 30K+ articles are still written the same way and continue working as intended.
- There is still a lot of cleaning to do in many articles because of the red links that are generated or that go to the wrong page, or that are incorrectly categorized, or categorized into nonexistent categories... but not because of this template (in fact there are many other old and broken templates that should be unified to use this maintained template).
- If you look at what I did in the recent week, there was a lot of cleanup (including correcting the many errors that Xxzme left all around when he ignored all languages other than English and when he restructured and renamed many pages without even looking for broken redirects).
- The few bytes that were left in this template were unintentional and accidental, and extremely easy and fast to remove (anyone could have made such an accidental error, even you and all admins; the result was also not dramatic, just some small garbage displayed after the links), as they were clearly separated; there was no severe "downtime", and the site was still fully functional even if there were some extra characters displayed: they just demonstrate to you that I was performing various tests before submitting, because I know that this wiki has some differences from Wikimedia's wikis, notably the version of MediaWiki and the set of supported extensions and their versions. — Verdy_p (talk) 19:40, 11 May 2015 (UTC)
- While you may not have changed page content this time, you are making (breaking) changes to templates that are used in every page of the wiki, and HAVE caused problems. Changes that affect a large proportion of the wiki are, by definition, sweeping. (And making changes while awaiting a reply isn't helpful) --Dee Earley (talk) 19:50, 11 May 2015 (UTC)
- You are linking to things dating from 2012, when translation of this site was completely incoherent and non-functional. At that time, only English was usable and navigation was broken in many more places than the few that were working, and even those became partly broken within a short time. After my change, there were tens of thousands of NEW links suddenly working and many more pages could be translated!
- Almost all the current translation system comes from my work on this wiki. I have greatly helped clean up the situation. Even at that time, I reacted very fast (and you were not present at that time, and there was in fact nobody, except me, here who understood how to even start the cleanup process for translations!).
- That work is still not finished (and very few people understand how to do it or how to perform a compatible transition: they prefer rewriting or adding new ways of doing the same thing...
and then not maintaining it, because they don't care about the later maintenance). — Verdy_p (talk) 19:54, 11 May 2015 (UTC)

- As I said, I understand what you've been doing, and thank you, but you did just break 15K pages. That is fact. All I'm asking is that you run these changes past an admin or IRC first. Simple. --Dee Earley (talk) 19:58, 11 May 2015 (UTC)
- I did not break 15K pages in such a way that they became completely unusable. The fix was extremely easy. Even admins here are making tons of errors (without discussing them anywhere). I have corrected many of them (invalid CSS, bad assumptions, incorrect support for RTL languages by assuming English behavior everywhere...). Most of these problems are unknown to them; they are not experienced in them. It's not something that I condemn: nobody can know everything and there are always unsuspected things; everyone can do things that may break accidentally but which are extremely minor compared to the improvements added. I assume good faith.
- So excuse me for this very small perturbation that was not so dramatic (and no, 15K pages were NOT all affected even if 15K pages use the template).
- If you are not convinced, look at how I recently fixed all the Map Features in Azerbaijani (I chose this language first because it has low volumes of visits and it was less risky, but it allowed making sure that the solution would work before going to more major languages)...
- Finally, to facilitate the maintenance of these smaller languages, it is necessary to merge the solutions into a more centralized one, so in the end this will require some changes in the templates used in the English versions to support some features, or to facilitate the transition (with additional temporary code added for compatibility).
- Some templates do not need to be specialized by language, notably Template:Tag (my intent is to redirect the specialization to the central basic version, which also works for English).
— Verdy_p (talk) 20:08, 11 May 2015 (UTC)

- Thank you for understanding, I look forward to the further discussion on IRC. --Dee Earley (talk) 09:54, 12 May 2015 (UTC)
- I have never used IRC for anything; I also don't chat and don't send SMS, all for the same reason: I do not like instant communication, which I consider garbage thrown away immediately and without any history. It's also not possible to manage one's time with it. Such use is extremely limited, to basic administrative tasks (like confirming codes). — Verdy_p (talk) 11:35, 12 May 2015 (UTC)

Stats page

I see you edit warring on the Stats page. Please don't make a nuisance of yourself so soon after we've gone through the hassle of dealing with User:Xxzme. Just give us a break please. Stop editing the wiki for a while if you can't do it harmoniously. -- Harry Wood (talk) 09:01, 13 May 2015 (UTC)

- I've not been edit warring; I've discussed it with someone already... Give him the time to reply if he wants. The old 2007 page is still there with its history, but separate from the redirects.
- And if you are talking about Xxzme, no admin has replied to my comment about him since his blocking... You are using a hasty solution against me here for something that was completely minor. I had made many fixes, removed broken links and fixed some layout in the Statistics page; they were completely blanked and none of my changes were applied to the page with the newest name (which is "Stats", not "Statistics", the historic name created months before, but that is now in "Statistics/old").
- I really suggest the renaming ("Statistics" can be safely deleted now; it only contains my own history, no real edits, and the 2007 history is fully in "Statistics/old". Once this deletion of "Statistics" is done, with its dummy talk page also redirected with only one dummy edit from me, we will be able to restore the name that "Stats" should have kept, by renaming it finally to "Statistics", and then the new "Stats" redirects can be deleted once the remaining old articles and talk pages reference either "Statistics", or possibly "Statistics/old" for a few of them, if we want to keep this 2007 early content and history). — Verdy_p (talk) 09:03, 13 May 2015 (UTC)
- No hurry solution. I'm not telling you to stop editing the wiki. I'm telling you to stop editing the wiki if you can't do it harmoniously. -- Harry Wood (talk) 09:32, 13 May 2015 (UTC)
- What does "harmoniously" mean here? My corrections were effective, and BOTH names were and are still referenced, and some of them were even linked by double redirects that I had fixed.
- Even the content was linking to a blocked abusive external site ("j.mp"): I had removed it. It was also linking to old sites that are not even reachable now (servers dead for a long time): these are clean now with my changes.
- I had restructured the top of the page to give links to internal pages before the OSM-wiki-made reports, and separated out the reports that have not been maintained since 2011.
- Then I grouped the reports by the nature of the statistics (registered users, contributions/changesets, geographic objects/GPX).
- All sites in osm.org are linked from the top menu of the page, without having to scan the long page to find them in various places in the middle of reports. This required a few tries to find the best grouping but also to decipher the ambiguous descriptions displayed within the images themselves.
- All images also have a descriptive heading making it possible to locate them from the TOC (some of them are hard to differentiate, given that unrelated topics were mixed).
- External sites are also separated between those that are active with updated statistics, and those displaying only old stats.
- Was I wrong? And these changes were not an "edit war"; they had just been blanked blindly... for non-obvious reasons. — Verdy_p (talk) 09:40, 13 May 2015 (UTC)
- Yeah all of that's fine, but I read the edit history as suggesting you're edit warring a little bit with Tordanik. Sorry if I got that wrong. Xxzme has been testing my patience. -- Harry Wood (talk) 09:46, 13 May 2015 (UTC)
- But Tordanik has not replied to my question on his talk page... How would he justify an "edit war", given that apparently he has still not read it (and he just performed a single blind blanking)?
- And this question is absolutely not related to my edits on the page itself, only to its "preferred" name selected via redirects. Tordanik did not look at anything when he blanked every correction and check I did. He blanked that blindly by overwriting a redirect over everything, and by just keeping the old unmaintained content of "Stats" (now it shows my own corrections that were in the other page; nothing has been destroyed in the interim, and nobody has contested these changes for now, so this is not an "edit war"). — Verdy_p (talk) 09:51, 13 May 2015 (UTC)

Description template

The template has to set a page title because any underscores in the key and value names are displayed as spaces otherwise. Also, what's the point of a language box when the same template is used in every language? --Andrew (talk) 09:34, 16 May 2015 (UTC)

- The language box was there, I did not add it. Or we are not talking about the same template.
— Verdy_p (talk) 09:35, 16 May 2015 (UTC)

- OK I see, it is part of the generated code, but it is just not needed on the Template page itself (which is not a tag page and lacks some parameters). I've put it in includeonly. — Verdy_p (talk) 09:41, 16 May 2015 (UTC)
- Note that there are still some problems to solve. For the capitalization problem I think I can find a solution, even if DISPLAYTITLE is used. — Verdy_p (talk) 09:42, 16 May 2015 (UTC)

Template:Collapse

see also: Template:Diskussion and Template:Popup and Template:Public Transport in Austria and Category:Navigational templates and ... --Reneman (talk) 16:00, 16 May 2015 (UTC)

- What do you want me to do there? For now I've just unified the French version and the English version, but maybe these templates you cite could use the English version as well? — Verdy_p (talk) 16:02, 16 May 2015 (UTC)
- Also I'd like to make a further request to admins on the content of the Javascript so that it will no longer display "hide" or "show", but black triangular symbols, with an additional hint (title="" attribute) when hovering them, displaying translatable words (these translations would be generated by MediaWiki on the server side at data-hide="hide" data-show="show" attributes on elements with class="NavFrame", which can be set by translatable templates).
- Finally, the Javascript currently only looks for elementsByType('div') and then checks if the element has class="NavFrame", when it should just look for elementsByClass("NavFrame") so that we can also use the same Javascript for collapsible table elements, or collapsible lists (including indented blocks, which are in fact definition lists, i.e. "dl" elements in HTML)... It would also allow us to collapse horizontal lists (spans with class="NavFrame" that can contain collapsible subspans in their direct children). There's no need for new CSS classes.
- — Verdy_p (talk) 16:13, 16 May 2015 (UTC)

Template:Documentation

The margin shows where the template is over! The large margin is correct! 2px is too little! --Reneman (talk) 18:33, 16 May 2015 (UTC)

- It's impossible for the doc part to collide with the content given that its margin is still > 0, and given that the doc part uses a thin contrasting border around its content.
- We use such a thing to measure margins created by the template (too many templates forget to hide final newlines, which generate additional paragraphs, and this tiny (but sufficient) margin helps diagnose that instantly). — Verdy_p (talk) 19:13, 16 May 2015 (UTC)
- Note: we are not talking here about margins that separate two paragraphs of text or a heading. The intent is to measure the vertical space visually, and 2px is largely enough (it could even be 1px, but some browsers overflow a 1px box border in the wrong direction and could eat one outer pixel; that's why there's an extra pixel).
- In fact when you put text in a box, lines of text already have their own leading and trailing space in the line-height, and here also we need minimal vertical margins (2px) after a contrasting border, but 0.5em of horizontal margin.
- This is a general rule: don't add more vertical margins than necessary, otherwise they look very unbalanced. — Verdy_p (talk) 19:18, 16 May 2015 (UTC)

.template-documentation {clear: both; margin: 1em 0px 0px; border: 1px solid #AAA; background-color: #ECFCF4; padding: 1em;} (source: w:Template:Documentation)

Element {clear: both; margin-top: 1em; border: 2px dotted #666; background-color: #ECFCF4; padding: 0.6em;} (source: w:commons:Template:Documentation)

Element {clear: both; margin: 1em 0px 0px; border: 1px solid #AAA; background: none repeat scroll 0% 0% #ECFCF4; padding: 1em 1em 0.8em;} (source: w:fr:Modèle:Documentation)

- These are bad examples! Excessive top margin but no bottom margin at all!
And this causes lots of problems with layouts creating boxes within boxes (the top margins add up!). Due to these styles, almost all boxes are forced to cancel these default styles with style="" attributes in lots of places. (And most people also forget that with any non-zero margin, border or padding, a width of 100% overflows the container box, and we start seeing horizontal scrollbars, or unaligned margins, and content that can't even fit the screen width!) — Verdy_p (talk) 20:13, 16 May 2015 (UTC)

They all use "margin: 1em"! We do that too! For this we need no discussion! Please no edit war! I do not want to protect the page ;o) Thank you for your understanding. --Reneman (talk) 19:53, 16 May 2015 (UTC)

- This is not a template meant to be used in articles; it is specifically for developing templates, documenting them and previewing them. 1em is also largely excessive for templates that have no preview at all but only their doc page, which is dense; it should remain near the top of the page.
- Major wikis don't use excessive margins for such a technical template, where the thin border is used on purpose to create a visible delimitation (which is much clearer than a mere margin between two paragraphs).
- — Verdy_p (talk) 19:57, 16 May 2015 (UTC)
- The use of the template is in Wikipedia exactly the same as here! --Reneman (talk) 20:03, 16 May 2015 (UTC)
- Not all wikis. With a 1em margin you will almost never know by looking at the preview whether the inclusion of the template includes a margin or not, because it will collapse within the margin of the doc part (this is something on which people lose a lot of time, looking for where the dangling margins are in their template code). — Verdy_p (talk) 20:10, 16 May 2015 (UTC)
- I think you do not understand. I said, I will not discuss! This is a template from Wikipedia. We use the template exactly as Wikipedia does. The function and layout stay identical to Wikipedia.
If you want to change it, change it first at Wikipedia! --Reneman (talk) 20:23, 16 May 2015 (UTC)

- Since when does this site behave like Wikipedia? Its design is extremely far from Wikipedia on all pages, and it cannot even use the same extensive set of templates because here there are lots of extensions not installed, and this site also runs an old version of MediaWiki.
- And once again, this is not a template for formatting content, but for separating the doc from the template to preview; the doc page is completely separated by its already very visible contrasting border and decoration.
- We cannot import lots of things from Wikipedia and we are not documenting Wikipedia here! This is not the same project. And there's not just Wikipedia. Why don't you believe me when people complain about margins that are impossible to measure and predict in templates and that are dangling everywhere (all admins here do not even know where to place newlines or how to use the noinclude sections correctly)...

Breaking Templates (again)

Today the Template:RelationDescription doesn't work. I (and many others) have noticed that you make many changes to global templates without adequately testing them. You have a personal user space which is the appropriate place to test the changes you are making before amending pages which affect many others. I would strongly suggest sending emails to the global talk list before unleashing such changes. The effect is to make hundreds of wiki pages unusable as documentation. In addition it is noticeable that you consistently fail to engage with criticism. As well as breaking useful documentation you make lots of unnecessary work for other wiki contributors, users and specifically for wiki admins. I don't expect you to pay any heed to this comment, but if this behaviour continues I will raise it formally on the main OSM mailing list. SK53 (talk) 09:41, 17 May 2015 (UTC)

- Without testing? I did a lot of tests.
Please be more specific, because I have not seen "hundreds" of pages affected. What I did was to allow more pages to be translated. — Verdy_p (talk) 09:42, 17 May 2015 (UTC)

- OK, the last change was to put a noinclude section to drop the language links on the template page itself, and there was a single / missing. This is not a huge change; it is extremely easy to see and fix.
- And definitely this is not "many changes".
- If only you looked at the server history you would see that I effectively make LOTS of tests and previews before submitting, and even after that I visit pages using the templates to detect other possible impacts. I just forgot to do it when I added the two-character noinclude section to remove language links; this was a typo and effectively I had not tested it immediately.
- I do many fixes that you don't see but that you are profiting from. I have fixed thousands of pages that had not been working or linking properly for a long time.
- So, when I show you the beautiful moon with my arm, you are just looking at the tip of my finger... — Verdy_p (talk) 09:45, 17 May 2015 (UTC)
- In fact there was really a bug in the template even before my change: I see that you attempted to revert my change but it did not work, and you also reverted your revert.
- My change in the template correctly documented its purpose; I did exactly what was described, and this was definitely not "lots of changes".
- The problem of this wiki is that it has NO central place to discuss things; everyone has their own view about what is the appropriate place for discussion. In fact the only thing that works is talk pages with those that are present. Some want to use some specific IRC channel, some want the global list (which is really overpopulated and does not discuss the wiki; I unsubscribed from it long ago as it is not focused enough). Even the French talk list alone has a lot of traffic. And within it, talk about the wiki is also a very small minority.
In addition, not all admins on this site follow all the lists. So we only make best efforts with those that discuss on talk pages. And we just want a description of what we are doing and when.
- Even you have made lots of errors on this wiki that I have fixed. Your history shows your own errors, and very few of them you have corrected yourself. We just collaborate, but if you don't understand the issues, please remain calm and ask; don't just send such an alarm with excessive words criticizing everything. We are here to help each other, not to criticize what the others do. Everyone makes small errors that are easily solved. — Verdy_p (talk) 10:04, 17 May 2015 (UTC)
- But it keeps happening over and over and over again. I've already asked you to discuss changes to global templates on IRC. --Dee Earley (talk) 19:44, 18 May 2015 (UTC)
- Specifically on the topic of IRC, I would prefer discussion on the respective wiki talk pages. On IRC, only people who are online at the time will see it, but on the wiki, everyone interested in a template will see discussion happen on their watchlist. Moreover, many wiki contributors don't use IRC, but all wiki contributors use the wiki. --Tordanik 09:14, 19 May 2015 (UTC)

Complicating my work

Hi Verdy p, it seems that you, for a reason which I still do not grasp, insist on destroying the category Czech_Documentation. We, Czech translators, use it and need it. Why do you believe that this category needs to be removed, when someone else is telling you that we need it? Does the server suffer from having one more category? You write "This category is no longer needed (there's already a parent Czech category)." Where is this category? And even if it exists, what's wrong with having the page in two different categories if we want it so?
- Like English, it is in Category:Cs:Data standards (feel free to rename "Data standards" in Czech). You can populate it exactly the same way as English or others.
— Verdy_p (talk) 14:18, 18 May 2015 (UTC)
- Also take as examples the German and Japanese categories, which are very populated (notably Japanese). Cross-language navigation is very important, notably because there are lots of missing translations, and creating separate trees just creates more of a maintenance nightmare and causes the site to not be used enough by people not reading English. 14:22, 18 May 2015 (UTC)
Chrabros (talk) 14:11, 18 May 2015 (UTC)
- I do not destroy it; it is being reparented just like all other documentation by language. Czech is not an exception. Categories use language prefixes just like pages and templates.
- And there's no need to maintain a dual structure (overcategorization in this wiki is a severe problem; it does not really help in discovering which contents are available in one language or another; we want the same structure across all languages, as much as possible, except for things that are not translatable).
- Don't you see that you want to keep this overcategorization and just keep your category that mixes ALL topics in one category? This won't work for the long term, and already this is a problem with many pages in Czech that can't be found where you seem to expect them. — Verdy_p (talk) 14:15, 18 May 2015 (UTC)
- All categories in Czech will thus have the same structure as English or other languages when they are translations. — Verdy_p (talk) 14:16, 18 May 2015 (UTC)
- OK, but only until all the work is done and the categories settle. Would it be possible to keep this category intact even if you consider it overcategorisation? As we do not have a full language space on this wiki, there is no other way I know of to track new Czech translations other than to keep them in one category. Chrabros (talk) 14:24, 18 May 2015 (UTC)
- You can track the work by looking at the migrated categories. The Czech documentation is already within "Category:cs:Categories", but all other topics are being recategorized per language with an identical structure.
- I am not destroying the content, but keeping the dual structure does not help in seeing what has been done or not. Everything remains in a category in Czech, except that once migrated to the appropriate place, there's no need to keep a duplicate entry. Slowly, the "Czech documentation" mix can be reduced, topic category by topic category, page after page, without creating new red links.
- This new structure in fact allows easier translation, because you can predict where to put pages and avoid mixing lots of things with the English content.
- If you want a tracking category for translations in progress or missing, there already exist templates and tracking categories for that. — Verdy_p (talk) 14:30, 18 May 2015 (UTC)
- Maybe I'm butting my nose in where I shouldn't, but I agree with Chrabros that there is no problem with Category:Cs:Czech Documentation. It's not even an abandoned category. If the Czech community wants it, they can have it, even if there may be some overlap. --Jgpacker (talk) 16:22, 18 May 2015 (UTC)
- I have not said that it was an abandoned category... — Verdy_p (talk) 16:22, 18 May 2015 (UTC)
- Let's try it from the other end. Could you point me to a place on this wiki where it is said that it is prohibited to create and use a new category if I want it? I do not believe that there is such a rule. I repeat that we need this category to maintain our translation effort and you are currently messing with it. Please stop vandalising our work. Chrabros (talk) 17:34, 18 May 2015 (UTC)
- I have not said that this is prohibited (but the wiki has long documented how to sort languages correctly; the link for that is in EVERY language bar. First start reading it: the rules have not changed, this has long been the correct way to do it, and all other languages are following it. If you want good examples, look at Japanese, French, German, and partly Russian; the same is also used for all other less important languages and it works).
- This is also the SAME model used by Wikimedia for its international sites (Commons, Meta), so this is nothing new (except that Wikimedia uses suffixes, while this wiki uses prefixes instead).
- And as I said, I'm not "vandalising" but sorting the contents and making lots of links work as expected.
- NONE of the pages are left uncategorized, and all your pages are in a Czech category.
- However, this wiki WANTS us to use language prefixes (otherwise it's simply impossible to make interlanguage links work without extreme maintenance everywhere, and finally everything gets mixed up).
- Look at the current categories now, and if you remember what it was before, it was simply impossible to navigate in any language other than English.
- Normalizing the prefixes has greatly helped increase the number of translations (even your work has been facilitated), because we could easily find where to create links from/to, and we can now see which pages are missing translations and which have a translation. They are all managed in the same group.
- Stop accusing me of vandalizing, because this is definitely not vandalism. I have not destroyed or removed any functionality; I've added functionality in a very clear way, I've even simplified your maintenance work, and I've fixed many non-working links in Czech pages (but you don't see them simply because they are working now; before, these links were red or went only to English pages)!
- So I really don't understand why you say that I complicate your work (unless you want a private list of pages, but categories are not user pages: you can create these lists for yourself on your user page if you wish). There's no interest in maintaining, in the general categories, a duplicate system that requires additional maintenance (also because this is the "lazy" solution where you collect random pages on all topics and don't want to sort them, or even allow them to get outside of this small world where English is on one side with all other languages interlinked correctly, and Czech is left apart with links going nowhere). — Verdy_p (talk) 17:43, 18 May 2015 (UTC)
- Another note: this wiki is not made just for Czech translators. It is a distribution site where translations are expected to be read by many more users who can't translate the content and can't read English reliably.
- What you want is just a private collection of pages for your isolated work as a translator to Czech (mostly from English; I doubt you translate anything from German, Russian, or Japanese).
- Interlanguage links (in all directions and between any pairs) are an important feature of international wikis and of international projects like this one. — Verdy_p (talk) 17:53, 18 May 2015 (UTC)
- It seems that you do not understand me. I do not care what you do with the categories on Czech pages. It was a mess before, so you probably are improving it. I only care about ONE category which you keep deleting from Czech pages. There is a reason why we want this category and I still do not understand why you have to delete it. It does not interfere with your re-categorization efforts. Does it? Chrabros (talk) 17:57, 18 May 2015 (UTC)
- Another note, if this is not clear: I am NOT the one that posted the idea of deleting the "Language Documentation" categories and that removed all their parent categories.
I have, however, reparented them into "Category:xx:Categories" where they should remain, and there's a lot of work to do to sort what they contain, in order to allow all these pages to be navigable both by language with the language bar, and by topic.
- I have not restructured the wiki completely like Xxxxzme did (he left red links and non-working interlanguage links everywhere he did that, and it was hard to follow him to correct what he never verified).
- But if a Czech page is the translation of an English page, it should have the same categories to allow the same navigation and the same maintenance.
- I do not touch pages that are untranslated or specific to a language (most of them are only for Wikiprojects by country, and these pages are just TODO lists which have no interest for being translated).
- Also, if pages do not contain any language-specific content, they are left in the English categories (most of them are utility templates that resolve links or just generate some presentation layout; if we can avoid translations by making them smarter, these templates are modified so that they'll be reusable in all languages, without complicating the task for translators). — Verdy_p (talk) 18:02, 18 May 2015 (UTC)
- Also, you have NEVER explained why you (only you) want to keep this list of pages. If this is important for your own work, just create a list of pages in your user pages. This won't pollute the rest and nobody will break your list. You've admitted it in the title of this topic: you're speaking of "my work", but this wiki is collective and does not belong to you only. Maybe this complicates your work a bit, but this is the tradeoff to make (I do not overcomplicate things, I simplify them instead; but a few things have to be done while admitting that we are never alone and others will not use your method).
My method of sorting things, on the opposite, is now used extensively by others and has long been documented (it is thus approved), even if it requires many incremental changes to be applied where it is still missing. — Verdy_p (talk) 18:07, 18 May 2015 (UTC)
- Could you please answer my previous question? Why can't we have one category containing all the pages in the Czech language? What harm does it do? Chrabros (talk) 18:12, 18 May 2015 (UTC)
- The reason is that I need it as a category. If someone from the Czech translators translates a tag or key page, it is added automatically to this category so we can see what was done. It works automatically without any other effort. BTW, to your ad hominem attack: I wrote "I" because I have translated the vast majority of Czech pages. Now there are two of us and we use this category to collaborate. Is it clear now? Can you just leave this one category alone? Chrabros (talk) 18:12, 18 May 2015 (UTC)
- And for now I have not even touched the Key:/Tag: pages, which will require work later. In other words, the categories that you want are completely unrelated.
- But of course there is also work to do on these many Key:/Tag: pages, because they are effectively incorrectly categorized (including in English).
- If there are missing translations in Czech (for a translatable page in English), you can find them in a dedicated category (but still not Tag:/Key: pages, which use an old system that still does not work correctly across languages). — Verdy_p (talk) 18:16, 18 May 2015 (UTC)
- All others are using Wikiproject pages when they cooperate on a task on this wiki. Why not you? (I have left Wikiprojects where they are, even if a few have translatable pages in multilingual countries like Switzerland, Italy, Belgium, Canada, or Ukraine.) — Verdy_p (talk) 18:18, 18 May 2015 (UTC)
- Again, can you answer me a simple question: why am I prohibited from creating and using one single category on this wiki when I find it useful?
Chrabros (talk) 18:23, 18 May 2015 (UTC)
- Again (I have already replied to you about this!), this is NOT prohibited, and I do not propose to delete the existing one. But populating the generic space is completely unnecessary (it would be specific to Czech and cannot be automated for others).
- But please avoid polluting the main translatable space. (Nobody can see the pages you want to keep an eye on: please create a wikiproject page for the pages to follow, otherwise you will continue to work alone and you will complain about everybody else that has NO view of what you need.) There's a Czech Wikiproject; why don't you use it to list pages to work on or to create your TODO lists? — Verdy_p (talk) 18:28, 18 May 2015 (UTC)
- Ideally yes, using a prefix is still preferred to using an adjective. It correctly instructs other templates, which can detect in which language it is populated (language names are complex; they vary depending on sources, or on their context of use, with variations caused by grammatical rules).
- But if you need them for your own Wikiproject, I suggest using "Category:Cs:Wikiproject Czech XXXX/..." so that you can still have generic pages for progress, to-do lists, and cooperative talks. You can even name the project "Czech XXXX" in Czech, provided you keep the prefix, to keep the main namespace clean. The other benefit is that it is easier to extract all the content in Czech, or to search in it, when it has the same "Cs:" prefix. (The remark applies in all 3 common namespaces: Main, Category, Template, where we want to separate languages from English)...
- But if this is just to list all contents available in Czech, there's a root category for that: Category:Cs:Categories. You can translate the latter term "Categories" into Czech by renaming the category page to "Category:Cs:XXXX" (preserve the prefix, but make sure that the English category name (after "Cs:") is preserved and that it is linked to your translated category name, as it will be used by interlanguage links). — Verdy_p (talk) 19:12, 18 May 2015 (UTC)

Avoid creating so many revisions

You create far too many revisions for your changes. Have a look at this, the 7 version history is dominated by the changes you made within an hour. The version history is an important tool, and you should take care not to spam it. A lot of that is simply you changing the text again which you wrote a few seconds earlier. Use the preview button, not the save button, and only save when you are confident that you don't intend to change that page again that day. In addition to that, please give descriptions of your changes. --Tordanik 09:37, 29 May 2015 (UTC)
- I gave the description, but the rest were minor typo corrections made when rereading more carefully. The diffs are clear and tiny about these changes, if you are interested in following them. This is not what we generally call "spam" (whose definition is the *massive* submission of undesired content to many users; 7 edits is not massive, I did not target any user, and there was no junk). There are many other users on this wiki that also make such changes; you would have to follow them as well if you are interested. There's absolutely nothing in wikis that forbids doing incremental changes. — Verdy_p (talk) 16:44, 29 May 2015 (UTC)
- And note that you also perform multiple edits to the same page (look at your own history).
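For illustration, the renaming scheme described in the message above could look roughly like this in wiki markup. (The Czech name "Kategorie" below is a hypothetical placeholder for whatever translated category name is chosen; the exact redirect conventions on this wiki may differ.)

```wikitext
<!-- On the English-named entry point "Category:Cs:Categories",
     keep it working for interlanguage links by redirecting to
     the translated category page (hypothetical Czech name): -->
#REDIRECT [[:Category:Cs:Kategorie]]

<!-- On the translated page "Category:Cs:Kategorie", keep the "Cs:"
     prefix and categorize it under the shared tree as usual: -->
[[Category:Categories|Cs]]
```

This sketch only shows the two pieces the message mentions: the preserved English-named page (used by interlanguage links) and the prefixed translated page that carries the actual categorization.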
Like me, this occurs more frequently when we are updating pages with small incremental changes than when creating completely new sections (there are far fewer things to check, with the exception of finding targets for possible links). Your edits on several "news" pages show similar behavior, with multiple incremental changes (it's much harder to do everything in just one edit without breaking many others). — Verdy_p (talk) 20:04, 29 May 2015 (UTC)
- But you'll be looking to improve this style of wiki editing in the future following Tordanik's polite suggestions, I assume. -- Harry Wood (talk) 17:01, 29 May 2015 (UTC)
- Even if this is not evident to you, I almost ALWAYS use preview, except on talk pages where it does not matter; this would be evident if you had access to the server log and not just the page history. In some cases the preview alone is not enough, when there are links to check or dependent pages to verify after the submission: I also view them when there are inclusions. And this can require a few other incremental edits.
- My number of previews largely outweighs the number of edits, by several factors. (And please note that I have some vision problems now; it's hard to see typos even when rereading: the preview only gives a global view of the edited section, but does not focus on looking at the whole page or at specific zooms for details. Unlike many users here, I correct my own typos and do not leave them for long. But frequently I detect other typos made by others, which are corrected in further edits on which I was initially not focused, as they were not part of my initially intended change.) — Verdy_p (talk) 17:08, 29 May 2015 (UTC)

Template: Stammtisch Wien

Please stop redirecting that template to another page.
This wiki is a resource for the whole OSM community and not your private playground. --Andrew (talk) 07:08, 14 June 2015 (UTC)
- I have not redirected it to another page; I have reverted an overwrite that was done recently and that wanted to kill active links. This page had long been in "DE:" (it is effectively purely in the German language, even if it displays mostly digits; all links are to German pages, and the template is only usable in German pages as well).
- The attempt by a user was to kill active links by *replacing* the page with an invalid deletion request banner (and this effectively breaks pages referencing it).
- It was also incorrectly removing the correct categorization, polluting the English or multilingual space, and it was also in a generic Template category that required subcategorization.
- I followed the rules about categorization per language and more precise categorization, but one user does not see that (or refuses to just read the reasons).
- I have NOT killed any reference; all links were properly working. So it is not for my own "playground", but really because there are too many overpopulated categories mixing languages. — Verdy_p (talk) 08:14, 14 June 2015 (UTC)

Reason for moving wiki articles and categories in Portuguese

Your edits have renamed an article and its associated category, causing the main article in English to no longer link to the Portuguese version of the article. Is there a reason for doing this? I think navigation across languages is now broken as a result. Apps are likely to link to the English page or to the language-specific page whose name is the same as in English (just an extra prefix); apps can't know the localized name of the article.
The OSM wiki runs an older version of MediaWiki that does not support interlanguage links. --Fernando Trebien (talk) 00:54, 30 June 2015 (UTC)
- After taking a closer look, it seems the edit at fault is this one: the English page name was modified without the proper update of the other pages. Cheers --Jgpacker (talk) 02:02, 30 June 2015 (UTC)
- Yes, but not only. The Languages template called at the top of pretty much every article in the wiki creates links to prefix:{{PAGENAME}}. The {{PAGENAME}} call is actually a magic word which cannot translate the original name of the article to a localized name; thus, language-specific articles must retain the name in English. Interlanguage link support in more recent versions of MediaWiki (as in Wikipedia) is supposed to solve that problem. --Fernando Trebien (talk) 02:36, 30 June 2015 (UTC)
- The actual trouble was that the English page was moved from Automated Edits with a capital E to Automated edits in March, so that the redirect pt:Automated Edits wasn't found. I've created a redirect that lets you navigate from English to Portuguese. --Andrew (talk) 06:15, 30 June 2015 (UTC)
- Conclusion: I did not do anything wrong. This was caused by someone renaming an English page and not checking the redirects that were also needed in translations. All was correct when I just forwarded the previous English name to the Portuguese name. The reason given for renaming the English page came from someone that did not realize that this is NOT Wikipedia and this wiki uses other rules (notably, renaming pages must be done carefully, checking the redirects and interlanguage links; he did not bother about that, causing the trouble).
- I have absolutely no responsibility for this recent issue, which was caused by someone else. — Verdy_p (talk) 05:15, 1 July 2015 (UTC)
- Good.
And no doubt you'll join me in giving a friendly "thank you" to Fernando Trebien for spotting the problem, and Jgpacker & Andrew for investigating and solving it... Problem solved.... Happy collaborating. -- Harry Wood (talk) 11:54, 2 July 2015 (UTC)

Category:Company vs Category:Companies_that_are_involved_in_the_OSM_project vs Category:Manufacter of GPS chips or units

- I doubt that Nokia is a Category:Company "Companies that are involved in the OSM project"
- I don't think we need to place any page in category "Company" (only sub-cats) Xxzme (talk) 19:00, 29 August 2015 (UTC)

Mixing Category:GPS (a single instance of GNSS) and Category:GNSS was a bad idea; see your edits here. Complex topics like Accuracy of GPS data belong to the GNSS category. For some users, "GLONASS" or Galileo is used **instead** of GPS, so why should we call them "GPS"? Instead of "GNSS" you may use "Satellite": "Satellite unit", "Satellite receiver", "Camcorder with satnav". Xxzme (talk) 07:33, 30 August 2015 (UTC)

":" for indentation

Hi Verdy, please note that I have undone your change from last May to Quality assurance. The ":" is not meant for indentation on normal text pages, as it produces a definition list in HTML, which is semantically not right here. Could you please tell me why you wanted to change the syntax of this page? Happy mapping! --Aseerel4c26 (talk) 20:06, 20 October 2015 (UTC)
- You're wrong, because now you've broken the bulleted list (introduced by the *): using a "p" element breaks the list by inserting a separating paragraph. Here the ":" was appropriate to keep the correct indentation level **within** the same bullet item. Yes, it generated a dl/dd, but that's the same as on talk pages, where dl/dd is fine without any leading dt. If you don't like dl/dd, then the only choice is to use a "br" element (but then you cannot include any newlines in the bulleted list item, as that would also break the bulleted list) or use a second asterisk to create a sublist.
But here dl/dd is a complement adding a definition to the leading bulleted item.
- Unfortunately, this is the wiki syntax (we don't write HTML elements, but the wiki parser still generates dl/dd for ":" even if there's no leading "dt" (i.e. a line starting with ";"), when it could of course generate an indented block within the existing list item). The ":" is the standard on MediaWiki for indented blocks; they preserve the lists in which they are inserted!
- Your fix is a bad trick, and the dl/dd do not have any impact and are safer here (including semantically). — Verdy_p (talk) 20:15, 20 October 2015 (UTC)
- Hi Verdy, thanks for your reply. Hmm, I do not see that I break the list. Look in the source here:
- There is exactly one list item for the one "Notes" entry. This list item just includes a paragraph in its text. Could you please have a look again at where you see "breaks the list"?
- Yes, the ":" works in MediaWiki and is used on talk pages, but only there (which is still not nice). It results in the insertion of a definition list inside the list item, where there is no definition list in reality.
- Could you please tell me where my "trick" is "bad", or worse? --Aseerel4c26 (talk) 05:23, 21 October 2015 (UTC)
- By "bad", I mean that it requires you to type everything on the same line, without any newline, even when there are multiple paragraphs. And you absolutely need to use only HTML tags (but you cannot include any div, including for floating elements). We are far from the wiki editing facility.
- The dl/dd is effectively still generated by MediaWiki, but it does not hurt at all there, even if the list is reduced to a list of dd (definitions, introduced in fact by the container element, which is the bulleted list item just above, acting as a title). And it's much simpler to edit.
- There is still a request in MediaWiki to generate the ":" lines as a blockquote when they have no leading line with ";" (dt). But it has still never been done.
Anyway, this is the way indented blocks are edited in MediaWiki, everywhere, not just on this page. Semantically it still does not hurt, and the semantic containment in the parent list is still not broken.
- So your way of doing that is just a "dewikification" that complicates edits. The generated effective HTML tags are still invisible, even if dl/dd is not the best choice here (they should be blockquotes, and not even paragraphs as you want to do). You won't reinvent the fact that ":" is used for indented blocks within a larger container, even if there is no dt (";"-led MediaWiki lines). Also look at the page: the indented block effectively follows a bold heading line, and this line should then be the "dt" item; the "*" bullets should not even be present: everything there is effectively a definition list.
- You want to be pedantic, but your pedantic way of writing is not even correct if you follow this logic. It is only a more complicated syntax for editing (with restrictions still on the contents you could place there, such as positioned blocks that add complements to a bulleted list item). Using explicit "p" elements is strongly discouraged in MediaWiki (also, they introduce extra vertical padding that separates them from the previous block of the list item, even though it is not separate but contained as a subpart of it).
- — Verdy_p (talk) 13:11, 21 October 2015 (UTC)
- Thank you, I will get back to you in a few days; I want to think about it again. --Aseerel4c26 (talk) 18:12, 21 October 2015 (UTC)
- Hi Verdy_p, you might have seen that I have set the page back to your version again. I am not really clear myself about where to go. It is still on my to-do list to think about it. Happy mapping in the next two sunny days! :-) --Aseerel4c26 (talk) 21:24, 30 October 2015 (UTC)

double redirects

Hi Verdy, just by chance, really (I was trying to open Key:roof:shape), I stumbled over double redirects as a result of your page move. Double redirects do not work.
In case you are not aware of this problem: please try to fix such redirects by editing the original link. See --Aseerel4c26 (talk) 19:45, 21 October 2015 (UTC)
- Where are there double redirects? I have always checked them after moves; maybe there was one edit not recorded in the list of links. If I forgot one (maybe the list of links was too long and it was on another page), sorry, this is easy to solve. — Verdy_p (talk) 20:36, 21 October 2015 (UTC)
- OK, this is fixed. I did not see one level of indentation in the long list, and I had fixed only one, but not all, of the existing aliases. — Verdy_p (talk) 20:41, 21 October 2015 (UTC)
- Okay, thank you! :-) --Aseerel4c26 (talk) 04:51, 22 October 2015 (UTC)

User blocked for 1 Week.

From: Verdy p <verdy_p@***.fr>, 31 October 2015 at 11:20, To: Reneman <rene******@***.de>
I absolutely oppose your accusation of "vandalism" and the reason you give, "delete contents from pages". Here is what you have done this morning:
* 31 October 2015 at 11:00, Reneman (talk | contributions) blocked Verdy p (talk | contributions); expiration: 1 week (account creation disabled) (Removing content from pages: Your changes are not discussed!)
* 31 October 2015 at 10:56, Reneman (talk | contributions) automatically marked revision 1236457 of the page Historic.Place/News as patrolled
* 31 October 2015 at 10:55, Reneman (talk | contributions) moved the page Historical Objects/News to Historic.Place/News (Undo by Verdy p (talk) VANDALISM!) (restore)
* 31 October 2015 at 10:55, Reneman (talk | contributions) deleted the page Historic.Place/News (Duplicated page: content was: "#REDIRECT Historical Objects/News" (and the only contributor was "Verdy p"))
I have absolutely not deleted any content. All pages on this subject (except the "/News" subpage) had already been renamed. There were missing links. But now you've created broken links to these news pages in existing translations.
Even links from the external website for this project were ALL tested. In addition, there were several double redirects that I had fixed on these pages (caused by a few pages renamed long ago; I am not the author of this old renaming of the base page). I had just rebased it the way it should be. I had properly checked all links. ABSOLUTELY NO content was removed; there was nothing to discuss, these were genuine corrections. And I had discussed these related changes. Visibly, you use the "revert and block" too speedily, without even thinking about what you do. Now I'll have to contact another admin, because you are acting on this wiki as if it were only yours, and because your sanction against me is, ONCE AGAIN, completely unjustified. But you do not want to discuss anything, even on the wiki, while you accuse me of not discussing. Maybe you may oppose one or two things, but I've always been contactable. You are completely wrong.
- "Historical Objects" is a project name. Nobody translates Windows as "Fenster"; a proper name cannot be translated!
- The community has repeatedly told you that you should discuss your changes beforehand! For the "Historical Objects" project, all contact persons are named in the wiki, including me! But you did not ask whether you were allowed to rename pages!
- The project is being reworked and has a new project name, "Historic.Place". -> A proper name, not translatable!
- As already pointed out, subpages must be taken into account when editing! The project has very many of them!
- As long as you are not willing to discuss your changes beforehand, you will be educated by me! Educational measure: blocking your account.
- Cleaning up after you cost me 3 hours!!!
- Those who will not learn must feel the consequences!
- Best regards, René from Mainz, from the "Historic.Place" team. --Reneman (talk) 12:09, 31 October 2015 (UTC)
- You have absolutely not replied to the concern about your false accusation of vandalism.
I did not delete anything.
- You never said anywhere, or to anyone, what restrictions you wanted to apply. If such a discussion ever occurred, its place can't be found (it is documented nowhere on this wiki).
- You acted completely alone and abused your privilege: for a single edit, you reverted it and caused me to be immediately blocked without discussion.
- You did not talk to anyone about the sanction you applied to me.
- In fact, you don't talk with anyone on this wiki; you just force your own point of view.
- You also make lots of hard deletions of pages without discussing them. You also delete many redirecting links that are kept for historical reasons (for external links to this wiki) or because they are needed for the interlanguage navigation.
- You have stated, without proving it, that the name "Historical Objects" was a formal, untranslatable project name. However, this name was ALREADY translated on the relevant pages. And it is not even the name of the external website, which is outside the control of the OSM Foundation and which you do not even own and administer yourself.
- All my edits were genuine, not abusive. Your sanction was clearly disproportionate.
- You refuse to learn and use the common practice on this wiki, which is to use the resources available to users on this wiki (we are not bound here to policies required by other external sources; the external wiki may have a policy, but I have not used it and not even broken it: that external website has 3 links to this wiki, and these links were completely functional; I did not break them).
- Things could have been easier if you had first attempted to contact me or to create an appropriate discussion place. But there was none. You just blocked me for "educating me", but you did not give any hint about what I could have missed, and there was absolutely nothing to learn from what you did or said. You only acted alone.
- In fact you don't even know the meaning of the term "Vandalism", and you have breached the policy related to abuses on services hosted by the OSM Foundation, developed by the Foundation and published on its wiki, when you used your privileges on this wiki. Visibly you need to learn something. Refer to the Foundation wiki, which clearly states what must be done if you think there's a real abuse. As you did not notify it formally, you are completely wrong.
- Finally, you persist in replying only in informal German, using words that are probably offensive/insulting, even when people write to you in English (I know you understand and write English). German is only a secondary language for me and I know it much less than English, and most users on this wiki can't read German, notably not your informal level of German, which is nearly untranslatable (that level of language may be used when chatting/emailing privately within a German-only community, but this wiki is definitely not German-only). Most users on this wiki, including admins from the Foundation, do not understand German and can't read you. You were granted some privileges only to perform some basic cleanup on the wiki or to block **evident** abuses such as spammed content or deletions, but this is absolutely not what occurred here. And there was absolutely no situation of emergency (in fact when you reverted the pages, you broke many of them and made many errors).
- I'm not at all responsible for the time it took you to correct your own errors after your reverts. You did not even ask anyone for help, and did not ask me what could be wrong. I could not help here because you had blocked me from doing anything or replying to you on this wiki. The time it took you is entirely your own fault, and NOTHING was broken when you started reverting me and blocking me immediately without discussion.
You've just transformed this wiki into a set of proprietary pages where you do not accept any edits except by you, and you don't want any help.
- You should also learn how to design a page. You've made pages containing section titles without any content except a link, with the same term, to another page. What I did was only to format it; all links were kept. The page I updated was much more readable. And I was about to discuss it, because the contents were ALREADY desynchronized in the German version. That's another place where you need education/training. There was nothing wrong in what I did.
- The last time you blocked me, it was for a single term which was ambiguous in German but distinguished in English and other languages. Alone, you took the decision to merge the two terms (including one for which I gave a correct translation; you've kept it). Nowhere did you discuss this merge, and now you've created a new desynchronization of German with other languages. Visibly you want to create a German-only version of this wiki and don't care at all about other languages, but you also have too strong a view of what should remain in German or what will remain in English: your edits make a wiki that uses a mixed language, too technical, not targeting real users but only a few users like you. And you don't want any newer users coming here to cooperate. Your action is completely anti-cooperative.
- I did not breach any policy; you did it twice, very abusively. You destroyed legitimate edits. If there's a vandal here, you are the vandal; your destructive actions are damaging for this wiki and cause many users not to participate, as they know you abuse your privileges and will do whatever you want here.
- — Verdy_p (talk) 13:31, 7 November 2015 (UTC)

Tile formulas

Hello Verdy. Your recent changes to the tile name formulas have introduced some errors.
We've found at least one in the Java implementation ("the tile2lon function always returns -180, instead of a correct lon"), and you've made mass changes to every other language, making it very difficult to pick apart. I've rolled them all back for now, but can you please separate formatting and content changes, and if you still want to change the formulas, do each language as a single distinct edit with your reasoning in the description, after consulting with an expert in that language. Leaving the descriptions blank is of no use to anyone. Thank you for your cooperation. --Dee Earley (talk) 14:41, 15 December 2015 (UTC)
- I did the edits language by language.
- But yes, there was a minor error in the Java due to the replacement of pow(2,z) by (1<<z) (which was correct except that the result was an int instead of a double), but only because it caused the division using it to occur only between integers, truncating the result of the division.
- This page is very long in fact, but not very readable. It does not help much to have overlong, fully expanded formulas which are not even very optimal when they don't reuse some common results.
- My error was in a single Java function, and the fix was made in a DISTINCT edit, but you reverted everything without much care.
- — Verdy_p (talk) 16:05, 15 December 2015 (UTC)
- You are mistaken. This is a single edit touching pretty much every example, which you followed up with a number of other random edits across the page. If you roll back my revert again, you will be banned. Final warning. I am fed up with fixing what you break on our wiki. Separate format and content changes, and provide descriptions and verification of any changes. --Dee Earley (talk) 16:14, 15 December 2015 (UTC)
- The Scala example also seems to be broken in a similar way, changed from ((lon + 180.0) / 360.0 * (1<<z)).toInt to (1 << z) * ((lon + 180.0) / 360.0).toInt.
- --Dee Earley (talk) 17:12, 15 December 2015 (UTC)
- I've not rolled back everything, like you did without any caution.
- Yes, the missing promotion from int to double was unexpectedly forgotten in Java (due to a change of order between operands), but its fix was in the edit you reverted with all the rest.
- And there's absolutely no error in the Scala code you quote here, simply because (1<<z) is an integer, given z is an integer! You don't know what you're speaking about. — Verdy_p (talk) 19:10, 15 December 2015 (UTC)
- The Scala code always returns 0 for the lon, as you take the int of ((lon + 180)/360), which will always be between 0 and 1, whereas the original code multiplies it with the zoom factor before converting it to int. So I don't know how you can say that the Scala code doesn't have bugs. It simply shows that you don't test your changes before making them. It's impossible to know all those languages in all detail, so it's just as impossible to edit all those languages with "trivial" changes and expect everything to still work. In short, you shouldn't modify well-tested code before making sure that your version is an improvement. Reverting was the only option here, as we can't test every change you made, and we already discovered 2 mistakes with the little programming knowledge we have. --Sanderd17 (talk) 19:50, 15 December 2015 (UTC)
- I warned you not to redo the exact same changes you made last time, so you go and do them again. As you refuse to stop making unilateral changes to all the examples in a single edit, the ban has been applied. The page has been reverted yet again so formatting and code changes can be separated and verified individually. You have broken two languages' examples already; who knows what else until ALL your changes are verified.
--Dee Earley (talk) 21:30, 15 December 2015 (UTC)
- Just to be clear, some of the changes you were making were reasonable and useful, but you were mixing significant format changes with code changes, making it VERY difficult to actually verify that the code changes don't break things (as you have done to two of them, despite your claim that you hadn't). I say again that you can make the changes, but they MUST be individual, described, verified edits. Anything else will be rolled back, no questions asked. Related reading: If you're going to reformat source code, please don't do anything else at the same time --Dee Earley (talk) 10:08, 16 December 2015 (UTC)
- But I made them separately; you reverted them in a group without taking any care. You also reverted all those individual edits that were not reformatting and that were properly commented, and blocked me for that.
- I had taken into account the two bugs, but corrected others; you simply ignored everything.
- Really, you don't know how to read a diff; you're puzzled just by whitespace. Those edits were really small. Blocking me for that is really unfair, unless you are too lazy, or too tired.
- Next time, please take a break and get some sleep. Blocking me for two small issues that were corrected easily, and after I took the discussions here into account, will not help anyone, unless you feel you are the only owner of the wiki and don't want anyone else to do something on it.
- There are tons of small or big errors on this wiki (including from you, and I corrected many of yours, without alarming you unfairly as you did; everyone makes some errors even when they play fairly), but people will not correct them or will not improve it, and this wiki will become a large bin of garbage thrown in by anyone that thinks he owns all pages and ignores work done for coherence. With this type of speedy action you took, people stop working on the wiki after their first time.
- I'm "fed up" (your term) with the way this wiki is administered and how you frustrate many people here that try to do something useful, when such admins simply stop all kinds of discussion and don't even follow the rules they expose, like you did above, treating me as if I were the same kind of spammer that broke many pages yesterday (the way that was managed, by blocking the wiki for everyone, was really bad, with pages tagged for deletion that were in fact the original modified ones, and lots of errors by admins themselves, and then a very long recovery time... Note that you have forgotten several accounts of the spam bots seen yesterday and left several pages around; you did not read the activity log scrupulously).
- — Verdy_p (talk) 01:07, 17 December 2015 (UTC)
- As a reply: the changes appear as a big diff on the wiki. Such a big diff is not easy to read, and you didn't give an explanation for every change you made. Also, you admit you were mixing reformatting and code functionality, which is exactly what shouldn't be done on a big scale like this. Next to that, the issues were not corrected easily. It takes a lot of time to simply read a diff like this in detail, and I knew there would be issues with such a big change; it was only a matter of finding them. Lastly, if you correct errors, please do so in a small edit, while mentioning exactly what you corrected, so we, and other wiki readers, can learn from it and check it was a valid issue. A big change mixing text with code formatting changes and code functionality just messes up the history of a page, and a formerly correct formula can become incorrect, as we've shown you twice.
- Now, as a general remark, I don't get why you want to make style changes. Every project has its own style on where to put whitespace and how operations should be formatted. It has no use to uniformise the style here on this page, as programmers using it in their own project will have to modify the style anyway.
As such, when you change code, please say what was wrong with it on a functionality level (like the new version being x% faster, or more precise, or handling an edge case better). Big changes without explanation just can't be included in code where multiple people have already worked. This is common knowledge in any programming team, and as the wiki page is mostly code, the same should hold for the wiki page. --Sanderd17 (talk) 07:44, 17 December 2015 (UTC)
- Once again you don't read: you have reverted MULTIPLE separate small edits which contained the comments. That was a big revert in which you did not read anything.
- You don't even follow the "advice" you give me here. That's just lazy. You've blocked me even though I had applied "your" rules (in fact you have your own vision of these rules; others want exactly the opposite, and editing whitespace is not a big deal when diffs are easy to read). No, my edits were definitely NOT big, much smaller than your grouped revert.
- You've even canceled TRUE corrections (one piece of code is definitely wrong, notably in C#... you've reverted my evident correction, which was made separately, exactly the way you described above, and blocked me for that!)
- Rethink it; if you're too tired, avoid such speedy reverts made without thinking. It is evident you did not read anything. Small or big, you simply did not want any change on the page; you would have blocked anyone else on the same terms. — Verdy_p (talk) 08:21, 17 December 2015 (UTC)
- So you accuse us of not reading anything, while we found 2 mistakes in your changes? Most project maintainers won't even go through the hassle of reading such a diff and simply don't accept it. The remark that the C# WorldToTilePos function should return an integer point is indeed a valid remark, and that change should be accepted (however, the change is not complete, as even in your version the reverse TileToWorldPos function still accepts floats).
And the change came after the massive diff, which means it had to be reverted all together. Next to that, editing whitespace can be important in whitespace-sensitive languages. Adding or removing whitespace in a bash script can cause big issues; the same holds for changing the indentation of a Python script, and there's even an esoteric language purely based on whitespace. Then there are also many languages sensitive to where newlines can happen. So saying whitespace changes are harmless for all languages is just wrong. Whether you're talking about a wiki or about code, it should follow an iterative process, where every step is explained and shown to be an improvement. --Sanderd17 (talk) 09:07, 17 December 2015 (UTC)
- Yes, I reverted multiple edits, as you had made a huge edit with formatting changes and code changes. Every single time. As the "small" edits were on top of this big homogenous edit, it's impossible to pick them apart and revert just the initial one. As I've said several times, you can make formatting changes (not to the code layout or structure), but they MUST be an individual edit. I will also call out anyone else that makes unilateral, breaking changes to pages. My contributions mostly consist of fixing up your and Xxzme's edits. This is the last time I will say this, and any more homogenous edits will be rolled back. Thanks for your compliance and understanding. --Dee Earley (talk) 09:14, 17 December 2015 (UTC)
- Your rule of "individual edits" is a pure invention you have added to this wiki. And there are opponents to this view, with other admins that don't like multiple small edits either and criticize those that make them.
- It's accepted practice in pretty much every computing environment. See the link I gave earlier.
- There's absolutely no consensus on this rule, which is written NOWHERE, and which you apply only to me in this case, based only on your own preferences.
- I apply it to breaking changes I see or am alerted to.
- Various people read diffs in a different way than you do. But even if you don't admit it, you did not read anything, and you don't know how to use proper reverts when you cancel everything blindly. I am MUCH more careful than you are. Next time I will report all the many errors you made in the past, because there's a lot to say about them.
- Go ahead.
- You've just blocked me over two errors that I clearly took into account; this is the NORMAL way to work on a wiki: in many small changes by the same person there are always small errors. That is the overall balance of what is good, plus further things needing corrections, which allows all wikis to progress.
- I blocked you for making homogenous changes with no suitable description.
- And don't compare me to "Xxzme", who made deep content changes. I have never supported him and have not acted like he did. But on a wiki where there are few active editors, there's little room to discuss everything (on wikis there's a term for that: "be bold", then discuss problems when someone finds them and there's a conflict of versions). But here there was NO conflict of view, and I accepted the two small errors, but you blindly rejected the errors I reported.
- There is a conflict of view. You're making changes against accepted industry behavior and common sense.
- If you want this wiki to be managed only by you, then don't open account creation, and create a closed wiki. This wiki is not like that; we're open to more people than a few admins that have some privileges only for really problematic cases, when damages are more important than benefits.
- What, like breaking example code hidden in a mass edit?
- I have not rejected your comments; you rejected everything. You've closed the discussion yourself by your own contradictions, inventing a rule that you don't even apply to yourself.
— Verdy_p (talk) 09:33, 17 December 2015 (UTC)
- --Dee Earley (talk) 09:39, 17 December 2015 (UTC)
- Add to this the fact that I am not the only person that has called you out on your mass edits and breaking changes on the wiki. Just scroll up and look at the fact that your talk page has an archive! --Dee Earley (talk) 09:43, 17 December 2015 (UTC)
- You don't know the meaning of "mass edit"; all the edits were actually small, but you don't see that. I've kept almost everything; I've not removed contents. I've not added a lot.
- And this wiki page is there to explain things and show things more evidently; this has no impact on the code used in tools, which is managed separately. This page shows some examples in order to comment on them. And the presentation is more important and should make things obvious to the reader, without extra garbage like excesses of parentheses.
- Also, I've not broken the indentation rules, but made them more visible, using a common practice from lots of development projects for indenting code correctly. If you call that "mass edit"... — Verdy_p (talk) 09:46, 17 December 2015 (UTC)
- For reference, these were not many small typo corrections, nor distinct, described, atomic edits. Also, this is not a small edit. The diff is just as long as the page content itself and, even when picked apart in a proper merge tool, contains many changes even when ignoring whitespace changes. --Dee Earley (talk) 09:53, 17 December 2015 (UTC)
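The bug being debated in this thread, truncating to an integer before multiplying by the zoom factor, is easy to demonstrate with the well-known slippy-map formulas. The sketch below is in JavaScript (the wiki page under discussion shows many languages; the function names here are illustrative). In a language with integer division, such as Java, the analogous mistake in the reverse direction (dividing int x by int (1 << z)) truncates to 0 and makes tile2lon return -180 for every tile, exactly as reported above.

```javascript
// Correct: multiply by the zoom factor first, truncate last.
function lon2tile(lon, zoom) {
  return Math.floor(((lon + 180) / 360) * Math.pow(2, zoom));
}

// Correct reverse mapping: the division must happen in floating point.
function tile2lon(x, zoom) {
  return (x / Math.pow(2, zoom)) * 360 - 180;
}

// Broken variant (mirrors the quoted Scala change): (lon + 180) / 360 is
// always in [0, 1) for lon < 180, so truncating it before multiplying
// collapses every such longitude to tile 0.
function brokenLon2tile(lon, zoom) {
  return Math.pow(2, zoom) * Math.floor((lon + 180) / 360);
}
```

For example, at zoom 10 the longitude 13.41 falls in tile column 550 with the correct formula, while the broken variant returns 0 for any longitude west of 180°.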
JavaScript, Web development, Firebase

I have an app that allows you to upload images. The images are stored using Firebase Storage. Then, once uploaded, I have a Firebase Cloud Function that can turn that into a thumbnail. The problem with this is that it takes a long time to wake up the cloud function, the first time, and to generate that thumbnail. Not to mention the download of the thumbnail payload for the client. It's not unrealistic that the whole thumbnail generation plus download can take multiple (single-digit) seconds. But you don't want to have the user sit and wait that long. My solution is to display the uploaded file in an <img> tag using URL.createObjectURL(). The following code is mostly pseudo-code but should look familiar if you're used to how Firebase and React/Preact work. Here's the FileUpload component:

interface Props {
  onUploaded: ({ file, filePath }: { file: File; filePath: string }) => void;
  onSaved?: () => void;
}

function FileUpload({ onSaved, onUploaded }: Props) {
  const [file, setFile] = useState<File | null>(null);
  // ...some other state stuff omitted for example.

  useEffect(() => {
    if (file) {
      const metadata = {
        contentType: file.type,
      };
      const filePath = getImageFullPath(prefix, item ? item.id : list.id, file);
      const storageRef = storage.ref();
      const uploadTask = storageRef.child(filePath).put(file, metadata);
      uploadTask.on(
        "state_changed",
        (snapshot) => {
          // ...set progress percentage
        },
        (error) => {
          setUploadError(error);
        },
        () => {
          onUploaded({ file, filePath }); // THE IMPORTANT BIT!

          db.collection("pictures")
            .add({ filePath })
            .then(() => {
              onSaved();
            });
        }
      );
    }
  }, [file]);

  return (
    <input
      type="file"
      accept="image/jpeg, image/png"
      onInput={(event) => {
        if (event.target.files) {
          const file = event.target.files[0];
          validateFile(file);
          setFile(file);
        }
      }}
    />
  );
}

The important "trick" is that we call back after the storage is complete by sending the filePath and the file back to whatever component triggered this component.
Now you can know, in the parent component, that there's soon going to be an image reference with a file path (filePath) that refers to that File object. Here's a rough version of how I use this <FileUpload> component:

function Images() {
  const [uploadedFiles, setUploadedFiles] = useState<Map<string, File>>(
    new Map()
  );

  return (
    <div>
      <FileUpload
        onUploaded={({ file, filePath }: { file: File; filePath: string }) => {
          const newMap: Map<string, File> = new Map(uploadedFiles);
          newMap.set(filePath, file);
          setUploadedFiles(newMap);
        }}
      />
      <ListUploadedPictures uploadedFiles={uploadedFiles} />
    </div>
  );
}

function ListUploadedPictures({
  uploadedFiles,
}: {
  uploadedFiles: Map<string, File>;
}) {
  // Imagine some Firebase Firestore subscriber here
  // that watches for uploaded pictures.
  return (
    <div>
      {pictures.map((picture) => (
        <Picture picture={picture} uploadedFiles={uploadedFiles} />
      ))}
    </div>
  );
}

function Picture({
  uploadedFiles,
  picture,
}: {
  uploadedFiles: Map<string, File>;
  picture: { filePath: string };
}) {
  const thumbnailURL = getThumbnailURL(picture.filePath, 500);
  const file = uploadedFiles.get(picture.filePath);
  const [loaded, setLoaded] = useState(false);

  useEffect(() => {
    let mounted = true;
    const preloadImg = new Image();
    preloadImg.src = thumbnailURL;
    const callback = () => {
      if (mounted) {
        setLoaded(true);
      }
    };
    if (preloadImg.decode) {
      preloadImg.decode().then(callback, callback);
    } else {
      preloadImg.onload = callback;
    }
    return () => {
      mounted = false;
    };
  }, [thumbnailURL]);

  return (
    <img
      style={{
        width: 500,
        height: 500,
        "object-fit": "cover",
      }}
      src={
        loaded ? thumbnailURL : file ? URL.createObjectURL(file) : PLACEHOLDER_IMAGE
      }
    />
  );
}

Phew! That was a lot of code. Sorry about that. But still, this is just a summary of the real application code. The point is that I send the File object back to the parent component immediately after having uploaded it to Firebase Cloud Storage. Then, having access to that as a File object, I can use that as the thumbnail while I wait for the real thumbnail to come in.
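The three-way fallback used for the <img> src above can be isolated into a tiny pure function. This is a framework-free sketch with illustrative names; the object URL is passed in as a plain string, because URL.createObjectURL(file) only exists in the browser:

```javascript
// Hypothetical placeholder path; the real app would ship its own asset.
const PLACEHOLDER_IMAGE = "/placeholder.png";

// Decide what the <img> should show right now:
// 1. the real thumbnail once it has loaded,
// 2. else the in-memory object URL of the just-uploaded File,
// 3. else a static placeholder.
function pickImageSrc({ thumbnailLoaded, thumbnailURL, objectURL }) {
  if (thumbnailLoaded) return thumbnailURL;
  if (objectURL) return objectURL;
  return PLACEHOLDER_IMAGE;
}
```

In the component, this would be called as pickImageSrc({ thumbnailLoaded: loaded, thumbnailURL, objectURL: file && URL.createObjectURL(file) }).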
Now it doesn't matter that it takes 1-2 seconds to wake up the cloud function, 1-2 seconds to perform the thumbnail creation, and then 0.1-2 seconds to download the thumbnail. All the while this is happening, you're looking at the File object that was uploaded. Visually, the user doesn't even notice the difference. If you refresh the page, that temporary in-memory uploadedFiles (Map instance) is empty, so you're now relying on the loading of the thumbnail, which should hopefully, at this point, be stored in the browser's native HTTP cache. The other important part of the trick is that we're using const preloadImg = new Image() for loading the thumbnail. And by relying on preloadImg.decode ? preloadImg.decode().then(...) : preloadImg.onload = ..., we can be informed only when the thumbnail has been successfully created and successfully downloaded, to make the swap.

Web development, Mozilla, MDN

As of September 2021, I am leaving Mozilla after 10 years. It hasn't been perfect, but it's been a wonderful time with fond memories and an amazing career rocket ship. In April 2011, I joined as a web developer to work on internal web applications that support Firefox development engineering. In rough order, I worked on... This is an incomplete list, because at Mozilla you get to help each other, and I shipped a lot of smaller projects too, such as Contribute.json, Whatsdeployed, GitHub PR Triage, and the Bugzilla GitHub Bug Linker. Reflecting back, the highlight of any project is when you get to meet or interact with the people you help. Few things are as rewarding as when someone you don't know finds out, in person, what you do and says: "Are you Peter?! The one who built XYZ? I love that stuff! We use it all the time now in my team. Thank you!" It's not a brag, because oftentimes what you build for fellow humans isn't brilliant engineering in any way. It's just something that someone needed.
Perhaps the lesson learned is the importance of not celebrating what you've built, but of putting yourself into the same room as the people who use what you built. And, in fact, if what you've built for someone else isn't particularly loved, meeting and fully interacting with the people who use "your stuff" gives you the best feedback, and who doesn't love constructive criticism, so you can become empowered to build better stuff. Mozilla is a great company. There is no doubt in my mind. We ship high-quality products and we do it with pride. There have definitely been some rough patches over the years, but that happens, and you just have to carry on and try to focus on delivering value. Firefox Nightly will continue to be my default browser, and I'll happily click any Google search ads to help every now and then. THANK YOU to everyone I've ever worked with at Mozilla! You are a wonderful bunch of people!

tl;dr: git clone && cd content && yarn install && yarn start && open will get you all of MDN Web Docs running on your laptop.

The MDN Web Docs site is built from a git repository: github.com/mdn/content. It contains all you need to get all the content running locally. Including search. Embedded inside that repository is a package.json which helps you start a Yari server. Aka the preview server. It's a static build of the github.com/mdn/yari project, which handles client-side rendering and search, and a just-in-time server-side rendering server. All you need is the following:

▶ git clone
▶ cd content
▶ yarn install
▶ yarn start

And now open in your browser. This will now run in "preview server" mode. It's meant for contributors (and core writers) to use when they're working on a git branch. Because of that, you'll see a "Writer's homepage" at the root URL. And when viewing each document, you get buttons about "flaws" and stuff. Looks like this:

If you don't want to use git clone you can download the ZIP file.
For example:

▶ wget
▶ unzip main.zip
▶ cd content-main
▶ yarn install
▶ yarn start

At the time of writing, the downloaded Zip file is 86MB and, unzipped, the directory is 278MB on disk. When you use git clone, by default it will download all the git history. That can actually be useful. This way, when rendering each document, it can figure out from the git logs when each individual document was last modified. For example:

If you don't care about the "Last modified" date, you can do a "shallow git clone" instead. Replace the above-mentioned first command with:

▶ git clone --depth 1

At the time of writing, the shallow-cloned content folder becomes 234MB instead of (the deep clone) 302MB.

Every MDN Web Docs page has an index.json equivalent. Take any MDN page and add /index.json to the URL. For example /en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/slice/index.json. Essentially, this is the intermediate state that's used for server-side rendering the page. A glorified way of sandwiching the content in a header, a footer, and a sidebar to the side. These URLs work on localhost:5000 too. Try it, for example. The content for that index.json is built just in time. It also contains a bunch of extra metadata about "flaws": a system used to highlight things that should be fixed and that is somewhat easy to automate. So, it doesn't contain things like spelling mistakes or code snippets that are actually invalid. But suppose you want all that raw (rendered) data, without any of the flaw detections; then you can run this command:

▶ BUILD_FLAW_LEVELS="*:ignore" yarn build

It'll take a while (because it produces an index.html file too). But now you have all the index.json files for everything in the newly created ./build/ directory.
It should have created a lot of files:

▶ find build -name index.json | wc -l
11649

If you just want a subtree of files you could have run it like this instead:

▶ BUILD_FOLDERSEARCH=web/javascript BUILD_FLAW_LEVELS="*:ignore" yarn build

The programmatic APIs are all about finding the source files. But you can use the sources to turn that into the built files you might need. Or just to get a list of URLs. To get started, create a file called find-files.js in the root:

const { Document } = require("@mdn/yari/content");

console.log(Document.findAll().count);

Now, run it like this:

▶ export CONTENT_ROOT=files
▶ node find-files.js
11649

Other things you can do with that findAll function:

const { Document } = require("@mdn/yari/content");

const found = Document.findAll({
  folderSearch: "web/javascript/reference/statements/f",
});
for (const document of found.iter()) {
  console.log(document.url);
}

Or, suppose you want to actually build each of these that you find:

const { Document } = require("@mdn/yari/content");
const { buildDocument } = require("@mdn/yari/build");

const found = Document.findAll({
  folderSearch: "web/javascript/reference/statements/f",
});
Promise.all([...found.iter()].map((document) => buildDocument(document))).then(
  (built) => {
    for (const { doc } of built) {
      console.log(doc.title.padEnd(20), doc.popularity);
    }
  }
);

That'll output something like this:

▶ node find-files.js
for                  0.0143
for await...of       0.0129
for...in             0.0748
for...of             0.0531
function declaration 0.0088
function*            0.0122

In the most basic form, it will start the "preview server", which is tailored towards building just in time and has all those buttons at the top for writers/contributors. If you want the more "production-grade" version, you can't use the copy of @mdn/yari that is "included" in the mdn/content repo. To do this, you need to git clone mdn/yari and install that.
Hang on, this is about to get a bit more advanced:

▶ git clone
▶ cd yari
▶ yarn install
▶ yarn build:client
▶ yarn build:ssr
▶ CONTENT_ROOT=../files REACT_APP_DISABLE_AUTH=true BUILD_FLAW_LEVELS="*:ignore" yarn build
▶ CONTENT_ROOT=../files node server/static.js

Now, if you go to a page on your localhost, you'll get the same thing as you get on the production site, but all on your laptop. Should be pretty snappy.

No, it leaks a little. For example, there are interactive examples that use an iframe whose URL is hardcoded. There are also external images, for example. You might get a live sample that refers to remotely hosted sample images. So that'll fail if you're without WiFi in a spaceship. Making all of MDN Web Docs available offline is, honestly, not a priority. The focus is on A) a secure production build, and B) a good environment for previewing content changes. But all the pieces are there. Search is a little bit tricky, as an example. When you're running it as a preview server, you can't do a full-text search on all the content, but you get a useful autocomplete search widget for navigating between different titles. And the full-text search engine is a remote, centralized server that you can't take with you offline. But all the pieces are there. Somehow. It all depends on your use case and what you're willing to "compromise" on.
I wish you would give some more examples. This is a very basic overview.

You'll never get examples in a Help file. You need a book, or a video, or a class. Here's one of the best books around on the subject:

Hi Steve, thanks for the quick response. I was hoping to get some real-world help on this. I notice that there are a lot of posts on this similar topic, all pointing towards an external purchase on Amazon. Surely this can't be that hard? Or is it? I have worked out that if I manually create the table I want, with the headings, I can export this to XML and try to match up how the SQL command should output. I don't understand why I cannot use the current <Root> <Story> <Table> hierarchy (from my current Import XML) and get Adobe InDesign to do what I expect it to and return new data rows below each other. Sorry, hard to type, but I have 6500 items that I each want on a new row. Fairly simple, I would have thought. Ben

Honestly, only a tiny fraction of the InDesign user base (and this forum base) uses XML. If you can provide more detail about your XML file and your template, someone might be more able to help. Other than Root/Story/Table, are you using namespaces in your tags? Are you using any particular attributes? In your InDesign file, are you starting with a placeholder table or bringing in all data for the table with the XML? I can probably point you to examples of some tables, but I need to know more about what you are looking to build, or have built.
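For what it's worth, a table intended for InDesign's XML import is usually tagged with InDesign's namespaced aid table attributes rather than plain nested elements, with one repeated Cell element per cell. The sketch below is illustrative only: the Root/Story/Table element names follow the hierarchy mentioned in this thread, the aid:... attributes are InDesign's table markup as I recall it, and the exact attribute values should be checked against an XML export of a manually built table, as suggested above:

```xml
<Root>
  <Story>
    <Table xmlns:aid="http://ns.adobe.com/AdobeInDesign/4.0/"
           aid:table="table" aid:trows="3" aid:tcols="2">
      <!-- header row: one Cell per column -->
      <Cell aid:table="cell" aid:theader="">Item</Cell>
      <Cell aid:table="cell" aid:theader="">Price</Cell>
      <!-- body rows: cells are listed in reading order; with aid:tcols="2",
           every two cells form a new row -->
      <Cell aid:table="cell">Widget A</Cell>
      <Cell aid:table="cell">1.00</Cell>
      <Cell aid:table="cell">Widget B</Cell>
      <Cell aid:table="cell">2.00</Cell>
    </Table>
  </Story>
</Root>
```

For 6500 items, the SQL export would emit one pair of Cell elements per record and set aid:trows accordingly.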
http://forums.adobe.com/thread/1192120
05/24/2018 by Johannes Schnatterer in Software Craftsmanship

Coding Continuous Delivery—Helpful Tools for the Jenkins Pipeline

After the first two parts of this series discussed the basics and the performance of Jenkins Pipelines, this article describes useful tools and methods: Shared libraries allow for reuse across different jobs and for unit testing of the pipeline code. In addition, the use of containers with Docker© offers advantages when used in Jenkins Pipelines.

In the following, the Pipeline examples from the first two articles will be successively expanded to demonstrate the features of the Pipeline. In so doing, the changes will be presented in both declarative and scripted syntax. The current status of each extension can be followed and tried out on GitHub (see Jenkinsfile Repository GitHub). Beneath the number stated in the title for each section, there is a branch in both declarative and scripted form that shows the full example. The result of the builds for each branch can also be seen directly on our Jenkins instance (see Cloudogu Open Source Jenkins). As in the first two parts, the examples build on those of the previous articles, which ended with properties/archiving, parallelization, and nightly builds. Thus, shared libraries are the eighth example.

Shared Libraries (8)

In the examples shown in this series of articles, there are already a few self-written steps, such as mvn() and mailIfStatusChanged(). These are not project-specific and could be stored separately from the Jenkins file and thus also be used for other projects. With Jenkins Pipelines there are currently two options for referencing external files:

- load step: Loads a Groovy script file from the Jenkins workspace (i.e. the same repository) and evaluates it. Further steps can then be loaded dynamically.
- Shared libraries: Allow the inclusion of external Groovy scripts and classes.

The load step has a few limitations:

- Classes cannot be loaded, only Groovy scripts (see Groovy scripts vs. classes).
With these scripts, additional classes cannot be easily loaded, and inheritance is not possible. For the scripts to be usable in the Pipeline, each script has to end with return this;.
- Only files from the workspace can be used. Therefore, reuse in other projects is not possible.
- The scripts loaded in this step are not shown in the "replay" feature described in the first article. As a result, they are more difficult to develop and debug.

Shared libraries are not subject to these three limitations, which makes them much more flexible. Thus, their use will be described in greater detail in the following. Currently, a shared library has to be loaded from its own repository. Loading from the repository that is currently being built is not possible, but may be at some point in the future (see cps-global-lib-plugin Pull Request 37). This will make it possible to divide the Jenkins file into various classes/scripts, which in turn will increase maintainability and provide the option of writing unit tests. This is also helpful for the development of shared libraries, since these can be used in their own Jenkins file.

The repository for each shared library needs to have a specific directory structure:

- src contains Groovy classes
- vars contains Groovy scripts and documentation
- resources contains other files

A test directory for unit tests and an own build are recommended.

To reduce the complexity of the Jenkins file from the examples and make the functionality reusable for other projects, in the following example a step will be extracted into a shared library. For the mvn step, an mvn.groovy file is created in the shared library repository in the vars directory (see Listing 1). This contains the method known from the first part of this article series.

def call(def args) {
    def mvnHome = tool 'M3'
    withEnv(["JAVA_HOME=${env.JAVA_HOME}", "PATH+MAVEN=${mvnHome}/bin"]) {
        sh "mvn ${args} --batch-mode -V -U -e -Dsurefire.useFile=false"
    }
}

Listing 1

In the Groovy script in Listing 1, however, this method is specified using the Groovy convention call().
Technically, Jenkins creates a global variable for all .groovy files in the vars directory and names it according to the filename. If this variable is now called with the call operator (), its call() method will be implicitly called (see Groovy call operator). Since brackets are optional for the call in Groovy, the invocation of the steps in scripted and declarative syntax remains as before, for example: mvn 'test'.

There are several options for using the shared library in the Pipeline. First, the shared library must be defined in Jenkins. The following options exist for the definition of shared libraries:

- Global: Must be set by a Jenkins administrator in the Jenkins configuration. Shared libraries defined therein are available in all projects and are treated as trustworthy. This means that they may execute all Groovy methods, internal Jenkins APIs, etc. Therefore, caution should be exercised. This can, however, also be used, for example, to encapsulate the queries described under nightly builds, which would otherwise require script approval.
- Folder/multibranch: Can be set by authorized project members for a group of build jobs. Shared libraries defined therein are only valid for associated build jobs and are not treated as trustworthy. This means they run in the Groovy sandbox, just like normal Pipelines.
- Automatic: Plugins such as the Pipeline GitHub Library Plugin (see GitHub Branch Source Plugin) allow for automatic definition of libraries within Pipelines. This makes it possible for shared libraries to be used directly in Jenkins files without prior definition in Jenkins. These shared libraries also run in the Groovy sandbox.

For our example, the GitHub Branch Source Plugin can be used, since the example is available from GitHub and therefore requires no further configuration in Jenkins.
In the examples for both scripted and declarative syntax, the externally referenced steps (for example, mvn) are defined as follows through the inclusion of the shared library in the first line of the script:

@Library('github.com/cloudogu/jenkinsfiles@e00bbf0') _

Here, github.com/cloudogu/jenkinsfiles is the name of the shared library, and the version is given after the @, in this case a commit hash. A branch name or tag name could also be used here. It is recommended that a defined state (tag or commit instead of a branch) be used to ensure deterministic behavior. Since the shared library is fetched anew from the repository in each build, there would otherwise be the risk that a change to the shared library could affect the next build without any change to the actual Pipeline script or code. This can lead to unexpected results whose causes are difficult to find. Alternatively, libraries can be loaded dynamically (using the library step). Their steps can be used only after that step has been called.

As described above, classes can also be created in shared libraries in addition to scripts (in the src directory). If these are contained in packages, they can be declared using import statements after the @Library annotation. In scripted syntax, these classes can be instantiated anywhere in the Pipeline, but in declarative syntax only within the script step. An example of this is the shared library of the Cloudogu EcoSystem (see Cloudogu ces-build-lib).

Shared libraries also offer the option to write unit tests. For classes, this is often possible with standard Groovy resources (see Cloudogu ces-build-lib). For scripts, JenkinsPipelineUnit (see JenkinsPipelineUnit) is useful. With this framework, scripts can be loaded and mocks of the built-in Pipeline steps easily defined. Listing 2 shows what a test for the step described in Listing 1 could look like.
@Test
void mvn() {
    def shParams = ""
    helper.registerAllowedMethod("tool", [String.class], { paramString -> paramString })
    helper.registerAllowedMethod("sh", [String.class], { paramString -> shParams = paramString })
    helper.registerAllowedMethod("withEnv", [List.class, Closure.class], { paramList, closure -> closure.call() })

    def script = loadScript('vars/mvn.groovy')
    script.env = new Object() { String JAVA_HOME = "javaHome" }

    script.call('clean install')

    assert shParams.contains('clean install')
}

Listing 2

Here, a check is performed to determine whether the given parameters have correctly been passed on to the sh step. The framework provides the variable helper to the test class via inheritance. As can be seen in Listing 2, plenty of mocking is used: The tool and withEnv steps as well as the global variable env are mocked. This shows that the unit test only checks the underlying logic and of course does not replace the test in a true Jenkins environment. These integration tests cannot yet currently be automated.

The "replay" feature described in the first article is well suited to the development of shared libraries: The shared library can also be temporarily modified and executed here along with the Jenkins file. This makes it possible to avoid a lot of unnecessary commits to the shared library's repository. This tip is also described in the extensive documentation on shared libraries (see Jenkins Shared Libraries). In addition to external referencing of steps, entire Pipelines can be defined in shared libraries (see Standard build example), thus standardizing their stages, for example.

In conclusion, here are a few more open source shared libraries:

- Official examples with shared library and Jenkins file (see Shared Library demo). Contains classes and scripts.
- Shared library used by Docker© Inc. for development (see Shared Library Docker©). Contains classes and scripts.
- Shared library used by Firefox Test Engineering (see Shared Library Firefox Test Engineering).
Contains scripts with unit tests and Groovy build.
- Shared library of the Cloudogu EcoSystem (see Cloudogu ces-build-lib). Contains classes and scripts with unit tests and Maven build.

Docker© (9)

Docker© can be used in Jenkins builds to standardize the build and test environment and to deploy applications. Furthermore, port conflicts with parallel builds can be prevented through isolation, as already discussed in the first article of this series. Another advantage is that less configuration is needed in Jenkins. Only Docker© needs to be made available on Jenkins. The Pipelines can then simply include the necessary tools (Java, Maven, Node.js, PaaS-CLIs, etc.) using a Docker© image.

A Docker© host must of course be available in order to use Docker© in Pipelines. This is an infrastructure issue that needs to be dealt with outside of Jenkins. Even independent of Docker©, for production it is recommended to operate the build executor separately from the Jenkins master to distribute the load and prevent builds from slowing the response times of the Jenkins web application. This also applies to making Docker© available on the build executors: The Docker© host of the master (if it exists) should be separated from the Docker© host of the build executor. This also ensures that the Jenkins web application remains responsive, independent from the builds. Moreover, the separation of hosts provides additional security, since no access to the Jenkins host is possible in the event of container breakouts (see Security concerns when using Docker©).

When setting up a special build executor with Docker©, it is also recommended to directly install the Docker© client and make it available in the PATH. Alternatively, the Docker© client can also be installed as a tool in Jenkins. This tool must then (as with Maven and JDK in the examples provided in the first article in this series) be explicitly stated in the Pipeline syntax.
This is currently only possible in scripted syntax and not with declarative syntax (see Pipeline Syntax – Tools). As soon as Docker© is set up, the declarative syntax offers the option of either executing the entire Pipeline or individual stages within a Docker© container. The image on which the container is based can either be pulled from a registry (see Listing 3) or built from a Dockerfile.

pipeline {
    agent {
        docker {
            image 'maven:3.5.0-jdk-8'
            label 'Docker'
        }
    }
    //...
}

Listing 3

Through the use of the docker parameter in the agent section, the entire Pipeline will be executed within a container that is created from the given image. The image used in Listing 3 ensures that the executables from Maven and the JDK are made available in the PATH. Without any further configuration of tools in Jenkins (as with Maven and JDK in the examples provided in the first article of this series), it is possible to execute the following step, for example: sh 'mvn test'.

The label set in Listing 3 refers to the Jenkins build executor in this case. This causes the Pipeline to execute only on build executors that have the Docker label. This best practice is particularly helpful if one has different build executors. This is because if this Pipeline is executed on a build executor that does not have a Docker© client available in the PATH, the build will fail. If, however, no build executor is available with the respective label, the build remains in the queue.

Storage of data outside the container is another point that needs to be considered with builds or steps executed in containers. Since each build is executed in a new container, the data contained therein are no longer available for the next run. Jenkins ensures that the workspace is mounted in the container as a working directory. However, this does not occur, for example, for the local Maven repository.
While the previously used mvn step from the examples (based on the Jenkins tools) uses the Maven repository of the build executor, the Docker© container creates a Maven repository in the workspace of each build. This does cost a bit more storage space and the first build will be slower, but it prevents undesired side effects such as, for example, when two simultaneously running builds of a Maven multi-module project overwrite each other's snapshots in the same local repository. If the repository of the build executor needs to be used in spite of this, a few adjustments to the Docker© image are necessary (see Cloudogu ces-build-lib – Docker). What should be avoided is creation of the local Maven repository in the container. This would result in all dependencies being reloaded from the Internet for each build, which in turn would increase the duration of each build.

The behavior described in Listing 3 in declarative syntax can also be stated in scripted syntax, as shown in Listing 4.

node('Docker') {
    // ...
    docker.image('maven:3.5.0-jdk-8').inside {
        // ...
    }
}

Listing 4

As with declarative syntax (see Listing 3), build executors can also be selected via labels in scripted syntax. In scripted syntax (Listing 4), this is done using a parameter of the node step. Here, Docker© is addressed using the docker global variable (see Global variable reference docker). This variable offers even more features, including:

- use of specific Docker© registries (helpful for tasks such as continuous delivery with Kubernetes, which is described in the third part of this series),
- use of a specific Docker© client (defined as a Jenkins tool, as described above),
- building of images, tagging them, and pushing them to a registry, and
- starting and stopping of containers.

The docker variable does not always support the latest Docker© features. For example, multi-stage Docker© images (see Jenkins issue 44609) cannot be built.
Docker©’s CLI client can be used in this case, for example: sh 'Docker build ...'. Comparison of Listing 3 with Listing 4 clearly shows the difference between descriptive (declarative) and imperative (scripted) syntax. Instead of stating declaratively which container needs to be used from the outset, the location from which something is to be executed in this container is stated imperatively. This also makes things more flexible, however: While with declarative syntax, the entire pipeline or individual stages can be executed in containers, with scripted syntax, individual sections can be executed in containers. As already described on multiple occasions, scripted syntax can in any case also be executed in declarative syntax within the script step or, alternatively, one’s own steps written in scripted syntax can be called. This call is used in the following to convert the mvn step in the shared library (Listing 1) from Jenkins Tools to Docker© (compare Listing 5). def call(def args) { Docker.image('maven:3.5.0-jdk-8').inside { sh "mvn ${args} --batch-mode -V -U -e -Dsurefire.useFile=false" } } Listing 5 After the shared library is updated (as described in Listing 5), in both the scripted and declarative Pipeline examples, each mvn step then runs without modification in a Docker© container. In conclusion, another advanced Docker© topic. The scripted Pipeline syntax practically invites nesting of Docker© containers, or in other words “Docker in Docker” execution. This is not easily possible, since no Docker© client is initially available in a Docker© container. However, it is possible to execute multiple containers simultaneously with Docker.withRun() (see documentation Pipeline Docker©). There are, however, also builds that start Docker© containers, for example with the Docker© Maven plugin (see Docker© Maven Plugin). These can be used to start up test environments or execute UI builds, for example. 
For these builds, "Docker in Docker" must actually be made available. However, it would not make sense to start another Docker© host in a Docker© container, even if this were possible (see Do Not Use Docker In Docker for CI). Instead, the Docker© socket of the build executor can be mounted in the Docker© container of the build. Even with this procedure, one should be aware of certain security limitations (see Never Expose Docker© Socket). Here, the aforementioned separation of the Docker© host of the master from the Docker© host of the build executor becomes even more important. To make access to the socket possible, a few adjustments to the Docker© image are also necessary. For this, the user that starts the container must be in the Docker© group to gain access to the socket. The user and group must also be generated in the image (see for example Cloudogu ces-build-lib – Docker).

Conclusion and outlook

This article describes how the maintainability of the Pipeline can be improved through outsourcing of code into a shared library. This code can then be reused and its quality checked via unit tests. In addition, Docker© is presented as a tool with which Pipelines can be executed in a uniform environment, isolated and independent from the configuration of the respective Jenkins instance. These useful tools create the foundation for the fourth part, in which the Continuous Delivery Pipeline is completed.
https://cloudogu.com/en/blog/continuous_delivery_part_3
Configuring react-d3 into a Rollup Build System

Rollup is a next-generation bundling system that I'm using with React. There is a port of the d3 JavaScript graphing library called react-d3. As with most cases where you put together a bunch of new technologies, there are configuration issues. Here is how I include react-d3 in my rollup projects:

import React from 'react'
import { LineChart } from 'react-d3'

Then to make an actual graph:

export class GraphWidget extends React.Component {
  constructor( props ) {
    super( props );
  }

  render() {
    var width = 700,
        height = 300,
        margins = {left: 100, right: 100, top: 50, bottom: 50},
        title = "Events",
        x = function( d ) { return d.index; };

    var lineData = [
      { name: "series1", values: [ { x: 0, y: 20 }, { x: 24, y: 10 } ] },
      { name: "series2", values: [ { x: 70, y: 82 }, { x: 76, y: 82 } ] }
    ];

    return (
      <LineChart
        title = {title}
        width = {width}
        height = {height}
        margins = {margins}
        data = {lineData}
        x = {x}
      />
    );
  }
}

React-D3 breaks with Use Strict

The current version of d3, and react-d3, depends on having access to "this". The default build with babel is to use strict mode, which eliminates the global this and will break react-d3. There is a lot of discussion on the web about ways to turn off use strict in plain babel environments, and with more popular packagers like webpack and browserify. Here is how I solved this for my rollup environment. First install:

npm install rollup-plugin-post-replace

This is just like the common plugin rollup-plugin-replace, except that it runs after bundling. Then in rollup.config.js:

import postReplace from 'rollup-plugin-post-replace';

var postReplaceConfig = {
  '"use strict";': '',
  "'use strict';": ''
};

var config = {
  plugins: [
    postReplace( postReplaceConfig ),
  ]
};

export default config;
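What the post-replace step effectively does is a plain string replacement on the finished bundle. This sketch mimics that transformation on a made-up stand-in for real Rollup output, so you can see why react-d3's access to the global this survives it:

```javascript
// Hypothetical bundle text; real Rollup output would be much larger.
const bundle = "'use strict';\nvar global = this;\n";

// Same shape as the postReplaceConfig above: pattern -> replacement.
const postReplaceConfig = {
  '"use strict";': '',
  "'use strict';": ''
};

// Apply each replacement to the bundle string.
let out = bundle;
for (const from of Object.keys(postReplaceConfig)) {
  out = out.split(from).join(postReplaceConfig[from]);
}

console.log(out.includes('use strict')); // false -- the pragma is gone
```

With the pragma stripped, `this` at the top level of the bundle refers to the global object again, which is what d3 expects.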
https://oroboro.com/react-d3-rollup/
After reading this chapter, you'll be able to do the following:

- Identify different levels of rendering complexity
- Understand the basic structure of an OpenGL program
- Recognize OpenGL command syntax
- Understand in general terms how to animate an OpenGL program

"A Very Simple OpenGL Program" presents a small OpenGL program and briefly discusses it. This section also defines a few basic computer-graphics terms. "OpenGL Command Syntax" explains some of the conventions and notations used by OpenGL commands. "OpenGL as a State Machine" describes the use of state variables in OpenGL and the commands for querying, enabling, and disabling states. "OpenGL-related Libraries" describes sets of OpenGL-related routines, including an auxiliary library specifically written for this book to simplify programming examples. "Animation" explains in general terms how to create pictures on the screen that move, or animate.
Such jaggies are usually the most visible with near-horizontal or near-vertical lines. Figure J-4 shows a flat-shaded version of the scene. The objects in the scene are now shown as solid objects of a single color. They appear "flat" in the sense that they don't seem to respond to the lighting conditions in the room, so they don't appear smoothly rounded. Figure J-5 shows a lit, smooth-shaded version of the scene. Note how the scene looks much more realistic and three-dimensional when the objects are shaded to respond to the light sources in the room; the surfaces of the objects now look smoothly rounded. Figure J-6 adds shadows and textures to the previous version of the scene. Shadows aren't an explicitly defined feature of OpenGL (there is no "shadow command"), but you can create them yourself using the techniques described in Chapter 13 . Texture mapping allows you to apply a two-dimensional texture to a three-dimensional object. In this scene, the top on the table surface is the most vibrant example of texture mapping. The walls, floor, table surface, and top (on top of the table) are all texture mapped. Figure J-7 shows a motion-blurred object in the scene. The sphinx (or dog, depending on your Rorschach tendencies) appears to be captured as it's moving forward, leaving a blurred trace of its path of motion. Figure J-8 shows the scene as it's drawn for the cover of the book from a different viewpoint. This plate illustrates that the image really is a snapshot of models of three-dimensional objects. Figure J-10 shows the depth-of-field effect, which simulates the inability of a camera lens to maintain all objects in a photographed scene in focus. The camera focuses on a particular spot in the scene, and objects that are significantly closer or farther than that spot are somewhat blurred. Arrange the objects in three-dimensional space and select the desired vantage point for viewing the composed scene. Calculate the color of all the objects. 
The color might be explicitly assigned by the application, determined from specified lighting conditions, or obtained by pasting a texture onto the objects. Convert the mathematical description of objects and their associated color information to pixels on the screen. This process is called rasterization. Before you look at an OpenGL program, - short for picture element - is the smallest visible element the display hardware can put on the screen. Information about the pixels (for instance, what color they're supposed to be) is organized in system intensity of all the pixels on the screen. Now look at an OpenGL program. Example 1-1 renders a white rectangle on a black background, as shown in Figure 1-1 . Figure 1-1 : A White Rectangle on a Black Background Example 1-1 : A Simple OpenGL Program #include <whateverYouNeed.h> main() { OpenAWindowPlease();(); KeepTheWindowOnTheScreenForAWhile(); }The first line of the main() routine opens a window on the screen: The OpenAWindowPlease() routine is meant as a placeholder for a window system-specific routine. The next two lines are OpenGL commands that clear the window to black: glClearColor() establishes what color the window will be cleared to, and glClear() actually clears the window. Once the color to clear to is set, the window is cleared to that color whenever glClear() is called.2f() commands. As you might be able to guess from the arguments, which are (x, y) coordinate pairs, the polygon is a rectangle. Finally, glFlush() ensures that the drawing commands are actually executed, rather than stored in a buffer awaiting additional OpenGL commands. The KeepTheWindowOnTheScreenForAWhile() placeholder routine forces the picture to remain on the screen instead of immediately disappearing. You might also have noticed some seemingly extraneous letters appended to some command names (the 3f in glColor3f(), for example). It's true that the Color part of the command name. 
Some OpenGL commands accept as many as eight different data types for their arguments. The letters used as suffixes to specify these data types for ANSI C implementations of OpenGL are shown in Table 1-1 , along with the corresponding OpenGL type definitions. The particular implementation of OpenGL that you're using might not follow this scheme exactly; an implementation in C++ or Ada, for example, wouldn't need to. Thus, the two commands glVertex2i(1, 3); glVertex2f(1.0, 3.0);are equivalent, except that the first specifies the vertex's coordinates as 32-bit integers and the second specifies them as single-precision floating-point numbers.); float color_array[] = {1.0, 0.0, 0.0}; glColor3fv(color_array). Finally, OpenGL defines the constant GLvoid; if you're programming in C, you can use this instead of void. Each state variable or mode has a default value, and at any point you can query the system for each variable's current value. Typically, you use one of the four following commands to do this: glGetBooleanv(), glGetDoublev(), glGetFloatv(), or glGetIntegerv(). Which of these commands you select depends on what data type you want the answer to be given in. Some state variables have a more specific query command (such as glGetLight*(), glGetError(), or glGetPolygonStipple()). In addition, you can save and later restore the values of a collection of state variables on an attribute stack with the glPushAttrib() and glPopAttrib() commands. Whenever possible, you should use these commands rather than any of the query commands, since they're likely to be more efficient. The complete list of state variables you can query is found in Appendix B . For each variable, the appendix also lists the glGet*() command that returns the variable's value, the attribute class to which it belongs, and the variable's default value. 
The OpenGL Extension to the X Window System (GLX) provides a means of creating an OpenGL context and associating it with a drawable window on a machine that uses the X Window System. GLX is provided as an adjunct to OpenGL. It's described in more detail in both Appendix D and the OpenGL Reference Manual. One of the GLX routines (for swapping framebuffers) is described in "Animation." GLX routines use the prefix glX. The OpenGL Programming Guide Auxiliary Library was written specifically for this book to make programming examples simpler and yet more complete. It's the subject of the next section, and it's described in more detail in Appendix E . Auxiliary library routines use the prefix aux. "How to Obtain the Sample Code" describes how to obtain the source code for the auxiliary library. Open Inventor is an object-oriented toolkit based on OpenGL that provides objects and methods for creating interactive three-dimensional graphics applications. Available from Silicon Graphics and written in C++, Open Inventor provides pre-built objects and a built-in event model for user interaction, high-level application components for creating and editing three-dimensional scenes, and the ability to print objects and exchange data in other graphics formats. In addition, since OpenGL's drawing commands are limited to those that generate simple geometric primitives (points, lines, and polygons), the auxiliary library includes several routines that create more complicated three-dimensional objects such as a sphere, a torus, and a teapot. This way, snapshots of program output can be interesting to look at. If you have an implementation of OpenGL and this auxiliary library on your system, the examples in this book should run without change when linked with them. The auxiliary library is intentionally simple, and it would be difficult to build a large application on top of it. 
It's intended solely to support the examples in this book, but you may find it a useful starting point to begin building real applications. The rest of this section briefly describes the auxiliary library routines so that you can follow the programming examples in the rest of this book. Turn to Appendix E for more details about these routines. auxInitPosition() tells auxInitWindow() where to position a window on the screen. auxInitDisplayMode() tells auxInitWindow() whether to create an RGBA or color-index window. You can also specify a single- or double-buffered window. (If you're working in color-index mode, you'll want to load certain colors into the color map; use auxSetOneColor() to do this.) Finally, you can use this routine to indicate that you want the window to have an associated depth, stencil, and/or accumulation buffer. auxKeyFunc() and auxMouseFunc() allow you to link a keyboard key or a mouse button with a routine that's invoked when the key or mouse button is pressed or released. sphere octahedron cube dodecahedron torus icosahedron cylinder teapot cone You can draw these objects as wireframes or as solid shaded objects with surface normals defined. For example, the routines for a sphere and a torus are as follows: void auxWireSphere(GLdouble radius); void auxSolidSphere(GLdouble radius); void auxWireTorus(GLdouble innerRadius, GLdouble outerRadius); void auxSolidTorus(GLdouble innerRadius, GLdouble outerRadius); All these models are drawn centered at the origin. When drawn with unit scale factors, these models fit into a box with all coordinates from -1 to 1. Use the arguments for these routines to scale the objects. 
Example 1-2 : A Simple OpenGL Program Using the Auxiliary Library: simple.c

#include <GL/gl.h>
#include "aux.h"

int main(int argc, char** argv)
{
    auxInitDisplayMode (AUX_SINGLE | AUX_RGBA);
    auxInitPosition (0, 0, 500, 500);
    auxInitWindow (argv[0]);
    glClearColor (0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
    glBegin(GL_POLYGON);
        glVertex2f(-0.5, -0.5);
        glVertex2f(-0.5, 0.5);
        glVertex2f(0.5, 0.5);
        glVertex2f(0.5, -0.5);
    glEnd();
    glFlush();
    sleep(10);
}

In a movie theater, motion is achieved by taking a sequence of pictures (24 per second) and then projecting them at the same rate on the screen. The key idea that makes motion picture projection work is that each frame is complete when it is displayed. If a program draws directly into the window, however, a frame might be shown while it is only partially drawn, so the animation flickers. An easy solution is to use two buffers: one is displayed while the other is being drawn, and when the drawing of a frame is complete, the two are swapped. Suppose the monitor is refreshed 60 times per second and your program can draw a frame in 1/45 second. Then each buffer swap has to wait for the next refresh, the animation runs at 30 frames per second, and the graphics are idle for 1/30-1/45=1/90 second per frame. Although 1/90 second of wasted time might not sound bad, it's wasted each 1/30 second, so actually one-third of the time is wasted.

In addition, the video refresh rate is constant, which can have some unexpected performance consequences. For example, with the 1/60 second per refresh monitor and a constant frame rate, you can run at 60 frames per second, 30 frames per second, 20 per second, 15 per second, 12 per second, and so on (60/1, 60/2, 60/3, 60/4, 60/5, ... frames per second). Suppose you can just barely draw a frame in 1/60 second, so you run at 60 frames per second. Then, all of a sudden, you add one new feature, and your performance is cut in half because the system can't quite draw the whole thing in 1/60 of a second, so it misses the first possible buffer-swapping time. A similar thing happens when the drawing time per frame is more than 1/30 second - the performance drops from 30 to 20 frames per second, giving a 33 percent performance hit. Another problem is that if the scene's complexity is close to any of the magic times (1/60 second, 2/60 second, 3/60 second, and so on in this example), then because of random variation, some frames go slightly over the time and some slightly under, and the frame rate becomes irregular, which can be visually disturbing. Interestingly, the structure of real animation programs does not differ too much from this description.
Usually, the entire buffer is redrawn from scratch for each frame, as it is easier to do this than to figure out which parts require redrawing - especially when modifications to a structure are being made for each frame and there's significant recomputation. OpenGL itself doesn't include a buffer-swapping command, since the feature might not be available on all hardware and is in any case highly dependent on the window system. However, GLX provides such a command, for use on machines that use the X Window System:

void glXSwapBuffers(Display *dpy, Window window);

Example 1-3 illustrates the use of glXSwapBuffers() in an example that draws a square that rotates constantly, as shown in Figure 1-2.

Figure 1-2 : A Double-Buffered Rotating Square

Example 1-3 : A Double-Buffered Program: double.c

#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glx.h>
#include "aux.h"

static GLfloat spin = 0.0;

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glPushMatrix();
    glRotatef(spin, 0.0, 0.0, 1.0);
    glRectf(-25.0, -25.0, 25.0, 25.0);
    glPopMatrix();
    glFlush();
    glXSwapBuffers(auxXDisplay(), auxXWindow());
}

void spinDisplay(void)
{
    spin = spin + 2.0;
    if (spin > 360.0)
        spin = spin - 360.0;
    display();
}

void startIdleFunc(AUX_EVENTREC *event)
{
    auxIdleFunc(spinDisplay);
}

void stopIdleFunc(AUX_EVENTREC *event)
{
    auxIdleFunc(0);
}

void myinit(void)
{
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glColor3f(1.0, 1.0, 1.0);
    glShadeModel(GL_FLAT);
}

void myReshape(GLsizei w, GLsizei h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    if (w <= h)
        glOrtho (-50.0, 50.0, -50.0*(GLfloat)h/(GLfloat)w,
                 50.0*(GLfloat)h/(GLfloat)w, -1.0, 1.0);
    else
        glOrtho (-50.0*(GLfloat)w/(GLfloat)h,
                 50.0*(GLfloat)w/(GLfloat)h, -50.0, 50.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity ();
}

int main(int argc, char** argv)
{
    auxInitDisplayMode(AUX_DOUBLE | AUX_RGBA);
    auxInitPosition(0, 0, 500, 500);
    auxInitWindow(argv[0]);
    myinit();
    auxReshapeFunc(myReshape);
    auxIdleFunc(spinDisplay);
    auxMouseFunc(AUX_LEFTBUTTON, AUX_MOUSEDOWN, startIdleFunc);
    auxMouseFunc(AUX_MIDDLEBUTTON, AUX_MOUSEDOWN, stopIdleFunc);
    auxMainLoop(display);
}
Group all Anagrams together in C++

In this tutorial, we are going to learn to group all the anagrams together in a given vector of strings with C++. For instance, if the given vector of strings is ["rams","mars","silent","listen","cars","scar"], then we have to group all anagrams together and return a two-dimensional vector: [["rams","mars"],["silent","listen"],["cars","scar"]]. To implement this we will use unordered maps and two-dimensional vectors.

Approach:

- Iterate over the given vector of strings and, for each string, first sort it.
- Check whether the sorted sequence is present in the unordered map or not.
- If the sorted sequence is not present, then make it a key in the unordered map.
- If the sorted sequence is present, then append the original string to the vector stored under that key.

Finally, store all the values in a two-dimensional vector according to the keys in the unordered map.

C++ implementation to group all anagrams together

Below is our C++ code to perform the task:

#include <bits/stdc++.h>
using namespace std;

vector<vector<string>> group(vector<string>& str)
{
    vector<vector<string>> vec;
    unordered_map<string, vector<string>> m;
    for (int i = 0; i < str.size(); i++) {
        string x = str[i];
        sort(x.begin(), x.end());
        m[x].push_back(str[i]);
    }
    for (auto i : m) {
        vec.push_back(i.second);
    }
    return vec;
}

int main()
{
    vector<string> str = {"ram", "mar", "listen", "silent", "lentsi", "more", "like"};
    vector<vector<string>> ans;
    ans = group(str);
    cout << "The grouped anagrams are as follows:" << endl;
    for (int i = 0; i < ans.size(); i++) {
        for (int j = 0; j < ans[i].size(); j++)
            cout << ans[i][j] << " ";
        cout << endl;
    }
}

Output:

After running our program, we will be able to get the result given below:

The grouped anagrams are as follows:
like
more
ram mar
listen silent lentsi

So from the output, we can see that we did it successfully.
Vanilla set.text to TextBox

I know this is a vanilla question but probably it will be easy for someone to answer. I want to set the text of a TextBox (for a progress bar, to follow what's going on) but I can't get it updated. Any help?

from vanilla import *
from time import sleep

class Test():

    def __init__(self):
        self.w = Window((200, 100), 'a W')
        self.w.text = TextBox((10, 10, -10, -10), 'start')
        self.w.open()
        # set text
        self.counter = 0
        while self.counter < 10:
            print self.counter
            self.w.text.set(self.counter)
            sleep(1)
            self.counter += 2
        else:
            print 'finished counting'

Test()

can you use the defconAppKit progressWindow? see

Cocoa is built up rather smart: it checks when a view is being updated frequently before redrawing it on screen. Although you can force a redraw:

self.w.text.getNSTextField().display()

after setting a string into the TextBox
The JPA Overview's Chapter 12, Mapping Metadata explains join mapping. All of the examples in that document, however, use "standard" joins, in that there is one foreign key column for each primary key column in the target table. OpenJPA supports additional join patterns as well. In a partial primary key join, the source table has foreign key columns for only a subset of the target table's primary key columns; as long as that subset identifies the proper rows in the target table, OpenJPA will function properly. There is no special syntax for expressing a partial primary key join - just do not include column definitions for missing foreign key columns.

In a non-primary key join, at least one of the target columns is not a primary key. Once again, OpenJPA supports this join type with the same syntax as a primary key join. There is one restriction, however: each non-primary key column you are joining to must be controlled by a field mapping that implements the org.apache.openjpa.jdbc.meta.Joinable interface. All built-in basic mappings implement this interface, including basic fields of embedded objects. OpenJPA will also respect any custom mappings that implement this interface. See Section 14, " Custom Mappings " for an examination of custom mappings.

Not all joins consist of only links between columns. In some cases you might have a schema in which one of the join criteria is that a column in the source or target table must have some constant value. OpenJPA calls these constant joins. For example:

@Entity
@Table(name="T1")
public class ... {

    @ManyToOne
    @JoinColumns({
        @JoinColumn(name="FK", referencedColumnName="PK1"),
        @JoinColumn(name="T2.PK2", referencedColumnName="'a'")
    })
    private ...;
}

Note the single quotes marking 'a' as a string constant rather than a column name. A join against a numeric constant looks similar:

@Entity
@Table(name="T1")
public class ... {

    @ManyToOne
    @JoinColumns({
        @JoinColumn(name="FK", referencedColumnName="PK2"),
        @JoinColumn(name="T2.PK1", referencedColumnName="2")
    })
    private ...;
}

Finally, from the inverse direction, these joins would look like this: ...;
Coefficients for bezier curves

#1 Members - Reputation: 1584 Posted 15 May 2013 - 01:34 AM

#2 Crossbones+ - Reputation: 2243 Posted 15 May 2013 - 06:50 AM

You start from the parametric Bézier curve formula and rewrite the terms until you obtain the desired formula. This is simple algebra, nothing complicated. For example, for the quadratic case:

P(t) = (1-t)^2 * P0 + 2t(1-t) * P1 + t^2 * P2
     = (P0 - 2*P1 + P2) * t^2 + 2*(P1 - P0) * t + P0
     = A*t^2 + B*t + C

The derivation in the cubic case is similar. But I have always found the Bézier formula a much more useful formulation. Why do you want the coefficients of the polynomial?

EDIT: corrected the formula.

Edited by apatriarca, 15 May 2013 - 06:58 AM.

#3 Members - Reputation: 1584 Posted 15 May 2013 - 08:18 AM

Thanks, I'll have to look through that. I am using the coefficients for several things. For example, to get the x at a given t I can just do:

float Curve::QuadraticBezierSegment::getXAtT( float t ) const
{
    // A*t^2 + B*t + C
    return ( m_AX * t * t ) + ( m_BX * t ) + m_CX;
}

There might be some other way to do that too, I am not sure. Also, if I take the derivative of the formula and plug in the coefficients I can solve it and get local extrema, i.e. the bounds of the curve. Again, I am not a math person, and there might be other ways to do this. If some come to mind, please feel free to share

Edited by GuyWithBeard, 15 May 2013 - 08:19 AM.
The derivative of a quadratic Bézier curve is thus a segment and of a cubic Bézier curve is a quadratic Bézier curve. The proof use the following result on the Bernstein polynomials (used to define the equation of a Bézier curve). The control points of the derivative curve are indeed the points for all i from 0 to n-1. If all you want is however to get some bounds on the curve, a Bézier curve also have the property to be contained in the convex hull of its points. It can be useful to easily discard curve pairs for collision for example. There also exists interative methods to compute intersections of Bézier curve using this method (or something similar). #5 Members - Reputation: 1584 Posted 16 May 2013 - 11:11 PM Coolio, thanks for the explanation! #6 Members - Reputation: 1880 Posted 17 May 2013 - 10:40 AM (Disclaimer: sorry if the LaTeX equations don't come out right; I've had issues!) apatriarca's formula for the derivative of a Bezier curve is mostly right, but you need to divide the control points by the interval of the curve. The formula for the derivative's control points is Di = n/(t1 - t0) * (Pi+1-Pi). This isn't a problem if the parameter domain for the Bezier curve is [0,1], but if it's something else, then you have to divide by the interval. If you want to convert a Bezier curve to a power basis polynomial, you need to convert the Bezier curve into an explicit Bezier curve (one where the x-coordinates of the curve are evenly spaced, such as below: Then you can express this as a Bernstein polynomial [eqn]y = \sum_{i=0}^n y_i B_i^n (t)[\eqn], where x = t. The closed-form conversion between Bernstein to power basis is given as [eqn]p_i = \sum_{k=0}^i b_k \binom{n}{i} \binom{i}{k} (-1)^{i-k}[\eqn]. My real question is: why you would want to convert to a polynomial when you can probably do what you want in the Bernstein basis? 
#7 Crossbones+ - Reputation: 2243 Posted 17 May 2013 - 12:36 PM In my previous reply I was only considering Bézier curves in which t vary uniformly in the [0, 1] interval. If you change the parametrization you clearly have to adjust the derivative, but this does not make my result wrong in any way. More complicated parametrizations are actually also possible. What if I want to use t = sin(s) with s in [0, pi/2]? The (support of the) curve clearly remains the same, but the velocity/derivative is now different. #8 Members - Reputation: 1880 Posted 22 May 2013 - 12:43 PM apatriarca, I didn't mean to say that your formula was incorrect. I just wanted to add that the interval of the curve matters in the derivative, you know, for completeness.
Branch: refs/heads/nested Commit: 19f202cc163ce24756aa0493936eead05ed8ec8b Author: William S Fulton <wsf@...> Date: 2013-11-30 (Sat, 30 Nov 2013) Changed paths: M Examples/test-suite/nested_structs.i Log Message: ----------- C nested struct passed by value example This was causing problems in Octave as wrappers were compiled as C++. Solution has already been committed and required regenerating the inner struct into the global C++ namespace (which is where it is intended to be in C). Commit: df679071681242ec2619c82693f261f1f1c34b80 Author: William S Fulton <wsf@...> Date: 2013-12-01 (Sun, 01 Dec 2013) Changed paths: M Examples/test-suite/common.mk A Examples/test-suite/nested_private.i Log Message: ----------- Testcase of private nested class usage causing segfault Needs fixing for C#/Java Compare:
This is the mail archive of the cygwin mailing list for the Cygwin project.

On 6/24/16, 2:59 PM, "Corinna Vinschen" <cygwin-owner@cygwin.com on behalf of corinna-cygwin@cygwin.com> wrote:

>>.

I ended up implementing this a couple of days ago. I was just spending a lazy Sunday morning and then it hit me: this is an exceptionally bad idea. The problem is that Windows uses the Anonymous identity for accounts who have not logged in using a password (as per Erik Soderquist's email regarding IIS behavior). Files in FUSE file systems that have a UID that cannot be mapped to a SID will suddenly be owned by that Anonymous user! Obviously this is a huge security hole. I intend to fix this ASAP, but I am now back to where we started. The obvious SID to use is the NULL SID, but that is already used by Cygwin for other purposes.

>>".

Ideally we should choose a SID that:

(1) Is very unlikely to be used by Microsoft at any point in the future.
(2) Cannot be associated with a user logon for any reason (see the problem with the Anonymous SID above).
(3) Maps to a reasonable UID in Cygwin.

I propose the following SID/UID mapping:

S-1-0-99 <=> UID 0xffffffff (32-bit -1)

This is a SID in the S-1-0 (Null Authority) namespace (the same one that contains the NULL SID), which is unlikely to be used by Microsoft. So it likely satisfies (1). For the same reason (that it is a new/unused SID in the S-1-0 namespace), I think it also satisfies (2). If we follow the rules from Cygwin's "POSIX accounts, permission, and security" document [IDMAP], the SID S-1-0-99 maps to 0x10063. But we can make a special rule for this SID to map it to a different UID. Mapping it to -1 may be the easiest option, but perhaps we can also consider mapping it to 0xfffffffe (-2).

Bill

[IDMAP]
The Extensible Markup Language (XML) standard is now almost four years old, and a lot of progress has been made since the W3C adopted it as an officially recommended specification. XML.ORG, a registry and repository for XML vocabularies overseen by the Organization for the Advancement of Structured Information Standards (OASIS), now has well over a hundred standard vocabularies for industry-specific usage. With the Electronic Business XML (ebXML) initiative, standards common to all industries -- such as standards for purchase orders, and the like -- will begin to emerge. (See Resources for more information on the XML specification, XML.ORG, OASIS, and ebXML.) Still, compared to the enormous potential of using XML for Web-based applications, these are still the early days. Some might fear that a large number of vocabularies represent a fragmentation in the standard. On the contrary, XML is intended as a metalanguage for establishing these vocabularies. XML differs from HTML in that it describes the data but not its presentation. While XML can easily be understood by programmers and programs, there is the need to display the data on Web pages and other page-oriented documents as well. To maximize the flexibility of using this data, the presentation should be specified outside of the XML document, using style sheets, for example, to define its appearance. The unique business structures that give each company its own competitive edge can be represented in private vocabularies. Companies can organize their departments separately, treating them as individual enterprises with vocabularies that reflect their way of doing business. But ultimately, information in the private definitions must be converted to a public standard for exchange with other organizations. It is also probable that new versions of vocabularies, even with completely different structures, will replace the old as companies learn better ways to do business. 
All of this points to a need for automatic conversion from one form of XML to another, from XML to HTML, and from XML to completely different presentation formats, such as PDF. What is needed, then, is a general way to accomplish mechanical translations from XML to all of these different forms. The solution: XSL transformations The Extensible Stylesheet Language (XSL) specification describes powerful tools to accomplish the required transformation of XML data (see Figure 1). XSL consists of the XSL Transformations (XSLT) language for transformation, and Formatting Objects (FO), a vocabulary for describing the layout of documents. XSLT uses the XML Path Language (XPath), a separate specification that describes a means of addressing XML documents and defining simple queries. The XSLT 1.0 and XPath 1.0 specifications are complete, having become W3C recommendations on November 16, 1999. The XSL 1.0 specification (which also describes FO) is expected to reach W3C recommended status soon. (See Resources for more information on the latest versions of XSLT, XPath, and XSL from the W3C.) Figure 1. The Extensible Stylesheet Language and its component technologies There are now several implementations of processors for XSLT. In particular, the Xalan project from Apache Software Foundation (see Resources) is a robust and highly compliant XSLT and XPath implementation. This tool was donated to Apache by IBM; it was developed within Lotus software by Scott Boag and his team. While Boag's team continues to develop Xalan, being part of Apache means it will enjoy contributions from individuals and other companies in the industry. With the XSLT specification in place and with the release of Xalan 1.0 in March 2000, XSLT is now stable and ready for real-world use. Xalan continues to be developed; the current release on xml.apache.org is version 2.1. The XSLT language offers a powerful means of transforming XML documents into other forms, producing XML, HTML, and other formats. 
It is capable of sorting, selecting, numbering, and has many other features for transforming XML. It operates by reading a style sheet, which consists of one or more templates, then matching the templates as it visits the nodes of the XML document. The templates can be based on names and patterns. Templates include literal text that is used in the resulting transform interspersed with directives to include specific data. This technique thus defines transformations declared "by example," a simple programming model. Figure 2 illustrates a simple XSLT transformation that pulls a literal string from an XML document and places it, with formatting, into an HTML document. Figure 2. XSLT: A simple example XSLT is not a general-purpose programming language like Java and C++. For example, symbolic "variables" cannot be reassigned a new value, so they are really constant definitions. This limitation means that counters and accumulators are not available. Java-like "for" or "while" statements are also not available; instead, iteration can be accomplished using recursion. The limitations in the language definition are intended to support powerful optimization techniques. The XSLT language has an extension function that allows a style sheet to call out to modules developed in Java or C++ (depending on the implementation of the XSLT engine). This allows the use of conventional programming languages for problems that are more easily solved that way. The most important feature of XSL is the ability to develop transformations quickly, with few lines of code. A transformation that could be developed and tested in an hour might take days to write using Java, even when an off-the-shelf XML parser such as Xerces (again, contributed to Apache by IBM) is used. One could write transformations in Perl, using XML4P to add XML parsing and DOM access support, but for many transformations it would be faster to use XSL. (See Resources for more information on Xerces and XML4P.) 
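To make the style-sheet idea concrete, a minimal XSLT style sheet of the kind sketched in Figure 2 might look like the following. The element names (message, text) are invented for illustration — the single template matches an element in the source XML and wraps its content in literal HTML:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- One template: match the (hypothetical) document element -->
  <xsl:template match="/message">
    <html>
      <body>
        <!-- Literal result elements, plus a directive that pulls in data -->
        <h1><xsl:value-of select="text"/></h1>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

Given a source document such as <message><text>Hello, World!</text></message>, an XSLT 1.0 processor like Xalan emits an HTML page whose level-one heading contains the text.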
XSL application scenarios XSL is a new technology and the software industry has only begun to come up with uses for it. In the following sections, you will see some of the ways it is used in these days of its infancy. These scenarios are not intended as design patterns or definitive approaches, but rather as examples of the many ways in which XSL may be employed. The purpose in presenting these approaches is to stimulate your thinking for solving problems by using XSL in ways not yet invented. XSL was originally developed with conversion of XML to HTML in mind, hence "Stylesheet" is its middle name. In this role, XSL can be run on the client, using a style sheet either local to the client's system or stored as a resource on a server. Using XSL on the client allows processing to be distributed to each client's computer. Most companies find it more convenient to offload processing from client workstations to servers. This simplifies the task of upgrading the power of an entire system; if more power is needed, the server can be upgraded or supplemented with other servers. An important advantage to the use of servers is that applications can be upgraded in only one place -- on the server -- rather than requiring a redeployment of application software on many client machines. A server-based transformation architecture is shown in Figure 3. Figure 3. Server-side rendering of HTML for client browser use XSL works well on a server. A common way to provide access is to use servlets that respond to a client's request by starting XSL and returning the resulting stream. One can even imagine an architecture where XSL is used both on a server and on the client. For example, the server might select records that match a query, and "prune" parts of the tree that contain information not needed by the client to reduce transmission time. The client could then run XSL locally to format the XML data according to the appearance required for viewing. 
Recent studies have concluded that browsing will become a small fraction of the total Internet traffic in the coming years. They suggest that even though there are uncountable Web sites today, a larger use of the Internet (some say 10 times as much) will be in the exchange of information in XML from one server to another, in scenarios that do not include a browser. Thus, business-to-business frequently involves vocabulary translation -- translating from one XML application to another -- rather than transformation of XML to HTML. Translation on demand, whether to HTML, XML, or some other form, is recognized as a common use case on an application server. IBM's recently announced WebSphere Transcoding Publisher (see Resources) automatically provides XSL translation on demand. It is capable of rendering XML to several different forms. As such, it is the logical extension of the server XSL transformation model discussed in the previous section. Transcoding can be used to create HTML renderings or PDF (via XSL FO and an FO processor such as Apache FOP, see Resources), thus supporting conventional desktop and laptop clients. Transcoding can also reformat the data to Wireless Markup Language (WML, see Resources) and other forms suitable for handheld devices. Doing so often requires pruning the data to a simpler form, as well as adapting it to the device requirements for handhelds. In the Copernicus project, IBM used transcoding technology to build a system with SABRE's travel management system coupled to Nokia intelligent telephones (see Resources). Information from SABRE is transcoded to an appropriate form for the device, and then sent to the device. At that point the mobile user can make changes to his or her itinerary as required, using HTTPS to talk to specialized business objects on the server. The flexibility of the transcoding technology expands the system to support many other types of handheld devices, even when they involve vocabularies other than WML. 
Finally, aside from converting XML to devices for direct client use, the Transcoding Publisher can be used for automated vocabulary translation, such as may be required for business-to-business transactions. The major advantage of the transcoding server model is that it can start with support for a few devices, then add style sheets to support others as the need arises. In addition to applications listed above, it could be used to support traditional print media -- newspapers, magazines, books -- as well as Web publishing, or even the new e-books offline readers. It could support a fax-on-demand system. Cars will eventually be able to connect to the network, and transcoding can be set up to send information in the form they require. As set-top boxes integrate home entertainment systems with home computers, transcoding will also play a role. Figure 4 illustrates a number of possible uses for a transcoding server. Figure 4. XSL used in the Transcoding Publisher Server IBM's Transcoding Publisher runs the XSL processor from a servlet that handles requests. It also supports caching of transformed data, so that multiple requests for the same transformation do not require running XSL for each request. Enterprise application integration XML is being embraced by every major software vendor. The ability to both emit XML and incorporate data expressed in XML is being added to most software products for which it makes sense. Because XML is a common and portable data format that is, or will be, available in these products, there is a tremendous opportunity to use XML data to integrate software into a complete system. However, because the XML data may be in a variety of vocabularies, a company may need a quick and mechanical means of converting it from the form received into the one the company needs. 
It is also possible to imagine that a company's internal structure might evolve into a series of entities with well-defined interfaces, and XML vocabularies that reflect their function. In this sense, the company's structure begins to resemble the structure of business-to-business relationships between companies, but on a smaller scale. In Figure 5, XML is the exchange medium between departments of a company, and XSL is used to transform data from the private form favored by one department into a form needed for processing in another. Figure 5. Intra-enterprise application integration The same model can be applied to the exchange of information between companies. There is a new trend in developing companies such that one company specializes in only one aspect of a complete business cycle. Such companies optimize their processes to be cost effective. Since on their own they may not be able to provide certain products or services, they may seek complementary products or services from other small companies, together offering the complete product or service required by their customers. This arrangement might be a one-time partnership, or it may exist for a longer period. For all intents and purposes, a "soft merger" of this type begins to look like a virtual company. Indeed, the virtual company may have a name different from the partners involved in creating the service or product. In the new economy, this kind of business aggregation requires the ability to respond quickly to new opportunities. When companies expose their services and products as processes represented in XML, it is possible to use XSL with not much programming to assemble an operating e-business from the partners' component systems, as shown in Figure 6. Such companies can be described as "integration-ready." 
Prior to the standardization of XML and XSL, building virtual companies from partners -- configuring middleware to work together and writing the required business logic -- could take days, weeks, or months. While XML and XSL do not eliminate these requirements, they do provide a quickly implemented and efficient means of aggregating the partners' business data. Figure 6. Creation of virtual companies by aggregating data and processes In most cases there is no requirement that a company be involved in only one such partnership. One could easily imagine a company that specializes in, say, warehousing and fulfillment, providing the same service to a large number of partnerships. Portals such as My Yahoo! are familiar to many Web users. They allow the client to design a custom home page with live, updated information according to the user's wishes. My Yahoo! gathers data from many sources to let users request an up-to-date weather forecast for their area, current stock prices, news headlines, and the like. This information is combined into a single Web page that has different parts of the screen allocated to presenting each part of the customized report. This model can also benefit a business worker. Suppose a clerk is employed to manage the supply of a particular line of parts needed for his company's manufacturing process. A portal could be designed to display prices or availability for certain critical components from various vendors. Information from the company's ERP system, such as inventory and forecasted demand, can be incorporated on the same page. The similarity with the My Yahoo!-type portal is the ability to gather data from a variety of resources, select according to a user's profile, and format the data for a particular screen. When the sources of such data can provide it in XML, XSL can be used to automate the transformation required for portals. One can imagine sending HTML streams to subobjects on the browser as a means of managing regions for display. 
Figure 7 shows an example of pulling data from multiple sources and formatting it into a single portal screen. Figure 7. A portal used for aggregating information from diverse sources In all of the examples above, XML is treated as data to be converted from one form to another, either for consumption by a client or by another server. Yet another way of using XML is to generate procedural code based on specifications described by XML data. For example, in configuring a complex product such as a personal computer, the information about available options might be exported into XML. XSL could convert it into HTML for forms filled out by the end user. If the user chooses a SCSI adapter, a refresh of the form from the same XML might include SCSI device options that were not available until the adapter was selected. By writing a new style sheet, the same XML source document could generate forms definition in other computer languages -- Java code that instantiates and initializes controls, Windows resource definitions, even forms for older 3270 terminal-based systems. All of this is possible because XML describes only the content; the presentation is defined using XSL style sheets. By designing various style sheets for various form systems, the same XML can be used for different kinds of applications. The sections above list just a few application categories where XSL can be gainfully employed. Expect many other uses to emerge as the technology is embraced by creative developers around the world. Limits of mechanical translation XSL can solve many problems by translating XML mechanically, but a few caveats apply. It is just one tool, and it will not address every need for changing XML documents. As stated above, the language itself is not intended for general-purpose programming. Unlike Java or C++, for example, variables can be set only once; they are really more like symbolic constants in that respect. They cannot be incremented, so loop counting is not possible. 
If there is a need to parse a "lastname, firstname" string into separate components, it can be done in XSL, but not easily. Such situations may call for the use of extensions plugged into XSL. With the Java version of Xalan, Java classes can be used to extend the power of an XSL processor. Mechanical translation must be done with care. When converting from one vocabulary to another, it is important to consider the meaning of the data between tags, not just the tag name. Even with a common tag name like <name> (customer name? company name?), it is hard to be sure what the name means. In addition to the meaning of the data, the format of the data must be understood. When combining listings from two catalogs of electronic parts, for example, the specifications of particular components must be expressed in a similar standard. The working voltage of a capacitor, say, could be expressed as a fixed value, a range of values, or a fixed value with a percent tolerance. The application that eventually consumes such data may understand only one form. Both of these problems are best addressed by having very well-defined vocabularies that are agreed upon between companies. XML.ORG oversees the definition and development of such vocabularies within an industry, and it is important that the specifications reflect the input of all companies that will be using the vocabulary for e-business. XSL is a powerful transformation facility that provides mechanical translation of XML documents from one form to another. It can convert to HTML, to another XML vocabulary, or to text that is not XML at all. Many transformations can be designed using only an XSL processor, and it is possible to add extensions to the processor to support particular requirements that are not easy using only XSL. You have seen several scenarios where XSL plays a role. These initial ideas about using XSL represent solutions to certain problems seen today, but XSL can be used in many ways that have yet to be invented. 
Finally, XSL by itself cannot address all incompatibilities between XML documents. When vocabularies are not well defined, either by the exact meaning of a tag or the exact format of the data associated with it, mechanical translation will not solve the problem. This underscores the importance of developing well-defined standard vocabularies for e-business usage under the auspices of a neutral standards organization such as XML.ORG.

- The Extensible Markup Language (XML) is an officially recommended standard of the W3C. For the most up-to-date information on XML, go to the W3C's XML page.
- To stay on top of current XML developments, visit XML.ORG, The XML Industry Portal.
- OASIS, the Organization for the Advancement of Structured Information Standards, is a consortium that creates interoperable specifications based on XML.
- The Electronic Business XML Initiative home page is the source for ebXML specifications, technical reports, reference materials, and news.
- Review the specification documents for XSL Transformations (XSLT), Version 1.0, XML Path Language (XPath), Version 1.0, and Extensible Stylesheet Language (XSL), Version 1.0, from the W3C.
- Get more information on Xalan, a robust XSLT and XPath implementation from the Apache Software Foundation. Apache also provides Xerces, an XML parser, XML4P, a DOM parser for Perl, and FOP, an FO-based print formatter.
- The IBM WebSphere Transcoding Publisher provides automatic XSL translation and is capable of rendering XML into several different forms.
- For a discussion of Wireless Markup Language (WML) and an extensive list of resources, read "WAP Wireless Markup Language Specification (WML)" in The XML Cover Pages.
- Read a news release about the SABRE/Nokia project.
- "What kind of language is XSLT?" by Michael Kay puts XSLT in perspective.
- "Transforming XML documents" by Doug Tidwell is a three-part tutorial on how to transform XML documents into various formats, including HTML, Scalable Vector Graphics (SVG), and PDF.
- For more about XML namespaces, see part 1 and part 2 of David Marston's article, "Plan to use XML namespaces" (developerWorks, November 2002).

PDFs of Mark's current presentations can be downloaded from.. You can contact Mark Colan at mcolan@us.ibm.com.
http://www.ibm.com/developerworks/xml/library/x-xsltwork/index.html
23 Nov 04:27 2012

How to design matrix on edgeR to study genotype x environmental interaction

Dear Daniela,

I think you would be very well advised to seek out a statistical bioinformatician with whom you can collaborate on an ongoing basis. A GxE anova analysis would be statistically sophisticated even if you were analysing a simple univariate phenotypic trait. Attempting to do that sort of analysis in the context of an RNA-Seq experiment on miRNAs is far more difficult again. The design matrices you have created may be correct, but that's just the start of the analysis, and there are many layers of possible complexity.

The BCV in your experiment is so large that I feel there must be quality issues with your data that you have not successfully dealt with. It seems very likely, for example, that there are batch effects that you have not yet described.

To answer some specific questions:

You might be better off with prior.df=10 instead of the default, but this has little to do with the size of the BCV.

You ask why one variety and one stage are disappearing from your design matrix. If you omit the "0+" in the first formula (and you should), you will find that one vineyard will disappear as well. This is because the number of contrasts for any factor must be one less than the number of levels. This is a very fundamental feature of factors and model formulas that you need to become familiar with before you can make sense of any model formula.

Your email makes no mention of library sizes or sequencing depths, but obviously that has a fundamental effect on what is significantly different from what.

I think you know now how to use edgeR in principle. However, as you probably already appreciate, deciding what is the right analysis for your data is beyond the scope of the mailing list.

Best wishes
Gordon

On Thu, 22 Nov 2012, bioconductor-request@... wrote:

> Date: Thu, 22 Nov 2012 10:07:19 +0100
> From: Daniela Lopes Paim Pinto <d.lopespaimpinto@...>
> To: bioconductor@...
> Subject: Re: [BioC] How to design matrix on edgeR to study genotype x > environmental interaction > Message-ID: > > Dear Gordon, > > Thank you so much for your valuable input. I took sometime to study a bit > more and be able to consider all the aspects you pointed out. At this time > I reconsider the analysis and started again, with the data exploration of > all 48 samples. > > First I filtered out the low reads, considering just the ones with more > than 1 cpm in at least 2 libraries (I have two replicates of each library); > the MDS plot clearly separate one of the locations from the other two > (dimension 1) and with less distinction the two varieties (dimension 2). > The stages also seems to be separated in two groups (the first two ones > together and separate of the two last ones) but as the varieties, not so > distinct. The two replicates are also consistent. > > With the BCV plot I could observe that reads with lower logCPM have bigger > BCV (the BCV value was equal to 0.5941), and then comes my first question: > > Should I choose *prior.df* different from the default, due to this > behavior, when estimating genewise dispersion? > > To proceed with the DE analysis, I tried two approaches, this time with all > the 48 samples, as suggested. 
> For both approaches, I have the following data frame: > >> target > Sample Vineyard Variety Stage > 1 1 mont CS ps > 2 2 mont CS ps > 3 4 mont CS bc > 4 5 mont CS bc > 5 7 mont CS 19b > 6 8 mont CS 19b > 7 10 mont CS hv > 8 11 mont CS hv > 9 13 mont SG ps > 10 14 mont SG ps > 11 16 mont SG bc > 12 17 mont SG bc > 13 19 mont SG 19b > 14 20 mont SG 19b > 15 22 mont SG hv > 16 23 mont SG hv > 17 25 Bol CS ps > 18 26 Bol CS ps > 19 28 Bol CS bc > 20 29 Bol CS bc > 21 31 Bol CS 19b > 22 32 Bol CS 19b > 23 34 Bol CS hv > 24 35 Bol CS hv > 25 37 Bol SG ps > 26 38 Bol SG ps > 27 40 Bol SG bc > 28 41 Bol SG bc > 29 43 Bol SG 19b > 30 44 Bol SG 19b > 31 46 Bol SG hv > 32 47 Bol SG hv > 33 49 Ric CS ps > 34 50 Ric CS ps > 35 52 Ric CS bc > 36 53 Ric CS bc > 37 55 Ric CS 19b > 38 56 Ric CS 19b > 39 58 Ric CS hv > 40 59 Ric CS hv > 41 61 Ric SG ps > 42 62 Ric SG ps > 43 64 Ric SG bc > 44 65 Ric SG bc > 45 67 Ric SG 19b > 46 68 Ric SG 19b > 47 70 Ric SG hv > 48 71 Ric SG hv > > At the first instance, I used the full interaction formula as the following > code: > >> d <- DGEList(counts=file) >> keep <- rowSums(cpm(DGElist) > 1) >= 2 >> DGElist <- DGElist[keep,] >> DGElist$samples$lib.size <- colSums(DGElist$counts) >> DGElist_norm <- calcNormFactors(DGElist) > *> design <- model.matrix(~0 + Vineyard + Variety + Stage + > Vineyard:Variety + Vineyard:Stage + Variety:Stage + Vineyard:Variety:Stage, > data=target)* > > [or even (*> design <- model.matrix(~0 + Vineyard*Variety*Stage, > data=target)*) which gives the same result] > >> rownames(design) <- colnames(DGEList_norm) > > However, when I call the *design* I see that one Variety (i.e., CS) and one > Stage (i.e., 19b) are not present in the design matrix, as individual > effect or even in the interactions. > > Then I passed to the second approach, in which, I create groups: > >> group <- > factor(paste(target$Vineyard,target$Variety,target$Stage, from the design matrix when using the full interaction formula? 
> > Sorry for the long email and thank you for all the advises, > > Best wishes > > Daniela Lopes Paim Pinto > PhD student - Agrobiosciences > Scuola Superiore Sant'Anna, Italy > >> sessionInfo() > R version 2.15.2 (2012-10-26) >] edgeR_3.0.3 limma_3.14.1 > > loaded via a namespace (and not attached): > [1] tools_2.15.2 > > > > > > > > > > > 2012/11/11 Gordon K Smyth <smyth@...> > >> Dear Daniela, >> >> What version of the edgeR are you using? The posting guide asks you to >> give sessionInfo() output so we can see package versions. >> >> Your codes looks correct for testing an interaction, although you could >> estimate the same interaction more directly using an interaction formula as >> in Section 3.3.4 of the edgeR User's Guide. >> >> However the model you have used is correct only if all 12 samples >> correspond to the same physiological stage. I wonder why you are not >> analysing all the 48 samples together. I would start with data exploration >> of all 48 samples, including exploration measures like transcript >> filtering, library sizes, normalization factors, an MDS plot, a BCV plot, >> and so on. The first step is to check the data quality before going on to >> test for differential expression. >> >> edgeR has very high statistical power, even giving p-values smaller than I >> would like in some cases. So if you're not getting any differential >> expression, it is because there is none or because you have data quality >> problems. >> >> Best wishes >> Gordon >> >> Date: Fri, 9 Nov 2012 14:44:28 +0100 >>> From: Daniela Lopes Paim Pinto <d.lopespaimpinto@...> >>> To: bioconductor@... >>> Subject: Re: [BioC] How to design matrix on edgeR to study genotype x >>> environmental interaction >>> >>> Dear Gordon, >>> >>> Thank you so much for the reference. 
I read all the chapter regarding to >>> the models and I tried to set up the following code considering a data >>> frame like this: >>> >>> target >>>> >>> Sample Variety Location >>> 1 1 CS Mont >>> 2 2 CS Mont >>> 3 25 CS Bol >>> 4 26 CS Bol >>> 5 49 CS Ric >>> 6 50 CS Ric >>> 7 13 SG Mont >>> 8 14 SG Mont >>> 9 37 SG Bol >>> 10 38 SG Bol >>> 11 61 SG Ric >>> 12 62 SG Ric >>> >>> group <- factor(paste(target$Variety,target$Location,>> >>> And then I estimated the trended and tag wise dispersion and fit the model >>> doing: >>> >>> disp.tren <- estimateGLMTrendedDisp(DGEnorm,design) >>>> disp.tag <- estimateGLMTagwiseDisp(disp.tren,design) >>>> fit <- glmFit(disp.tag,design) >>>> >>> >>> When I made some contrasts to find DE miRNAs, for example: >>> >>> my.constrasts <- makeContrasts(CS_BolvsMont = CS_Bol-CS_Mont, >>>> >>> CSvsSG_BolvsMont = (CS_Bol-CS_Mont)-(SG_Bol-SG_Mont), levels=design) >>> >>>> lrt <- glmLRT(fit, contrast=my.constrasts[,"CS_BolvsMont"]) >>>> >>> >>> I expected to find DE miRNAs due the environment effect (CS_BolvsMont) and >>> for example DE miRNAs due the interaction genotypeXenvironment ( >>> CSvsSG_BolvsMont). >>> >>> However the results do not seems to reflect it, since I did not get even a >>> single DE miRNA with significant FDR (even less than 20%!!!!) and going >>> back to the counts in the raw data I find reasonable differences in their >>> expression, which was expected. I forgot to mention that I decided to >>> consider stage by stage separately and not add one more factor on the >>> model, since I am not interested, for the moment, on the time course (as I >>> wrote in the previous email - see below). >>> >>> Could you (or any body else from the list) give me some advise regarding >>> the code? Is this matrix appropriate for the kind of comparisons I am >>> interested on? >>> >>> Thank you in advance for any input.
>>> >>> Daniela >>> >>> >>> >>> 2012/10/30 Gordon K Smyth <smyth@...> >>> >>> Dear Daniela, >>>> >>>> edgeR can work with any design matrix. Just setup your interaction >>>> model using standard R model formula. See for example Chapter 11 of: >>>> >>>> http://cran.r-project.org/doc/manuals/R-intro.pdf >>>> >>>> Best wishes >>>> Gordon >>>> >>>> Date: Mon, 29 Oct 2012 16:24:31 +0100 >>>> >>>>> From: Daniela Lopes Paim Pinto <d.lopespaimpinto@...> >>>>> To: bioconductor@... >>>>> Subject: [BioC] How to design matrix on edgeR to study genotype x >>>>> environmental interaction >>>>> >>>>> Dear all, >>>>> >>>>> I'm currently working with data coming from deep sequencing of 48 small >>>>> RNAs libraries and using edgeR to identify DE miRNAs. I could not figure >>>>> out how to design my matrix for the following experimental design: >>>>> >>>>> I have 2 varieties (genotypes), cultivated in 3 different locations >>>>> (environments) and collected in 4 physiological stages. None of them >>>>> represent a control treatment. I'm particulary interested on identifying >>>>> those miRNAs which modulate their expression dependent on genotypes (G), >>>>> environments (E) and G x E interaction. For instance the same variety in >>>>> the 3 different locations, both varieties in the same location and both >>>>> varieties in the 3 different locations. >>>>> >>>>> I was wondering if I could use the section 3.3 of edgeR user guide as >>>>> reference or if someone could suggest me any other alternative method. >>>>> >>>>> Thanks in advance >>>>> >>>>> Daniela
http://permalink.gmane.org/gmane.science.biology.informatics.conductor/44917
I was writing a code for class as follows:

import java.util.Scanner;

public class Salary2 {

    public static void main(String[] args) {
        Scanner input=new Scanner(System.in);
        String name;
        float salary;
        float hours;

        System.out.print("What is the employees name?");
        name=input.nextLine();

        while (!name.equals("stop")) {
            System.out.printf("What is %ss salary per hour?", name);
            salary=input.nextFloat();
            System.out.printf("How many hours did %s work?", name);
            hours=input.nextFloat();
            System.out.printf("%ss salary for this week is $%.2f", name, hours*salary);
            System.out.print("\nWhat is the employees name?");
            input = new Scanner(System.in);
            name=input.nextLine();
        }
    }
}

It took me forever to figure out that I had to redeclare "input" on line 21. Why is this necessary? Why can't I just overwrite "input" like a normal variable?
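For anyone hitting the same wall: this is not a declaration rule but a buffering quirk. nextFloat() consumes only the number and leaves the trailing end-of-line in the Scanner's buffer, so a following nextLine() returns the empty remainder of that line; constructing a fresh Scanner happens to start after input the old Scanner had already buffered, which is why the reassignment seemed necessary. A self-contained sketch (keyboard input simulated with a String):

```java
import java.util.Scanner;

public class LeftoverNewlineDemo {
    public static void main(String[] args) {
        // Simulates a user typing "40<Enter>" and then "Bob<Enter>".
        Scanner input = new Scanner("40\nBob\n");

        float salary = input.nextFloat(); // consumes "40" but NOT the newline
        String rest = input.nextLine();   // finishes the current line: returns ""
        String name = input.nextLine();   // now reads the real next line: "Bob"

        System.out.println("salary=" + salary);
        System.out.println("rest=[" + rest + "]");
        System.out.println("name=" + name);
    }
}
```

The usual one-line fix in the original program is to call input.nextLine() once after the last nextFloat() to discard the leftover end-of-line, rather than creating a second Scanner.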
https://www.daniweb.com/programming/software-development/threads/389497/redeclaring-scanner
nghttp2_select_next_protocol

Synopsis

#include <nghttp2/nghttp2.h>

int nghttp2_select_next_protocol(unsigned char **out, unsigned char *outlen, const unsigned char *in, unsigned int inlen)

A helper function for dealing with NPN in client side or ALPN in server side. The in contains peer's protocol list in preferable order. The format of in is length-prefixed and not null-terminated. For example, h2 and http/1.1 stored in in like this:

in[0] = 2
in[1..2] = "h2"
in[3] = 8
in[4..11] = "http/1.1"
inlen = 12

The selection algorithm is as follows:

If peer's list contains HTTP/2 protocol the library supports, it is selected and returns 1. The following steps are not taken.

If peer's list contains http/1.1, this function selects http/1.1 and returns 0. The following step is not taken.

This function selects nothing and returns -1 (so-called non-overlap case). In this case, out and outlen are left untouched.

Selecting h2 means that h2 is written into *out and its length (which is 2) is assigned to *outlen.

For ALPN, refer to See for more details about NPN.

For NPN, to use this method you should do something like:

static int select_next_proto_cb(SSL* ssl,
                                unsigned char **out, unsigned char *outlen,
                                const unsigned char *in, unsigned int inlen,
                                void *arg)
{
    int rv;
    rv = nghttp2_select_next_protocol(out, outlen, in, inlen);
    if (rv == -1) {
        return SSL_TLSEXT_ERR_NOACK;
    }
    if (rv == 1) {
        ((MyType*)arg)->http2_selected = 1;
    }
    return SSL_TLSEXT_ERR_OK;
}
...
SSL_CTX_set_next_proto_select_cb(ssl_ctx, select_next_proto_cb, my_obj);
https://nghttp2.org/documentation/nghttp2_select_next_protocol.html
There has been a lot of talk about the contents of this site recently. Let's look at some basics:

Subscribe - Exciting! The highlight of any day.
Download - Snore, boring.

Can't let Paul get the last word in.

Wow, I didn't know if we would make it this far and what kind of form we would have ended up in. I wanted to take some time to go thru and look at some of the really cool events of 2005. January February March April May June July August September October November December. People that I have met or remet along the way. MS people. Wilco Bauwer. David Yack. Ambrose Little. Plipper. Bill Ryan. Jason Salas. ASPInsiders. Craig Shoemaker. Frappr Map.

Go to the podcast site and get in on the fun. Send me your pictures for the listener gallery.

Iterating through a DataTable in Atlas. get_length() for the number of rows. getItem(i) for the rows. getProperty("Column Name") for the column.

function MethodReturn(result)
{
    var i = 0;
    var str = "";
    var strRowsInfo = "";
    var strReturn = "<br />";
    // Walk the rows and pull out one column's values.
    for (i = 0; i < result.get_length(); i++)
    {
        strRowsInfo += result.getItem(i).getProperty("Column Name") + strReturn;
    }
    str = strRowsInfo;
}

Jonathon Hawkings has a really good explanation regarding the download problems with Atlas.

Apologies for the belated wishing of a Merry Festivus for all.

Ok, assuming you have some datatable and you are using Atlas, you can get the datatype of your datacolumn object in Atlas. You can call the get_dataType() method on your datacolumn. Kinda cool.

I've been trying to go over the Web.Data namespace recently in Atlas.

I realize that some people here want to get a free copy of some stuff and pimp products. Heck, I do it at times and even get way off technical topics, but this continual repost of the exact same thing is annoying. I realize that the vendor wants you to put up a post on your blog, but geez, the same dry generic post that the vendor gives you is just a little bit too much. We had this problem about 12-18 months ago and it got ugly then. I would accept a product review. If you like the product say so.
The same dry advert from the vendor should not be just blindly copied to your blog.

Additional thoughts:

1. If you were involved in writing the product, please post about your involvement, some technical features, and how to develop with it.
2. If you want to try it out for free, that's fine, just remember not to put the post on the main page.

PS. You may now return to your regularly scheduled development.

We'd like to know where the listeners of the ASP.NET Podcast are at. As such, thanks to a suggestion by Simone Chiaretta of Italy, we now have a frappr map at. Please add yourself in.

I was just reading the forums that according to Nikhil Kothari (and he would know!), there is a debug.assert in Atlas. Anything that helps the debugging experience in Atlas/javascript is a good thing given the pain of the existing client side debugging experience.
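Returning to the DataTable iteration post above: for readers who want to try the pattern without an Atlas page handy, the method names (get_length, getItem, getProperty) are taken from the post, while the mock table below is invented purely to exercise the loop:

```javascript
// Minimal stand-in for an Atlas DataTable result, just enough to
// exercise the iteration pattern from the post.
function makeMockTable(rows) {
    return {
        get_length: function () { return rows.length; },
        getItem: function (i) {
            return {
                getProperty: function (name) { return rows[i][name]; }
            };
        }
    };
}

// The same loop shape as the post: walk the rows, collect one column.
function listColumn(result, columnName) {
    var parts = [];
    for (var i = 0; i < result.get_length(); i++) {
        parts.push(result.getItem(i).getProperty(columnName));
    }
    return parts.join("<br />");
}

var table = makeMockTable([{ Name: "Ann" }, { Name: "Bob" }]);
console.log(listColumn(table, "Name")); // "Ann<br />Bob"
```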
http://weblogs.asp.net/wallym/archive/2005/12.aspx
Actually the topic name should be "How to profile code". Well I am writing a SIMD math library. I got 2 implementations, SSE and scalar. I'm not sure how to measure the code speed. Currently I'm not using optimization, and no debug symbols are generated for profiling. I'm creating a loop that repeats the operation... The compiler is cl. I'm expecting the SSE dot product to be slower than the scalar version? But the cross product is also slower!?!@

SGE_FORCE_INLINE SGVector vec3_cross(const SGVector& a, const SGVector& b)
{
#if defined(SGE_MATH_USE_SSE)
    __m128 T = _mm_shuffle_ps(a.m_M128, a.m_M128, SGE_SIMD_SHUFFLE(1, 2, 0, 3)); //(Y Z X 0)
    __m128 V = _mm_shuffle_ps(b.m_M128, b.m_M128, SGE_SIMD_SHUFFLE(1, 2, 0, 3)); //(Y Z X 0)

    //i(ay*bz - by*az) + j(bx*az - ax*bz) + k(ax*by - bx*ay)
    T = _mm_mul_ps(T, b.m_M128); //bx * ay, by * az, bz * ax
    V = _mm_mul_ps(V, a.m_M128); //ax * by, ay * bz, az * bx
    V = _mm_sub_ps(V, T);
    V = _mm_shuffle_ps(V, V, SGE_SIMD_SHUFFLE(1, 2, 0, 3));
    return SGVector(V);
#else
    const float x = (a.y*b.z) - (b.y*a.z);
    const float y = (b.x*a.z) - (a.x*b.z);
    const float z = (a.x*b.y) - (b.x*a.y); // note: originally "(b.x,a.y)" -- the comma was a typo for "*"
    return SGVector(x, y, z, 0.f);
#endif
}

where SGVector is struct with union{ struct {float x,y,z;}; float arr[4]; __m128 m_M128; }. (maybe that is the problem?!)

EDIT : maybe __forceinline is involved too!? I will remove it.

Edited by imoogiBG, 23 October 2013 - 03:33 PM.
http://www.gamedev.net/topic/649289-how-to-profile-simd/
27 September 2011 04:51 [Source: ICIS news]

GUANGZHOU (ICIS)--In addition, the company will build a 2.2m tonne/year catalytic cracker and 2.6m tonne/year diesel hydrogenation unit as downstream facilities at the CDU, according to a statement from Sinopec.

The project costs yuan (CNY) 6.4bn ($1bn) to build, according to a Sinopec statement released last year.

The company is expected to start up the CDU and the downstream units by late 2012, a company source said.

Sinopec Shijiazhuang currently operates a 5m tonne/year CDU at the same site, according to its website.

($1 = CNY6.40)

Additional reporting by Amy
http://www.icis.com/Articles/2011/09/27/9495225/sinopec-shijiazhang-starts-refinery-expansion-downstream-units.html
Basics of Files in C Programming

The C programming library offers functions for making a new file, writing to that file, and reading data from any file. To bolster those basic file functions are a suite of file manipulation functions. They allow your programs to rename, copy, and delete files. The functions work on any file, not just those you create, so be careful!

How to rename a file in C programming

The rename() function is not only appropriately named but it's also pretty simple to figure out:

x = rename(oldname,newname);

oldname is the name of a file already present; newname is the file's new name. Both values can be immediate or variables. The return value is 0 upon success; -1 otherwise. The rename() function is prototyped in the stdio.h header file.

The source code shown in Creating and Renaming a File creates a file named blorfus and then renames that file to wambooli.

CREATING AND RENAMING A FILE

#include <stdio.h>
#include <stdlib.h>

int main()
{
    FILE *test;

    test=fopen("blorfus","w");
    if(!test)
    {
        puts("Unable to create file");
        exit(1);
    }
    fclose(test);
    puts("File created");
    if(rename("blorfus","wambooli") == -1)
    {
        puts("Unable to rename file");
        exit(1);
    }
    puts("File renamed");
    return(0);
}

Lines 9 through 15 create the file blorfus. The file is empty; nothing is written to it. The rename() function at Line 17 renames the file. The return value is compared with -1 in Line 18 to see whether the operation was successful.

How to copy a file in C programming

The C library has no dedicated file-copy function; a file is copied by reading characters from the original and writing them to a new file. That's how files are copied. Duplicate That File demonstrates how a file can be duplicated, or copied. The two files are specified in Lines 9 and 10. In fact, Line 9 uses the name of the Exercise file, the source code from Duplicate That File. The destination file, which contains the copy, is simply the same filename, but with a bak extension.
DUPLICATE THAT FILE

#include <stdio.h>
#include <stdlib.h>

int main()
{
    FILE *original,*copy;
    int c;

    original=fopen("ex2308.c","r");
    copy=fopen("ex2308.bak","w");
    if( !original || !copy)
    {
        puts("File error!");
        exit(1);
    }
    while( (c=fgetc(original)) != EOF)
        fputc(c,copy);
    puts("File duplicated");
    return(0);
}

The copying work is done by the while loop at Line 16. One character is read by the fgetc() function, and it's immediately copied to the destination by the fputc() function in Line 17. The loop keeps spinning until the EOF, or end-of-file, is encountered.

Exercise 2: Copy the source code from Duplicate That File into your editor. Save the file as ex2308.c, build, and run. You'll need to use your computer operating system to view the resulting file in a folder window. Or you can view the results in a terminal or command prompt window.

How to delete a file in C programming

Programs delete files all the time, although the files are mostly temporary anyway. Back in the bad old days, many programmers complained about programs that didn't "clean up their mess." If your code creates temporary files, remember to remove them before the program quits. The way to do that is via the unlink() function. Yes, the function is named unlink and not delete or remove or erase or whatever operating system command you're otherwise used to. In Unix, the unlink command can be used in the terminal window to zap files, although the rm command is more popular. The unlink() function requires the presence of the unistd.h header file, which you see at Line 3 in File Be Gone!
Exercise 3: Type the source code from File Be Gone! into your editor. Build and run.
https://www.dummies.com/programming/c/basics-of-files-in-c-programming/
This chapter describes configuring your system for communication on your network. It includes the following sections:

The Web Administrator graphical user interface (GUI) enables you to configure your system for communication on your network. After you configure network communication and services, you need to configure your file system, user access rights, any other features, and any options that you purchased.

This chapter follows the same sequence as the configuration wizard. It does not cover all of the features you might want to set up. If you want to set up a specific feature that is not covered in this chapter, look it up in the index to find the instructions.

In order to configure your system for communication, you must set up a server name that identifies the NAS server on the network.

To set the server name:

1. From the navigation panel, choose Network Configuration > Set Server Name.

2. Type the server name in the Server Name field.

The server name identifies the system or identifies the server unit, for dual-server high-availability (HA) systems on the network. The server name begins with a letter of the alphabet (a-z, A-Z) or number 0-9 and can include up to 30 characters: a-z, A-Z, 0-9, hyphens (-), underscores (_), and periods (.).

3. Type the contact information for your company.

The system includes this information in any diagnostic email messages that it sends. For more information about diagnostic email messages, see Sending a Diagnostic Email Message.

4. Click Apply to save your settings.

This section provides information about logical unit numbers (LUNs) and how to set and restore LUN paths. The following subsections are included:

A logical unit number (LUN) path is a designation that describes how a file volume in a LUN is accessed by which NAS server and controller. To every file volume there are two LUN paths from the NAS server controllers to the disk array controllers: primary and alternate.
If one fails, the system uses the other available LUN path to access the desired file volume. The number of LUN paths and their implementations depend on the model and configuration of the system. In a cluster configuration, a server (head) induces a head failover (see Enabling Server Failover) if both the primary and alternate paths fail. For more information, see Setting LUN Paths.

FIGURE 2-1 shows a single-server appliance or gateway configuration. The primary logical unit number (LUN) path to a file volume in L0 (LUN 0) is C0-L0, and the alternate path is C1-L0. The primary LUN path to a volume in L1 is C1-L1, and the alternate path is C0-L1. As illustrated above, the system has the following LUN paths. Each LUN can be accessed through either controller 0 (C0) or controller 1 (C1).

FIGURE 2-2 shows a cluster appliance or gateway system configuration. The primary logical unit number (LUN) path to L0 (LUN 0) on server H1 is C0-L0; the alternate path is C0-L1. The primary L0 path on server 2 is C1-L0 and the alternate path is C1-L0.

File volumes are normally accessed through the primary LUN path designated for the LUN to which the file volumes belong. In a cluster configuration, a server induces a failover if its primary and alternate paths fail (see Enabling Server Failover).

By setting a logical unit number (LUN) path, you designate the current active LUN path. The current active LUN path can be either the primary or alternate path. For optimal performance, set the active path to the primary path. A LUN can be reassigned only if there are no file systems on that LUN. On a cluster appliance, only the server that "owns" a LUN can reassign it to another server.

Note: When you first start a cluster appliance, all LUNs are assigned to one server (H1). Use server H1 to reassign some LUNs to server H2 for even distribution of data. The global limit (for both servers, combined) is 255 LUNs. This limit can be divided between the two servers in any way.
For example, you might have 200 LUNs on one server, and 56 on the partner server.

You use the Set LUN Path panel to set active paths. For a cluster appliance, you can set an unassigned path from either server. You can specify the primary and alternate path for each LUN, or you can have the paths assigned automatically by clicking the Auto-assign LUN paths button in the Set LUN Paths window.

Note: The Sun StorEdge 5310 NAS Appliance Version 4.5 documentation set does not show the graphic user interface's change from Fault Tolerance to High Availability. When a procedure in that documentation instructs you to select Fault Tolerance, select High Availability.

To set a LUN path:

1. From the navigation panel, choose High Availability > Set LUN Path.

Note: LUNs that have no LUN path assigned might initially appear multiple times in the Set LUN Path panel, as their presence is advertised by multiple controllers over multiple paths. After a LUN has a path assigned, it is displayed one time, on its current path.

2. Select a LUN and click Edit.

3. Select the controller that you want from the Primary Path drop-down menu.

Example: The drop-down option "1/0" assigns the selected LUN to controller 0 (C0). The option value is X/Y, where X is the HBA and Y is the controller ID (SID) through which the LUN is seen by the NAS server.

4. Evenly divide LUN assignments to the two available paths. For example, the first and third LUN to 1/0 and the second and fourth LUN to 1/1.

5. Click Apply.

The current active path of a logical unit number (LUN) can be different from its primary path. The Restore option on the Set LUN Panel enables you to restore a current active path of a LUN to its primary LUN path.

Note: Restoring a LUN path does not recover any data; it is not a disaster recovery function. Instead, for optimal performance, the active path must be the primary path for a LUN.

To restore a LUN path:

1. From the navigation panel, choose High Availability > Set LUN Path.

2.
Select the LUN that you want to restore.

3. Click Restore.

If you are restoring the primary LUN path because of a physical path failure, scan the disks to make the alternate path available again. To rescan the disks, use the Web Administrator to navigate to Volume Operations > Create File Volumes and then click Scan for New Disks.

This section provides information about enabling server failover on Sun StorageTek 5310 and Sun StorageTek 5320 cluster appliances and cluster gateway systems. The following subsections are included:

Note: Failover processing is only available on Sun StorageTek 5310 and Sun StorageTek 5320 cluster appliances and cluster gateway systems. It does not apply for Sun StorageTek 5210 NAS appliances.

A cluster appliance or gateway system includes a pair of active-active servers, sometimes called heads, that share access to the redundant array of independent disks (RAID) controllers and several different networks. The RAID controllers are connected to each server through Fibre Channel controllers. A dedicated heartbeat cable connects the first network interface card (NIC) between the two servers and lets each server monitor the other's health status. In normal operation, each server operates independently, with responsibility for a subset of logical unit numbers (LUNs). If one server suffers a hardware failure that renders a data path unavailable, the working server automatically takes ownership of Internet Protocol (IP) addresses and LUNs formerly managed by the failed server. All operations of the failed server, including RAID volume ownership and network interface addressing, are transferred to the working server. This is known as head failover.

Note: Volume names must be unique in a cluster configuration. If two volumes in a cluster have the same name and a failover occurs, an `x' is appended to the name of the file system on the failed server to avoid a conflict with the working server.
Following a cluster failover, client operations using Network File System/User Datagram Protocol (NFS/UDP) transfer immediately, while Network File System/Transmission Control Protocol (NFS/TCP) requires a reconnect. This is performed transparently in the context of an NFS retry. Common Internet File System (CIFS) also requires a reconnect, although different applications might do so transparently, notify the user, or require user confirmation before proceeding. You can initiate the recovery process, known as "failback," when the failed server is repaired and brought back online. This is described under Initiating Recovery. Note: A power cycle (or power failure) of a single controller unit in a cluster configuration causes both servers to reset. This is expected behavior because each server is designed to protect against partial volume loss. Caution: In a cluster configuration, do not configure both heads to be in the same switch zone as the tape device. In the event of a head failover during a backup, data on the media is lost. Configure one of the heads to be in the same zone as the tape device. In the event of a server failure, failover causes the working server to take temporary ownership of the Internet Protocol (IP) addresses and logical unit numbers (LUNs) formerly managed by the failed server. Note: When you enable head (server) failover, Dynamic Host Configuration Protocol (DHCP) is disabled. To enable head failover: 1. From the navigation panel, choose High Availability > Enable Failover. 2. Select the Automatic Failover checkbox. 3. Select the Enable Link Failover checkbox. Enabling link failover ensures that head failover occurs when any network interface that is assigned a "primary" role fails. This type of failure is referred to as a "link down" condition. If the partner's network link is down, the server that wants to induce the failover must wait the specified amount of time after the partner server reestablishes its network link. 4. 
Type the following: 5. Click Apply to save your settings. 6. Reboot both servers. This section provides information about manually initiating failback (recovery) for a cluster appliance or cluster gateway system, in the event that a failed server is brought back online. It applies for Sun StorageTek 5310 and Sun StorageTek 5320 cluster appliances and cluster gateway systems, and includes the following subsections: After a failed server is brought back online and fully functional, you must manually initiate recovery (failback) of your cluster appliance or gateway system. This allows the server that originally failed to "recover" ownership of its original file volumes. For example, if volume A was assigned to server H1, which failed, server H2 would take ownership of volume A during the failover process. When server H1 is fully functional again, you can log in to server H2 and return ownership of volume A to server H1. Caution: Make sure that the failed server is fully functional before attempting recovery. After a cluster appliance or cluster gateway system has undergone head failover, and the failed server is brought back online, you must manually initiate recovery (failback) of the server that was brought back up. To initiate recovery: 1. Log in to Web Administrator on the server that took over for the failed server. Note: You cannot initiate recovery from the failed (and now, recovered) server. 2. From the navigation panel, choose High Availability > Recover. 3. Click Recover. (Ignore the redundant array of independent disks (RAID) lists at the center of the screen; they are not used during server recovery.) Under a heavy processing load, some LUNs might not be fully restored. Repeat the procedure if any LUN remains in the failover state. This section provides information about configuring appliance and gateway-system network ports and adapters. 
The following subsections are included: Each network port on your NAS appliance or gateway system must have an assigned role. Take either of the following actions to configure network ports on your NAS appliance or gateway system: You can bond two or more ports together to create a port bond. A port bond has higher bandwidth than the component ports assigned to it. More information and instructions for bonding network ports are provided in About Port Bonding. The NAS appliance and gateway-system ports are identified based on their type, and their physical and logical location on the server. To identify the network port locations, see Back Panel Ports and LEDs and the NAS appliance and gateway system Getting Started Guide. Note that configurations vary, and those shown are examples. The relationship of network interface cards (NICs) to ports is also shown in the Getting Started Guide for your NAS appliance or gateway system. To configure network adapters: 1. From the navigation panel, choose Network Configuration > Configure TCP/IP > Configure Network Adapters. 2. If your network uses a Dynamic Host Configuration Protocol (DHCP) server to assign Internet Protocol (IP) addresses and you want to enable it, select the Enable DHCP checkbox. Enabling DHCP allows the system to dynamically acquire an IP address from the DHCP server. Clear this checkbox to manually specify a static IP address and netmask. Even if you do not enable DHCP, the netmask field remains disabled if the port is a member of an aggregate port. See About Port Bonding for more information on creating and setting up aggregate ports. Note: On cluster appliances and gateway systems, you cannot enable DHCP unless you have disabled head failover. Instead, you must assign static IP addresses to ports so that they remain consistent in the event of a failover. 3. From the Adapter list, select the port you want to configure. 
If you have already created a port bond and want to add alias IP addresses to it, select the port bond from this list. (See About Port Bonding for more information on creating port bonds.) Independent ports are labelled PORTx and port bonds are labelled BONDx. After you create a port bond, you cannot add alias IP addresses to the individual ports, only to the bond. 4. Type the IP address for the selected port or port bond. 5. Type the IP subnet mask for the selected port or port bond. The subnet mask indicates which portion of an IP address identifies the network address and which portion identifies the host address. The read-only Broadcast field is filled automatically when you enter the IP address and netmask. The broadcast address is the IP address used to send broadcast messages to the subnet. 6. Select one of the following roles for each port, referring to About Port Locations and Roles for details: 7. To add an alias IP address to the selected port, specify that address in the IP-Aliases field. Then click the Add button to add it to the IP-Aliases list. Typically aliases specify the IP addresses of obsolete systems that have been replaced by NAS storage. You can have up to nine aliases per interface for single-server systems and up to four aliases for dual-server systems. To remove an alias from the list, select it and click the Trash button. Changes are not saved until you click Apply. 8. Repeat Step 3 through Step 7 for all ports in the Adapter list. 9. Click Apply to save your changes. The default gateway address is the Internet Protocol (IP) address of the gateway or router on the local subnet that is used by default to connect to other subnets. A gateway or a router is a device that sends data to remote destinations. You must specify the default gateway address for the system. To set the default gateway address: 1. From the navigation panel, choose Network Configuration > Configure TCP/IP > Set Gateway Address. 2. 
Type the gateway address in the Gateway text box. 3. Click Apply to save your settings. This section provides information about setting up Windows security so that name services can be used, and provides information about setting up various name services. For more detailed information about name services, see Active Directory Service and Authentication. The following subsections are included: To use name services in a Windows environment, you must configure Windows security. Configuring the domain, workgroup, or Active Directory Service (ADS) is a Windows function. If you are running a pure UNIX network, you do not need to configure either Windows Domains or Windows Workgroups. Note: In a cluster configuration, Windows security changes made on one server are propagated immediately to the other server. Changing the security mode requires a server reboot. Therefore, perform this procedure during a scheduled maintenance period. Enable Windows Workgroup, NT Domain security, or ADS through the Configure Domains and Workgroups panel. By default, your system is configured in Windows Workgroup mode, with a workgroup name of "workgroup." Note: Domain security and Workgroup security settings are mutually exclusive. Changes made to Domain security will negate Workgroup security and vice versa. To configure Windows security: 1. From the navigation panel, choose Windows Configuration > Configure Domains and Workgroups. 2. To enable Windows domain security, select the Domain option, and fill in the Domain, User Name, and Password fields to create an account on the domain for this server. You must specify a user account with rights to add servers to the specified domain. For more information about these fields, see Configure Domains and Workgroups Panel. 3. To enable Windows workgroup security, select the Workgroup option, and type the name of the workgroup in the Name field. The workgroup name must conform to the 15-character NetBIOS limitation. 4. 
(Optional) In the Comments field, type a description of the NAS appliance or gateway system. 5. To enable ADS, select the Enable ADS checkbox and fill in the ADS-related fields. For more information about these fields, see Configure Domains and Workgroups Panel. For more detail about ADS, refer to About Active Directory Service. Note: Prior to enabling ADS, you must verify that the system time is within five minutes of any ADS Windows domain controller. To verify the time, choose System Operations > Set Time and Date from the navigation panel. 6. Click Apply to save your settings. If you change the security mode from workgroup to NT domain, or from NT domain to workgroup, the server reboots when you click Apply. Windows Internet Naming Service (WINS) is a Windows function. If you are running a pure UNIX network, you do not need to set up WINS. Follow the steps below to set up WINS: Note: In a cluster configuration, WINS changes made on one server are propagated immediately to the other server. 1. From the navigation panel, choose Windows Configuration > Set Up WINS. 2. To enable WINS, select the Enable WINS checkbox. Checking this box makes the system a WINS client. 3. Type the Internet Protocol (IP) address of the Primary WINS server in the space provided. The primary WINS server is the server consulted first for NetBIOS name resolution. 4. Type the Secondary WINS server in the space provided. If the primary WINS server does not respond, the system consults the secondary WINS server. 5. (Optional) Type the NetBIOS Scope identifier in the Scope field. Defining a scope prevents this computer from communicating with any systems that do not have the same scope configured. Therefore, use caution with this setting. The scope is useful if you want to divide a large Windows workgroup into smaller groups. If you use a scope, the scope ID must follow NetBIOS name conventions or domain name conventions and is limited to 16 characters. 6. Click Apply to save your settings. 
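Both the Windows workgroup name and the WINS scope identifier mentioned above are subject to NetBIOS length limits — 15 characters for the workgroup name, 16 for the scope ID. As a quick, hedged illustration of those limits, here is a small Python sketch; the validator function is invented for this example (the appliance performs its own validation), and the character rule below is a common convention rather than the full NetBIOS specification.

```python
# Illustration of the NetBIOS length limits described above.
# valid_netbios() is a hypothetical helper, not part of the appliance
# software; it checks only length and a common character convention.

import re

def valid_netbios(name, max_len):
    """Return True if name fits the given NetBIOS length limit and
    uses only letters, digits, and hyphens (a common convention)."""
    if not name or len(name) > max_len:
        return False
    return re.match(r"^[A-Za-z0-9-]+$", name) is not None

print(valid_netbios("workgroup", 15))         # True: the default workgroup name
print(valid_netbios("a" * 16, 15))            # False: workgroup limit is 15
print(valid_netbios("SCOPE-16-CHARS-X", 16))  # True: scope IDs may be 16
```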
Domain Name Service (DNS) software resolves host names to Internet Protocol (IP) addresses for your NAS appliance or gateway system. Note: If you are using DNS without Dynamic DNS, add the host name and IP address of the server to your DNS database. If you are using Dynamic DNS, you do not need to manually update the DNS database. See your DNS documentation for more information. Follow the steps below to set up DNS: Note: In a cluster configuration, DNS changes made on one server are propagated immediately to the other server. 1. From the navigation panel, choose Network Configuration > Configure TCP/IP > Set Up DNS. 2. Select the Enable DNS checkbox. 3. Type the DNS server Domain Name. 4. Type the IP address of a DNS server you want to make available to the network, and then click the Add button to add the server to the Server List. Repeat this step for each DNS server you want to add. You can add a maximum of two DNS servers to this list. The system first queries the DNS server at the top of the server list for domain name resolution. If that server cannot resolve the request, the query goes to the next server on the list. 5. To rearrange the search order of the DNS servers in the list, click the server you want to move and click the Up or Down button. To remove a server from the list, select the server IP address and click the Trash button. 6. Select the Enable Dynamic DNS checkbox to let a Dynamic DNS client add the NAS appliance or gateway system into the DNS namespace. Do not enable this option if your DNS server does not accept dynamic updates. You must also configure the Kerberos realm and KDC server in Configuring Windows Security. If you enable Dynamic DNS by selecting this checkbox, non-secure dynamic updates occur automatically if they are allowed by the DNS server. 7. To enable secure Dynamic DNS updates, select the Enable Dynamic DNS checkbox and fill in the DynDNS User Name and DynDNS Password fields. 
For more information about these fields, see Set Up DNS Panel. 8. Click Apply to save your settings. Network information service (NIS) is a name service that enables the distribution of system configuration data, such as user and host names, between computers in a computer network. This is a UNIX function so if you are running a pure Windows network, you do not need to set up NIS. Use the Set Up NIS panel to enable NIS and specify the domain name and server Internet Protocol (IP) address. Follow the steps below to set up NIS: Note: In a cluster configuration, NIS changes made on one server are propagated immediately to the other server. 1. From the navigation panel, choose Unix Configuration > Set Up NIS. 2. Select the Enable NIS checkbox. Enabling NIS configures the system to import the NIS database for host, user, and group information. 3. Type the name of the domain you want to use for NIS services in the Domain Name field. Use the DNS naming convention (for example, domain.com). 4. Type the IP address or name of the NIS server in the Server field. This is the server from which the database is imported. Leave the Server field blank if you do not know the server IP address. However, if you leave the Server field blank, you must select the Use Broadcast checkbox so that the appropriate IP address can be acquired from the NIS server. 5. Type the frequency rate, in minutes, at which you want NIS information to be refreshed. The default is set to 5 minutes. 6. Select the Use Broadcast checkbox to acquire the NIS server IP address. 7. Select the Update Hosts checkbox to download host information from the NIS server to the system. 8. Select the Update Users checkbox to download user information from the NIS server to the system. 9. Select the Update Groups checkbox to download group information from the NIS server to the system. 10. Select the Update Netgroups checkbox to download netgroup information from the NIS server to the system. 11. 
Click Apply to save your changes. Network information services plus (NIS+) is a name service that provides the same functionality as NIS, but with added security that ensures a secure environment. This is a UNIX function, so if you are running a pure Windows network, do not set up NIS+. Note: The commands and structure of NIS+ are different from NIS. Note: In a cluster configuration, NIS+ changes made on one server are propagated immediately to the other server. Setting up NIS+ is a two-phase process: 1. Adding the NAS appliance or gateway system to the host credential file. 2. Configuring NIS+. To add an appliance or gateway system to the host credential file on the NIS + server: 1. Log in as root. 2. Type the following command: nisaddcred -p unix.server@domain -P server.domain. des where server is the name of the NAS server, and domain is the name of the NIS+ domain that the appliance or gateway system is joining. Note: Include a period at the end of the domain name only after the -P argument. For example, if a NAS appliance is named SS1, and its NIS+ domain is sun.com, enter: nisaddcred -p unix.ss1@sun.com -P ss1.sun.com. des 3. At the prompt, enter a password. This password will be used again later in this procedure. To configure NIS+: 1. From a remote client, open a web browser window to the system and log in to Web Administrator. 2. From the navigation panel, choose Unix Configuration > Set Up NIS+. 3. Select the Enable NIS+ checkbox. 4. In the Home Domain Server field, type the NIS+ home domain server IP address. If you don't know the home domain server IP address, leave this field blank and select the Use Broadcast checkbox. When this option is selected, the system acquires the appropriate IP address for the home domain server. 5. In the NIS+ Domain field, type the NIS+ home domain. Note: NIS+ domain names must end with a period ("."). 6. Type the secure RPC password for the NIS+ server. Use the password that you set earlier in this procedure. 7. 
Type the search path as a colon-separated list of domains. The search path identifies the domains that NIS+ searches through when looking for information. Leave this space empty to search only the home domain and its parents. For example, if the NIS+ domain is eng.sun.com. and the search path is blank, the system first searches eng.sun.com. then sun.com., and so on, when resolving names. Conversely, if you specify a search path like sun.com., the system searches only the domain sun.com when resolving names. 8. Select the Use Broadcast checkbox if you do not know the IP address of the home domain server (see Step 5). 9. Click Apply to save your settings. The name service (NS) lookup order controls the sequence in which the name services are searched to resolve a query. These name services can include LDAP, NIS, NIS+, DNS, and Local. You must enable the selected services to use them for name resolution. Follow these steps to set the order for user, group, netgroup, and host lookup: Note: In a cluster configuration, changes made on one server to user, group, netgroup, and host lookup are propagated immediately to the other server. 1. From the navigation panel, choose Unix Configuration > Configure Name Services. 2. Select the order of user lookup in the Users Order tab by selecting a service from the Services Not Selected box and using the > and < buttons, and then use the Up and Down buttons in the Services Selected box. 3. Select the services used for group lookup in the Groups Order tab, following the procedure in Step 2. 4. Select the services used for netgroup lookup in the Netgroup Order tab, following the procedure in Step 2. 5. Select the services used for host lookup in the Hosts Order tab, following the procedure in Step 2. 6. Click Apply to save your changes. When the system detects an error, it sends a notification email message. 
To ensure name resolution, you must have either set up the SMTP server host name in the Configure Hosts panel (see About Configuring Hosts) or set up DNS (see Setting Up DNS). Follow these steps to set up SMTP and send email messages to the recipients: Note: In a cluster configuration, SMTP changes made on one server are propagated immediately to the other server. 1. From the navigation panel, choose Monitoring and Notification > Set Up Email Notification. 2. Type the name of the SMTP server that you want to use to send notification. 3. In the Email Address field, type the address of the person to be notified of system errors. 4. Specify the types of email for this recipient. Select Notification, Diagnostics, or both. 5. Click the Add button to add the new recipient to the List of recipients. 6. Repeat Step 3 through Step 5 for all recipients. You can specify a maximum of four email addresses. To remove someone from the list, select the address and click the Trash button. 7. Select the notification level. 8. Click Apply to save your settings. Enabling remote logging lets the system send its log to a designated server and/or save it to a local archive. The designated server must be a Unix server running syslogd. If you will be referring to the logging host by domain name, you must configure the Domain Name Service (DNS) settings on the system before you enable remote logging. Caution:You must enable remote logging or create a log file on local disk to prevent the log from disappearing on system shutdown. Otherwise, the system will create a temporary log file in volatile memory during startup. This is sufficient to retain any errors that might occur during initial startup for later display, but will not persist through a power failure or system restart. To set up remote and local logging: 1. From the navigation panel, choose Monitoring and Notification > View System Events > Set Up Logging. 2. Select the Enable Remote Syslogd box. 3. 
In the Server field, specify the DNS host name if you have configured the DNS settings. Otherwise, type the Internet Protocol (IP) address. This is where the system log is sent. 4. From the drop-down menu, select the facility code to be assigned to all NAS messages that are sent to the log. 5. Select the types of system events for which to generate log messages, by placing a check mark next to one or more facilities. Each type of event represents a different priority, or severity level, as described under About System Events. 6. To set up a local log, check Enable Local Log. 7. Type the log file's path (the directory on the system where you want to store the log file) and file name in the Local File field. Note: You cannot set up local logging to either the /cvol or /dvol directory. 8. Type the maximum number of archive files in the Archives field. The allowable range is from 1 to 9. 9. Type the maximum file size in kilobytes for each archive file in the Size field. The allowable range is from 100 to 999,999 kilobytes. 10. Click Apply to save your settings. The operating system supports Unicode, which enables you to set the local language for Network File System (NFS) and Common Internet File System (CIFS). Ordinarily, you assign the language when you run the wizard during initial system setup. However, if you need to reset the language at a later time, you can set it manually. To assign the language: 1. From the navigation panel, choose System Operations > Assign Language. 2. Select the local language from the languages displayed in the drop-down menu. 3. Click Apply to save your changes. You can register your Sun account and NAS server information with Sun Services online. If you do not have a Sun Account, you can create one during the registration. 1. From the navigation panel, choose System Operations > Online System Registration. 2. Read Sun's privacy policy and disclaimer. To continue, click the Agree button. 3. 
If you do not have a Sun Account, click on the here link at the bottom of the dialog. This opens the Sun Online Account Registration portal. Click Register to begin creating the account. 4. If you have a Sun Account, type its ID in the Sun Account ID field and enter its password. 5. Click Next to go to the Proxy Server tab. 6. Enter the name of the proxy server you want Sun Services to use and its port number. If the proxy server uses authentication, enter its user name and password. 7. Click Next to go to the Options tab. 8. Select the type of information you want to send to Sun Services. The heartbeat data is a periodic check without regard to the type of event. Fault events are sent when a failure occurs. 9. Click Apply to save your changes. After you have completed the system configuration, back up the configuration information so that it can be restored in the event of a system failure. For information about backing up configuration information, see Backing Up Configuration Information. At this point, your system is in full communication with the network. However, before your users can begin storing data, you must set up the file system and establish user access rights. For more information, see File-System Setup and Management. To set up quotas, shares, exports, or other access controls, see Shares, Quotas, and Exports. If there is a specific function you want to set up, look it up in the index to find the instructions.
http://docs.oracle.com/cd/E19783-01/819-4284-11/Admin_02_Network.html
A great number of applications require a database backend to store and efficiently query data. While relational database management systems have traditionally been the most popular, non-relational models are gaining traction at a rapid rate. One interesting non-SQL database that is focused on ease of use within a programming environment is RethinkDB. RethinkDB is an easy-to-configure JSON document storage database that can scale effortlessly. One feature that makes RethinkDB simple to use with a programming language is that it supports robust client drivers. These allow you to interact with the database using much of the familiar syntax of your programming language. In this guide, we will install and configure RethinkDB on an Ubuntu 12.04 VPS. We will interact with it using the Python client driver to demonstrate how its querying language can be accessed using native or near-native programming constructs.

There are two components that need to be installed to take complete advantage of the RethinkDB design. The first is the database itself. The second is the client driver that provides support for accessing the database from within your selected programming language. We will cover both components here.

The RethinkDB software is not in the default repositories of Ubuntu 12.04. Fortunately, the project makes it easy to install by maintaining its own PPA (personal package archive). To add a PPA to Ubuntu 12.04, we must first install the python-software-properties package, which includes the commands we need. Update the package index and then install it:

sudo apt-get update
sudo apt-get install python-software-properties

Now that we have the software properties package installed, we can add the PPA of the RethinkDB project. Type the following to add this repository to our system:

sudo add-apt-repository ppa:rethinkdb/ppa

Now, we need to update our package index to gather information about the new packages we have available. 
After that, we can install the RethinkDB software:

sudo apt-get update
sudo apt-get install rethinkdb

We now have the database software available and can access its functionality. Although we have installed the database itself, we now should install the client driver for the database system. There are many options for client drivers depending on your programming language of choice. The officially supported languages are JavaScript, Ruby, and Python. The community has also added support for many more languages including C, Clojure, Lisp, Erlang, Go, Haskell, Java, Perl, PHP, Scala, and more. In this guide, we will be using the Python client driver due to the fact that Python is already installed on our system. We will install the client driver using pip, the Python package manager. To conform to some suggested best practices when dealing with Python software, we'll use virtualenv to isolate our Python environment. This package includes pip as a dependency.

sudo apt-get install python-virtualenv

Now that we have virtualenv and pip installed, we can create a directory in our home folder to install our virtual environment:

cd ~
mkdir rethink

Change into the directory and then use the virtualenv command to create the new virtual environment structure:

cd rethink
virtualenv venv

We can activate the environment by typing:

source venv/bin/activate

This will allow us to install components in an isolated environment without affecting our system's programs. If we need to leave the environment (do not do this now, as we need the environment), type:

deactivate

Now that we have a virtual environment enabled, we can install the RethinkDB package by typing:

pip install rethinkdb

Our Python client driver is now installed and ready to use. To begin exploring the RethinkDB system, we will start up a server and explore it using the built-in web interface. 
From the command line, we can start a server instance using the following format:

rethinkdb --bind all

The --bind all parameter is necessary in order for your instance to be accessible from outside of the server itself. Since we are running from a remote VPS, this is a necessary addition. If we visit our droplet's IP address, followed by :8080, we will see the RethinkDB web interface:

your_server_ip_address:8080

As you can see, we have a rich interface to our database server available. We can see some standard health checks and some cluster performance metrics in the main view. Further down the page, the most recently logged activities are shown. We also see some stats about our database. Next to the blue icons, the interface tells us the name of the database, and if any issues have been detected. Furthermore, you can see that RethinkDB has a native understanding of servers and datacenters. This is because RethinkDB is built from the ground up to be easily scalable and distributable. If we click on the "Tables" link at the top of the page, we can see any tables we have added to our database: From here, we can see all the databases that we have in our server. Within each database, we can see tables that have been created. The overview also tells us about the sharding and replication that is configured for each component. We can also add databases and tables from this view. If we click on a single table, we can see an overview of the load, distribution, and document count: We can see more detailed information about the load and configuration of each table here. We can edit the sharding and replication settings and add indexes to query more efficiently. Moving onto the next link across the top, we can see the datacenters that are available for our databases and tables. From here, we can manage and add datacenters, which are ways of grouping separate servers together. 
If you are deploying servers in different physical locations, this is an easy way of keeping track of where everything is. Changing the datacenter that a server is associated with is very easy as well. Once again, you can click on an individual server to get an overview of its properties: Moving on to the next link, titled "Data Explorer", we are given an interface with which to interact with the server using the querying language: We can create, delete, and modify tables and data from within this interface. If we enter a query or a command, we can see the results below. We can view the information in a variety of formats and also do a query profile to see how the database decided to return the results that it did: As you can see, we have a great tool for high level management of our databases and clusters. Although the web interface is clean and easy to use, it probably is not the way that you will be interacting with the database in most cases. Most databases are used from within programs. If you are unfamiliar with managing background processes, we will briefly explain how to start your server in the background to allow you to continue working in the terminal. You can shut down the server by pressing "Ctrl-C" in the terminal. You can then restart it in the background, so that you can access the terminal, by restarting it with:

rethinkdb --bind all &

The & starts the process in the background and allows you to continue working. Another option is to not kill the initial server process and simply suspend the server and then resume it in the background. You can do this by instead typing "Ctrl-Z". Afterwards, resume the process in the background by typing:

bg

You can see the process at any time by typing:

jobs

[1]+ Running rethinkdb --bind all &

If you need to bring the task to the foreground again (perhaps to kill it when you are finished), you can type:

fg

The task will then be available in the foreground again. 
If you have multiple background processes, you may need to reference the job number by using this format:

fg %num

Once your server is in the background, we can begin exploring the database through Python. Start the Python interpreter so that we can begin to interact with the database:

python

From here, we simply need to import the client driver into the environment:

import rethinkdb as r

We can now connect with the local database by using the connect command:

r.connect("localhost", 28015).repl()

The .repl() at the end allows us to call commands on the connection that is formed without specifying the connection explicitly within the .run() call. This is used for convenience in testing situations like this.

Now, we have a connection to our server and we can begin working with the database immediately. We can create a database to play around with by typing:

r.db_create("food").run()

We now have a database called "food". The .run() command chained at the end is very important. RethinkDB commands look like local code, but they are actually translated by the RethinkDB client drivers to native database code and executed remotely on the server. The run command is what sends this to the server. If we hadn't added the .repl() command to the initial server connection, we would have to list the connection object in the run command like this:

conn = r.connect("localhost", 28015)
r.db_create("food").run(conn)

These first few commands give you a general idea of how command chaining works with RethinkDB. Complex commands can be created to do multiple operations at once. This allows you to make readable, sequential command chains that are all translated and sent to the database at once, instead of having multiple calls.

Now that we have a database, let's make a table:

r.db("food").table_create("favorites").run()

We can then add some data to the table.
RethinkDB uses a flexible schema design, so you can add any kinds of key-value pairs you would like. We will add some people and then add their favorite foods:

r.db("food").table("favorites").insert([
    { "person": "Randy", "Age": 26, "fav_food": [ "banana", "cereal", "spaghetti" ] },
    { "person": "Thomas", "Age": 8, "fav_food": [ "cookies", "apples", "cake", "sandwiches" ] },
    { "person": "Martha", "Age": 52, "fav_food": [ "grapes", "pie", "avocado" ] }
]).run()

This will create three JSON documents in our "favorites" table. Each object defines a person, an age, and an array with the person's favorite foods.

We can print out the documents by querying for them. To do this, we simply have to ask for the database and table, and the server will return an iterable object that we can then process with a loop. The server will continuously give out data as the object, called a cursor, is processed. For instance, we can print everything by typing:

c = r.db("food").table("favorites").run()
for x in c:
    print x

{u'person': u'Martha', u'Age': 52, u'fav_food': [u'grapes', u'pie', u'avocado'], u'id': u'b888ec64-f2c9-4f85-9db6-f8b8a66626c6'}
{u'person': u'Thomas', u'Age': 8, u'fav_food': [u'cookies', u'apples', u'cake', u'sandwiches'], u'id': u'3aa7ae68-85b0-48b6-9726-76e810ea4c55'}
{u'person': u'Randy', u'Age': 26, u'fav_food': [u'banana', u'cereal', u'spaghetti'], u'id': u'f027a270-d5ac-4c33-ad91-53a7541ace82'}

This prints each line in turn. The cursor object, represented by the variable "c" in our example, is given new data by the server as it is processed. This allows for quick execution of the code.

You may have noticed that each of the records that we added to the "favorites" table has been given an ID. This is done automatically and is used to index the contents of each table.
We can filter results by just adding another link in the command chain:

c = r.db("food").table("favorites").filter(r.row["fav_food"].count() > 3).run()
for x in c:
    print x

{u'person': u'Thomas', u'Age': 8, u'fav_food': [u'cookies', u'apples', u'cake', u'sandwiches'], u'id': u'3aa7ae68-85b0-48b6-9726-76e810ea4c55'}

As you can see, we simply added a .filter() command. We used r.row to reference the "fav_food" keys and then counted the number of entries for each row. We did a simple comparison to filter out those people who had 3 or fewer favorite foods.

As you can see, we can manipulate the data in our RethinkDB system easily and naturally. RethinkDB prides itself on being easy from a development standpoint without sacrificing the ability to scale easily and seamlessly.

This guide has only covered the basics in order to introduce you to some ways to work with RethinkDB. If you are considering using this in a production environment, it would probably be useful to explore the scaling and replication capabilities of the system and its database-aware networking capabilities.

I don't think it is a good idea to use --bind all on a public server! That would mean anyone could go to example.com:8080 and have full admin access! The RethinkDB security tutorial recommends not using bind all on a public machine. It recommends instead proxying through the machine to access the web console.

Need help! I need to download the whole RethinkDB installation package onto a USB drive for an Ubuntu 14.04 server. FYI, my Ubuntu server does not have internet access. People suggested I download the repositories from: But here I see too many .deb files and I lose track when I am downloading them. I also do not know how to install them on my Ubuntu server yet. Is there any command line for Ubuntu to download all the .deb files at once? I also need the command line to install them on my Ubuntu server. Or does anybody know how I can get out of this situation?
I’m just wondering what is the difference in your approach of installing RethinkDB and the one on their site @sam.parkinson3: You could setup firewall rules to only allow a specific IP address to connect to the port. It would be something like:
https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-rethinkdb-on-an-ubuntu-12-04-vps
Assigns a variable by name with the data.

Assign ( "varname", "data" [, flag = 0] )

If there is a need to use Assign() to create/write to a variable, then in most situations, Eval() should be used to read the variable and IsDeclared() should be used to check that the variable exists.

Eval, Execute, IsDeclared

#include <MsgBoxConstants.au3>

; Assign the variable string sString with data.
Assign("sString", "This is a string which is declared using the function Assign")

; Find the value of the variable string sString and assign to the variable $sEvalString.
Local $sEvalString = Eval("sString")

; Display the value of $sEvalString. This should be the same value as $sString.
MsgBox($MB_SYSTEMMODAL, "", $sEvalString)
https://www.autoitscript.com/autoit3/docs/functions/Assign.htm
This is the mail archive of the cygwin mailing list for the Cygwin project.

I have cygserver running in the background (default options) on a W2K box. CYGWIN is set to 'server'. The following test program:

#include <stdio.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <errno.h>

int main(int argc, char **argv)
{
    int pid = fork();
    int id;

    if (pid == 0) {
        sleep(5);
        id = shmget(1, 100, 0666);
        printf("child (%d): %d (%d)\n", getpid(), id, errno);
    } else {
        id = shmget(1, 100, 01666);
        printf("parent (%d): %d (%d)\n", getpid(), id, errno);
        sleep(10);
        shmctl(id, IPC_RMID, 0);
    }
    return (0);
}

Produces the following output:

parent (35492): 196609 (0)
child (3876): 0 (0)

No errors are reported in the cygserver log. This shows that:

1. The parent created the shared memory segment and got back its ID (196609).
2. The child process tried to attach to the parent's shared memory segment (using the same key = 1), but shmget() returned 0 with no error!

Can anyone enlighten me as to what might be wrong?
http://cygwin.com/ml/cygwin/2004-06/msg00111.html
table of contents
- buster 4.16-2
- buster-backports 5.04-1~bpo10+1
- testing 5.10-1
- unstable 5.10-1

NAME
mkstemp, mkostemp, mkstemps, mkostemps - create a unique temporary file

SYNOPSIS
#include <stdlib.h>

int mkstemp(char *template);
int mkostemp(char *template, int flags);
int mkstemps(char *template, int suffixlen);
int mkostemps(char *template, int suffixlen, int flags);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

mkstemp():
mkostemp(): _GNU_SOURCE
mkstemps():
    /* Glibc since 2.19: */ _DEFAULT_SOURCE
    || /* Glibc versions <= 2.19: */ _SVID_SOURCE || _BSD_SOURCE
mkostemps(): _GNU_SOURCE

DESCRIPTION

RETURN VALUE
On success, these functions return the file descriptor of the temporary file. On error, -1 is returned, and errno is set appropriately.

ERRORS

VERSIONS
mkostemp() is available since glibc 2.7. mkstemps() and mkostemps() are available since glibc 2.11.

ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
mkstemp(): 4.3BSD, POSIX.1-2001. mkstemps(): unstandardized, but appears on several other systems. mkostemp() and mkostemps() are glibc extensions.

NOTES
https://manpages.debian.org/unstable/manpages-dev/mkstemp.3.en.html
Description of problem: When upgrading an operator, InstallPlan execution fails if any CRD replacing an existing one has an empty versions field (but still defines the deprecated version field)

Version-Release number of selected component (if applicable):

How reproducible: Always

Steps to Reproduce:
1. Create an OLM catalog that contains two bundles corresponding to different CSVs that provide the same CRD, CSV-A and CSV-B, where CSV-B replaces CSV-A. The CRD should be the same in both bundles and should specify a version field and have an empty versions field. The catalog should have a package with two channels, Channel-A and Channel-B, which have the aforementioned CSVs as their respective HEAD entries. The catalog can be image or ConfigMap sourced.
2. Create a CatalogSource in the default namespace for the catalog produced in step 1.
3. Create an OperatorGroup that supports CSVs A and B in the default namespace.
4. Create a Subscription in the default namespace for Channel-A of the package.
5. Wait for CSV-A to be installed successfully.
6. Update the Subscription to target Channel-B.

Actual results: The upgrade fails and CSV-B never transitions to Succeeded.

Expected results: The upgrade succeeds.

Additional info: This seems to be due to some CRD upgrade validation logic not handling the deprecated version field properly.

Hi, Nick

> The CRD should be the same in both bundles and should specify a version field and have an empty versions field.

I'm confused. As the Kubernetes documentation describes: "The version field is deprecated and optional, but if it is not empty, it must match the first item in the versions field."

So, since specifying a version field while leaving the versions field empty is wrong, shouldn't OLM stop it when installing CSV-A? So why was CSV-A installed successfully?
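The condition described in step 1, a CRD that sets the deprecated spec.version but ships an empty spec.versions, would look roughly like the fragment below. The group and kind names are illustrative, not taken from the report:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: examples.app.example.com     # illustrative name
spec:
  group: app.example.com
  names:
    kind: Example
    plural: examples
  scope: Namespaced
  version: v1alpha1                  # deprecated field, set
  versions: []                       # empty, the condition that trips the upgrade validation
```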
https://bugzilla.redhat.com/show_bug.cgi?id=1732914
We learned about namespaces in the last tutorial; today we will learn about aliasing of namespaces in this Aliasing of PHP Namespaces tutorial.

- The ability to refer to an external fully qualified name with an alias is called aliasing or importing.
- Aliases allow the user to reference a long namespace name with a shorter name.
- Let us first see the behavior of the use keyword in a program.
- We will define 2 files with two different namespaces and import them in another program.
- To demonstrate this, create a new folder named namespaces in the htdocs folder of your XAMPP installation. Now open a new Notepad++ document and save it as app1.php in the newly created namespaces folder.
- Now write the following code in the app1.php file:
- In the above code, we have defined the namespace App\Lib1;, so the code that follows belongs to the App\Lib1 namespace.
- We have defined a constant CON, a function fun() and a class call with a static function receive().
- The function fun() returns __FUNCTION__. This is an inbuilt constant which returns the fully qualified name of the function from which it is returned.
- Similarly, the method receive() of class call returns __METHOD__. This is also a constant, which returns the fully qualified name of the class and the method name from which it is returned.
- Let us now have a look at the code in another file, app2.php. The code is given below:
- The above code is the same as in the app1.php file. The only difference is that it is under the App\Lib2 namespace.
- Now it's time to access the data in app1.php and app2.php in another file.
- But before that, we will have a look at the ways of accessing it.
- To access data from a function, class, etc. we need to use its fully qualified name, qualified name or unqualified name.
- Example of unqualified name: Suppose you are accessing a function in the same namespace; you can use its unqualified name. I mean, if you have defined a function F1 in file A1 in namespace N1.
Now you want to access the function F1 from file A1 in file A2, which is also under namespace N1; you can directly call it as F1(). This is the unqualified name of function F1(). This means an unqualified name is the name accessed in the current namespace.

- Example of qualified name: Suppose you are accessing a function in a sub-namespace from the root namespace; you can use its qualified name. I mean, if you have a function F1 defined in namespace N1\N2 and now you want to access it in namespace N1, you can call it as N2\F1().
- Example of fully qualified name: The fully qualified name is the full path of the function, and it can be used to access the function, class etc. from anywhere outside the namespace. Suppose you have a function F1 in namespace N1\Ns1 in file A1. You want to access it in namespace N2 in file A2; you can call it as N1\Ns1\F1().
- Now let us access the data in app1.php and app2.php in the index.php file. The code in index.php is given below:
- We have a namespace App\Lib1; defined in the above code, so the whole following code belongs to the namespace App\Lib1.
- We have linked our index.php file to the app1.php and app2.php files using the require_once() function.
- Next, we have accessed the constant CON, the function fun() and the method receive() in an unordered list.
- Here we have used unqualified names for the constant and functions. So let us see what happens. The output is shown below:
- Here we can see that the constant CON, the function fun() and the method receive() of class call contained in namespace App\Lib1 are displayed.
- This was possible because we are accessing them with unqualified names within the same App\Lib1 namespace where these functions and constants are defined.
- Now imagine that you have defined namespace App\Lib2; in index.php instead of namespace App\Lib1;. You will get the output as follows:
- To call an element in the current namespace, we can also use the namespace keyword.
- For example: to call the method receive() in class call, we can write it like this:
- And to call the function fun(), you can write it like this:
- Now let us access the elements in the above app1.php and app2.php in another program that has some other namespace or a global namespace, i.e. no namespace. This is called importing a namespace.
- To access/import data in such situations, the keyword use is used to specify the desired namespace name.
- Let us write the program that accesses data from the app1.php and app2.php files:
- To do this, comment out the code present in the index.php file and write the following code below it:
- Here we have used the statement use App\Lib1; This allows us to use the functions, classes, interfaces, constants, etc. in the namespace App\Lib1.
- In this program we cannot use unqualified names to access the elements, because now we are not in the same namespace where the elements are defined.
- We have used qualified names like Lib1\CON and Lib1\fun() here, because we have specified the complete namespace using the use keyword.
- I mean, we can call it like this also:
- But like this, we need to write the fully qualified name every time to access anything from another namespace.
- If the namespace is very long, it will become very tedious and annoying to write such a long namespace every time. To avoid this problem the use keyword is provided, which allows us to write the complete namespace at the top just once and then use the qualified name later in the whole program.
- Output of the above code is shown below:
- You can even specify the class you want to use in the program in the use statement. For example, if you want to use the class call from the App\Lib2 namespace, you can write the use statement as follows:
- You can have any number of use statements in a program, or you can specify a number of namespaces in a single use statement separated by commas.
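The code listings this tutorial refers to were lost in extraction. Based purely on the descriptions above, a self-contained reconstruction might look like the sketch below. It is my sketch, not the original listing: the constant's value is invented, and the App\Lib1/App\Lib2 namespaces are declared inline with PHP's brace syntax instead of living in separate app1.php/app2.php files pulled in with require_once:

```php
<?php
namespace App\Lib1 {
    const CON = 'CON from App\Lib1';        // value is illustrative

    function fun() {
        return __FUNCTION__;                 // "App\Lib1\fun"
    }

    class call {
        public static function receive() {
            return __METHOD__;               // "App\Lib1\call::receive"
        }
    }
}

namespace App\Lib2 {
    const CON = 'CON from App\Lib2';

    function fun() {
        return __FUNCTION__;
    }

    class call {
        public static function receive() {
            return __METHOD__;
        }
    }
}

namespace {                                  // global code, as in index.php
    use App\Lib1;
    use App\Lib2\call;

    echo Lib1\CON, "\n";                     // qualified via the imported namespace
    echo Lib1\fun(), "\n";
    echo Lib1\call::receive(), "\n";
    echo call::receive(), "\n";              // the imported App\Lib2\call class
}
```

The last block shows both forms described above: use App\Lib1; lets the shorter qualified names Lib1\... stand in for App\Lib1\..., while use App\Lib2\call; imports a single class by name.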
- Aliasing a namespace:
- We saw that the use statement allows us to write the complete namespace once and then use only the qualified name to access its elements.
- But we can simplify it even more by giving the namespace a shorter name which will be used instead of it in the program.
- This is done with the keyword as in conjunction with use.
- Let us access the elements of namespaces App\Lib1 and App\Lib2 in the index.php file. Comment out the previous code and write the following new code there:
- We have defined a namespace N1 in the above code. Hence the whole code belongs to namespace N1.
- Here, we have imported the App\Lib1 namespace and the class call from the App\Lib2 namespace using the use statement.
- The App\Lib1 namespace is given the alias Lib, and App\Lib2\call is given the alias O.
- Now the elements from App\Lib1 will be accessed using Lib, and the elements from class call of App\Lib2 will be accessed using O.
- This is called aliasing of a namespace.
- The constant CON, the function fun() and the method receive() of class call of the App\Lib1 namespace are accessed using Lib, and the method of class call of the App\Lib2 namespace is accessed using O in the above code.
- The output is shown below:
- In the same way, if we want to use some class which is not in any namespace, we can use it by going into the root namespace using a backslash (\) before the class name.
- For example, suppose you want to use the DateTime class in the above program. You know that the DateTime class is not in the namespace N1; it is outside namespace N1. So to come out of namespace N1, a backslash is used before the class name as shown below:
- This can also be written as:

Thus we learned how to deal with different namespaces while working in a project in this Aliasing of PHP Namespaces tutorial.
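For reference, the aliased index.php described in the bullets above might be reconstructed like this. It is a sketch based on the description, assuming app1.php and app2.php define App\Lib1 and App\Lib2 as earlier in the tutorial:

```php
<?php
namespace N1;

require_once 'app1.php';
require_once 'app2.php';

use App\Lib1 as Lib;        // alias the whole namespace
use App\Lib2\call as O;     // alias a single class

echo Lib\CON, "\n";
echo Lib\fun(), "\n";
echo Lib\call::receive(), "\n";
echo O::receive(), "\n";

$date = new \DateTime();    // leading backslash reaches the global DateTime class
echo $date->format('Y-m-d'), "\n";
```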
https://blog.eduonix.com/web-programming-tutorials/aliasing-of-php-namespaces/
Getters and setters

In previous articles, you already learned how to declare your own full-fledged classes with fields and methods. This is serious progress, well done! But now I have to tell you an unpleasant truth. We didn't declare our classes correctly! Why? At first sight, the following class doesn't have any mistakes:

public class Cat {

    public String name;
    public int age;
    public int weight;

    public Cat(String name, int age, int weight) {
        this.name = name;
        this.age = age;
        this.weight = weight;
    }

    public Cat() {
    }

    public void sayMeow() {
        System.out.println("Meow!");
    }
}

But it does. Imagine you're sitting at work and write this Cat class to represent cats. And then you go home. While you're gone, another programmer arrives at work. He creates his own Main class, where he begins to use the Cat class you wrote.

public class Main {

    public static void main(String[] args) {
        Cat cat = new Cat();
        cat.name = "";
        cat.age = -1000;
        cat.weight = 0;
    }
}

It doesn't matter why he did it and how it happened (maybe the guy's tired or didn't get enough sleep). Something else matters: our current Cat class allows fields to be assigned absolutely insane values. As a result, the program has objects with an invalid state (such as this cat that is -1000 years old). So what error did we make when declaring our class?

We exposed our class's data. The name, age and weight fields are public. They can be accessed anywhere in the program: simply create a Cat object and any programmer has direct access to its data through the dot (.) operator:

Cat cat = new Cat();
cat.name = "";

Here we are directly accessing the name field and setting its value. We need to somehow protect our data from improper external interference. What do we need to do that?

First, all instance variables (fields) must be marked with the private modifier. Private is the strictest access modifier in Java. Once you do this, the fields of the Cat class will not be accessible outside the class.

public class Cat {

    private String name;
    private int age;
    private int weight;

    public Cat(String name, int age, int weight) {
        this.name = name;
        this.age = age;
        this.weight = weight;
    }

    public Cat() {
    }

    public void sayMeow() {
        System.out.println("Meow!");
    }
}

public class Main {

    public static void main(String[] args) {
        Cat cat = new Cat();
        cat.name = ""; // error! The Cat class's name field is private!
    }
}

The compiler sees this and immediately generates an error. Now the fields are sort of protected. But it turns out that we've shut down access perhaps too tightly: you can't get an existing cat's weight in the program, even if you need to. This is also not an option. As it is, our class is essentially unusable. Ideally, we need to allow some sort of limited access:

- Other programmers should be able to create Cat objects
- They should be able to read data from existing objects (for example, get the name or age of an existing cat)
- It should also be possible to assign field values. But in doing so, only valid values should be allowed. Our objects should be protected from invalid values (e.g. age = -1000, etc.).

That's a decent list of requirements! In reality, all this is easily achieved with special methods called getters and setters. These names come from "get" (i.e. "method for getting the value of a field") and "set" (i.e. "method for setting the value of a field").
Let's see how they look in our Cat class:

public class Cat {

    private String name;
    private int age;
    private int weight;

    public Cat(String name, int age, int weight) {
        this.name = name;
        this.age = age;
        this.weight = weight;
    }

    public Cat() {
    }

    public void sayMeow() {
        System.out.println("Meow!");
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public int getWeight() {
        return weight;
    }

    public void setWeight(int weight) {
        this.weight = weight;
    }
}

As you can see, they look pretty simple :) Their names often consist of "get"/"set" plus the name of the relevant field. For example, the getWeight() method returns the value of the weight field for the object it is called on. Here's how it looks in a program:

public class Main {

    public static void main(String[] args) {
        Cat smudge = new Cat("Smudge", 5, 4);

        String smudgeName = smudge.getName();
        int smudgeAge = smudge.getAge();
        int smudgeWeight = smudge.getWeight();

        System.out.println("Cat's name: " + smudgeName);
        System.out.println("Cat's age: " + smudgeAge);
        System.out.println("Cat's weight: " + smudgeWeight);
    }
}

Console output:

Cat's name: Smudge
Cat's age: 5
Cat's weight: 4

Now another class (Main) can access the Cat fields, but only through getters. Note that getters have the public access modifier, i.e. they are available from anywhere in the program. But what about assigning values? This is what setter methods are for:

public void setName(String name) {
    this.name = name;
}

As you can see, they are also simple. We call the setName() method on a Cat object, pass a string as an argument, and the string is assigned to the object's name field.

public class Main {

    public static void main(String[] args) {
        Cat smudge = new Cat("Smudge", 5, 4);

        System.out.println("Cat's original name: " + smudge.getName());
        smudge.setName("Mr. Smudge");
        System.out.println("Cat's new name: " + smudge.getName());
    }
}

Here we're using both getters and setters. First, we use a getter to get and display the cat's original name. Then, we use a setter to assign a new name ("Mr. Smudge"). And then we use the getter once again to get the name (to check if it really changed).

Console output:

Cat's original name: Smudge
Cat's new name: Mr. Smudge

So what's the difference? We can still assign invalid values to fields even if we have setters:

public class Main {

    public static void main(String[] args) {
        Cat smudge = new Cat("Smudge", 5, 4);

        smudge.setAge(-1000);
        System.out.println("Smudge's age: " + smudge.getAge());
    }
}

Console output:

Smudge's age: -1000

The difference is that a setter is a full-fledged method. And unlike a field, a method lets you write the verification logic necessary to prevent unacceptable values. For example, you can easily prevent a negative number from being assigned as an age:

public void setAge(int age) {
    if (age >= 0) {
        this.age = age;
    } else {
        System.out.println("Error! Age can't be negative!");
    }
}

And now our code works correctly!

public class Main {

    public static void main(String[] args) {
        Cat smudge = new Cat("Smudge", 5, 4);

        smudge.setAge(-1000);
        System.out.println("Smudge's age: " + smudge.getAge());
    }
}

Console output:

Error! Age can't be negative!
Smudge's age: 5

Inside the setter, we created a restriction that protected us from the attempt to set invalid data. Smudge's age wasn't changed.

You should always create getters and setters. Even if there are no restrictions on what values your fields can take, these helper methods will do no harm. Imagine the following situation: you and your colleagues are writing a program together. You create a Cat class with public fields. All the programmers are using them however they want. And then one fine day you realize: "Crap, sooner or later someone might accidentally assign a negative number to the weight! We need to create setters and make all the fields private!" You do just that, and instantly break all the code written by your colleagues. After all, they've already written a bunch of code that accesses the Cat fields directly.

cat.name = "Behemoth";

And now the fields are private and the compiler spews a bunch of errors!

cat.name = "Behemoth"; // error! The Cat class's name field is private!

In this case, it would be better to hide the fields and create getters and setters from the very beginning. All your colleagues would have used them. And if you belatedly realized you needed to somehow restrict the field values, you could have just written the check inside the setter. And nobody's code would be broken.

Of course, if you want access to a field to be "read only", you can create only a getter for it. Only methods should be available externally (i.e. outside your class). Data should be hidden.

We could make a comparison to a mobile phone. Imagine that instead of the usual enclosed mobile phone, you were given a phone with an open case, with all sorts of protruding wires, circuits, etc. But the phone works: if you try really hard and poke the circuits, you might even be able to make a call. But you'll probably just break it. Instead, the manufacturer gives you an interface: the user simply enters the correct digits, presses the green call button, and the call begins. She doesn't care what happens inside with the circuits and wires, or how they get their job done. In this example, the company limits access to the phone's "insides" (data) and exposes only an interface (methods). As a result, the user gets what she wanted (the ability to make a call) and certainly won't break anything inside.

Was published on the CodeGym blog.
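To make the "read only" case mentioned above concrete, here is a minimal sketch. The Passport class and its field are invented for illustration, not taken from the article: a private field with a getter and deliberately no setter cannot be modified from outside the class at all.

```java
class Passport {
    private final String number;   // set once in the constructor, then read-only

    Passport(String number) {
        this.number = number;
    }

    public String getNumber() {    // getter only; there is deliberately no setter
        return number;
    }
}

public class Main {
    public static void main(String[] args) {
        Passport p = new Passport("X123456");
        System.out.println(p.getNumber());
        // p.number = "Y654321";  // won't compile: the field is private (and final)
    }
}
```

Marking the field final additionally documents that even the class itself never reassigns it after construction.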
https://hashnode.com/post/getters-and-setters-cjwb2b8sf000jfvs1109n7nju
Simulates a global browser environment using jsdom

Simulates a global browser environment using jsdom. Previously named node-browser-environment.

This allows you to run browser modules in Node.js 4 or newer with minimal or no effort. Can also be used to test browser modules with any Node.js test framework. Please note, only the DOM is simulated; if you want to run a module that requires more advanced browser features (like localStorage), you'll need to polyfill that separately.

❗️Important note

This module adds properties from the jsdom window namespace to the Node.js global namespace. This is explicitly recommended against by jsdom. There may be scenarios where this is ok for your use case but please read through the linked wiki page and make sure you understand the caveats.

npm install --save browser-env

Or if you're just using it for testing you'll probably want:

npm install --save-dev browser-env

// Init
require('browser-env')();

// Now you have access to a browser like environment in Node.js:
typeof window; // 'object'
typeof document; // 'object'
var div = document.createElement('div'); // HTMLDivElement
div instanceof HTMLElement; // true

By default everything in the jsdom window namespace is tacked on to the Node.js global namespace (excluding existing Node.js properties e.g. console, setTimeout). If you want to trim this down you can pass an array of required properties:

// Init
require('browser-env')(['window']);

typeof window; // 'object'
typeof document; // 'undefined'

You can also pass a config object straight through to jsdom. This can be done with or without specifying required properties.

require('browser-env')(['window'], { userAgent: 'My User Agent' });
// or
require('browser-env')({ userAgent: 'My User Agent' });

You can of course also assign to a variable:

var browserEnv = require('browser-env');
browserEnv();
// or
browserEnv(['window'], { userAgent: 'My User Agent' });

MIT © Luke Childs
https://www.npmjs.com/package/browser-env
Log message: Don't redefine standard identifiers. Bump revision.upnp: Update to 1.8.4. ******************************************************************************* Version 1.8.4 ******************************************************************************* 2017-11-17 Marcelo Jimenez <mroberto(at)users.sourceforge.net> GitHub #57 - 1.8.3 broke ABI without changing SONAME Opened by jcowgill This change in 1.8.3 broke the ABI and therefore the SONAME should have been changed (ie: age reset to 0): EXPORT_SPEC int UpnpAddVirtualDir( /*! [in] The name of the new directory mapping to add. */ - const char *dirName); + const char *dirName, + /*! [in] The cookie to associated with this virtual directory */ + const void *cookie, + /*! [out] The cookie previously associated, if mapping is already present */ + const void **oldcookie); If only the cookie argument was added, you could probably get away with this because all that would happen is that a garbage value is passed around without being used. With the addition of oldcookie, any old programs will not initialise this value and will probably segfault when libupnp tries to write to it. ******************************************************************************* Version 1.8.3 ******************************************************************************* 2017-09-07 Dave Overton <david(at)insomniavisions.com> Add userdata/cookie to virtualDir callbacks As with the main Device APIs (UpnpRegisterRootDevice etc), it is useful to have a userdata/cookie pointer returned with each callback. This patch allows one cookie per registered path which enables a variety of functionality in client apps. 2017-09-03 Uwe Kleine-König <uwe@kleine-koenig.org> Fix large file system support libupnp uses large file support (if available). If a program linking to libupnp does not however it creates mismatches in callframes. See Issue #51 for the results. 
This simplifies LFS support by using AC_SYS_LARGEFILE_SENSITIVE instead of manually defining _LARGE_FILE_SOURCE and _FILE_OFFSET_BITS (which is useless on architectures where the size of off_t is fixed). Furthermore additional logic is introduced to catch a library user without 64 bit wide off_t on such a platform. upnp.h also makes use of off_t, but as this file includes FileInfo.h, the latter is the single right place for this check. This fixes #52 which is a generalized variant of #51. 2017-08-19 Uwe Kleine-König <uwe@kleine-koenig.org> configure.ac: Drop copying of include files The comment suggests this is for windows compilation. It should be easily possible to add the source directory as an include path to the windows compiler, too, so drop this. (Otherwise this should better be done using AC_CONFIG_COMMANDS.) 2017-09-03 Uwe Kleine-König <uwe@kleine-koenig.org> Let source code use autoconfig.h not the public upnpconfig.h The former is the one supposed to be used for internal code. upnpconfig.h is only for public stuff. 2017-08-19 Uwe Kleine-König <uwe@kleine-koenig.org> configure.ac: Fix typo s/optionnal/optional/ 2017-08-08 Marcelo Jimenez <mroberto(at)users.sourceforge.net> Fix broken samples when configured with --disable-ipv6. ******************************************************************************* Version 1.8.2 ******************************************************************************* 2017-07-24 Michael Osipov Initialize in_addr and in6_addr to avoid garbage output if never written If any of the address families isn't available in UpnpGetIfInfo(), especially IPv6, always init both structs with zero to avoid garbage output with inet_ntop() to gIF_IPV4 and gIF_IPV6. See v00d00/gerbera#112 () for consequences: bind for IPv6 will fail. 2013-10-28 Vladimir Fedoseev <va-dos(at)users.sourceforge.net> Attached patch allows to register multiple clients from single app. 
2014-11-14 Philippe <philippe44ca(at)users.sourceforge.net>
Hi - I recently compiled libupnp on C++ Builder XE7 and had to do a few
changes to make it work. In case this helps, I've generated a small patch
file.
2015-04-30 Hugo Beauzée-Luyssen <chouquette(at)users.sourceforge.net>
When building using a strict mode (-std=c++11 instead of -std=gnu++11,
for instance), the WIN32 macro isn't defined. The attached patch fixes it
by using _WIN32 instead.
2015-02-06 Jean-Francois Dockes <jf@dockes.org>
Queue events on their subscription object instead of adding them to the
thread pool immediately. Events destined for a non-responding control
point would flood the thread pool and prevent correct dispatching to
other clients, sometimes to the point of disabling the device. Events
are now queued without allocating thread resources and properly discarded
when a client is not accepting them.
2015-02-03 Jean-Francois Dockes <jf@dockes.org>
genaInitNotify()/genaInitNotifyExt() and genaNotifyAll()/genaNotifyAllExt()
are relatively complicated methods which only differ by the format of an
input parameter. This update extracts the common code for easier
maintenance, esp. relating to the queueing modifications to follow.
*******************************************************************************
Version 1.8.1
*******************************************************************************
2017-04-26 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
Fix some compiler warning messages on md5.c
2017-03-07 Fabrice Fontaine <fontaine.fabrice(at)gmail.com>
Enable IPv6 by default
2017-03-07 Fabrice Fontaine <fontaine.fabrice(at)gmail.com>
Move threadutil source code to libupnp
With this patch, threadutil library is removed as the only public header
that has been kept in 1.8.x is ithread.h which is mainly a wrapper to
pthread with inline functions. threadutil source code will now be a part
of libupnp library.
******************************************************************************* Version 1.8.0 ******************************************************************************* 2014-01-15 Peng <howtofly(at)gmail.com> Fix memory leaks. 2013-04-27 Thijs Schreijer <thijs(at)thijsschreijer.nl> Renamed SCRIPTSUPPORT to IXML_HAVE_SCRIPTSUPPORT for consistency. Also updated autoconfig and automake files, so it also works on non-windows. Option is enabled by default, because it adds an element to the node structure. Not using an available field is better than accidentally using an unavailable field. 2012-07-11 Thijs Schreijer <thijs(at)thijsschreijer.nl> Changed param to const UpnpAcceptSubscriptionExt() for consistency 2012-06-07 Thijs Schreijer <thijs(at)thijsschreijer.nl> updated ixmlDocument_createAttributeEx() and ixmlDocument_createAttribute() to use parameter DOMString instead of char * (same but now consistent) 2012-05-06 Thijs Schreijer <thijs(at)thijsschreijer.nl> Added script support (directive SCRIPTSUPPORT) for better support of garbage collected script languages. The node element gets a custom tag through ixmlNode_setCTag() and ixmlNode_getCTag(). And a callback upon releasing the node resources can be set using ixmlSetBeforeFree() See updated readme for usage. 2012-03-24 Fabrice Fontaine <fabrice.fontaine(at)orange.com> SF Bug Tracker id 3510595 - UpnpDownloadXmlDoc : can't get the file Submitted: Marco Virgulti ( mvirg83 ) - 2012-03-23 10:08:08 PDT There is a problem, perhaps, during downloading a document by UpnpDownloadXmlDoc. During debugging i've found that in an not exported api (unfortunately i forgot the code line...) where it is setted a local variable "int timeout" to -1 then passed directly to another function for sending data through tcp socket. I patched this setting it to 0 (there is an IF section that exits if timeout < 0). It is normal behavior or it is a bug? 
2012-03-08 Fabrice Fontaine <fabrice.fontaine(at)orange-ftgroup.com>
Check for NULL pointer in TemplateSource.h
calloc can return NULL so check for NULL pointer in CLASS##_new and
CLASS##_dup.
2012-03-08 Fabrice Fontaine <fabrice.fontaine(at)orange-ftgroup.com>
Replace strcpy with strncpy in get_hoststr
Replace strcpy with strncpy to avoid buffer overflow.
2012-03-08 Fabrice Fontaine <fabrice.fontaine(at)orange-ftgroup.com>
Memory leak fix in handle_query_variable
variable was never freed.
2011-02-07 Chandra Penke <chandrapenke(at)mcntech.com>
Add HTTPS support using OpenSSL. HTTPS support is optional and can be
enabled by passing the --enable-open-ssl argument to the configure script.
The following methods are introduced to the public API:
UpnpInitOpenSslContext
When enabled, HTTPS can be used by using "https://" instead of "http://"
when passing URLs to the HTTP Client API.
2011-02-07 Chandra Penke <chandrapenke(at)mcntech.com>
Refactor HTTP Client API to be more generic. The following features are
added:
- Support for persistent HTTP connections (reusing HTTP connections).
This is still a work in progress and relies on applications to interpret
the 'Connection' header appropriately.
- Support for specifying request headers when making requests. Useful
for interacting with web services that require custom headers.
- Support for retrieving response headers (this is an API-only change,
some more work needs to be done to implement the actual functionality.
Specifically copy_msg_headers in httpreadwrite.c needs to be implemented)
- Common API for all HTTP methods.
- Support for PUT, and DELETE methods.
The following methods are introduced to the public HTTP Client API
UpnpOpenHttpConnection, UpnpCloseHttpConnection, UpnpMakeHttpRequest,
UpnpWriteHttpRequest, UpnpEndHttpRequest, UpnpGetHttpResponse,
UpnpReadHttpResponse.
Removed a lot of duplicate code in httpreadwrite.c
2011-01-17 Chandra Penke <chandrapenke(at)mcntech.com>
Include upnpconfig.h in FileInfo.h to automatically include large file
macros
2011-01-17 Chandra Penke <chandrapenke(at)mcntech.com>
Fix for warnings on Apple systems related to macros defined in list.h.
In list.h, on Apple systems, undefine the macros prior to defining them.
2011-01-16 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
Fix for UpnpFileInfo_get_LastModified() in http_MakeMessage().
UpnpFileInfo_get_LastModified() returns time_t, and http_MakeMessage()
takes a "time_t *". Thanks to Chandra Penke for pointing out the bug.
2010-11-22 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
Template object for ssdp_ResultData.
2010-11-10 Fabrice Fontaine <fabrice.fontaine(at)orange-ftgroup.com>
Support for "polling" select in sock_read_write. Currently, in
sock_read_write function, if the timeout is 0, pupnp realizes a
"blocking" select (with an infinite timeout). With this patch, if timeout
is set to 0, pupnp will realize a "polling" select and returns
immediately if it can not read or write on the socket. This is very
useful for GENA notifications when pupnp is trying to send events to a
disconnected Control Point. "Blocking" select can now be done by putting
a negative timeout value.
2010-09-18 Chandra Penke <chandrapenke(at)mcntech.com>
This is a minor build fix. The new Template*.h files added in the latest
code need to be exported. Patch against the latest sources is attached.
2010-08-22 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* upnp/src/api/Discovery.c: Fix a serious bug and memory leak in
UpnpDiscovery_strcpy_DeviceType(). Thanks to David Blanchet for the patch.
2010-04-25 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
Separation of the ClientSubscription object.
2010-04-24 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
Protect the object destructors against null pointers on deletion, which
should be something valid.
2010-03-27 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
SF Patch Tracker [ 2987390 ] upnp_debug vs. ixml_debug
Thanks for the load of updates, I'm still assimilating them ! Could I
make a suggestion though? The addition of printNodes(IXML_Node) to
upnpdebug adds a new dependency on ixml.h for anything using upnpdebug.h.
I'm making quite a bit of use of upnpdebug in porting things to version
1.8.0, and I'd prefer it if printNodes could be added to ixmldebug.h
instead. I'm attaching a patch, what do you think ?
Nick
2010-03-27 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* Forward port of svn revision 505:
SF Patch Tracker [ 2836704 ] Patch for Solaris10 compilation and usage.
Submitted By: zephyrus ( zephyrus00jp )
2010-03-20 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* SF Patch Tracker [ 2969188 ] 1.8.0: patch for FreeBSD compilation
Submitted By: Nick Leverton (leveret)
Fix the order of header inclusion for FreeBSD.
2010-03-20 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* Forward port of svn revision 502:
SF Patch Tracker [ 2836704 ] Search for nested serviceList (not stopping
at the first list)
Submitted By: zephyrus ( zephyrus00jp ).
2010-03-20 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* SF Patch Tracker [ 2973319 ] Problem in commit 499
Submitted By: Nick Leverton (leveret)
Afraid that this doesn't compile, it seems retval should be retVal in
two places.
2010-03-16 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* Fix for the ithread_mutex_unlock() logic in UpnpInit(). Thanks to
Nicholas Kraft.
2010-03-15 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* SF Patch Tracker [ 2962606 ] Autorenewal errors: invalid SID, too-short
renewal interval
Submitted By: Nick Leverton (leveret)
Auto-renewals send an invalid SID due to a missing UpnpString_get_String
call. They also send a renewal interval of 0 instead of copying it from
the original subscription.
2010-03-15 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* SF Patch Tracker [ 2964685 ] patch for avoiding inet_ntoa (1.8.0)
Submitted By: Nick Leverton (leveret)
Seems like SF's tracker won't let me add a patch to someone else's
issue ?! This refers to
The calls to inet_ntoa are in getlocalhostname(), which is called from
UpnpInit when it is returning the bound IP address.
UpnpInit/getlocalhostname hasn't been updated to IPv6, I presume this is
deliberate so that it doesn't start returning IPv6 addresses and
overwriting the caller's IPv4-sized allocation. The attached patch just
updates getlocalhostname to use inet_ntop instead of inet_ntoa, and also
documents the fact that UpnpInit is IPv4 only whilst UpnpInit2 is both
IPv4 and IPv6. A fuller solution might be to change UpnpInit to use some
variant on UpnpGetIfInfo. UpnpInit could still be left as IPv4 only if
desired - perhaps UpnpGetIfInfo could take an option for the desired
address family. getlocalhostname and its own copy of the interface
scanning code would then be redundant. I don't have IPv6 capability here
though so I'm reluctant to change the IPv6 code, as I have no way to
test it.
2010-03-15 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* SF Patch Tracker [ 2724578 ] patch for avoiding memory leaks when add
devices
each time a device been added, UpnpInit() is called, on exit, UpnpFinish()
is called, but the memories allocated by ThreadPoolInit() may lost because
there's no code to call ThreadPoolShutdown() to release the memories. And
inet_ntoa() is not thread safe, so in my patch, I substitute inet_ntoa()
with inet_ntop().
2010-03-14 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* SF Patch Tracker [ 2964687 ] Add new string based accessors to upnp
object API
As per email to pupnp-devel, this is the patch to add the _strget_
accessors for string-like objects in the interface. Will add a further
patch shortly to update the sample programs.
2008-06-27 Marcelo Jimenez <mroberto(at)users.sourceforge.net> * Nicholas Kraft's patch to fix some IPv6 copy/paste issues. He reported to be getting infinite loops with the svn code. 2008-06-13 Marcelo Jimenez <mroberto(at)users.sourceforge.net> * SF Bug Tracker [ 1984541 ] ixmlDocumenttoString does not render the namespace tag. Submitted By: Beliveau - belivo Undoing the patch that fixed this problem. In fact, there was no problem and the patch was wrong. 2008-06-11 Marcelo Jimenez <mroberto(at)users.sourceforge.net> * Ingo Hofmann's patch for "Content-Type in Subscription responses". Adds charset="utf-8" attribute to the CONTENT-TYPE header line. Hi, I have found an inconsistency regarding the text/xml content-type returned by libupnp. It looks like only subscription responses send "text/xml" where all other messages contain "text/xml; \ charset="utf-8"". Since I'm working on an DLNA device the latter behaviour is mandatory. I changed the according lines in gena_device.c (see attached patch). I'm not sure if it would be ok for other device to have the charset field but it would help me a lot :) Best regards, Ingo 2008-06-04 Marcelo Jimenez <mroberto(at)users.sourceforge.net> * SF Bug Tracker [ 1984541 ] ixmlDocumenttoString does not render the namespace tag. Submitted By: Beliveau - belivo The problem occurs when converting a xml document using ixmlDocumenttoString containing a namespace tag created with ixmlDocument_createElementNS. The namespace tag doesn't get rendered. 
example:
The following code fragment prints:
<?xml version="1.0"?>
<root></root>
instead of:
<?xml version="1.0"?>
<root xmlns="urn:schemas-upnp-org:device-1-0"></root>
Code:
#include <stdlib.h>
#include <upnp/ixml.h>
int main()
{
IXML_Document* wDoc = ixmlDocument_createDocument();
IXML_Element* wRoot = ixmlDocument_createElementNS(wDoc,
"urn:schemas-upnp-org:device-1-0", "root");
ixmlNode_appendChild((IXML_Node *)wDoc,(IXML_Node *)wRoot);
DOMString wString = ixmlDocumenttoString(wDoc);
printf(wString);
free(wString);
ixmlDocument_free(wDoc);
return 0;
}
The problem was in the printing routine, not in the library data
structure.
2008-05-31 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* Charles Nepveu's suggestion of not allocating a thread for MiniServer
when it is not compiled.
2008-05-24 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* Ported Peter Hartley's patch to compile with mingw.
2008-05-24 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* Added some debug capability to ixml.
2008-05-02 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* Merged Charles Nepveu's IPv6 work. libupnp now is IPv6 enabled.
2008-02-06 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* Breaking API so that we now hide internal data structures.
2008-02-06 Marcelo Jimenez <mroberto(at)users.sourceforge.net>
* Rewrote Peter Hartley's patch to include a new extra header field in
FileInfo.
*******************************************************************************
Version 1.6.22
*******************************************************************************
2017-07-07 James Cowgill <james410(at)cowgill.org.uk>
Replace MD5 implementation with public-domain version
Currently the RSA MD5 implementation is used. Unfortunately the license
has some potential issues:
* The license does not explicitly allow distributing derivative works.
This was the original argument used in
[Debian #459516]().
* The license contains an advertising clause similar to the BSD 4-clause license. This is incompatible with the GPL and if it were enforced, would require RSA to be mentioned by pretty much everyone who uses pupnp. The simple solution is to replace it with a public domain implementation. I've taken OpenBSDs implementation and tweaked it slightly for use by pupnp by: - Adjusting the includes. - Removing the __bounded__ attributes which are specific to OpenBSD. - Using the standard integer types from stdint.h. - Using memset instead of explicit_bzero. 2016-12-16 Peter Pramberger <peterpramb(at)users.sf.net> ixml/test/test_document.c is missing the string.h include, therefore the compiler complains about an implicit declaration. ******************************************************************************* Version 1.6.21 ******************************************************************************* 2016-12-16 Gabriel Burca <gburca(at)github> If the error or info log files can not be created, use stderr and stdout instead. 2016-12-08 Uwe Kleine-König <uwe(at)kleine-koenig.org> Fix out-of-bound access in create_url_list() (CVE-2016-8863) If there is an invalid URL in URLS->buf after a valid one, uri_parse is called with out pointing after the allocated memory. As uri_parse writes to *out before returning an error the loop in create_url_list must be stopped early to prevent an out-of-bound access Bug: Bug-CVE: … -2016-8863 Bug-Debian: Bug-Redhat: 2016-11-30 Uwe Kleine-König <uwe(at)kleine-koenig.org> miniserver: fix binding to ipv6 link-local addresses Linux requires to have sin6_scope_id hold the interface id when binding to link-local addresses. This is already in use in other parts of upnp, so portability shouldn't be in the way here. Without this bind(2) fails with errno=EINVAL (although ipv6(7) from manpages 4.08 specifies ENODEV in this case). 
Fixes: 2016-09-15 Mathew Garret <(at)mjg59 (twitter)> SF Bug Tracker #132 CVE-2016-6255: write files via POST Submitted by: Balint Reczey in 2016-08-02 From Debian's BTS … bug=831857 : From: Salvatore Bonaccorso carnil@debian.org To: Debian Bug Tracking System submit@bugs.debian.org Subject: libupnp: write files via POST Date: Wed, 20 Jul 2016 11:03:34 +0200 Source: libupnp Version: 1:1.6.17-1 Severity: grave Tags: security upstream Justification: user security hole Hi See … 6/07/18/13 and . Proposed fix: … 4f1a972cbd Regards, Salvatore From Mathew Garret's commit: Don't allow unhandled POSTs to write to the \ filesystem by default ******************************************************************************* Version 1.6.20 ******************************************************************************* 2016-02-22 Jean-Francois Dockes <medoc(at)users.sf.net> SF Bugs #131, Creator: Jean-Francois Dockes I know it sounds crazy that nobody ever saw this, but the CONTENT-LENGTH value in GENA NOTIFY messages is too small by one. It appears that most current control points don't notice the extra character (an LF, which is validly there but not included in Content-Length), probably because their protocol handler is reasonably lenient, and because the missing body LF does not prevent parsing the XML. But there is a least one anal CP (Linn Kazoo) which barfs, because it reads all data until connection close and the size mismatch triggers a bug. "Proof": In gena_device.c:217 (notify_send_and_recv()) ret_code = http_SendMessage(&info, &timeout, "bbb", start_msg.buf, start_msg.length, propertySet, strlen(propertySet), CRLF, strlen(CRLF)); start_msg has all the headers, including the empty line. 
Content-length should be strlen(propertySet) + strlen(CRLF) (2) In gena_device.c:433 (AllocGenaHeaders()) rc = snprintf(headers, headers_size, "%s%s%"PRIzu"%s%s%s", HEADER_LINE_1, HEADER_LINE_2A, strlen(propertySet) + 1, HEADER_LINE_2B, HEADER_LINE_3, HEADER_LINE_4); HEADER_LINE_2A is "CONTENT-LENGTH: ". The following value should be strlen(propertySet) + 2 2016-01-07 Marcelo Roberto Jimenez <mroberto(at)users.sourceforge.net> Fix for a reported integer overflow 2016-01-07 Jean-Francois Dockes <medoc(at)users.sf.net> 2016-01-07 Nick Leverton <nick(at)leverton.org> SF Patches #60, Creator: Jean-Francois Dockes When libupnp is configured with --enable-ipv6 but ipv6 is not available on the system (for example because the ipv6 code is not loaded in a Linux kernel as is the case by default on Raspbian), the ipv6 socket creation call will fail in miniserver.c and the library init will fail, even if the ipv4 initialisation would have succeeded. Let a library configured with --enable-ipv6 initialize in ipv4-only mode if ipv6 is not available instead of failing. This can happen if no ipv6 code is configured or loaded in the kernel. Don't fail if IPv6 is unavailable. We might be an IPv6 enabled distro build running on an IPv4-only custom kernel. 2016-01-07 Nick Leverton <nick(at)leverton.org> SF Bug Tracker #128, Creator: Nick Leverton redefining strndup causes "error: expected identifier or '(' before \ '__extension__'" Fix redefinition of strnlen and strndup These are available when HAVE_STRNDUP and HAVE_STRNLEN are defined, but libupnp provides an extern prototype anyway. Recent versions of glibc define this prototype differently, causing the following compile error: src/api/UpnpString.c:47:15: error: expected identifier or '(' before \ '__extension__' extern char *strndup(__const char *__string, size_t __n); 2016-01-07 Nick Leverton <nick(at)leverton.org> SF Bug Tracker #129, Creator: Nick Leverton shutdown() on UDP sockets logs ENOTCONN message. 
Fix ENOTCONN "Error in shutdown: Transport endpoint is not connected" When logging is enabled, ssdpserver logs bursts of "Error in shutdown: Transport endpoint is not connected" This is because shutdown() is not supported for UDP sockets and under recent UNIX specifications it returns ENOTCONN if used. 2016-01-07 Nick Leverton <nick(at)leverton.org> SF Bug Tracker #127, Creator: Klaus Fischer Miniserver uses INADDR_ANY instead of HostIP The internal miniserver.c uses INADDR_ANY instead of the HostIP/IfName provided when initializing libupnp. But, this HostIP/IfName gets used for the UDP socket when multicasting SSDP messages. Because of this, miniserver may end up sending from different IP address than ssdpserver. This patch causes miniserver to use the already known interface address. 2016-01-07 Marcelo Roberto Jimenez <mroberto(at)users.sourceforge.net> SF Bug Tracker #130, Creator: Shaddy Baddah infinite loop in UpnpGetIfInfo() under WIN32 Original code makes no sense. This patch should fix it. 2015-02-04 Shaun Marko <semarko@users.sf.net> Bug tracker #124 Build fails with --enable-debug Build environment Fedora 21 X86-64 * gcc 4.9.2 How to repeat $ ./configure --enable debug $ make libtool: compile: gcc -DHAVE_CONFIG_H -I. -I.. -I../upnp/inc -I./inc \ -I../threadutil/inc -I../ixml/inc -I./src/inc -pthread -g -O2 -Wall -MT src/api/libupnp_la-UpnpString.lo -MD -MP -MF src/api/.deps/libupnp_la-UpnpString.Tpo -c src/api/UpnpString.c -fPIC -DPIC -o src/api .libs/libupnp_la-UpnpString.o src/api/UpnpString.c:47:16: error: expected identifier or '(' before 'extension' extern char *strndup(const char *string, size_t __n); ^ Makefile:1016: recipe for target 'src/api/libupnp_la-UpnpString.lo' failed Reason for failure Build enables -O2 optimization flags which causes the inclusion of a macro implementation of strndup from include/bits/string2.h. 
Workarounds
Disable optimization when configuring or making:
$ configure CFLAGS='-g -pthread -O0' --enable-debug
$ make
or
$ configure --enable-debug
$ make CFLAGS='-g -pthread -O0'
Define NO_STRING_INLINES
$ export CFLAGS="-DNO_STRING_INLINES -O2"
$ ./configure --enable-debug
$ make
Fix
* Don't declare strndup in src/api/UpnpString.c if it exists
2015-02-01 Jean-Francois Dockes <medoc@users.sf.net>
Out-of-tree builds seem to be currently broken, because ixml and
threadutil files need an include path to include UpnpGlobal.h, and
configure tries to copy files into a directory which it does not create.
The patch fixes both issues.
2014-01-03 Peng <howtofly(at)gmail.com>
rewrite soap_device.c
1) separate HTTP handling from SOAP handling
2) remove repeated validity check, each check is performed exactly once
3) fix HTTP status code per UPnP spec, SOAP spec and RFC 2774

Log message: add a workaround for install(1) from GNU coreutils, PR pkg/48685.

Log message: Update libupnp to 1.6.19. Bug fixes.
When researching various web-based image cropping tools, I've found that there are a lot of good looking JQuery, MooTools, and other JavaScript based solutions out there, but there's not a ton of good server-side support. The one ASP.NET control I found that I really did like was "Asp.net 2.0 Web Crop Image Control". It's actually an ASP.NET wrapper around JQuery JCrop. Here's how to make a basic image cropper with the "Web Crop Image Control":
(1) Download the source code from CodePlex.
(2) Create a new ASP.NET website in Visual Studio and add the included code from the sample website, including the DLL file itself from the "bin" directory, as well as the "scripts" directory and the "css" directory.
(3) Create a new ASP.NET Page and add in the following header reference to reference the crop control:
<%@ Register assembly="CS.WebControls.WebCropImage" namespace="CS.WebControls" tagprefix="cc1" %>
(4) Add an ASP.NET Image Control to the Page:
<asp:Image ID="Image1" runat="server" ImageUrl="images/samplephoto.jpg" />
You can also set the value of the image programmatically by setting the "ImageURL" property of your control.
(5) Add a Crop Button to your page and give it an event handler:
<asp:Button ID="btnCrop" runat="server" Text="Crop" onclick="btnCrop_Click" />
Your corresponding C# code would look like this:
protected void btnCrop_Click(object sender, EventArgs e)
{
wci1.Crop(Server.MapPath("~/images/filename.jpg"));
}
(6) Add the Web Crop Image Control:
<cc1:WebCropImage ID="wci1" runat="server" CropImage="Image1" IncludeJQuery="true" ScriptPath="scripts/" W="50" H="150" CropButtonID="btnCrop" />
Make sure that the "CropImage" property is set to the name of the ASP.NET Image Control you created and that the CropButtonID is set to the ID of the button you created. You can change the default height and width by setting the H and W properties. If all is well with your code, you should be good to go.
Here’s what the finished product will look like:
I made a valiant attempt at this one on the CMS, but ended up cutting it out due to time… Someday I will revive this project.
That CMS was…. uhhh… I would be happy to never look at it again
Looking through this article, I noticed the IncludeJquery parameter as well as the discussion about the asp.net wrapper. That reminded me of another interesting jquery plugin I encountered at. I am considering implementing it, although I need to test it more before then; it seems to have odd behavior in FF if you click on the reshape, effects, etc. buttons at the top right.
I want to do exactly what your demo does… HOWEVER, the difference is, when the user uploads an image, it will be a FULL high res, i.e. 2816×1880. How can I serve up a smaller version for web view to the user, have them crop that with this tool, then apply those changes back to the full size image? OR does the tool already do that? THANKS in advance! DESPERATE to find an answer before spending tons of money on a 3rd party tool!
Hi Jennifer, you definitely want to resize the images in ASP.NET after cropping them. I recommend taking the cropped image after you create it and resizing it to whatever you’d like using ASP.NET’s built in image resizing tools. I don’t think there’s anything built into the cropping control that will necessarily be of great help. This link on codeplex should help:
Hi Jennifer, I found a tool called the Better Image Processor that allows you to set a max width and/or a max height property so that the image displayed will be dynamically resized. I’m not sure if it maintains the entire quality of the image as it does this; check it out at
I keep getting invalid virtual path, which I know is a valid virtual path. Any ideas on how to solve this?
How can I reduce the image’s weight (file size) in the result after cropping?
In this guide, you’ll learn how to do over-the-air (OTA) updates to your ESP32 boards using the AsyncElegantOTA library. This library creates a web server that allows you to upload new firmware and files to the filesystem wirelessly in the future. We have a similar tutorial for the ESP8266 NodeMCU board: ESP8266 NodeMCU OTA (Over-the-Air) Updates – AsyncElegantOTA. We recommend that you follow all the tutorial steps with your ESP32 board. There are different ways to perform OTA updates. For example, in the Arduino IDE, under the Examples folder, there is the BasicOTA example (that never worked well for us) and the OTA Web Updater (works well, but it isn’t easy to integrate into your own web server projects). The idea is the following: the first sketch should be uploaded via serial port. This sketch should contain the code to create the OTA Web Updater so that you are able to upload code later using your browser. The OTA Web Updater sketch creates a web server you can access to upload a new sketch via web browser. The ESP32 will be programmed using Arduino IDE. If you want to learn how to do the same using VS Code + PlatformIO, follow the next tutorial. This project uses the ESP32 AsyncTCP and ESPAsyncWebServer Libraries. You also need to install the AsyncElegantOTA library.
AsyncElegantOTA ESP32 Basic Example
Let’s start with the basic example provided by the library. This example creates a simple web server with the ESP32. In your local network, open your browser and type the ESP32 IP address followed by /update to reach the AsyncElegantOTA page.
Upload New Firmware
Every file that you upload via OTA should be in .bin format. You can generate a .bin file from your sketch using the Arduino IDE. With your sketch opened, generate a .bin file from your sketch: go to Sketch > Export Compiled Binary. A new .bin file should be created under the project folder. Now, you can select that .bin file on the /update page to flash the new firmware. This is what you should see when you access the ESP IP address on the root (/) URL. You can click on the button to turn the ESP32 on-board LED on and off.
ESP32 Filesystem Upload Plugin
Before proceeding, you need to have the ESP32 Filesystem Uploader Plugin installed in your Arduino IDE.
Follow the next tutorial before proceeding. To find your sketch folder, simply go to Sketch > Show Sketch Folder. This is where your data folder should be located and how it looks:
After this, with the ESP32 disconnected from your computer (that’s the whole purpose of OTA), click on ESP32 Sketch Data Upload. You’ll get an error because there isn’t any ESP32 board connected to your computer – don’t worry. Scroll up on the debugging window until you find the .spiffs.bin file location. That’s the file that you should upload (in our case the file is called Web_Server_OTA_ESP32_Example_2.spiffs.bin). And this is the path where our file is located:
C:\Users\sarin\AppData\Local\Temp\arduino_build_675367\Web_server_OTA_ESP32_Example_2.spiffs.bin
To access that file on my computer, I need to make hidden files visible (the AppData folder was not visible). Check if that’s also your case. Once you reach the folder path, you want to get the file with the .spiffs.bin extension. To make things easier you can copy that file to your project folder.
Now that we have a .bin file from the data folder, we can upload that file. Go to your ESP32 IP address followed by /update. Make sure you have the Filesystem option selected. Then, select the file with the .spiffs.bin extension. After successfully uploading, click the Back button. And go to the root (/) URL again. You should get access to the following web page that controls the ESP32 outputs using the Web Socket protocol.
Wrapping Up
In this tutorial you’ve learned how to add OTA capabilities to your Async Web Servers using the AsyncElegantOTA library. This library is super simple to use and allows you to upload new firmware or files to SPIFFS effortlessly using a web page. In our opinion, the AsyncElegantOTA library is one of the best options to handle OTA web updates. We hope you’ve found this tutorial useful.
Learn more about the ESP32 with our resources:
- Build ESP32 Web Servers with Arduino IDE (eBook)
- Learn ESP32 with Arduino IDE
- More ESP32 Projects and Tutorials…
Thanks for reading.
55 thoughts on “ESP32 OTA (Over-the-Air) Updates – AsyncElegantOTA using Arduino IDE”
SCORE! Well done….. This is great. I have been (trying) to use OTA for years and very rarely has it worked. This seems to work every time (sort of) so that’s a plus. The only thing odd is that after loading the “Web Server Sketch – Example” it will not connect to the Wifi. I see “Connecting to WiFi..” repeating but it never connects unless I reboot/reset the ESP32, then it connects every other time. In other words, after uploading the code it doesn’t work until I reboot the ESP32. Then it connects. If I reboot it again, it doesn’t connect. If I reboot it again, it works. Very odd behavior. Any thoughts?
Hi Bruce. That’s indeed a very odd behavior. We’ve experimented with the library with different examples, and it never failed. We also tried it with the ESP8266, and everything went fine. Do you have another board to experiment with? Regards, Sara
Experimenting too. Currently I have a program (on an ESP8266 using the proper ESP8266 code) with both STA and AP, Async webserver that connects with static IP, and the ElegantOTA does not seem to want to start. Have not figured out why yet. Disabling the AP makes no difference. Will try to find it (the example runs fine).
OK, I found the problem. As you can see, most of the examples make their WiFi connection, then issue a ‘server.on("/"……..’ request, and then do their ElegantOTA and server.begin() requests, placing it at the end of setup(). That is what I did…….but I had a couple more server requests, one of them called ‘update’. Need I say more 🙂
Hi Sara An excellent description – as usual! I’m wondering if this works with LittleFS too as the SPIFFS file system seems to be deprecated for the ESP32.
Have you tried it, and is it possible to make it work in a similar way? Regards, Juerg

Hi. SPIFFS is only deprecated for the ESP8266. This tutorial also works with LittleFS (at least with the ESP8266). We'll publish a similar tutorial for the ESP8266 by the end of this week (with LittleFS). Regards, Sara

Hi Sara. I have just tested the OTA library with an ESP32 with LittleFS: indeed everything works fine and smooth, even with a username and password for the OTA website (IoT security). The ESP32 just had to be rebooted after the LittleFS update; after a firmware update the ESP32 restarted itself. Best Regards, Juerg

Wow, very good job.

Great, I have been trying this with the ESP8266 and it works very well. Never had real trouble with the regular OTA, including webOTA, but had the occasional odd behaviour. The main advantage here is that it can be used with async web pages, and it looks a lot better.

Very well explained.

Great example. I will definitely use this. I am a little concerned that just about anyone can corrupt your ESP32 server by uploading a new (possibly a virus) program to it. There seems to be no security, and a very generic "update" catch phrase to access it.

I clicked on "view raw" of the second example link, did a Ctrl-A, Ctrl-C, Ctrl-V, then tried to compile the example, but I get a lot of error messages. Do you RE-test EVERYTHING by following your tutorial? I guess not, otherwise you would have recognized the bugs yourself: exit status 1 – control reaches end of non-void function [-Werror=return-type]

Hi. Can you provide more details about the error? I've just compiled the code again and it was fine. Regards, Sara

Hi Sara, thank you for answering so quickly. Did you do a copy & paste from the website and then compile it?
I have activated verbose output, so the complete output is more than the forum software allows. I guess this part is important:

C:\Users\Stefan\Documents\Arduino\Web_Server_LED_OTA_ESP32\Web_Server_LED_OTA_ESP32.ino: In function 'String processor(const String&)':
Web_Server_LED_OTA_ESP32:202:1: error: control reaches end of non-void function [-Werror=return-type]
}
cc1plus.exe: some warnings being treated as errors
Multiple libraries were found for "WiFi.h"
Used: P:\Portable-Apps\arduino1.8.13\portable\packages\esp32\hardware\esp32\1.0.4\libraries\WiFi
Not used: P:\Portable-Apps\arduino1.8.13\libraries\WiFi
Using library WiFi at version 1.0 in folder: P:\Portable-Apps\arduino1.8.13\portable\packages\esp32\hardware\esp32\1.0.4\libraries\WiFi
Using library AsyncTCP at version 1.1.1 in folder: C:\Users\Stefan\Documents\Arduino\libraries\AsyncTCP
Using library ESPAsyncWebServer at version 1.2.3 in folder: C:\Users\Stefan\Documents\Arduino\libraries\ESPAsyncWebServer
Using library FS at version 1.0 in folder: P:\Portable-Apps\arduino1.8.13\portable\packages\esp32\hardware\esp32\1.0.4\libraries\FS
Using library AsyncElegantOTA at version 2.2.5 in folder: C:\Users\Stefan\Documents\Arduino\libraries\AsyncElegantOTA
Using library Update at version 1.0 in folder: P:\Portable-Apps\arduino1.8.13\portable\packages\esp32\hardware\esp32\1.0.4\libraries\Update
exit status 1
control reaches end of non-void function [-Werror=return-type]

best regards Stefan

Hi. Yes, I did that and it compiles fine for me. What's the Arduino IDE and esp32 boards version that you have? Modify your processor() function to be like this:

String processor(const String& var){
  Serial.println(var);
  if(var == "STATE"){
    if (ledState){
      return "ON";
    }
    else{
      return "OFF";
    }
  }
  return String();
}

Let me know if this solves your issue. Regards, Sara

I did modify the first code and tested the OTA with this modified first code-version – the one that just says "Hi! I am ESP32." This worked.
When I inserted your version of String processor there was no formatting – no indentation – so I pressed Ctrl-T for autoformatting, but nothing changed. Then I moved the function String processor above the html-code, and voilà, there autoformatting worked. So my conclusion is that something is written "non-international" inside the rawliteral. Something inside the raw literal has a syntactical error. No idea what this could be, as I have never worked with html-code. No idea if – when I post the html-code section here – all the character-translations done by the forum software keep the error. Anyway, I post it here:

// Create AsyncWebServer object on port 80
AsyncWebServer server(80);
AsyncWebSocket ws("/ws");
const char index_html[] PROGMEM = R"rawliteral( ESP Web Server; } ESP Web Server ESP WebSocket Server Output – GPIO 2 state: %STATE% Toggle var gateway = `ws://${window.location.hostname}/ws`; var websocket; window.addEventListener('load', onLoad); function initWebSocket() { console.log('Trying'); } )rawliteral";
void notifyClients() { ws.textAll(String(ledState)); }

best regards Stefan

Posting html-code in this commenting software SUCKS. The comment software eats half of the code, to be honest. Sara and Rui, you should consider using a completely DIFFERENT commenting software. All this is about programming, which means SOURCECODE is a very important part. So the commenting part should be able to show source code as source code, with a fixed-width font and as code sections, like ANY other programming user forum does.

I did some reading about PROGMEM and raw literals and found the syntactical error. Then I did the following: I took my mouse, holding down the left mouse button, to start marking the source code of the second sketch (the one that is called Web_Server_LED_OTA_ESP32). I finished the marking of the source code by keeping the shift key pressed, using cursor-down to mark all characters that belong to the source code. Then I pasted this into the Arduino IDE.
As a cross-check I pasted it into UltraEdit, Notepad++, and standard Notepad. Always the same result: at the end of the html-code there is

)rawliteral";

The ")" is one line BELOW the ">" – the closing bracket of the rawliteral command is one line below the closing "edgy" bracket ">" of the html-tag. It is the same inside the "RAW-Code" page. As soon as I removed the CR, so the source code looks like this –

)rawliteral";

– the closing tag, closing bracket and rawliteral all in the SAME line – the code compiled. This is why I highly doubt that you did an EXACT copy and paste into the Arduino IDE without any additional hand-formatting. How would it be possible that your Arduino IDE could remove a CR (a carriage return) while my version and any other text editor does not?

I developed the habit of testing / repeating ALL steps (if I say ALL steps I mean ALL steps!!) before uploading code into a user forum. Even after changing a single character. Which means, for example: I upload a code example to the Arduino forum as a code section. I use the "select all" link to copy the code into the clipboard. I paste this clipboard content into an absolutely empty, newly opened sketch, do a REAL upload into the microcontroller, test the code and watch the serial output – does the code behave as expected? So this means I do ALL steps another user will do if he tries my code. Maybe you have tested it with PlatformIO, or removed the CR because you thought it was a typo by you. There is nothing more frustrating for newbies than when a tutorial pretends to explain everything step by step and therefore seems to be fool-proof, and then it does not work because of an error in the description. best regards Stefan

I've also tested the exact code on my computer right now and it works fine for me. I honestly don't know what's missing either. I've copied the exact code from our website again to both Arduino IDE and VS Code and both compiled just fine.
Again this forum software eats up the html-code, so I add it modified by inserting underlines between each character of the html-code part. Your code version looks like this:

</_h_t_m_l>_)
rawliteral";

It should be in ONE line:

</_h_t_m_l>_)rawliteral";

best regards Stefan

Hi Stefan. I'm sorry for the issue. But I did test the code, as I told you. I copied the raw code, pasted it into Arduino IDE, and recompiled it. It went fine, as you can see here:. This is all very weird, and I can't find any explanation for it. What's the version of Arduino IDE that you are using? I'm using 1.8.13. All I want is that the tutorials work for everyone straight away. I'll try to investigate a bit more about this issue. Regards, Sara

I tried it with Arduino IDE 1.8.12 and 1.8.13 – both showed the same problem. Then I tried it with Arduino IDE 1.8.13 on another computer – same problem. The combination of – changing the code of function "processor" to return String() – with – putting the html-tag and the keyword rawliteral in the same line of code – makes the compiler compile. Beside this weird syntax-pickiness this OTA functionality is great. Thank you very much for providing this. What really astonished me was the fact that – if I call the ESP32's website with the LED-toggle button from multiple computers – on each computer the written state of the LED gets updated the same tenth of a second I click on the button. No reloading of the website required. That is really great! Do you happen to know a WYSIWYG website designer that makes it easier to create the HTML code? Or do you have a tutorial that shows how HTML elements like buttons, sliders etc. are programmed? I mean giving an example and explaining the details through variations: positioning the button, button size, button color, button text, how to evaluate the button click, how to change the button text/color at run-time, etc. best regards Stefan

Hi. That's very strange behavior. Rui also tested the code, and it went fine.
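As an aside for readers puzzled by the raw-literal debugging above: in a C++11 raw string literal everything between the custom delimiters is taken verbatim, and the closing delimiter must appear as one uninterrupted sequence – if it is split across lines or altered, the literal never terminates and compilation fails. A small standard-C++ sketch (the HTML content here is a made-up stand-in, not the tutorial's actual page):

```cpp
#include <cstring>

// C++11 raw string literal: everything between R"rawliteral( and
// )rawliteral" is taken verbatim - quotes, newlines, placeholders and
// all. If the closing )rawliteral" is broken up, the literal never ends.
const char index_html[] = R"rawliteral(<html>
  <body><p>GPIO state: %STATE%</p></body>
</html>)rawliteral";

// Helper: check that a substring survived verbatim inside the literal.
bool contains(const char* haystack, const char* needle) {
    return std::strstr(haystack, needle) != nullptr;
}
```

Because the %STATE% placeholder sits inside the raw literal, it reaches the template processor untouched – which is exactly why the web-server examples use this syntax for their embedded pages.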
What operating system do you use? We are on Windows. Yes, the synchronization of all clients is a great feature, thanks to the WebSocket protocol. I'm not familiar with software like that. We design our own webpages. In our latest course, "Build Web Servers with ESP32 and ESP8266," we go into more detail about writing the HTML and CSS for your web pages, handling the HTML elements, and how to use JavaScript. Whenever I have a doubt about HTML elements and how to style them, I usually refer to the w3schools website. Regards, Sara

We updated the code with your suggestions. Regards, Sara

Hi Sara, thank you very much. Now a simple copy & paste compiles, from both "sources": the website section (with the colored code) and the raw code (just text). OK, I'm gonna take a look into your Build Web Servers with ESP32 course. best regards Stefan

Really great tutorial – excellent level of clarity. Where I came unstuck was that my Arduino IDE was an Ubuntu SNAP package, and for the life of me I could not find the …SPIFFS.bin file. Solved by installing the IDE from the web site, but then I had to solve all the python2 – python3 problems. Anyway, working really well, so now I need to think of a use for it!

Hi Sara, I get the following error message when attempting to run the Arduino IDE:

Arduino: 1.8.13 (Mac OS X), Board: "Adafruit ESP32 Feather, 80MHz, 921600, None, Default"
/TinkersHome/Library/Arduino15/packages/esp32/hardware/esp32/1.0.4/libraries/WiFi
Not used: /libraries/WiFi
exit status 255
/arduino-builder returned 255
Error compiling for board Adafruit ESP32 Feather.

Would you please tell me what my problem is? Thanks

I downloaded ESP32 and ESP8266, but they didn't go into the Arduino IDE library. They went somewhere; when I re-download, it says they're already installed. I have Windows 10. What can be done?

"I downloaded ESP32 and ESP8266" is a pretty vague description of what you might have done. For programming the ESP32/8266 you need to add two additional board-definition URLs into the preferences.
So the minimum is that you describe in detail what you have really done. And I guess you tried to compile the code. You have to activate verbose output, analyse the output for the errors that were found during compilation, and post the errors here. best regards Stefan

When I tried adding this to an existing project and go to, all that shows is the word "OK", i.e. no upload box or any other text.

Please ignore the above… I had not put some files in the correct place.

Will this work with an ESP-01? I put the index and cc files in the correct place but still no joy. The serial monitor reports "14:14:06.077 -> No message sent" and "ok" appears in the browser. I worked it out – I was already using /update for something else.

Another excellent tutorial – many thanks. Can I suggest changing:

request->send(200, "text/plain", "Hi! I am ESP32.");

to:

request->send(200, "text/plain", __FILE__ "\nCompiled: " __DATE__ " " __TIME__);

This will return the path and INO name currently loaded from the IDE, and when it was compiled. Also… I recommend using a fixed IP address, otherwise you need to connect to USB to find the IP address, which defeats the use of OTA.

Thanks for the suggestions. Regards, Sara

Perfect! It works with the ESP32 as AP too, no external scripts needed. Thanx for the tip.

How does one add a logon to the upload page so that not just anyone can upload firmware?

If you encounter a problem or if you want a new feature, do a 5-minute research. 5 minutes is almost nothing, and in a good part of all cases you have success in just 5 minutes. I developed this habit – for googling and for coding. So I took a look into the file AsyncElegantOTA.h and VOILÀ: you can find there two lines for setting up a username and a password. I haven't tested this myself yet. best regards Stefan
Adding credentials to this line:

void begin(AsyncWebServer *server, const char* username = "username", const char* password = "password")

I tried setting them where they are declared at the bottom of the file, but that did not work. I just need to now work out how to log off, as once you have logged on it appears to keep you logged in.

Hello, very good article. A question: would it be possible, say on any device with an ESP32 installed at a user, company, client …, to change / register a new Wi-Fi network and password on the ESP32 through the use of the web server page, when so desired?

Hello Sara and Rui. I'm trying to upload my .bin file to the ESP32-CAM through OTA, but a problem occurs… The web page shows up well and I can choose the .bin file; however, after showing 100%, the "OTA Success" message and the BACK button don't appear. The web page stays on the 100% message forever. After that the ESP32-CAM reboots and runs the old firmware. Can you help me with this issue?

Very neat solution! A little typo: you explain that the update interface is started by "Now, you need to upload that file using the ElegantOTA page. Go to your ESP IP address followed by /updated." It should be /update only – without the final "d" – else you get a blank screen. Thanks and regards

Hi. Yes, you are right. Thanks for noticing. It is fixed now. Regards, Sara

Hi Sara, thank you for sharing the awesome technology. Excellent code base. I just have 2 queries. Can we update firmware and files from the internet? Like, if I kept both .bin files on my Google Drive, on every power-on the NodeMCU would check if the files are updated and update itself with new .bin files of firmware and other files (html, java, css, image). Also, I need to work on some data in a CSV file – how can I get the CSV file from my Google Drive to my NodeMCU?

Hi. Thanks for reaching out. I think that should be possible, but I don't have any tutorials about that subject.
Regards, Sara

I am trying to use AsyncElegantOTA with an existing sketch, but am getting a bunch of "multiple definition" errors. I have the sketch set up to allow me to switch access points, so I am also including <WebServer.h>. No matter what I tried, I could not get away from multiple errors, so I loaded your sketch in a new project. It compiles fine. But as soon as I add #include <WebServer.h> to your sketch, I get a host of errors, leading me to believe WebServer.h and AsyncElegantOTA.h are causing problems for each other. Is there a way to include both?

sketch\menu\pin_manager.cpp.o:(.bss.AsyncElegantOTA+0x0): multiple definition of `AsyncElegantOTA'
sketch\AsyncIOTWebServer.ino.cpp.o:(.bss.AsyncElegantOTA+0x0): first defined here
sketch\system\temperatures.cpp.o:(.bss.AsyncElegantOTA+0x0): multiple definition of `AsyncElegantOTA'
sketch\AsyncIOTWebServer.ino.cpp.o:(.bss.AsyncElegantOTA+0x0): first defined here
sketch\wifi_code.cpp.o:(.bss.AsyncElegantOTA+0x0): multiple definition of `AsyncElegantOTA'
sketch\AsyncIOTWebServer.ino.cpp.o:(.bss.AsyncElegantOTA+0x0): first defined here
collect2.exe: error: ld returned 1 exit status
exit status 1
Error compiling for board ESP32 Dev Module.

Hi. If you're using the WebServer.h library, it is better to follow this tutorial instead: Regards, Sara

Thanks for the answer, Sara. Will this also be compatible with <ESPAsyncWebServer.h>? It may seem as if I cannot make up my mind what I want to use, but I am only using WebServer.h to add new WiFi credentials if I am not within range of a currently known AP; the project runs on <ESPAsyncWebServer.h> once an AP has been established.

Hi. I'm not sure if using both libraries at the same time conflicts. The OTA procedure in this tutorial is compatible with the ESPAsyncWebServer. I haven't tested if it is compatible with WebServer.h. You have to try it and see what happens.
We have this tutorial about Wi-Fi functions with the ESP32 that might be useful for your projects: Regards, Sara

OK, thanks, I will try it. I have already been round and round with so many different ways of trying to accomplish this – one more attempt it is. You & Rui have been an excellent source of quality information! Thanks!

Hi, nice tutorial and just what I need. I consider myself a novice where the ESP32 is concerned. I have just one question: I need to upload temperature logs to a server hosted outside of my network, so I will do an HTTP GET request with temp values. Can I use the OTA library as well as do the above-mentioned tasks? Temp logs get uploaded every 5 minutes.

OTA means a compiled binary file that contains a PROGRAM will be transferred over WiFi into a reserved part of the ESP32 flash memory. After storing the new compiled program into the OTA area successfully, this compiled program is transferred into the regular program flash memory.

compiled binary PROGRAM --> any computer --> WiFi --> ESP32 --> flash PROGRAM memory

That is a completely different thing from ESP32 temperature data in RAM memory --> WiFi --> external server.

Different direction: computer --> ESP32 versus ESP32 --> server. Different data location: flash versus RAM. Different data NATURE: PROGRAM versus temperature DATA. The two tasks could hardly be more different.

Hi Sara, I am using the ESP32-CAM for my test. All programs work fine on it. When I do an OTA update it hangs at 58% (several attempts done). Reloading the web page gives me the update page again with Browse. Do I need to change something for the ESP32-CAM?

I had problems when updating firmware: "abort() was called at pc 0x40143f45 on core 1". The ESP32 immediately resets. What do I need to do to make it work properly?
https://randomnerdtutorials.com/esp32-ota-over-the-air-arduino/
The Storage Team Blog about file services and storage features in Windows Server, Windows XP, and Windows Vista.

In this posting I will show some exploratory uses of the Dfsutil tool. If you are working on a Windows Server 2008 system you have Dfsutil already available. On a Windows Vista SP1 system you need to install the RSAT pack. To install the RSAT pack you can refer to for simple installation guidelines.

Once Dfsutil is installed we can start doing some simple experiments. First let's have a look at the help:

Dfsutil /?

This is the best way to start exploring the tool. The help shows you the nine main commands Dfsutil offers: The commands are organized in a tree-like structure. So if you issue Dfsutil cache the result is: Then you can select one of the three cache commands. If you pick Referral, for instance, the full command line will be Dfsutil cache referral. This is the new Dfsutil interface. Dfsutil also supports the old interface. You can obtain the help for the old interface by doing Dfsutil /oldcli.

With Dfsutil you can create/modify/remove DFS namespace roots and links, add/remove targets, modify/view site costing properties, modify/view DFS registry keys, etc. It's a powerful tool! But let's start with something simple. How about listing all the namespaces in a domain? For that, use the Domain command:

Dfsutil domain DomainName

That will give you the list of namespace roots for the domain DomainName. You can use the FQDN if you prefer. Also, you can list the namespace roots hosted on a specific machine by doing:

Dfsutil server MachineName

MachineName is the root server. This command lists domain and standalone namespaces hosted on the root server. The root server can be a remote machine. Now, to look at the individual namespaces, you can do:

Dfsutil root \\DomainName or MachineName\RootName

Here's an example: Alright, so we have the standalone root myroot on the root server 432233e0630-79.
By using Dfsutil root, we can see that this root has a link called link0 and that the target for this link is the share dlink0. Doing net share dlink0 you find the directory the link \\432233e0630-79\myroot\link0 points to. Using Dfsutil and these few commands you can map all namespaces in your domain. Hope this helped you get started with Dfsutil.

More info: Technet documentation: Dfsutil Overview

-------
Marcello Hasegawa
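The post mentions that Dfsutil can also create roots and links, but doesn't show the syntax. As a sketch from memory (the exact switches vary between Dfsutil versions, so check Dfsutil root /? and Dfsutil link /? on your system; the server and share names below are made up), creating a standalone root and then a link under it looks roughly like this:

    Dfsutil root addstd \\MyServer\MyShare
    Dfsutil link add \\MyServer\MyShare\MyLink \\FileServer\Data

The first command turns the existing share MyShare into a standalone namespace root; the second adds a link named MyLink whose target is the share \\FileServer\Data.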
http://blogs.technet.com/filecab/archive/2008/06/26/dfs-management-command-line-tools-dfsutil-overview.aspx
? How is it that (Score:2, Interesting)

Why should kids learn programming when they'll only be able to compete for a programming job if they take an East Indian's dollar-a-week salary?

to learn objects. Wait, don't scream. I said "learn". I'd been doing object-oriented programming for years in Java and other languages. But I truly did not understand how all the pieces worked till I wrote Perl objects. In Perl it's like one of those "visible man" models. You learn how inheritance works. You learn how. (Score:3, Interesting)

Re:You know, though this is a du

have to do some actual building to make things work. Sample any number of the host of languages available under CP/M. Use a line editor. Print out program listings, etc. Use the computer as a *tool* to learn something else - like math. Number fun - magic squares - rectangular, triangular, perfect numbers. Find prime numbers and Pythagorean triplets, etc. Do number base conversions. Learn dimensional analysis and units, etc, etc, etc. Let the child enjoy saying "Whoopie. I made this."

Re:You know, though this is a du:Flame Baby Flame (Score:2, Interesting)

One scary article I encountered (on ora.com) suggested starting kids out on tcl/tk. YMMV.

Re:Despite the Dupe - I *Hated* BASIC; PASCAL Baby (Score:3, Interesting)

It was a catastrophe. When I first started composing this, I was going to blame it on BASIC itself, then on the crappy line editor I was trying to use. But as frustrating as these things were, my greatest shortcoming was that I had no adult supervision. When you try and teach yourself, rather than learning from an expert, you tend to not realize when you've missed something very, very important. I feel a deep sense of shame even today for admitting this level of stupidity, but I didn't know what a subroutine was.
Knowing that I could have called the same snippet of code from different parts of the program would have saved me much heartache, but I had the concept of a flowchart firmly in my head, and it seemed to demand a single, unbroken flow of execution. Which demanded cut-n-paste. Which I couldn't do with that crappy line editor. Thinking on it, I should probably try tackling that project again, so that next time I sit down to write a long anti-BASIC diatribe, I'll at least know what the hell I'm talking about.

Re:Perl OO (Score:2, Interesting)

In Perl there are references. A reference is created by \ on a container, or you can create a reference with anonymous array and/or hash syntax. A reference can have a namespace associated with it. This is done with bless(). Such a blessed reference is called an "object". Subroutines can be written to work on objects. They expect their first parameter to be the object being worked on. Subroutines that expect an object as their first parameter are called "methods". Often this parameter, by convention, is named $self. If you use an '->' between an object variable and a subroutine, then the parser rewrites this to provide the object as the first argument to the subroutine. A method call is first searched for in the package the object is blessed into. If it is not found there, the package's @ISA array is examined. Each namespace in the @ISA array is searched (while in turn any @ISA's in that namespace are searched if the method is not located in the namespace) until the first method is found, or none is. That's it. Everything else you can put together from general OO techniques.

Here's a small Point class. _init() is separated from new() so that any sub-classes of Point (those packages that have a @ISA list with 'Point' as an element) can override it without having to rewrite new(). Alternately, a sub-class could do some additional work and call $self->SUPER::_init(...) to call _init() in some super class.
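The Point class the poster refers to did not survive the comment software. A reconstruction along the lines described might look like this (the x/y members and their defaults are assumptions for illustration; the new()/_init() split follows the post):

```perl
package Point;
use strict;
use warnings;

# Constructor: bless an empty hash into the class, then delegate
# member setup to _init() so subclasses can override it.
sub new {
    my ($class, %args) = @_;
    my $self = bless {}, $class;
    $self->_init(%args);
    return $self;
}

# Initializer: kept separate from new() so a subclass can extend it
# and still call $self->SUPER::_init(%args) to do the base setup.
sub _init {
    my ($self, %args) = @_;
    $self->{x} = $args{x} // 0;
    $self->{y} = $args{y} // 0;
}

# Simple accessor methods; the object arrives as the first argument.
sub x { $_[0]->{x} }
sub y { $_[0]->{y} }

package main;
my $p = Point->new(x => 3, y => 4);
print $p->x, ",", $p->y, "\n";   # prints 3,4
```

A subclass would just set `our @ISA = ('Point');` and override _init(), exactly the dispatch path through @ISA that the post walks through.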
https://slashdot.org/story/06/09/15/186217/david-brin-laments-absence-of-programming-for-kids/interesting-comments
About THE CYCLOTRON BIKE - Revolutionary Spokeless Smart Cycle

STOP! - Please take your time and read through all the features of this groundbreaking vehicle. We know there are a lot; please ask if you don't understand. Also take a look at the FAQ & Updates for the project.

THE CYCLOTRON BIKE - MEDIA FEATURES

The Cyclotron - The Future of Cycling

All areas of our life are progressing and evolving very fast; new inventions appear literally every day, outdating the ones from yesterday. But looking at bicycles, they haven't evolved at nearly comparable speed within the last decades. That's why we've created the world's most advanced and versatile Smart Bike. The Cyclotron is fully connected, packed with tons of innovative features and controlled by the powerful Cyclo-App.

The Features - Revolutionary Design & Unmatched Functionality

We see a bicycle as a "utility" that should enable you to reach your goals, whether it is to get your groceries, take your kids to the playground or simply have fun riding it. To handle these different "tasks" we made the Cyclotron the most versatile and technically advanced bike available today. During the last years working on the Cyclotron project, we've developed a great number of ideas and solutions that make this bike truly unique. We're currently closely checking on more than 15 patents.

( * = optional service: … months, 80€/year) ( ** = optional service: 3€/month, 8€/3months, 25€/year)

The Mission - World's most advanced and versatile Smart Bike

The Cyclotron Bike is its own small universe, where technology and creative minded people can form a unique experience for every rider. It's an open source Smart Bike Platform that everybody can connect and contribute to, to improve the Cyclotron Universe with each day. In order to create the foundation for the "Cyclo-Verse" we had to overcome the boundaries of conventional bikes and tear down some walls. This is where we started our work more than 3 years ago.
We quickly noticed that the possibilities for improvement were quite limited when sticking with the "regular" bicycle platform. We questioned every aspect of the "motion on two wheels" and investigated the latest high-tech materials and production methods from many other kinds of vehicles, by sea, land & air. Materials and production methods have rapidly evolved within the last decade. Whenever speed & performance is a relevant issue, we always encountered the extensive use of carbon fiber in these constructions.

The Frame - Space Grade Carbon Fiber Monocoque Construction

The Cyclotron Bike is made of Space Grade Carbon Fiber Composite, which makes it rigid, stiff & ultra lightweight at the same time. While sorting out the optimal production process for the Cyclotron Monocoque Frame, we did a lot of testing on material properties and identified that we could achieve the stiffest, most lightweight and durable construction with a "Carbon Fiber Sandwich". To achieve a true Space Grade Composite Construction we combined two layers of carbon fiber with an ultra lightweight core structure. This way we can use fewer layers of carbon fiber and less resin, without impairing stability.

Advanced aerodynamic frame shape

But we didn't stop with a light construction. To minimize air resistance, drag & turbulences, we use aerodynamic wing-shaped tube profiles. They literally "slice" through the air like a knife when you go fast. The tube shapes of the Cyclotron frame are a combination of ultralight glider planes with the edgy lines of stealth jets. Paired with the advanced carbon fiber frame construction, the Cyclotron is lightyears ahead of any conventional bicycle available today.

Frame Sizes

The Cyclotron is available in three different frame sizes: S, M, L for you to choose from. When you're right in between two sizes, you can choose whether you'd like a more agile or comfort-oriented bike, by choosing the smaller or larger frame size.
Fully Integrated - No Cables, No Wires, No External Components

The Cyclotron is the world's first fully integrated bike. Yes, many other manufacturers claim their bicycles to be fully integrated too, but they don't manage to integrate the drivetrain components, like the chain and derailleur, into the frame. With the groundbreaking construction of the Cyclotron we managed to integrate literally everything. All wires and cables run inside the frame, which also results in tremendously enhanced aerodynamics, no more dirt exposure and a clean and uncluttered overall look. NO brake cables, NO shifting cables, NO light cables. We even managed to integrate the modified caliper brakes to be completely concealed within the hubless wheel housing, while still being accessible for ease of maintenance.

One Bike - Two Riding Styles

The Cyclotron Bike features two different riding styles. SPORT - a very streamlined and low-profile one for optimal speed and performance. COMFORT - a more relaxed and upright one for cruising and better comfort. This gives you the choice of two different bicycles to pick from at any time. Just two small adjustments change the entire seating geometry of your Cyclotron bike. Just flip the bars and lower the seating height! It's really that easy.

THE USM's - Utility Slot Modules

One of the big advantages of the Cyclotron's hubless wheels is the usable space that they offer. Instead of having just "swirled air" between your wheels, you can add different USM's (Utility Slot Modules) to your bike. The USM's are attached tool-free to the inside of your front or back wheel within seconds. The Utility Modules can be securely locked inside the hubless wheel; when not needed, you can easily store them away, so you're not riding around with unnecessary weight on your Cyclotron. From the launch you can choose from four USM's. Within our Stretch Goals three more modules can be unlocked.
You can order multiple modules together with your Cyclotron Bike by simply adding the amount to your pledge. These USM's (Utility Slot Modules) are available for the Kickstarter launch:

The Polygon Basket - The simplest way to add stowage (+ 49 €)

The Polygon Basket is made from composite fiber and is both lightweight and durable. The dimensions are perfectly balanced, so you'll have maximum space for stowage without getting too heavy when fully loaded. Within one Polygon Basket you'll get enough space to stow two large grocery bags or two 6-packs of water. If you mount them in both wheels it equals 24(!) bottles of water, and you still have some space left in between.

The Butterfly Basket - The most elegant stowage that can fold away (+ 89 €)

Like the Polygon Basket, the Butterfly Basket is also light and durable. The amazing feature is that it can be completely folded away, to a slim disk of just 2cm thickness, when not needed. Please note: the front and back end of the Butterfly Basket will be closed by a flexible mesh that doesn't interfere with the folding mechanism and keeps your belongings inside the basket.

The Wingman - Innovative child seat system (+ 399 €)

The Wingman is a groundbreaking child seat system that is fixed to the back wheel of the Cyclotron. You can choose whether you'd like to mount it on the left, the right or even on both sides of the bike. When your children are still too young to pedal for themselves, it has always been a problem taking them with you for a ride. As most of the Cyclotron team has kids, they encounter this problem, too. The "easy-to-attach, quickly-removed" solutions are often insecure & sketchy and don't give you the confidence of having your child safely secured. On the other hand, safe and properly attached seats require a lot of time to mount and can't be removed easily when not needed. In most cases they're also quite heavy, bulky and hard to stow away when space is limited.
The Wingman Seat will come with an independent safety test certificate, comparable to the European "GS" and "TÜV", to ensure it meets the highest safety standards. The seat is adjustable so it can fit your child from the age of 2-8 years. Underneath the seat there is a large additional stowage space, which compensates for not being able to put any Basket USM inside the back wheel while having the Wingman attached to your bike.

There are many more modules to come after we've shipped your Kickstarter orders. We've already gathered tons of ideas for a lot of cool & practical USM's, and we hope that the Cyclotron Community will come up with amazing ideas themselves, too.

The USM Maker Store - Create your groundbreaking Utility Modules

The USM Maker Store is a place where you let your ideas for revolutionary USM's come true. Everything is possible, from 3D printed wind power generators to salmon leather messenger bags. You decide whether to manufacture & ship by yourself, or hand the business to us. We believe that riders know best what riders need. So we'd like to encourage the whole community to develop their own Utility Slot Modules to make the Cyclotron Bike a continuously evolving organism that perfectly adapts to your lifestyle. If you have a truly great but as yet unfinished concept for your USM, you could also team up with other makers to finish the remaining work together. These small groups can sell their USM's as a "Maker-Group" and share the earnings. This is another way to get your idea in front of the community, too.

The Decals - Individual Vinyl Art Pieces for your Cyclotron Bike

We believe the Cyclotron is not just a technical revolution of gears and bolts, but also a platform for visual art & creativity. (Decal Sets starting at 49 €) The Decals for your Cyclotron Bike are printed on ultra durable outdoor vinyl, so they'll last for years. They're self-adhesive, very easy to apply and removable without any residues.
We use the same quality of vinyl that is used in professional car-wrapping and facade advertising. We'll launch the Decal Creator Store, so everybody can create their very own and individual look for the Cyclotron. You can easily choose an existing design, or create your own through the online editor. Just upload your artwork and we'll print & ship your quality vinyl decals in no time. If you like to share your design with the community, you can even sell it within the Creator Store and make other riders happy. With a few easy steps you can turn your bike into a "Rolling Art Piece" that reflects your personal style and is highly individual.

The Gear Box - We unchained the Bicycle

One of the biggest problems of regular bicycles we solved is the exposed and vulnerable power transmission from the pedals to the rear wheel via a metal bike chain and gears. The oily chain and gears inevitably collect dust & dirt from the road, which reduces the efficiency of power transmission with every ride. The shifting components of a regular bike, like the derailleur and transmission cables, are also easy prey for dirt and damage. On a regular bicycle, they have to be serviced, greased & re-adjusted frequently. We chose to equip the Cyclotron with a sequential gear box, which is the latest development for bicycle power transmission available on the market today. That results in NO MORE exposed mechanical components and VERY LITTLE need for maintenance. We recommend changing the gear oil only every 10,000 km / 6,300 mi, or at least once a year. You don't need to be a bike technician; there are just two screws that need to be removed. We offer the Cyclotron Bike with three different Gear Box setups.

In this video you can see how easy shifting is with the 18-speed electronic E-Gear Box. The shifting process is accomplished by a high precision step motor in less than 0.2 seconds. The paddle shifters operate with the ease of a simple mouse click.
The Cyclotron's gear box offers up to 18 "real gears", which means that all of them can be used without reduced efficiency. A classic 3x10 derailleur system offers, due to chain skew and overlap, only up to 15 gears.

The Cyclo App - Take Control

The Cyclotron App syncs seamlessly with all integrated on-board sensors of your bike. The data for each ride is displayed in real time and is automatically saved to your Cyclo-Log. But we don't want the Cyclotron App to be "just a nice display" on your handle bars; this is why we made it really smart. It continually learns from your habits and adjusts accordingly. The App analyzes your rides and after a short learning phase it assists you with: - Suggestions for optimal gear selection (mechanical Gear Box) - Fully automatic up- & downshifts (only with E-Gear) - Smart battery charging & automatic dynamo engage/disengage

The Sensors - Track your rides in real time

Your Cyclotron is loaded with more than 10 sensors that operate on the Bluetooth Smart / LE standard. All relevant cycling data is displayed within the Cyclo-App while you ride, and will automatically be saved for later reviewing or sharing. While other people need to spend hundreds of dollars on aftermarket sensors that have to be installed with lots of wires & mounts on the exterior of the bike's frame, the Cyclotron comes with pre-installed & fully integrated sensors.

The Smart Cyclo-Coach - Get the most out of every ride

Every backer on Kickstarter receives a 1 year Cyclo Smart Coach subscription for free with the Cyclotron bike. If you like to challenge yourself and take your riding abilities to a whole new level, then there is no better way to train than with the Cyclo Smart Coach. Track your rides with the App and master the challenges the coach throws at you. The smart coach analyzes your riding abilities and adapts the training to your individual fitness level.
At the end of each week you can give the coach feedback on whether the number of workouts and their intensity was manageable.

The Quality - Warranty & Returns

We believe in the quality of the Cyclotron, so we're offering our customers an extensive warranty. - 3 year warranty - Lifetime frame warranty - Free replacement in case of theft (*) (**) - 10 days return (*) ( * = costs for shipping not included) ( ** = mandatory subscription "Theft Prevention & GPS Bike Finder")

The Demo Days - Test-ride the Cyclotron

To further ensure that our backers don't need to buy "a pig in a poke", we'll host the Cyclotron Demo Days for everyone who likes to test-ride the bike prior to deciding about the frame size, color or which USM to order. We understand that the Cyclotron is a highly innovative bike and can't be compared to anything currently available on the market. You'll be able to modify your initial order right on the spot, to make sure you'll be happy with your Cyclotron Bike. If your test ride doesn't 100% convince you, we'll refund you on the same day. With no questions asked! (via Kickstarter or Paypal)

Why Kickstarter - We want you!

By conceiving the Cyclotron as an "Open Source Smart Cycle Platform", we believe that riders know best what riders need. We'd like the community to evolve together with the bike and, on the other hand, help evolve the bike with their ideas and creativity. We've already secured more than 1.5 million Euros in funding, plus our own funds we've put up. This money is for taking on the last technical tweaks of the bike and the App/Web development. The pledges we collect on Kickstarter are solely used for production of the final Cyclotron Bikes. The Cyclotron Platform should work like a perfect "cycle", where the inspiration from riders, makers and digital creatives merges and creates a better experience for everyone. We're also looking forward to hopefully a whole bunch of feedback & inspiration from our backers during and after the campaign.
This is why crowdfunding our project on Kickstarter is the perfect way to start.

The Stretch Goals - Let's go for it

The stretch goals marked as "Free" will be added free of charge to the Cyclotron Bike; the "Unlocked" goals are optional for you to choose and require additional payment (an increase of your pledge amount). LET'S GO FOR IT!

Unlocked Goals:
- 55,000 €: Free RGB LED Halo Lights: programmable RGB halo rim lights; choose any color you like and adjust the brightness via the Cyclo-App. The LED light color can also be linked to speed, cadence or power.
- 60,000 €: Charge your bike's battery with the Solaris USM. This drop-in solar panel charges your battery in no time, even when you don't ride and just hang out on a sunny day in the park.
- 70,000 €: A FREE set of vinyl decals ships with every Cyclotron Bike.
- 75,000 €: The Messenger USM is a slim and sleek module to carry your laptop or tablet when heading to the office.
- 80,000 €: Every Cyclotron Bike ships with a FREE aero bottle cage + aero bottle.
- 85,000 €: Tandem Mount feature integration FREE for every Cyclotron Bike. The front fork can be attached to the back wheel of a second Cyclotron to get an articulated tandem bike.
- 90,000 €: Turn Signals feature for FREE. Use your Halo Lights to indicate where you're heading and increase your safety in road traffic.
- 100,000 €: Drop-in E-Motor USM unlocked. (For technical specifications please see the FAQ.)

The Timeline - What's next

We set up a realistic time frame for the Cyclotron project, so we can handle even unforeseen events without running late with the fulfillment to our backers.

The Team - The Creators

Idea / Design (France) - She created the initial concept / idea of the Cyclotron Bike and the Utility Modules. As a former triathlete, Sina truly appreciates innovations in cycling & sports. Inspired by the many concept cycles encountered on design blogs, she decided to gather a team to bring the bike of the future to life.
Engineering (Germany) - A carbon fiber expert with a strong love for lightweight & fast vehicles. As a student at RWTH Aachen he already helped build an ultralight glider plane with his classmates. In the past years he has worked with an automotive concept design team on an electric car. He is supervising the design and development of the advanced CFC frame & E-Gearbox of the Cyclotron.

App Dev (Germany) - Former fin-tech developer, these days an IT consultant for Germany's Commerzbank. At Cyclotron he'll lead the development of the UI and the SaaS components. He recently took a sabbatical with his family in Central & South America.

Web Dev (Netherlands) - He is the guy supervising the setup of the storefronts and e-commerce sections of the Cyclo Stores. Jost gained most of his experience from working for furniture manufacturer Ikea, where he helped develop the web shop and kitchen editor.

Sourcing (China) - Richard worked several years within the automotive industry at Toyota Motor Corporation and Tata Motors. Eight years ago he co-founded a sourcing agency to match manufacturers from Europe with suppliers from Asia. After he retired he joined the Cyclotron Team with his impressive knowledge and experience.

QC (China) - Huan has been working at Richard's agency as Head of Quality Control for three years. She has extensive knowledge of a wide range of trustworthy suppliers all over Asia and is currently a sourcing agent in Hong Kong and Taiwan.

Special thanks go to Zack Hemsey (creator of "Mind Heist") for the amazing soundtrack of our campaign video. Please help us and spread the word about the Cyclotron project! Please share the Cyclotron Project with all of your friends and make sure to like us on Facebook and follow us on Twitter.

Risks and challenges

The Cyclotron Team has successfully been working together for more than three years now and everyone is truly dedicated to the project.
We've carefully picked our partners, always looking for the most reliable and long-term collaboration and not just the cheapest quote. Our partners for production and fulfillment have decades of experience and can adapt to small and large order quantities. In any case we'll keep our backers informed & updated with all the latest info & developments of the project. Our commitment is to deliver on time and exceed your expectations with the quality and performance of the Cyclotron.

Funding period: 30 days
https://www.kickstarter.com/projects/1989795590/the-cyclotron-bike-revolutionary-spokeless-smart-c/
1.00 Introduction to Computers and Engineering Problem Solving
Quiz 1 March 7, 2003: SOLUTION
Name: Email Address: TA: Section: Teaching Assistant

You have 90 minutes to complete this exam. For coding questions, you do not need to include comments, and you should assume that all necessary files have already been imported. Good luck.

Question Points
Question 1 / 14
Question 2 / 9
Question 3 / 9
Question 4 / 33
Question 5 / 35
Total / 100

1.00 Spring 2003 Quiz 1 1/8 5/26/2003

public class Quiz1 {
    private static int quizCount = 0;
    private final int pageCount;

    public Quiz1(int pc) {
        pageCount = pc;
    }

    public void printInfo() {
        // omitted
    }

    public static void printStats() {
        // omitted
    }

    public void partF() {
        int x = 5, y = 10;
        double z = (double)(x/y);
        System.out.println(z);
    }
}

Refer to the code above when answering the true or false questions below. Circle TRUE or FALSE for each of the following statements:
A. The static data member quizCount is associated only with a specific instance of the class. FALSE (a static member is shared by all instances of the class)
B. The final data member pageCount cannot be changed after it is initialized. TRUE
C. The non-static method printInfo can access static data members of the class. TRUE
https://www.coursehero.com/file/6588655/quiz1sol-sprng03/
I have Thinking Sphinx set up with my search form, but it is returning all users instead of only the users matching the data I searched for.

In the users controller:

Code Ruby:
def index
  @users = params[:query].blank? ? User.all : User.search(params[:query])
end

user_index.rb (inside /indices):

Code Ruby:
ThinkingSphinx::Index.define :user, :with => :active_record do
  # fields
  indexes name, :as => :user, :sortable => true
  indexes [ethnicity, religion, about_me, sexuality, children, user_smoke, user_drink, age, gender]

  # attributes
  has id, created_at, updated_at
end

I have nothing for Thinking Sphinx inside my User model; I'm not sure if I need to go that route. Any help would be appreciated.
http://www.sitepoint.com/forums/printthread.php?t=1163061&pp=25&page=1
Estimate the autocorrelation time of a time series quickly.

Project description

This is a direct port of a C++ routine by Jonathan Goodman (NYU) called ACOR that estimates the autocorrelation time of time series data very quickly. Dan Foreman-Mackey (NYU) made a few surface changes to the interface in order to write a Python wrapper (with the permission of the original author).

Installation

Just run

pip install acor

with sudo if you really need it. Otherwise, download the source code as a tarball or clone the git repository from GitHub:

git clone

Then run

cd acor
python setup.py install

to compile and install the module acor in your Python path. The only dependency is NumPy (including the python-dev and python-numpy-dev packages, which you might have to install separately on some systems).

Usage

Given some time series x, you can estimate the autocorrelation time (tau) using:

import acor
tau, mean, sigma = acor.acor(x)
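To see what the package is estimating, here is a hedged NumPy-only sketch of the quantity involved. The direct-sum estimator below is my own illustration of the integrated autocorrelation time, not the ACOR algorithm itself (which uses a faster recursive scheme with an automatic windowing rule):

```python
import numpy as np

# For an AR(1) process x[t] = a * x[t-1] + noise, the true integrated
# autocorrelation time is (1 + a) / (1 - a); with a = 0.9 that is 19.
rng = np.random.default_rng(7)
a = 0.9
n = 100_000
noise = rng.standard_normal(n)
x = np.empty(n)
x[0] = noise[0]
for t in range(1, n):
    x[t] = a * x[t - 1] + noise[t]

def integrated_time(series, window=400):
    """Direct-sum estimate: 1 + 2 * sum of normalized autocorrelations."""
    centered = series - series.mean()
    m = len(centered)
    # unnormalized autocovariance at lags 0..window-1
    acov = np.array([centered[:m - k] @ centered[k:] for k in range(window)])
    rho = acov / acov[0]
    return 1.0 + 2.0 * rho[1:].sum()

tau = integrated_time(x)
print(tau)  # typically near (1 + a) / (1 - a) = 19 for this process
```

The fixed truncation window here is the weak point this kind of estimator has, and choosing it automatically is exactly what routines like ACOR handle for you.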
https://pypi.org/project/acor/
We think our WebSocket feature is pretty awesome; it gives you a real-time stream of the audio from your phone call (and allows you to stream audio back) all within your web framework. Having access to this real-time stream opens up a vast world of possibilities to do interesting things with the content of the call, not just the signaling. For example, you can have two-way conversations with AI bots, or perhaps you just want to feed the audio of the call to another platform for real-time sentiment analysis, or maybe you just want to watch for keywords within a call so that you can track conversations with your customers. In the majority of these examples, the first thing you will need to do is convert the audio into text; this is known as speech recognition or transcription. Usually, services need to perform speech recognition in real-time, but to do so, they must limit their lexicon to a few predefined words or phrases. You might have encountered this when calling your bank, and the auto-attendant asks you what you want to do. Transcription can handle the full conversation but historically it was an offline batch process; you would have to record the audio of the call, and then when the call has ended pass that recording to a transcription service. After the service had transcribed the recording, they would then notify you via a callback. With the recent developments of AI platforms we are now able to get the best of both worlds, real-time, full-text transcription. One of the platforms that is doing this especially well is IBM Watson. Watson exposes a WebSocket interface that allows you to feed it the audio stream of the call. The format of this interface looks a lot like the Nexmo WebSocket interface. Connecting to Watson IBM provides the Watson speech-to-text service over several different channels, such as REST, HTTP with webhook callbacks, and WebSockets. 
Broadly, they all work in the same way: you pass in a chunk of audio and Watson responds with a transcription. There are various options you can enable, such as interim results which give you a partial transcription that then may be updated when Watson has a better idea of the speech with more context. You also need to specify a language model to be used for the transcription. Watson has models for numerous languages including separate UK and US English. Where the source audio is coming from a phone call, you should use the Narrow Band models for the best results. For this demo, we will connect to the WebSocket interface. This means that we can stream the audio from Vonage straight into Watson without having to do anything like silence detection to break the stream up into chunks. Because Watson responds with the transcription data on the same WebSocket connection as you send the audio, we can’t quite directly connect Nexmo to IBM. Instead, we need to run a relay server to receive the audio from Nexmo and forward the packets onto Watson; then we can receive the transcription messages back from Watson and handle them in our application. You’ll find the code on Github. Let’s walk through what is going on below. Handling the Call Like all Nexmo voice applications, we need to set up an application with an answer URL that will return an NCCO. We will be serving that NCCO from our web app server. This NCCO will instruct Nexmo to play a short hello message then connect the call to our WebSocket. Here’s the NCCO: [{ "action": "talk", "text": "Please wait while we connect you to Watson" }, { "action": "connect", "endpoint": [{ "type": "websocket", "uri" : "ws://example.com/socket", "content-type": "audio/l16;rate=16000", "headers": {} }] }] As you can see this is a reasonably straightforward NCCO, we greet the caller and then connect the call to our WebSocket server. 
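As a sketch of how an answer URL might produce that NCCO server-side, a small helper can build the same JSON; the function name and host argument below are my own illustration, not code from the linked repository:

```python
import json

def build_ncco(ws_host):
    # Greet the caller, then connect the call audio to our relay WebSocket.
    return [
        {"action": "talk",
         "text": "Please wait while we connect you to Watson"},
        {"action": "connect",
         "endpoint": [{
             "type": "websocket",
             "uri": "ws://{}/socket".format(ws_host),
             "content-type": "audio/l16;rate=16000",
             "headers": {},
         }]},
    ]

print(json.dumps(build_ncco("example.com"), indent=2))
```

Whatever web framework serves the answer URL just needs to return this list serialized as JSON.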
Handling the Connection With Our WebSocket Server

When Vonage connects the call to our WebSocket server, we then need to initiate a new connection to the Watson WebSocket interface. To connect to Watson, we need to request a token using our username and password. You can get these by signing up for a set of Watson service credentials. These credentials will look something like the object below:

{
  "url": "",
  "username": "aaaaaaaa-1111-bbbb-2222-cccccccccccc",
  "password": "ABC123def456"
}

Using this object, we can then build a function to request and return a token:

import requests

def gettoken():
    # ask the token service for a short-lived auth token
    resp = requests.get('', auth=(d['username'], d['password']),
                        params={'url': d['url']})
    token = None
    if resp.status_code == 200:
        token = resp.content
    else:
        print(resp.status_code)
        print(resp.content)
    return token

We can use this function to construct the URI for the Watson WebSocket service:

uri = 'wss://stream.watsonplatform.net/speech-to-text/api/v1/recognize?watson-token={}&model={}'.format(gettoken(), language_model)

We have already specified the language_model in another variable at the beginning of the code. With this URI we then create a new WebSocket connection to Watson and store that connection as an object (self.watson_future) within our incoming WebSocket connection.

Handling Messages

When a message arrives on the WebSocket from Vonage, we will handle it with the on_message function within our WSHandler. Firstly, we call yield on our watson_future object so that we have a reference to the Watson connection; then we parse the message. The first message that we receive from Vonage on a new connection will be a text message containing the audio format; we need to add a few additional parameters to the message to tell Watson how we want it to transcribe the stream, then we write that new message to the Watson socket.
That message will look something like this: { "interim_results": true, "action": "start", "content-type": "audio/l16;rate=16000" } The key parameter here is the “action”: “start”; it tells Watson that this is the start of a transcription stream. We have also enabled interim-results, this means that Watson will send you its first guess at a transcription and then potentially update that in a later message when it has a better answer. As you may receive multiple messages from Watson for a single transcription, you will need to look at the IDs to construct your text. Responses from Watson When the socket connection to Watson receives a message it will invoke the on_watson_message callback. This function solely prints the message to the screen at the moment, but you could extend out from this example to handle the transcription however you wanted. Once you have successfully connected to Watson you will receive a message like the one below: { "state": "listening" } Then as you stream audio to Watson you will receive transcription messages as follows: { "results": [ { "alternatives": [ { "confidence": 0.617, "transcript": "hello this is the test " } ], "final": true } ], "result_index": 0 } The key things to look for in these responses are as follows: Confidence—this is how sure Watson is that the transcription is accurate; a value of 1 represents maximum confidence. As you can see from the test above I said “Hello This is a test”, but Watson got it slightly wrong. However, it had a confidence of only .617; the response still makes sense and the essential parts of the message are there. Sometimes you will even get more than one transcript option with associated confidence values; it's up to you to decide which one to use. Similarly, you can use the confidence value to decide how to proceed; you might want to ask the user the question again for example. Final—this means it’s the final pass at transcribing that phrase. 
Sometimes you’ll get interim results where Watson has transcribed only part of the message like below: { "results": [ { "alternatives": [ { "transcript": "one two three four " } ], "final": false } ], "result_index": 3 } { "results": [ { "alternatives": [ { "transcript": "one two three four five six seven eight " } ], "final": false } ], "result_index": 3 } { "results": [ { "alternatives": [ { "confidence": 0.982, "transcript": "one two three four five six seven eight nine ten " } ], "final": true } ], "result_index": 3 } In this example, I counted to 10 reasonably slowly so Watson sent a transcription event part way through my count. If you look at the result_index value you can see that it’s the same, indicating that these are multiple passes of the same bit of voice. Only the last one has final set to true and contains the full string. Ending the Call When the user hangs up the call Nexmo will close the WebSocket connection; we can use the on_close handler to capture this event and send a stop action to Watson before closing that connection.
https://developer.vonage.com/blog/17/10/03/real-time-call-transcription-ibm-watson-python-dr
gir has "introspectable=0" for Accounts.list()

Bug Description

The gir /usr/share/

"""
<namespace name="Gwibber" ...
<class name="Account" ...
<method name="list" ...
<type name="GLib.List" c: ...
<type name="gpointer" c: ...
</type>
</method>
"""

To test:

$ python -c 'from gi.repository import Gwibber; accounts='

This still isn't working, but it has been partially fixed. I think we need to add GI overrides for libgwibber.

This bug was fixed in the package libgwibber - 0.1.1-0ubuntu1

---------------
libgwibber (0.1.1-0ubuntu1) natty; urgency=low

* New upstream release.
* debian/libgwibber1.symbols, debian/libgwibber-dev.install, debian/libgwibber-gtk-dev.install
  - GIR fixes, use valac to generate the GIRs instead of g-ir-scanner, it generates more accurate metadata (LP: #702185)
* debian/
  - Added new symbol
* debian/
  - Handle the renamed .pc files
  - Install the .deps files along with the .vapi

-- Ken VanDine <email address hidden> Wed, 23 Feb 2011 22:16:55 -0500
https://bugs.launchpad.net/ubuntu/+source/libgwibber/+bug/702185
Published by Andra Stevens

1 CAPITAL BUDGETING AND CAPITAL BUDGETING TECHNIQUES FOR ENTERPRISE Chapter 5

2 Objective Capital Budgeting; Techniques of Capital Budgeting; Payback period; Return on Investment (ROI); Net Present Value (NPV); Profitability Index (PI); Internal Rate of Return (IRR)

3 Capital budgeting Capital budgeting is the process of evaluating long-range investment proposals for the purpose of allocating limited resources effectively and efficiently. Capital budgeting techniques are employed to assess the financial viability of a project. Suppose, for instance, a company wants to introduce a new soap, and launching the new product demands changes in the manufacturing process; the company will have to purchase new equipment in the form of fixed assets. Capital budgeting is a technique used to evaluate the value of investments and projects in fixed assets. It is also used to assess working capital requirements.

4 CAPITAL BUDGETING Is the activity worth the investment? Which assets can be used for the activity? Of the suitable assets, which are the best investments? –Screening decision –Preference decision Which of the best investments should the company choose? –Mutually exclusive projects –Independent projects –Mutually inclusive projects

5 Payback period In this technique, we try to figure out how long it would take to recover the invested capital through the positive cash flows of the business. Decision criteria: Of two projects, the one with the shorter payback period should be preferred.

6 Limitations 1. … 2. First and foremost, it does not take into account the concept of time value of money.

7 There is a project with an initial investment of $16,000. Its cash flows for eight years are 3000, 4000, 4000, 4000, 5000, 3000, 2000, 2000 respectively. Find out its payback period. Venture Capitalism!

8 The cafe example Q1.
An initial investment of $200,000 is required to start the business; $10,000 per month is expected to be earned for the first year, and $20,000 would be earned every month in the second year. The acceptance policy for the payback period is 16 months. Q2. Consider capital budgeting project A, which yields the following cash flows over its five-year life with an initial investment of $1000. Year: 0, 1, 2, 3, 4, 5; Cash flows: -1000, 500, 400, 200, 200, 100. Find out its payback period. If your company policy for an acceptable payback period is 3 years, will you reject or select this project?

9 Return on Investment A performance measure used to evaluate the efficiency of an investment or to compare the efficiency of a number of different investments. It implies the annual average cash flow a business is making as a percentage of investment. In other words, it is the average percentage of investment recovered in cash every year. The concept of return on investment is loosely defined, as there are a number of ratios that can be used to analyze return on investment. The formula for return on investment is as follows: ROI = (ΣCF/n)/IO Dividing the average annual cash flow by the initial investment, we can calculate the return on investment.

10 Limitations of ROI 1. It does not take into account the time value of money concept. 2. It ignores the risk associated with a project or investment.

11 Example of ROI Taking the same example of a café (the initial investment of $200,000, $10,000 per month profit in the 1st year and $20,000 per month profit for the second year), we can easily calculate the ROI. ROI = ((120,000 + 240,000)/2)/200,000 = 0.90 = 90% Where, $120,000 = cash flow for the 1st year at $10,000 per month; $240,000 = cash flow for the 2nd year at $20,000 per month; n = 2 years

12 Decision criteria A high ROI ratio is considered better, and 90% is a very good rate of return, but before deciding whether or not this project should be taken up, we should compare this project with the alternative opportunities on hand.
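The payback and ROI arithmetic on these slides can be checked with a short script; the helper names below are my own, not from the slides:

```python
def payback_period(initial_investment, cash_flows):
    # Walk the cumulative cash flows until the outlay is recovered,
    # interpolating within the period in which recovery happens.
    recovered = 0.0
    for period, cf in enumerate(cash_flows, start=1):
        if recovered + cf >= initial_investment:
            return period - 1 + (initial_investment - recovered) / cf
        recovered += cf
    return None  # not recovered within the horizon

def roi(initial_investment, cash_flows):
    # Average annual cash flow as a fraction of the initial investment.
    return (sum(cash_flows) / len(cash_flows)) / initial_investment

# Slide 7: $16,000 outlay, eight yearly cash flows.
print(payback_period(16000, [3000, 4000, 4000, 4000, 5000, 3000, 2000, 2000]))  # → 4.2
# Q1 (cafe, monthly flows): recovery takes exactly 16 months.
print(payback_period(200000, [10000] * 12 + [20000] * 12))  # → 16.0
# Slide 11: ROI of the cafe project.
print(roi(200000, [120000, 240000]))  # → 0.9, i.e. 90%
```

Note that the cafe's 16-month payback lands exactly on the 16-month acceptance policy from Q1.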
13 Net Present Value (NPV) NPV is a mathematical tool which uses the discounting process, something that we have found missing in the aforementioned capital budgeting techniques. Net Present Value is defined as the value today of the future incremental after-tax net cash flows less the initial investment. It determines whether the rate of return (ROR) on a project is equal to, higher than, or lower than the desired ROR. Accept if: –If NPV = 0, actual ROR = desired ROR –If NPV > 0, actual ROR > desired ROR. Reject if: –If NPV < 0, actual ROR < desired ROR. It does not determine the expected ROR.

14 NPV The formula for calculating NPV is as follows: NPV = -IO + Σ CFt/(1+i)^t Where, CFt = cash flows occurring in different time periods; -IO = initial cash outflow; i = discount/interest rate; t = year in which the cash flow takes place. The initial cash outflow, being an outflow, is always expressed as a negative figure.

15 Limitation The disadvantage with the NPV is that it is difficult to calculate, since these calculations are based on too many estimates. Decision criteria: If the NPV of a project is more than zero, it should be accepted. If two or more projects are under contemplation, then the one with the higher NPV should be accepted.

16 Example Taking the same example of a café: an initial investment of $200,000, $10,000 per month profit in the 1st year and $20,000 per month profit for the second year. Assume the discount rate is 10 percent. Where, CFt = cash flows occurring in different time periods, i.e., $120,000 in the first year and $240,000 in the second year; -IO = initial cash outflow = -200,000; i = discount/interest rate = 10 percent; t = 2 years. Putting the values into the formula: NPV = -IO + Σ CFt/(1+i)^t = -200,000 + 120,000/(1+0.10) + 240,000/(1+0.10)^2 = -200,000 + 109,091 + 198,347 = +$107,438 At the end of the 2nd year, the NPV is positive; you can also solve this example with monthly compounding if you want a more precise answer.
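The cafe NPV above is easy to verify with a few lines of code (function names are mine); the same present-value sum also gives the profitability index that a later slide introduces:

```python
def present_value(rate, cash_flows):
    # Discount year-end cash flows back to today.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def npv(rate, initial_outlay, cash_flows):
    return present_value(rate, cash_flows) - initial_outlay

def profitability_index(rate, initial_outlay, cash_flows):
    return present_value(rate, cash_flows) / initial_outlay

# Cafe project: $200,000 outlay, year-end flows of $120,000 and $240,000 at 10%.
print(round(npv(0.10, 200000, [120000, 240000])))                     # → 107438
print(round(profitability_index(0.10, 200000, [120000, 240000]), 2))  # → 1.54
```

Both numbers match the slides: an NPV of +$107,438 and a profitability index of 1.54.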
18 Example Let us suppose that you invest Afs 100,000 in a Savings Certificate. After 1 year you will receive a coupon payment (or profit) of Afs 12,000 and you also reclaim your initial investment (principal). Nowadays, the interest rate offered by the banks is 10%. Sol: NPV = -Io + CF1/(1+i) + CF2/(1+i) = -100,000 + 12,000/(1+0.10) + 100,000/(1+0.10) = -100,000 + 10,909 + 90,909 = +Afs 1,818 NPV is positive, so the investment is acceptable. NOTE: PV = NPV + Io = 1,818 + 100,000 = Afs 101,818

19 Profitability Index It is quite similar to the NPV in terms of concept and calculation. The profitability index may be defined as the ratio of the present value of future cash flows to the initial investment. The profitability index can be calculated using the following formula: PI = [Σ CFt/(1+i)^t]/IO Decision criteria: Those projects with a profitability index ratio of more than one (PI >= 1.0) are considered acceptable. Here it is important to mention that those projects which are ranked as acceptable using the NPV method would also be acceptable on the profitability index criteria.

20 Example Example of a café: an initial investment of $200,000, $10,000 per month profit in the 1st year and $20,000 per month profit for the second year. Assume the discount rate is 10 percent. The profitability index for the café example can be calculated as under: PI = [120,000/(1+0.1) + 240,000/(1+0.1)^2]/200,000 = (109,091 + 198,347)/200,000 = 1.54 PI = 1.54 > 1.0 If there were two or more projects that needed ranking, the one with the highest profitability index would be acceptable.

21 Internal Rate of Return (IRR) IRR is a widely used and important measure, which is more common in practice than the NPV. IRR, unlike NPV, which is expressed in dollar amounts, is always quoted in terms of a percentage, which makes it comparable to other market interest rates or the inflation rate.
The IRR calculation involves the same equation we used earlier for NPV. The only difference is that when calculating IRR we set the NPV equal to zero and solve the equation for the value of 'i'. In other words, the value of 'i' at which the net present value of the project equals zero is the internal rate of return of the project. 22 IRR is calculated by a trial-and-error method, or iteration. In a trial-and-error method, we try a value of 'i' and see if the equation comes to zero; if it does not, we try another value, and if the second value does not bring the equation down to zero, another, and so on. NPV = 0 = -IO + CF1/(1 + IRR) + CF2/(1 + IRR)^2 23 Solving the equation assuming IRR to be 10 percent, we obtain 107,438, which was the NPV we calculated for the café project. To bring the NPV down to zero, we need to apply a higher rate as the assumed IRR. If we assume IRR to be 50 percent, the equation can be solved as follows: NPV = 0 = -200,000 + 120,000/(1 + 0.5) + 240,000/(1 + 0.5)^2 The calculation gives -13,333, which is less than zero. To bring the value to zero, we should use a rate below 50 percent. Trying out various rates, we can finally reach 43.6 percent, at which the NPV comes down to about -48, which is close to zero. If we tried IRR values with more decimal places, we could bring the NPV exactly to zero; to a good approximation, however, 43.6 percent is the IRR of the project. 24 Example Consider the same Savings Certificate example for the IRR calculation. The only difference is that this time we do not assume any value for 'i' as we did in the NPV calculation. We set NPV = 0 and solve the equation for 'i' (the IRR). NPV = 0 = -IO + [CF1/(1 + IRR)] + [Principal/(1 + IRR)] We add Afs 12,000 and Afs 100,000, as both cash flows occur at the same time.
0 = -100,000 + [(12,000 + 100,000)/(1 + IRR)] IRR = (112,000 / 100,000) - 1 (No need for trial and error, because there is one equation and one unknown.) = 1.12 - 1.00 = 0.12 = 12% per annum
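The trial-and-error search on slides 22–23 can be automated. Because NPV falls as the discount rate rises, bisection between a low rate (where NPV > 0) and a high rate (where NPV < 0) converges on the IRR. This is an illustrative sketch, not part of the original lecture:

```python
def npv(initial_outflow, cash_flows, rate):
    return -initial_outflow + sum(
        cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1)
    )

def irr_bisect(initial_outflow, cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Find the rate where NPV = 0 by repeatedly halving the bracket.

    Assumes NPV(lo) > 0 and NPV(hi) < 0, i.e. the IRR lies in [lo, hi].
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(initial_outflow, cash_flows, mid) > 0:
            lo = mid  # NPV still positive: IRR is higher
        else:
            hi = mid  # NPV non-positive: IRR is lower
    return (lo + hi) / 2

# Cafe project: the search lands near the 43.6 percent found by hand
print(round(irr_bisect(200_000, [120_000, 240_000]) * 100, 1))  # 43.6

# Savings Certificate: one period, so it matches the direct solution of 12%
print(round(irr_bisect(100_000, [112_000]) * 100, 1))  # 12.0
```

For the one-period Savings Certificate case the loop is unnecessary, since the equation rearranges directly to IRR = (coupon + principal)/IO − 1 = 112,000/100,000 − 1 = 12%, exactly as the slide shows.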