Please do not reply directly to this email. All additional comments should be made in the comments box of this bug report.

Summary: Review Request: html401-dtds - HTML 4.01 document type definitions

------- Additional Comments From veillard redhat com 2006-02-12 14:35 EST -------

You can't mix SGML and XML resources in the XML catalogs. SGML definitions will generate fatal errors when loaded by an XML parser. Anything reachable from /etc/xml/catalog must be XML only, so html401-dtds can just share things with xhtml1.

W.r.t. using /usr/share/sgml for the XML catalog: this is the unfortunate result of SGML nutheads blocking XML from the LSB standard a few years ago, so Red Hat had to keep them there instead of a far more logical /usr/share/xml subtree!

W.r.t. XHTML 1.1 and SMIL 2.0: they are extensible languages, i.e. the basic language defined in the DTDs is supposed to be extended with foreign elements in different namespaces, which is something that doesn't work with DTDs. So the usefulness of shipping those two is very limited, as DTD-based validation will just fail in general (Relax-NG or XSD schemas would work better, though there aren't good ways to reference them for catalog access from the instances).

Please be very careful when trying to handle XML resources if you're competent in SGML but not really aware of XML. This is a very different field with very different rules and specifications; this has bitten us hard in the past and I don't want it to happen again.

Daniel (libxml2 author and member of W3C XML Core Working Group)

-- Configure bugmail: ------- You are receiving this mail because: ------- You are the QA contact for the bug, or are watching the QA contact.

Source: http://www.redhat.com/archives/fedora-extras-list/2006-February/msg00747.html
Learn ASP.NET MVC 5 Step by Step in 30 Days – Day 1
ASP.NET MVC is a highly sought-after skill in today's programming world. In this series we will learn MVC step by step over 30 days. In Day 1 we will look at where to download Visual Studio, how to create a simple first MVC application, and the issues that arise when we run our first MVC program.
Day 1 :- Displaying Simple HelloWorld using MVC
Introduction
Step 1 :- Download Visual Studio Ultimate edition from.
Step 2 :- Once you have installed Visual Studio, click on Start – Program Files and open Visual Studio Ultimate edition.
Click on the "Visual C#" menu, click on "Web" and select the "ASP.NET Web Application" template as shown in the below figure.
In case you are wondering where all the other templates of ASP.NET have gone, do not panic. In VS 2013 Microsoft introduced the concept of "One ASP.NET". Logically, "Web Forms", "MVC" and "Web API" all use the ASP.NET framework internally, so why should we have different templates for the same thing?
In case you ever want to switch to the old way, you can click on the "Visual Studio 2012" menu, where you will find those separate templates.
Step 4 :- Once you are inside Visual Studio you need to choose the technology in which you want to code. For now we will select MVC. Because this is our first MVC hello world program, we will not get involved in the complexities of unit tests and security.
So click on "Change Authentication" and set it to "No Authentication".
Step 5 :- Once you click OK, the Visual Studio project gets created as shown in the below figure. Let us first add a controller.
So right click on the Controllers folder -> Add -> Controller as shown in the below figure.
Select the "MVC 5 Controller – Empty" scaffold template as shown in the below figure. Scaffolds are nothing but templates. Because we are just displaying our first hello world program, we will use the empty template to keep things simple.
In case you want to see how scaffolding template works see this video
Once you have clicked on the empty scaffold, give the controller a name. But please do not delete the word "Controller" from the name: the "Controller" suffix is a naming convention that ASP.NET MVC's routing relies on to locate controller classes.
In this controller we have created an action called "SayHello", and this action returns a view named "HelloView". Below is the code for the "SayHello" action.
public class FirstMVCController : Controller
{
    //
    // GET: /FirstMVC/
    public ActionResult SayHello()
    {
        return View("HelloView");
    }
}
Step 6 :- Once you have coded the controller, the next step is to add the "HelloView" view. If you now look at your project folders you will see that Visual Studio has already created a folder with the name "FirstMVC" under the "Views" folder.
So right click on that folder and click -> Add -> View.
Give the view a name as shown in the below figure. Because this is our first MVC application, let us not select any model, and uncheck the check box "Create as a partial view".
In the view put the below code.
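The view markup is not shown in the original article; below is a minimal sketch of what a hello-world "HelloView.cshtml" might contain (the message text is just an example, not the article's original content):

```html
@{
    Layout = null;
}
<!DOCTYPE html>
<html>
<head>
    <title>HelloView</title>
</head>
<body>
    <div>Hello World from our first MVC application</div>
</body>
</html>
```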
Step 7 :- Press Ctrl + F5 to run the program. In order to invoke the controller and action we need to type the URL in the proper format: the server name, followed by the controller name minus the word "Controller", and then the action name.
So in our case it will be as below. If you type the URL in the proper format and press Enter, you should see your MVC application running.
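The article's original URL is omitted here; assuming the site runs on a hypothetical local port (1234 is made up), the URL would follow the pattern:

```
http://localhost:1234/FirstMVC/SayHello
```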
Below is a very common error you will get when you run your MVC application for the first time.
Let's try to understand what the above error says. It says that the "HelloView" view should be a part of the shared folder or the controller's view folder. In our situation the controller name is "FirstMVC", so either you move your view into the "FirstMVC" folder or you move it to the "Shared" folder as shown in the below figure.
Awesome article! I am looking forward to the next articles.
from lightning import Lightning
from sklearn import datasets
lgn = Lightning(ipython=True, host='')
Connected to server at
The image-poly visualization lets you draw polygonal regions on images and then query them in the same notebook!
Try drawing a region on the image. Hold command to pan, and option to edit the region.
Note that we assign the visualization to an output variable, so that we can query it later, but we must print that output to get the image to show.
imgs = datasets.load_sample_images().images
viz = lgn.imagepoly(imgs[0])
viz
Draw some regions on the image above. Then check the value of viz.coords.
p = viz.polygons()
lgn.imagepoly(imgs[0], polygons=p)
In this case we drew four regions, so we get four arrays of coordinates (in original image space).
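The returned polygons are just per-region lists of (x, y) points, so they are straightforward to post-process. A sketch using hypothetical coordinates (two made-up regions here, standing in for whatever you drew):

```python
# Hypothetical stand-ins for viz.polygons() output: each region is a
# list of (x, y) points in original image coordinates.
regions = [
    [(10, 10), (60, 12), (55, 48), (12, 40)],
    [(100, 20), (140, 25), (120, 70)],
]

# Bounding box (xmin, ymin, xmax, ymax) of each region, e.g. for cropping.
boxes = [(min(x for x, _ in r), min(y for _, y in r),
          max(x for x, _ in r), max(y for _, y in r)) for r in regions]
print(boxes)  # [(10, 10, 60, 48), (100, 20, 140, 70)]
```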
The region overlays themselves do not persist on the image after a refresh, so you'll have to draw your own to see them. | https://nbviewer.jupyter.org/github/lightning-viz/lightning-example-notebooks/blob/master/images/image-poly.ipynb | CC-MAIN-2018-39 | refinedweb | 158 | 63.9 |
Framework: Zend_Http_Request Component Proposal
Table of Contents
1. Overview
2. References
3. Component Requirements, Constraints, and Acceptance Criteria
4. Dependencies on Other Framework Components
5. Theory of Operation
6. Milestones / Tasks
7. Class Index
8. Use Cases
9. Class Skeletons
Zend_Http_Request is a container used for accessing GET, POST, COOKIE, PATH_INFO, and various input from the browser. It uses error correction and fallback methods to emulate certain variables when they are not set or not in the proper format.
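To make the overview's fallback idea concrete, here is a hypothetical PHP sketch (illustration only, not the proposal's actual implementation) of a magic getter that checks $_GET, then $_POST, then $_COOKIE, and returns null for unknown keys instead of raising an E_NOTICE:

```php
<?php
class HttpRequestSketch
{
    // Hypothetical illustration: look a key up across the superglobals
    // in precedence order, returning null when it is absent anywhere.
    public function __get($name)
    {
        foreach (array($_GET, $_POST, $_COOKIE) as $source) {
            if (isset($source[$name])) {
                return $source[$name];
            }
        }
        return null; // no E_NOTICE for undefined keys
    }
}

$_GET['page'] = '3';
$request = new HttpRequestSketch();
echo $request->page;          // "3"
var_dump($request->missing);  // NULL
```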
8. Use Cases
9. Class Skeletons
24 Comments
Sep 03, 2006
Simon Mundy
<p>For the most part it's a nice evolution of the idea from the initial discussions, but I'm not quite on board with treating the pathInfo, basePath, etc properties the same as the get/post/request params. I'd rather call the methods by themselves - conceptually the values they provide are quite separate values than you would retrieve via the get/post variables (e.g. BasePath may need to be calculated rather than simply passed through).</p>
<p>FWIW, I'd rather access the $_REQUEST properties via the __get and __set methods, or retrieve the $_GET/$_POST properties more explicitly by a get('post') or get('get') method (there was a slightly confusing method called getGet in the last proposal that was the right method with the wrong name). Because $_REQUEST is an amalgam of $_GET and $_POST I think this would be more natural and offer a better degree of simplicity.</p>
<p>But apart from that, I'm all for it!</p>
Sep 03, 2006
Michael Sheakoski
<p>Cheers Simon!</p>
<p>If I understand correctly, you are thinking that $request->basePath will simply return an uncalculated value? This is not the case because it will actually attempt to figure out the basePath if $_basePath is null or has not been set by the user. I can add support for the $_REQUEST superglobal, that is an easy fix. As far as accessor methods for get/post/request/cookie, is there any benefit to using accessors on them? They are all simple arrays and an accessor would seem like an unnecessary layer of bloat to access them while providing no benefits.</p>
Sep 03, 2006
Michael Sheakoski
<p>I forgot to add, you can call both the accessor methods as well as __set/__get for requestUri, pathInfo, baseUrl, basePath.</p>
<p>$request->pathInfo is the same as $request->getPathInfo()</p>
Sep 03, 2006
Simon Mundy
<p>No, I understand that $request->pathInfo maps directly to $request->getPathInfo() but I don't believe it should. Because these kind of properties are calculated values, they should only be accessed via the methods. Everything else, though, is fine.</p>
<p>E.g. If $_GET['id'] = 3 and $_GET['foo'] = 'bar' then I should be able to access them via $request->id or $request->get('get')->id (and similarly for $_POST variables). This would also apply to args from a CLI environment - myscript --foo=bar => $request->foo</p>
<p>The big advantage of checking them like this is that $request->notyetset would not throw an E_NOTICE (because the __get would check via __isset). I would say that providing access via both the methods and the __get properties for PathInfo (etc..) would be bloat because it's unnecessary duplication.</p>
Sep 03, 2006
Michael Sheakoski
<p>I mainly added the __set/__get methods for consistency so all of the members could be accessed in the same fashion. It is not a big deal to add a get() function though, I'll add it. Here is how I ended up using the class, which should explain why I added __set/__get in the first place:</p>
<ac:macro ac:name="code"><ac:plain-text-body><![CDATA[
<script type="text/javascript" src="<?=$this->request->basePath?>/resources/scripts/common.js"></script>
...
<form id="myForm" action="<?=$this->request->baseUrl?>/user/login" method="post">
]]></ac:plain-text-body></ac:macro>
<p>I wanted the code to look as readable as possible in the views. I understand what you mean about calculated vs uncalculated but there is a fine line in this case because $request->pathInfo = $_SERVER['PATH_INFO'], $request->requestUri = $_SERVER['REQUEST_URI'] which are technically uncalculated values... I simply put a few checks in with the get/set functions to make sure that data is there and in the proper format.</p>
<p>As far as the E_NOTICE, $request->someRandomVarNotSet will ALWAYS return null regardless of __get/__set/__isset.</p>
<ac:macro ac:name="code"><ac:plain-text-body><![CDATA[
class MyClass
{
    public $foo;
    public $bar;
}
$my = new MyClass();
var_dump($my->foo);
var_dump($my->bar);
var_dump($my->notSetYet);
----------
X-Powered-By: PHP/5.1.1
Content-type: text/html
NULL
NULL
NULL
]]></ac:plain-text-body></ac:macro>
Sep 03, 2006
Simon Mundy
<ac:macro ac:name="code"><ac:plain-text-body><![CDATA[$request = new Zend_Request_Http();
echo $request->get->foo;
]]></ac:plain-text-body></ac:macro>
<p>returns 'Notice: Trying to get property of non-object in /home/zendtest/index.php on line 6';</p>
<p>If you're passing $_GET to the public 'get' property of Zend_Request_Http then you're not performing any isset() checking on those properties and it will throw notices</p>
<p>Currently $request->foo will never work in your component (using the above code).</p>
Sep 03, 2006
Simon Mundy
<p>Actually further to that it also throws the same notice for array access</p>
<p>e.g. request->get['foo'];</p>
<p>'Notice: Undefined index: foo in...'</p>
Sep 04, 2006
Michael Sheakoski
<p>That would be expected at this moment.</p>
<ac:macro ac:name="code"><ac:plain-text-body><![CDATA[
$request = new Zend_Request_Http();
echo $request->get->foo;
returns 'Notice: Trying to get property of non-object in /home/zendtest/index.php on line 6';
]]></ac:plain-text-body></ac:macro>
<p>I didn't implement a get object or get() method. It is simply a copy of the $_GET array.</p>
<ac:macro ac:name="code"><ac:plain-text-body><![CDATA[
e.g. request->get['foo'];
'Notice: Undefined index: foo in...'
]]></ac:plain-text-body></ac:macro>
<p>Standard PHP behavior for an array</p>
Sep 04, 2006
Simon Mundy
<p>Yes, I understand it's standard PHP behaviour. Coming back to my original message I was asking if you would implement a $_REQUEST via overloading so that this kind of property testing could be done more gracefully and provide a intuitive access to request variables.</p>
<ac:macro ac:name="code"><ac:plain-text-body><![CDATA[
$foo = $request->foo // rather than $request->get->foo
]]></ac:plain-text-body></ac:macro>
<p>All that would need to be done is to convert the $_REQUEST array as a Zend_Hash object (see Rob Allen's proposal) and store that - the hard work would be done already.</p>
Sep 04, 2006
Michael Sheakoski
<p>The only problem is that Zend_Request_Http is not simply a wrapper for $_REQUEST, it is a "dispatch token" plus a wrapper for a subset of the $_SERVER vars which facilitate directing a PATH_INFO/GET/POST/Cookie request to the proper controller/action, and for convenience it also contains a copy of $_GET/$_POST/$_COOKIE/$_REQUEST. Because of this, providing direct access to $_REQUEST through __set/__get would not be a good idea since you would potentially have naming conflicts. What if you had a var called $_REQUEST['get'], $_REQUEST['post'], $_REQUEST['cookie'], $_REQUEST['pathInfo'], etc... If you did $request->post would that mean the $_REQUEST['post'] var or would it mean $_POST?</p>
<ac:macro ac:name="code"><ac:plain-text-body><![CDATA[
$request = new Zend_Request_Http();
echo $request->get['myvar'];
echo $request->request['foo'];
]]></ac:plain-text-body></ac:macro>
Sep 04, 2006
Simon Mundy
<p>I'd suggested to Christopher Thompson that you could use SPL's ArrayAccess for that:-</p>
<ac:macro ac:name="code"><ac:plain-text-body><![CDATA[
// Vars parse by Path first, then $_GET, $_POST for $_REQUEST
echo $request->id // 3
echo $request['GET']->search // sometext
echo $request['REQUEST']->month // 8
echo $request['COOKIE']->preference // ... etc ...
]]></ac:plain-text-body></ac:macro>
<p>This way you can provide namespaces for the superglobals and still allow a more consistent access to these properties.</p>
<p>As for the pathInfo, baseUrl I still think they should remain as methods only or you could use this namespacing to have</p>
<ac:macro ac:name="code"><ac:plain-text-body><![CDATA[
echo $request['HTTP']->baseUrl
]]></ac:plain-text-body></ac:macro>
Sep 05, 2006
Gavin
<p>This proposal should be reviewed in parallel with the other Zend_Http_Request proposal.</p>
<ul>
<li>reviewing together increases productivity</li>
<li>ideas can be combined or evolved together</li>
<li>avoids redoing work, if one proposal is accepted, and the other spurs changes to the first</li>
</ul>
Sep 05, 2006
Michael Sheakoski
<p>Agreed. I didn't mean to cause any confusion by making a seperate proposal. The other proposal differs from mine because it was originally a router and now heading towards a combination of a dispatch token + minimal HTTP Response class. Mine is a pure HTTP Response class designed to access just about every form of input that the browser sends.</p>
Sep 05, 2006
Michael Sheakoski
<p>Oops I meant HTTP Request object.</p>
Sep 11, 2006
Stanislav Malyshev
<p>Methods for resolving base URL, path etc. are indeed a very welcome addition; however, I don't really see why to have plain wrappers around variables like $_GET and $_POST. It's not likely they are going anywhere, and since you don't do anything but return them, why wouldn't the user just use $_GET/$_POST and save the function call?</p>
Sep 11, 2006
Michael Sheakoski
<p>Mostly to make the class consistent with the Request object of other frameworks/languages which access <strong>all</strong> input from the browser. It wouldn't really make sense to have an object like Http_Request that only supports a subset of the information. In addition, by going through the functions as opposed to directly accessing the GET/POST superglobals you can do things like include them in your templates without worrying about generating an E_NOTICE for undefined indexes as well as filtering or error checking as demonstrated in the getServer() function.</p>
Sep 11, 2006
Stanislav Malyshev
<p>I think it makes a lot of sense for an object to give the information that is not available elsewhere, but not to attempt to copy system variables which can be accessed directly. I am also not sure that using data transformations in get functions is a good thing, especially when you already have separate functions for the same data. This leads to very interesting effects like $a = getServer(); $a['PATH_INFO'] not being the same as getServer('PATH_INFO').<br />
As for notices, I'm not sure adding 2 function calls for each variable access (and function calls are not cheap in PHP at all) is worth silencing a warning, especially when you have a perfectly good @ for it.</p>
<p>I think we should keep the functions which are really very usable - the second half of the class, the path/url resolution functions - but leave alone the getters for superglobals; there's really not much need to create methods in a class just to avoid a notice.</p>
Sep 12, 2006
Michael Sheakoski
<p>All good points. I guess I just feel funny having the class called Zend_Request_Http which sounds like it should give me access to everything but only supporting a subset of that. Perhaps I can find the happy medium and condense the getQuery/Path/Server/Env/etc... into a single function. No matter what goes on in here Zend will of course be the final decision maker. I assume they will take a combination of features from both of the proposals.</p>
Sep 12, 2006
Christopher Thompson
<p>Michael and I have gone around and around on this. I believe that the Http Request should focus on GET/POST and maybe COOKIE. I proposed following the Zend_Hash interface to provide a standard interface for the current incoming data (either GET or POST auto-selected by method). This is the simplest interface and I believe in the ZF style. </p>
<p>I can see some Cookie access for convenience, but I would prefer to see a separate Http_Cookie class that had full support for writing cookies. Likewise I think SERVER and ENV should be in a Server_Info or System_Info class. </p>
Sep 12, 2006
Simon Mundy
<p>I agree that by itself it doesn't add any value, but what if we are extending Zend_Http_Request to accept (for example) a filter? Then it makes plenty of sense to use the $request object to retrieve the value rather than accessing the superglobal.</p>
<p>I think Michael has good reason to implement the _GET/_POST accessors - avoiding notices aside, it encourages developers to use the ZF components over more procedural-style code and it allows us a good base for adding useful features over time.</p>
<p>My only source of confusion at the moment is the way to access these. So far I've seen 5 or 6 different implementations with weird and wonderful ways of accessing request/get/post data. Is this the final proposal or should I be looking at <a class="external-link" href=""></a></p>
<p>My ultimate preference is to access request parameters as public properties. These are set in order of hierarchy: $_POST then $_GET then path parameters. In 9/10 cases this will be all I need. I don't want write access, only read (because if I change a parameter what does it map back to?). If I need to determine if a parameter exists from a specific source, then getPost() / getQuery() / getParam() will do just fine.</p>
<p>getRequest is redundant, as it (in effect) provides slightly less information than you're gathering here (it contains $_GET/$_POST/$_COOKIE but not path parameters). I think it should go.</p>
<p>I agree with Stan that the getServer() method should return only an unaltered value from the array - if I wish to calculate the REQUEST_URI I'll use that method instead.</p>
<p>You've also got the convenience of having the $_SERVER values within the request object already, so the calls to $_SERVER from setRequestUri and setBaseUrl could instead reference a protected property.</p>
<p>E.g.</p>
<ac:macro ac:name="code"><ac:plain-text-body><![CDATA[
class Zend_Http_Request implements Zend_Http_Request_Interface
{
    protected $_requestUri;
    protected $_baseUrl;
    protected $_basePath;
    protected $_pathInfo;
    protected $_request = array('get'    => array(),
                                'post'   => array(),
                                'cookie' => array(),
                                'server' => array(),
                                'env'    => array());

    public function __construct()
    {
        // populate the arrays and calculated properties here
    }
}
]]></ac:plain-text-body></ac:macro>
<p>Lastly - and this may be more of a personal preference - but I would see the initial setup of methods like setPathInfo() happen in the constructor. Currently these methods calculate default values only if you pass a null value to the method. I would think that those could be created upon instantiation and available immediately so that you don't need to call an empty method in your app - the value is just there. If you wish to explicitly overwrite that value then do so, but otherwise it seems like an unnecessary call.</p>
<p>I don't wish to seem overly picky - these are only my personal preferences and can of course be completely ignored - but I would like to see as much refinement of this component as possible as it is so critical to the framework.</p>
Sep 12, 2006
Christopher Thompson
<p>I agree with you. The proposal I did, which is the second Class Skeleton here (<a class="external-link" href=""></a>), differs from Michael's in a couple of respects. It puts the current request into properties, it adds the Dispatcher_Token info, and it does not contain SERVER or ENV accessors. I provided accessors for each type for completeness, but all you really additionally need is access to $_GET when the method is POST, because where the method is GET there are no POST vars. I also made the constructor similar to the InputFilter class.</p>
<p>The PathInfo and BaseURL methods need to be accessed by Intercepting Filters such as the Router so that is why they are methods.</p>
Sep 12, 2006
Michael Sheakoski
<p>Cheers Simon! This is not the final proposal. Gavin added this one to the "under review" section so both implementations could be compared and contrasted.</p>
<p>I was toying with the idea in my head of being able to turn a filter on and off. Something like $request->filter('HtmlEntities', true/false) and filter('stripSlashes', true/false) which would automatically apply to methods such as __get and getQuery/Post('theVar'). Just a rough idea, not a specific implementation yet, but I think it is definitely something that would be useful.</p>
<p>As far as writing variables, you are correct that this class should be as read-only as possible. In other popular languages/frameworks the Http_Response object is used for sending data back to the browser.</p>
<p>Stanislav, Christopher, and yourself all make very good points and I will definitely be redoing the items you mentioned. getRequest() can be removed due to its limited usefulness. I still think that getServer() and getEnv() would be useful to access <strong>unaltered</strong> data related to the request via a consistent interface with the added bonus of no E_NOTICE.</p>
<p>As far as using __get() overloading I think it should be consistent with the order of precedence .NET uses which is 1.GET 2.POST 3.COOKIES 4.SERVER. <a class="external-link" href=""></a> – I still think that someone developing an app already knows where the input is coming from and would target getQuery/getPost/etc... for that specific data in the first place but I will add __get() support as a convenience method.</p>
<p>I think that there are still many useful properties that should be top-level in the class as well. <a class="external-link" href=""></a> shows a pretty good idea of them:</p>
<ac:macro ac:name="code"><ac:plain-text-body><![CDATA[
$request->requestUri (calculated via the function)
$request->pathInfo (calculated via the function)
$request->baseUrl
$request->basePath
$request->isSecureConnection (this needs to be calculated anyway w/ a few lines of code)
$request->httpReferrer
$request->userAgent
$request->userHost
$request->userIp
etc...
]]></ac:plain-text-body></ac:macro>
<p>Perhaps they should be in the first-letter-capital-format PathInfo, BaseUrl, HttpReferrer, etc... just to avoid conflicts with the __get() stuff?</p>
<p>Looking forward to hearing all of your thoughts on this</p>
Sep 12, 2006
Michael Sheakoski
<p>Okay I went ahead and redid the stuff that was talked about by everyone in the comments.</p>
<p>Simon I wanted to clear something up about a previous comment you had, "Lastly - and this may be more of a personal preference - but I would see the initial setup of methods like setPathInfo() happen in the constructor..."</p>
<p>You don't need to call setPathInfo() in your app. It is only there as a fallback mechanism in case the automatic detection does not work. If $this->_pathInfo is null then getPathInfo() will automatically call setPathInfo(null) to trigger the auto-detection. I designed it this way because I believe the code for setting $this->_pathInfo should be in the setter not the getter.</p>
<p>This works just fine:</p>
<ac:macro ac:name="code"><ac:plain-text-body><![CDATA[
$request = new Zend_Http_Request();
echo $request->getPathInfo();
]]></ac:plain-text-body></ac:macro>
Oct 10, 2006
Matthew Weier O'Phinney
<ac:macro ac:name="info"><ac:parameter ac:name="title">Zend Comment</ac:parameter><ac:rich-text-body>
<p>This proposal is now incorporated in <a class="external-link" href=""></a></p></ac:rich-text-body></ac:macro> | http://framework.zend.com/wiki/display/ZFPROP/Zend_Http_Request+-+Michael+Sheakoski?focusedCommentId=3945 | CC-MAIN-2014-10 | refinedweb | 3,406 | 52.9 |
Leveraging kubectl
Contents
Introduction
Kubernetes Patterns
Course Summary
Description
With many enterprises making the shift to containers, and with Kubernetes being the leading platform for deploying container applications, learning Kubernetes patterns is more essential than ever for application developers. Having this knowledge across all teams will yield the best results, for developers in particular.
This Kubernetes Patterns for Application Developers Course covers many of the configuration, multi-container pods, and services & networking objectives of the Certified Kubernetes Application Developer (CKAD) exam curriculum.
Help prepare for the CKAD exam with this course, comprising 6 expertly-instructed lectures.
Learning Objectives
- Understand and use multi-container patterns to get the most out of Kubernetes Pods
- Learn how to control network access to applications running in Kubernetes
- Understand how Kubernetes Service Accounts provide access control for Pods
- Use the Kubernetes command-line tool kubectl to effectively overcome challenges you may face when working with Kubernetes
Intended Audience
This course is intended for application developers that are leveraging containers and using or considering using Kubernetes as a platform for deploying applications. However, there are significant parts of this course that appeal to a broader audience of Kubernetes users. Some individuals that may benefit from taking this course include:
- Application Developers
- DevOps Engineers
- Software Testers
- Kubernetes Certification Examinees
Prerequisites
To successfully navigate this course, we recommend that you have:
- Knowledge of the core Kubernetes resources including Pods, and Deployments
- Experience using the kubectl command-line tool to work with Kubernetes clusters
- An understanding of YAML and JSON file formats
Related Training Content
This course is part of the CKAD Exam Preparation Learning Path.
Transcript
Update - As of kubectl version 1.18, the kubectl run command can no longer be used for creating deployments. kubectl create deployment or manifest files can be used as alternatives. Also, the --export option of kubectl get is no longer supported, resulting in functional, but more verbose, output manifests.
Welcome back. If you spend any amount of time working with a Kubernetes cluster, you'll probably be issuing a lot of kubectl commands; this is certainly the case during Kubernetes certification exams, and kubectl has features that can help you overcome many challenges you might face. This lesson is intended to give you some tips to increase your efficiency and get the most out of kubectl. It will demonstrate enabling auto-completions for kubectl to up your productivity at the command line, how to get the most out of kubectl's get command, quick ways to generate resource manifest files with kubectl, and how to use kubectl to get information about resource specification fields in manifest files.
I'll be using a Kubernetes cluster I've stood up on Linux nodes running in AWS. The cluster is the same as you use in Cloud Academy lab environments and is also very similar to clusters in Kubernetes certification exam environments. You can follow along using any type of cluster, however; single-node clusters spun up using Minikube, or by enabling Kubernetes in the Docker for Mac or Docker for Windows distributions, will work just fine for this lesson. Now let's get started.
With kubectl it is quite common to have examples and command help pages, it is tempting to hop over to your favorite search engine when you forget how to do something, but kubectl has a lot of answers as long as you know how to get at them. I'm using Linux and the bash shell, so this source command is what I need to enable completions for the current shell. To have completions enabled automatically every time a shell is created, add the command to your .bash_profile file, now you can easily have commands auto-completed by pressing Tab or list the available commands and options by pressing Tab twice if there isn't a single completion for what you've entered. If I enter kubectl followed by Tab twice, the available commands for kubectl are displayed. If I type g, followed by Tab, the only command starting with a g, get, is completed. Completions will save you a fair amount of time and prevent typos, this is always useful, but especially if you are taking one of the time-limited Kubernetes certification exams. The get command is your go-to command for displaying resources in Kubernetes, I'll press Tab twice to show the resources that can be shown with get. Let's say we're interested in nodes, I'll enter nodes to display all the nodes in the cluster.
With completions enabled, it is easy to enter the names of resources with some Tab magic, but you can also make use of the short names for resources. To list the short names for resources, you can enter kubectl api-resources. The first column lists the full name and the second lists the short name, if there is one. So if you aren't a fan of typing nodes, feel free to simply enter no. It's even more beneficial considering the length of some resource names, such as certificatesigningrequests, which can be compressed down to csr, a whopping savings of 23 ASCII characters. To get a look at all the pods in a cluster, I'll use the all-namespaces option of the get command. You can also use the short name po for pods. Only pods in the kube-system namespace are running because this is a fresh cluster. There are still quite a few pods running, though, and it doesn't take long before there can be significantly more. Besides selecting a specific namespace, you can also use labels to filter the output. But how do you know what the labels are, you ask? You can use the show-labels option for that.
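The commands in this part of the demo look roughly like this (these need a running cluster):

```shell
kubectl api-resources                  # NAME column plus SHORTNAMES (no, po, csr, ...)
kubectl get no                         # same as "kubectl get nodes"
kubectl get po --all-namespaces        # every pod in every namespace
kubectl get po --all-namespaces --show-labels
```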
An additional labels column is appended, showing all the labels for each pod. If you are only interested in a subset of the labels, you can use the -L option followed by a comma-separated list of label names. For example, if you are only interested in the k8s-app label, you can use the -L option followed by k8s-app, and a k8s-app column is added. Any resources with a value for the label have that value shown in the k8s-app column. If you are only interested in seeing resources with the label defined, you can use the lowercase -l option. The lowercase -l is how you filter the output, and if you only want the resources with a specific value of a label, you can specify the value after an equal sign. For example, to only show the kube-proxy pods, you'd enter k8s-app=kube-proxy.
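As concrete commands (the kube-system namespace is where the demo's pods live):

```shell
kubectl get pods -n kube-system -L k8s-app            # add a column for one label
kubectl get pods -n kube-system -l k8s-app=kube-proxy # only pods with that label value
```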
Likewise, you can add an exclamation mark before the equal sign to show all resources not matching the label value. Here, the pods that don't have the k8s-app label defined showed up again, because not having the label defined at all is a match for not having a specific value of the label. To hide the pods that don't have the label defined, just add the label name after a comma; you can join as many label queries as you like by joining them with commas. While we're at it, the sort-by option comes in handy for organizing the output of the get command. You can sort by the value of fields in the resource's manifest. For example, if you want to sort by the age of the pods, you can sort by metadata.creationTimestamp, and you can verify that the age column is now sorted. The field that you give to sort-by is specified using a JSON path; it can be read as sorting by the metadata object's creationTimestamp field.
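The negative match and the sort, as commands (quoting the selector keeps the shell away from the exclamation mark):

```shell
kubectl get pods -n kube-system -l 'k8s-app!=kube-proxy'          # label unset still matches
kubectl get pods -n kube-system -l 'k8s-app!=kube-proxy,k8s-app'  # label must also exist
kubectl get pods -n kube-system --sort-by=metadata.creationTimestamp
```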
Although that works, for more complicated JSON path expressions it is a good idea to wrap the path in single quotes and braces, to avoid shell substitutions, and to start the path with a period to represent a child of the root JSON object. This is a better way to specify the JSON path. When writing JSON paths, the root object is actually represented with a dollar sign, but here the dollar sign can be omitted because the expressions are always children of the root object. You might be wondering, how did I come up with the metadata.creationTimestamp path to sort by age? You can use the output format option of get to list all the fields in a resource. The output formats for entire resources can be either json or yaml; yaml is more compact, so I'll be using that. Here's an example to output a pod in the yaml output format; notice the output=yaml option. You can also use -o as a short form for output. With the sort-by option, you can sort by any numeric or string value field you find in the output. If you wanted to sort by the pod IP address, which is treated as a string, you would give the path .status.podIP. Here's how the get command would look sorting by pod IP. You can trust the sort is performed correctly, but how can you verify it? There's another output format that gives additional information dependent on the type of resource: the wide format. For pods, the wide format includes the pod's IP, and with it you can verify that the output is indeed sorted by the value in the IP column, treating the values as strings. There is one other output format I want to mention, although there are several more. You've actually used the type of the format before: it is the jsonpath format. You can use a JSON path expression to describe what you want to output.
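The quoted form and the verification steps might look like this (POD_NAME stands in for any real pod name):

```shell
kubectl get pods -n kube-system --sort-by='{.metadata.creationTimestamp}'
kubectl get pod POD_NAME -n kube-system -o yaml      # list every field of one pod
kubectl get pods -n kube-system --sort-by='{.status.podIP}' -o wide
```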
Let's try to output the pod IP using the same JSON path expression for the output format. Hmm, all the output disappeared; there must be something wrong with that JSON path expression. To use jsonpath output effectively, you need to understand when the get command is returning a specific resource or a list of resources. If you specify the name of a resource, for example the name of a pod, then get will return only that specific pod. In all other cases, where you don't specify a specific resource, a list will be returned. When a list is returned, the JSON array that contains all the resources is named items. In our case, no specific pod is identified, so the items array needs to be included in the output format's JSON path expression. Notice that you need to use square brackets to index the array; the asterisk is a wildcard meaning all of the items in the array. The output is not as tidy as it is with the yaml or wide output formats. To clean it up, you can use a more complex expression that iterates over the items in the array to also include the name of the pod and add newlines.
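The failing and the corrected jsonpath invocations, plus a version of the iterating expression the transcript alludes to (the range/end syntax is standard kubectl jsonpath):

```shell
# returns nothing useful: "get pods" yields a list whose pods live under .items
kubectl get pods -n kube-system -o jsonpath='{.status.podIP}'
# index into the items array instead
kubectl get pods -n kube-system -o jsonpath='{.items[*].status.podIP}'
# iterate to pair each pod name with its IP, one per line
kubectl get pods -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'
```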
The expression takes some time to understand, and it is only included to show you that it is possible to include more than single fields in jsonpath output. For more information about JSON path expressions, see the link to JSONPath support in Kubernetes in the transcript of this video. Those tips are really good for viewing what is already in the cluster; now it's time to shift the focus to creating new resources in the cluster. The create command is your friend for that. The filename option, or -f in short form, allows you to create a resource or multiple resources from a manifest file or a directory containing manifest files. There are also several sub-commands for creating different types of resources without having to use a manifest file; to see them, just view the create help page. It's usually better to use manifest files so that you can version control your configuration and practice configuration as code. So why did I mention these shortcuts? You can use the sub-commands to generate manifest files when paired with the dry-run and output format options. A dry run will stop any resource from actually being created, and if you set the output to yaml, the output is an equivalent manifest file for the create command you enter. Let's try that with a namespace.
Here, I've used create to generate a manifest file for a namespace named tips, and I'll redirect that output to a file in a tips directory. The create command is going to create the resources in filename order, so to ensure the namespace is created first, I've used the number one in the name to force the order. The dry-run option is available for other commands that create resources as well. Let's say you want to create an nginx deployment. You can use the run command to generate a manifest file using the options you provide at the command line. Here, I've set the image to nginx, published container port 80, set the number of replicas to two, and exposed the deployment with a service using the expose option. The service will use the default type of ClusterIP, and the service port will be the same as the container port. If any of those are not what you want, it's now very easy to edit the fleshed-out manifest file to customize it as you like. We'll discuss services later in this course. For now, let's say we are happy with the defaults, except we want to put the resources in the tips namespace. I'll redirect the output to a file prefixed with two, so that the resources are created after the namespace, and then I'll add the namespace to the metadata. I'll use vi for this, which is an alias for Vim on my system. vi is also the default editor for kubectl edit, which we will see later on in this course. You can use whatever editor you're comfortable with. If you are preparing for a Kubernetes certification exam, you should be comfortable with a command-line editor to save time copying and pasting into the exam notepad area. To learn how to become an expert at Vim, I'd recommend entering vimtutor in a Mac or Linux shell to go through a series of lessons starting from scratch.
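The generation steps described here, as commands (flags match kubectl from the era of this lesson; newer releases want --dry-run=client, and kubectl run no longer generates deployments):

```shell
mkdir -p tips
kubectl create namespace tips --dry-run -o yaml > tips/1-namespace.yaml
kubectl run nginx --image=nginx --port=80 --replicas=2 --expose \
  --dry-run -o yaml > tips/2-deployment.yaml
vi tips/2-deployment.yaml   # add "namespace: tips" under each resource's metadata
```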
And now, to set the resource's namespace field. The resources will now be created in the tips namespace. To create all the resources, I'll use kubectl create -f and specify the tips directory. Other commands, such as delete, support the same pattern of specifying a directory, so when you're finished with those resources, you can simply use the same option with the delete command, and now all the resources specified in those manifests have been deleted. One last technique for quickly creating manifests is to use the get command to modify manifests from existing resources. That might be an obvious technique, but there is an option to help strip out any cluster-specific information that you don't want to be present in a manifest file, such as the pod status and creation time. As an example, if I get the yaml output for a kube-proxy pod and count the number of lines in the output, there are 133 lines of yaml, whereas with the export option there are 92. In this case, export automatically removed 41 lines of yaml for you. Now, to close out the lesson, I have one last tip. Earlier, when I was explaining how to craft jsonpath sort-by expressions, I said you could use the yaml or json output of the get command to show all the fields of resources.
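The create/delete pair and the export comparison might look like this (--export was deprecated in later kubectl releases; POD_NAME is a placeholder):

```shell
kubectl create -f tips/   # creates everything in the directory, in filename order
kubectl delete -f tips/   # removes the same resources
kubectl get pod POD_NAME -n kube-system -o yaml | wc -l            # e.g. 133 lines
kubectl get pod POD_NAME -n kube-system -o yaml --export | wc -l   # e.g. 92 lines
```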
There is actually another way, and it's very useful; it can help you with customizing generated manifests as well. It not only gives you the field names for paths but also tells you the purpose of each field and other useful information. All of that goodness is bundled up in the kubectl explain command. There are a couple of ways to use explain. Both require you to specify a simple path that is similar to a JSON path, but you give the kind of resource first, without a leading dot, and follow it with the field path that you are interested in. For example, if you want to see the top-level fields of a pod resource, you enter kubectl explain pod, and the output gives you a description of a pod and the top-level fields in a pod. If you want to dive into the details of a field that's further down in the hierarchy, let's say a pod spec's container resources field, you just join the fields with dots, in this case pod.spec.containers.resources. You can traverse the fields up and down in this fashion to understand what fields are used for, and if you want to see examples, you can navigate to the provided info links when available. The other way to use explain is to print the entire schema of a field or resource by using the recursive option.
For example, to see all the fields in a pod's containers field along with their types, you can enter kubectl explain pod.spec.containers and specify the recursive option. Explain can save you quite a few trips to your search engine of choice when writing resource manifests. During Kubernetes certification exams, you won't be able to use search engines, so it's important to know how to get the most out of kubectl for exams as well. All right, that brings us to the end of this lesson. There were quite a few tips in there, and with them you should feel confident about using kubectl to overcome any challenges you might have, such as generating manifest files or explaining specification fields. Taking it from the top, we saw how kubectl completions help write lengthy commands and prevent spelling mistakes. Remember that if you don't know the exact syntax for enabling completions, the kubectl completion help page has you covered.
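The explain commands from this walkthrough (none of these modify any cluster state):

```shell
kubectl explain pod                              # description plus top-level fields
kubectl explain pod.spec.containers.resources    # drill into a nested field
kubectl explain pod.spec.containers --recursive  # full schema of the field
```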
Then you saw how to use labels to filter, and the sort-by option to sort, with the get command to effectively view resources in a Kubernetes cluster. We finished with some techniques for quickly generating manifest files by outputting yaml from dry-run commands and by exporting resources from the get command. The explain command can help you customize the manifest files by explaining the schema and the purpose of different fields. We'll wrap up the course in the next lesson; continue on when you're ready.
I've got a list of line-like curves which all coincide at their ends and form a polyline (except it is not joined). The problem is that the little lines in the list are not sorted along the polyline. If I join them, it would be one curve, but I don't want to! How can I sort this list of lines so that if I start to select them based on their indices, it goes continuously from one end of the polyline to the other? I hope you understood!
Sorting curves along a curve or direction
Well, I guess you could use the JoinCurves function and then use the vertices (obviously they will be "in order") to create ordered lines. Or keep a joined copy and select your existing lines with some sort of closest-point selection.
Maybe Rhino.SortPointList could help in some way too, although I'm not sure how it works and whether it does exactly what you need.
Just my very amateurish answers, but maybe they help…
The problem I have is that I have a list of data for each line, and after joining the lines I lose the connection. (I have a list of lines and a list of data, as they are beams; the data represents each beam's section size.) If I join the parts, I don't know how I should rearrange the data to match the lines. (I use Python in Grasshopper, by the way.) I want to keep it parametric.
Here's a sample that should help. The sample has no "tolerance" when comparing end points, which is something you may need to account for (this can be done by checking the distance between points instead of direct equality).
import rhinoscriptsyntax as rs
import Rhino.Geometry.Line as Line

original = [Line(1,2,4,3,5,0),
            Line(0,0,0,3,5,0),
            Line(10,0,1,10,10,10),
            Line(1,2,4,10,10,10)]
original_indices = [i for i in range(len(original))]

connected = [original_indices[0]]
start = original[0].From
end = original[0].To
original_indices.pop(0)

while original_indices:
    found_connection = False
    for i, index in enumerate(original_indices):
        line = original[index]
        if start==line.From or start==line.To:
            connected.insert(0, index)
            original_indices.pop(i)
            found_connection = True
            if start==line.From:
                start = line.To
            else:
                start = line.From
        elif end==line.From or end==line.To:
            connected.append(index)
            original_indices.pop(i)
            found_connection = True
            if end==line.From:
                end = line.To
            else:
                end = line.From
    if not found_connection:
        break

for line in original:
    rs.AddLine(line.From, line.To)

for i, index in enumerate(connected):
    line = original[index]
    mid = line.PointAt(0.5)
    rs.AddTextDot(str(i), mid)
Let me know if you have any questions.
TextSprite rotation not working in IE8
I am using Ext-GWT version 3.0 (specifically 3.0.0-rc), and am attempting to rotate some text via a TextSprite. It works fine in Google Chrome but not in Internet Explorer 8.
Code:
import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.user.client.ui.RootPanel;
import com.sencha.gxt.chart.client.draw.DrawComponent;
import com.sencha.gxt.chart.client.draw.sprite.TextSprite;
import com.sencha.gxt.widget.core.client.container.Viewport;

public class TextRotation implements EntryPoint {
    public void onModuleLoad() {
        TextSprite sprite = new TextSprite("Some text");
        sprite.setRotation(270);

        DrawComponent component = new DrawComponent();
        component.setViewBox(true);
        component.addSprite(sprite);

        Viewport vp = new Viewport();
        vp.add(component);
        RootPanel.get().add(vp);
    }
}
Code:
<DIV style="WIDTH: 1268px; HEIGHT: 785px" class=GAOO-TBDNN> <DIV style="WIDTH: 1268px; HEIGHT: 785px" __eventBits="100"> <DIV style="WIDTH: 1268px; HEIGHT: 785px" class=x-vml-base> <?xml:namespace prefix = vml /> <vml:shape <vml:skew class=vml</vml:skew> <vml:fill></vml:fill> </vml:shape> <vml:shape <vml:skew class=vml</vml:skew> <vml:textpath</vml:textpath> <vml:path class=vml</vml:path> <vml:fill></vml:fill> </vml:shape> </DIV> </DIV> </DIV> <DIV style="POSITION: absolute; LINE-HEIGHT: normal; TEXT-TRANSFORM: none; FONT-STYLE: normal; FONT-FAMILY: null; LETTER-SPACING: normal; VISIBILITY: hidden; FONT-SIZE: 12px; TOP: -10000px; FONT-WEIGHT: 400; LEFT: -10000px"></DIV>
Code:
10c10 < <vml:skew class=vml</vml:skew> --- > <vml:skew class=vml</vml:skew> 14c14 < <vml:skew class=vml</vml:skew> --- > <vml:skew class=vml</vml:skew>
TextSprite defaults to coordinates (0, 0) which is why 270 degrees moves it outside of the DrawComponent. You can either reposition the sprite or set it to TextAnchor.END to have appear inside the component.
Thanks for the response Brendan.
There is now a message at the top of the page saying "Looks like we can't reproduce the issue or there's a problem in the test case provided". What is the problem with the test case I provided, or in what way can it not be produced? I tried to cut it down to as simple a case as possible.
I want to have the sprite as having a text anchor of TextAnchor.MIDDLE, as I want the text to be centred within its parent component. What co-ords should I be setting the sprite to?
I'm still unsure why this does not work in IE as it does in Chrome. I assume it's differences in implementation between SVG and VML.
The problem seems to be related to the fact that I want the sprite to be resizable when the window is resized, which is why the DrawComponent is placed inside a Viewport (this works fine in Chrome). If I do not use a Viewport then the text appears ok in Chrome and IE.
Thanks.
Missed the view box line. This appears to be a bug in view box and or transformed text bounding box calculations for VML. Thanks for the report.
When might this be fixed?
Hi Brendan,
Would you share a guess for when this bug will be scheduled for repair?
My application is suffering from the same problem.
Regards,
Mike
This has been fixed in SVN and will be in the next release.
GXT 3.0.1 has been released and contains this fix.
Success! Looks like we've fixed this one. According to our records the fix was applied for a bug in our system in a recent build. | http://www.sencha.com/forum/showthread.php?196734-TextSprite-rotation-not-working-in-IE8&p=872212 | CC-MAIN-2014-35 | refinedweb | 591 | 67.35 |
Hi there,
I am new to the forum. I know this topic is not new; I have done a search on it and did not quite find what I wanted. Currently, I am trying to write a C++ program which helps me read in about 10000 rows of numerical data, with 4 columns in each row. There are some words before the numerical data, and those should be discarded by the program.
I want to import the data into my C++ program and fit the data into a 2-D vector (to store all the data).
I have 2 problems:
1) The number of rows in my data is not constant, and I don't have any idea how to dynamically insert the data into my vector (changing the vector's number of rows according to the row count).
2) I have written a simpler version of my code to process a simpler version of the data file (attached). But it does not compile (can't get fgets to work). Any idea?
Thanks for your help in advance.
Anyway, here's my code.
#include<stdio.h>
#include<stdlib.h>
#include<cmath>
#include<iostream>
#include<fstream>
#include<sstream>
#include<string>
#include<vector>
using namespace std;

int main(int argc, char* argv)
{
    int i, j = 4, k;
    char* pend;
    char line[256];
    double d[j];
    vector< vector<double> > data; //define a vector called data

    //Read in files from folder
    //Need to expand to multiple file input
    ifstream is; //define object which reads in from a file
    is.open("C:\\Research\\TGA\\processed data\\test.txt"); //open file

    //if file cannot be opened
    if(!is) {
        //print error and exit
        cerr << "file cannot be opened." << endl;
        exit(1);
    }
    else {
        k = 0;
        while(is) {
            fgets(line, 256, is); //read lines from is to line
            d[0] >> strtod(line, &pend); //convert char to double var
            d[1] >> strtod(pend, &pend);
            d[2] >> strtod(pend, &pend);
            d[3] >> strtod(pend, NULL);

            while(d[0] > 0.0) { // check if d[0] is a number, if not get the data out of loop
                for(i = 0; i < j; i++) //loop for creating vector rows
                {
                    for(k = 0; k < j; k++)
                    {
                        data[i].push_back(d[i]); //add vector columns to rows
                    }
                    cout << data[i] << endl;
                }
            }
            is.close();
        }
        return 0;
    }
}
Check whether a variable is of a certain qualified type
#include <stdio.h>

#define is_const_int(x) _Generic((&x), \
  const int *: "a const int",          \
  int *: "a non-const int",            \
  default: "of other type")

int main(void)
{
    const int i = 1;
    int j = 1;
    double k = 1.0;

    printf("i is %s\n", is_const_int(i));
    printf("j is %s\n", is_const_int(j));
    printf("k is %s\n", is_const_int(k));
}
Output:
i is a const int
j is a non-const int
k is of other type
However, if the type generic macro is implemented like this:
#define is_const_int(x) _Generic((x), \
  const int: "a const int",           \
  int: "a non-const int",             \
  default: "of other type")
The output is:
i is a non-const int
j is a non-const int
k is of other type
This is because all type qualifiers are dropped for the evaluation of the controlling expression of a
_Generic primary expression.
So, what the hell are goroutines?
Native support for concurrency is by far one of the most popular features of Go, allowing developers to create concurrent applications easily. In order to take advantage of this native concurrency, goroutines are needed.
Goroutines can be considered the backbone of Go and are important in designing inherently concurrent applications.
So let’s understand what goroutines are.
What are Goroutines
Goroutines can be compared to lightweight threads. They are just functions or methods that run concurrently with other functions or methods.
They allow us to create and run multiple functions concurrently in the same address space and have no significant setup cost.
Goroutines vs Threads
Go uses goroutines, while other languages such as Java use threads. Goroutines are cheap (around 2 KB of stack) whereas threads are often memory intensive (beginning at around 1 MB), as they require more stack space. This allows developers to create a large number of goroutines, which would not be possible with threads.
So you might ask, why are goroutines so light weight?
Well, the reason goroutines are so lightweight is that Go doesn't use conventional native threads, but rather green threads. Native threads, or rather OS threads, have individual function call stacks, and their setup and teardown cost is significant.
Let’s see how Go does that.
At runtime, the scheduler maps goroutines onto threads, but unlike the 1:1 mapping of Java threads, a bunch of goroutines are multiplexed onto a single thread or a group of threads. Also, goroutines have a flexible stack size that expands and shrinks as required. This significantly reduces the memory footprint and improves memory reallocation time for goroutines.
But you don’t need to worry about all these nifty details at all.
All these tasks are taken care of by Go at runtime and is abstracted from the user.
So the user can do what he does best, write clean and efficient code and not to be worried about stack segmentation and memory reallocation.
How to start a Goroutine
A goroutine is fairly simple to start: just put the keyword go in front of a function or method call and you get a goroutine ready to work concurrently.
package main

import (
	"fmt"
)

func hello() {
	fmt.Println("Hi, I'm a goroutine!")
}

func main() {
	go hello()
	fmt.Println("Inside main function")
}
The above code will create a goroutine that runs concurrently with func main(), which executes in its own goroutine, known as the main goroutine. The creation of goroutines is very simple, and you don't have to worry about the finer details in the background, as they are all handled at runtime by Go.
But wait..
On execution of the code, you’ll receive a little surprise!
Inside main function
The code only prints the message from func main() and not the message from func hello(). This can be explained as follows:
- When a goroutine is started, the goroutine call returns immediately. What this means is that the control does not wait for the goroutine to finish executing before proceeding with the next function or goroutine.
- Also, the main goroutine must be running for any other goroutine to run. Its termination leads to the termination of all other goroutines, and the program ends.
That's precisely what happened with our code above. After the call to go hello(), the control printed the message in the main function and the program terminated right after. func hello() never got a chance to execute, because the main goroutine terminated and the program ended.
The fix to the problem
To be precise, it's more of a hack than a fix, because we are tying the execution of the program to real-world time constraints, but it will suffice for this example. Note that this is bad practice and should be avoided, as in reality we always use channels for any such synchronisation.
package main

import (
	"fmt"
	"time"
)

func hello() {
	fmt.Println("Hi, I'm a goroutine!")
}

func main() {
	go hello()
	time.Sleep(1 * time.Second)
	fmt.Println("Inside main function")
}
The time.Sleep(1 * time.Second) line puts the current goroutine to sleep and gives the hello goroutine enough time to finish. Now when we run the program, we get both messages, about one second apart.
Hi, I'm a goroutine!
Inside main function
Closing thoughts
By default, Go uses a number of CPU threads equal to the number of CPU cores present. You can get the number of threads with the following statement:
runtime.GOMAXPROCS(-1)
The number of threads can be changed by passing any number from 1 upwards, but care must be taken while choosing it. Though more threads can increase performance, too many can slow the process down, so the ideal amount should only be decided after running a performance sweep.
Go's native concurrency model has allowed it to rise in popularity for creating truly concurrent systems, and goroutines are an important part of how it works. They can be considered the heart and soul of the language, and if used correctly, they can provide a massive performance boost.
Concurrency is not parallelism
I've used the word concurrency a lot in this post, but if you're still tempted to use the words concurrency and parallelism interchangeably, then it's high time you watched Rob Pike's take on the topic.
:
from pathlib import Path
import random
from collections import Counter, defaultdict

import numpy as np
import pandas as pd
from sklearn.neighbors import *
from matplotlib import pyplot as plt
from mpl_toolkits import mplot3d
%matplotlib inline


def read(file):
    '''Returns contents of a file'''
    with open(file, 'r', errors='ignore') as f:
        text = f.read()
    return text


def load_eu_texts():
    '''Read text snippets in 10 different languages into pd.Dataframe

    load_eu_texts() -> pd.Dataframe

    The text snippets are taken from the nltk-data corpus.
    '''
    basepath = Path('/home/my_username/nltk_data/corpora/europarl_raw/langs/')
    df = pd.DataFrame(columns=['text', 'lang', 'len'])
    languages = [None]
    for lang in basepath.iterdir():
        languages.append(lang.as_posix())
        t = '\n'.join([read(p) for p in lang.glob('*')])
        d = pd.DataFrame()
        d['text'] = ''
        d['text'] = pd.Series(t.split('\n'))
        d['lang'] = lang.name.title()
        df = df.append(d.copy(), ignore_index=True)
    return df


def clean_eutextdf(df):
    '''Preprocesses the texts by doing a set of cleaning steps

    clean_eutextdf(df) -> cleaned_df
    '''
    # Cut off whitespace at the beginning and end
    df['text'] = [i.strip() for i in df['text']]
    # Generate a lowercase version of the text column
    df['ltext'] = [i.lower() for i in df['text']]
    # Determine the length of each text
    df['len'] = [len(i) for i in df['text']]
    # Drop all texts that are not at least 200 chars long
    df = df.loc[df['len'] > 200]
    return df


# Execute the above functions to load the texts
df = clean_eutextdf(load_eu_texts())

# Print a few stats of the read texts
textline = 'Number of text snippplets: ' + str(df.shape[0])
print('\n' + textline + '\n' + ''.join(['_' for i in range(len(textline))]))
c = Counter(df['lang'])
for l in c.most_common():
    print('%-25s' % l[0] + str(l[1]))
df.sample(10)
Number of text snippplets: 56481
________________________________
French                   6466
German                   6401
Italian                  6383
Portuguese               6147
Spanish                  6016
Finnish                  5597
Swedish                  4940
Danish                   4914
Dutch                    4826
English                  4791
        lang        len  text                                                ltext
135233  Finnish     346  Vastustan sitä , toisin kuin tämän parlamentin...  vastustan sitä , toisin kuin tämän parlamentin...
170400  Danish      243  Desuden ødelægger det centraliserede europæisk...  desuden ødelægger det centraliserede europæisk...
85466   Italian     220  In primo luogo , gli accordi di Sharm el-Sheik...  in primo luogo , gli accordi di sharm el-sheik...
15926   French      389  Pour ce qui est concrètement du barrage de Ili...  pour ce qui est concrètement du barrage de ili...
195321  English     204  Discretionary powers for national supervisory ...  discretionary powers for national supervisory ...
160557  Danish      304  Det er de spørgmål , som de lande , der udgør ...  det er de spørgmål , som de lande , der udgør ...
196310  English     355  What remains of the concept of what a company ...  what remains of the concept of what a company ...
110163  Portuguese  327  Actualmente , é do conhecimento dos senhores d...  actualmente , é do conhecimento dos senhores d...
151681  Danish      203  Dette er vigtigt for den tillid , som samfunde...  dette er vigtigt for den tillid , som samfunde...
200540  English     257  Therefore , according to proponents , such as ...  therefore , according to proponents , such as ...
def calc_charratios(df):
    '''Calculating ratio of any (alphabetical) char in any text of df

    calc_charratios(df) -> list, pd.Dataframe
    '''
    CHARS = ''.join({c for c in ''.join(df['ltext']) if c.isalpha()})
    print('Counting Chars:')
    for c in CHARS:
        print(c, end=' ')
        df[c] = [r.count(c) for r in df['ltext']] / df['len']
    return list(CHARS), df

features, df = calc_charratios(df)
Now that we have calculated the features for every text snippet in our dataset, we can split the dataset into a train and a test set:
def split_dataset(df, ratio=0.5):
    '''Split the dataset into a train and a test dataset

    split_dataset(featuredf, ratio) -> pd.Dataframe, pd.Dataframe
    '''
    df = df.sample(frac=1).reset_index(drop=True)
    traindf = df[:][:int(df.shape[0] * ratio)]
    testdf = df[:][int(df.shape[0] * ratio):]
    return traindf, testdf

featuredf = pd.DataFrame()
featuredf['lang'] = df['lang']
for feature in features:
    featuredf[feature] = df[feature]

traindf, testdf = split_dataset(featuredf, ratio=0.80)

x = np.array([np.array(row[1:]) for index, row in traindf.iterrows()])
y = np.array([l for l in traindf['lang']])
X = np.array([np.array(row[1:]) for index, row in testdf.iterrows()])
Y = np.array([l for l in testdf['lang']])
from sklearn.neighbors import KNeighborsClassifier  # import needed for the classifier

def train_knn(x, y, k):
    '''Returns the trained k nearest neighbors classifier
    train_knn(x, y, k) -> sklearn.neighbors.KNeighborsClassifier
    '''
    clf = KNeighborsClassifier(k)
    clf.fit(x, y)
    return clf

def test_knn(clf, X, Y):
    '''Tests a given classifier with a testset and return result
    test_knn(clf, X, Y) -> float
    '''
    predictions = clf.predict(X)
    ratio_correct = len([i for i in range(len(Y)) if Y[i] == predictions[i]]) / len(Y)
    return ratio_correct

print('''k\tPercentage of correctly predicted language
__________________________________________________''')
for i in range(1, 16):
    clf = train_knn(x, y, i)
    ratio_correct = test_knn(clf, X, Y)
    print(str(i) + '\t' + str(round(ratio_correct * 100, 3)) + '%')
k	Percentage of correctly predicted language
__________________________________________________
1	97.548%
2	97.38%
3	98.256%
4	98.132%
5	98.221%
6	98.203%
7	98.327%
8	98.247%
9	98.371%
10	98.345%
11	98.327%
12	98.3%
13	98.256%
14	98.274%
15	98.309%
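To make the classifier itself less of a black box: a k-NN classifier simply votes among the k training points closest to the sample. The following is a toy, pure-Python sketch of that idea (illustrative only; it is not sklearn's implementation, and the tiny 2-D data here is made up):

```python
from collections import Counter
import math

def knn_predict(train_x, train_y, sample, k=3):
    # distance from the sample to every training point
    dists = sorted(
        (math.dist(row, sample), label)
        for row, label in zip(train_x, train_y)
    )
    # majority vote among the k nearest labels
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train_x = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]]
train_y = ["A", "A", "B", "B"]
print(knn_predict(train_x, train_y, [0.05, 0.1]))  # A
print(knn_predict(train_x, train_y, [1.0, 0.9]))   # B
```

In practice sklearn's KNeighborsClassifier does the same thing far more efficiently, which is why we use it above.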
def extract_features(text, features):
    '''Extracts all alphabetic characters and add their ratios as feature
    extract_features(text, features) -> np.array
    '''
    textlen = len(text)
    ratios = []
    text = text.lower()
    for feature in features:
        ratios.append(text.count(feature) / textlen)
    return np.array(ratios)

def predict_lang(text, clf=clf):
    '''Predicts the language of a given text and classifier
    predict_lang(text, clf) -> str
    '''
    extracted_features = extract_features(text, features)
    return clf.predict(np.array(np.array([extracted_features])))[0]

text_sample = df.sample(10)['text']
for example_text in text_sample:
    print('%-20s' % predict_lang(example_text, clf) + '\t' + example_text[:60] + '...')
Italian             	Auspico che i progetti riguardanti i programmi possano contr...
English             	When that time comes , when we have used up all our resource...
Portuguese          	Creio que o Parlamento protesta muitas vezes contra este mét...
Spanish             	Sobre la base de esta posición , me parece que se puede enco...
Dutch               	Ik voel mij daardoor aangemoedigd omdat ik een brede consens...
Spanish             	Señor Presidente , Señorías , antes que nada , quisiera pron...
Italian             	Ricordo altresì , signora Presidente , che durante la preced...
Swedish             	Betänkande ( A5-0107 / 1999 ) av Berend för utskottet för re...
English             	This responsibility cannot only be borne by the Commissioner...
Portuguese          	A nossa leitura comum é que esse partido tem uma posição man...
With this classifier it is now also possible to predict the language of the randomized example snippet from the introduction (which was actually created from the first paragraph of this article):

predict_lang(example_text)
'English'
The KNN classifier of sklearn also offers the possibility to predict the probability with which a given classification is made. While the probability distribution for a specific language is relatively clear for long sample texts, it becomes noticeably less certain the shorter the texts are.
def dict_invert(dictionary):
    ''' Inverts keys and values of a dictionary
    dict_invert(dictionary) -> collections.defaultdict(list)
    '''
    inverse_dict = defaultdict(list)
    for key, value in dictionary.items():
        inverse_dict[value].append(key)
    return inverse_dict

def get_propabilities(text, features=features):
    '''Prints the probability for every language of a given text
    get_propabilities(text, features)
    '''
    results = clf.predict_proba(extract_features(text, features=features).reshape(1, -1))
    for result in zip(clf.classes_, results[0]):
        print('%-20s' % result[0] + '%7s %%' % str(round(float(100 * result[1]), 4)))

print(example_text)
get_propabilities(example_text + '\n')
print('\n')

example_text2 = 'Dies ist ein kurzer Beispielsatz.'
print(example_text2)
get_propabilities(example_text2 + '\n')

Danish                   0.0 %
Dutch                    0.0 %
English                100.0 %
Finnish                  0.0 %
French                   0.0 %
German                   0.0 %
Italian                  0.0 %
Portuguese               0.0 %
Spanish                  0.0 %
Swedish                  0.0 %

Dies ist ein kurzer Beispielsatz.
Danish                   0.0 %
Dutch                    0.0 %
English                  0.0 %
Finnish                  0.0 %
French                 18.1818 %
German                 72.7273 %
Italian                 9.0909 %
Portuguese               0.0 %
Spanish                  0.0 %
Swedish                  0.0 %
def iterlog(ln):
    retvals = []
    for n in ln:
        try:
            retvals.append(np.log(n))
        except:
            retvals.append(None)
    return retvals

X, Y, Z = 'e', 'g', 'h'
for X in ['t']:
    ax = plt.axes(projection='3d')
    ax.xy_viewLim.intervalx = [-3.5, -2]
    legend = []
    for lang in [l for l in df.groupby('lang') if l[0] in {'German', 'English', 'Finnish', 'French', 'Danish'}]:
        sample = lang[1].sample(4000)
        legend.append(lang[0])
        ax.scatter3D(iterlog(sample[X]), iterlog(sample[Y]), iterlog(sample[Z]))
    ax.set_title('log(10) of the Relativ Frequencies of "' + X.upper() + "', '" + Y.upper() + '" and "' + Z.upper() + '"\n\n')
    ax.set_xlabel(X.upper())
    ax.set_ylabel(Y.upper())
    ax.set_zlabel(Z.upper())
    plt.legend(legend)
    plt.show()
legend = []
fig = plt.figure(figsize=(15, 10))
plt.axes(yscale='log')
langs = defaultdict(list)
for lang in [l for l in df.groupby('lang') if l[0] in set(df['lang'])]:
    for feature in 'abcdefghijklmnopqrstuvwxyz':
        langs[lang[0]].append(lang[1][feature].mean())

for i in langs.items():
    legend.append(i[0])
    plt.plot([c for c in 'abcdefghijklmnopqrstuvwxyz'], i[1])

plt.title('Log. of relative Frequencies compared to the mean Frequency in all texts')
plt.xlabel('Letters')
plt.ylabel('(log(Lang. Frequencies / Mean Frequency)')
plt.legend(legend)
plt.grid()
plt.show()
legend = []
fig = plt.figure(figsize=(15, 10))
plt.axes(yscale='linear')
langs = defaultdict(list)
for lang in [l for l in df.groupby('lang') if l[0] in {'German', 'English', 'French', 'Spanish', 'Portuguese', 'Dutch', 'Swedish', 'Danish', 'Italian'}]:
    for feature in 'abcdefghijklmnopqrstuvwxyz':
        langs[lang[0]].append(lang[1][feature].mean())

colordict = {l[0]: l[1] for l in zip([lang for lang in langs],
             ['brown', 'tomato', 'orangered', 'green', 'red', 'forestgreen', 'limegreen', 'darkgreen', 'darkred'])}

for i in langs.items():
    legend.append(i[0])
    plt.plot([c for c in 'abcdefghijklmnopqrstuvwxyz'], i[1], color=colordict[i[0]])

plt.title('Log. of relative Frequencies compared to the mean Frequency in all texts')
plt.xlabel('Letters')
plt.ylabel('(log(Lang. Frequencies / Mean Frequency)')
plt.legend(legend)
plt.grid()
plt.show()
If there is a problem of any kind at the SMTP server level, the server reports an exception, which you will probably handle with SmtpException. While using SmtpException works well, its functionality is limited in determining exactly how the send failed. Consider these common transient failure conditions:
- The destination mailbox is currently in use. If this occurs, it usually only lasts a short amount of time.
- The mailbox is unavailable. This may mean that there is actually no such address on the server, but it may also mean that the search for the mailbox simply timed out.
- The transaction with the mailbox failed. The reasons for this can be somewhat mysterious; but like a stubborn web page, sometimes a second nudge is all it takes to do the trick.
What does this tell us? With a little more coding, we could gracefully recover from the majority of email send failures by detecting these particular conditions and attempting the send a second time.
Handling the Exceptions
The problem with mailbox errors is that they are not easily discovered by catching SmtpException. Fortunately, there is a more derived type called SmtpFailedRecipientException that the .NET Framework uses to wrap errors reported for an individual mailbox. This exception contains a StatusCode property of the SmtpStatusCode enum type that tells us the exact cause of the error.
Observe the following code:
using System.Net.Mail;
using System.Threading;
using System.Web.Configuration;
/// <summary>
/// Provides a method for sending email.
/// </summary>
public static class Email
{
/// <summary>
/// Constructs and sends an email message.
/// </summary>
/// <param name="fromName">The display name of the person the email is from.</param>
/// <param name="fromEmail">The email address of the person the email is from.</param>
/// <param name="subject">The subject of the email.</param>
/// <param name="body">The body of the email.</param>
public static void Send(string fromName, string fromEmail, string subject, string body)
{
MailMessage message = new MailMessage
{
IsBodyHtml = false,
From = new MailAddress(fromEmail, fromName),
Subject = subject,
Body = body
};
message.To.Add(WebConfigurationManager.AppSettings["mailToAddress"]);
Send(message);
}
private static void Send(MailMessage message)
{
SmtpClient client = new SmtpClient();
try
{
client.Send(message);
}
catch (SmtpFailedRecipientException ex)
{
SmtpStatusCode statusCode = ex.StatusCode;
if (statusCode == SmtpStatusCode.MailboxBusy ||
statusCode == SmtpStatusCode.MailboxUnavailable ||
statusCode == SmtpStatusCode.TransactionFailed)
{
// wait 5 seconds, try a second time
Thread.Sleep(5000);
client.Send(message);
}
else
{
throw;
}
}
finally
{
message.Dispose();
}
}
}
As you can see in the above code block, we’re catching the SmtpFailedRecipientException that is thrown as a result of the mailbox error, and examining its StatusCode before deciding whether to retry the send.
Hope this helps… Happy Coding! | https://blogs.msdn.microsoft.com/jrspinella/2011/05/31/handling-exceptions-when-sending-mail-via-system-net-mail/ | CC-MAIN-2017-26 | refinedweb | 414 | 51.44 |
Functional plumbing for Python
Complete documentation in full color_.
``pipetools`` is a python package that enables function composition similar to using Unix pipes.
Inspired by Pipe_ and Околомонадное_ (whatever that means...)
.. _Pipe: .. _Околомонадное:
It allows piping of arbitrary functions and comes with a few handy shortcuts.
Source is on github_.
.. _github:
Say you want to create a list of python files in a given directory, ordered by filename length, as a string, each file on one line and also with line numbers:
.. code-block:: pycon
>>> print pyfiles_by_length('../pipetools')
0. main.py
1. utils.py
2. __init__.py
3. ds_builder.py
So you might write it like this:
.. code-block:: python
def pyfiles_by_length(directory):
    return '\n'.join('{0}. {1}'.format(*x) for x in enumerate(sorted(
        [f for f in os.listdir(directory) if f.endswith('.py')],
        key=len)))
Or, if you're a mad scientist, you would probably do it like this:
.. code-block:: python

Pipetools, however, gives you yet another possibility!
.. code-block:: python
The Right Way™:
.. code-block:: console
$ pip install pipetools
Uh, what's that?_
.. _the-pipe:
The pipe
""""""""

The ``pipe`` object can be used to pipe functions together to form new functions, and it works like this:
.. code-block:: python
from pipetools import pipe
f = pipe | a | b | c
f(x) == c(b(a(x)))
A real example, sum of odd numbers from 0 to x:
.. code-block:: python
from functools import partial
from pipetools import pipe
odd_sum = pipe | range | partial(filter, lambda x: x % 2) | sum
odd_sum(10) # -> 25
Note that the chain up to the ``sum`` is lazy.
Automatic partial application in the pipe
"""""""""""""""""""""""""""""""""""""""""
As partial application is often useful when piping things together, it is done automatically when the pipe encounters a tuple, so this produces the same result as the previous example:
.. code-block:: python
odd_sum = pipe | range | (filter, lambda x: x % 2) | sum
As of ``0.1.9``, this is even more powerful; see X-partial_.
Built-in tools
""""""""""""""
Pipetools contain a set of pipe-utils that solve some common tasks. For example, there is a shortcut for the filter class from our example, called where()_:
.. code-block:: python
from pipetools import pipe, where
odd_sum = pipe | range | where(lambda x: x % 2) | sum
Well that might be a bit more readable, but not really a huge improvement, but wait!
If a pipe-util is used as the first or second item in the pipe (which happens quite often), the ``pipe`` at the beginning can be omitted:
.. code-block:: python
odd_sum = range | where(lambda x: x % 2) | sum
.. code-block:: python
from pipetools import where, X
odd_sum = range | where(X % 2) | sum
How 'bout that.
Read more about the X object and its limitations._
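For intuition, the X object essentially builds one-argument functions out of the operations performed on it. A toy sketch of that idea (Python 3; this is not pipetools' actual implementation, and the names here are made up):

```python
class _X:
    """Toy stand-in for pipetools' X: each operation returns a one-argument function."""
    def __mod__(self, other):
        return lambda x: x % other
    def endswith(self, suffix):
        return lambda x: x.endswith(suffix)

X = _X()

is_odd = X % 2                        # builds: lambda x: x % 2
print(is_odd(7))                      # 1 (truthy, so where(X % 2) keeps odd numbers)
print(X.endswith('.py')('main.py'))   # True
```

The real X supports many more operations; this sketch only shows why expressions like ``X % 2`` can be passed where a function is expected.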
.. _auto-string-formatting:
Automatic string formatting
"""""""""""""""""""""""""""
Since it doesn't make sense to compose functions with strings, when a pipe (or a pipe-util) encounters a string, it attempts to use it for `(advanced) formatting`_:
.. code-block:: pycon
>>> countdown = pipe | (range, 1) | reversed | foreach('{0}...') | ' '.join | '{0} boom'
>>> countdown(5)
u'4... 3... 2... 1... boom'
.. _(advanced) formatting:
Feeding the pipe
""""""""""""""""
Sometimes it's useful to create a one-off pipe and immediately run some input through it. And since this is somewhat awkward (and not very readable, especially when the pipe spans multiple lines):
.. code-block:: python
result = (pipe | foo | bar | boo)(some_input)
It can also be done using the ``>`` operator:
.. code-block:: python
result = some_input > pipe | foo | bar | boo
.. note:: Note that the above method of input won't work if the input object defines ``__gt__`` for any object - including the pipe. This can be the case for example with some objects from math libraries such as NumPy. If you experience strange results, try falling back to the standard way of passing input into a pipe.
See the full documentation_.
Python mongodb tutorial with examples
You should have a MongoDB server installed and started; if you haven't yet, go to MongoDB.
Install MongoDB client driver for Python
I suggest using the pip install command, like this:
pip install pymongo
Connect to MongoDB server with pymongo
For the default network port and a local server instance, just a single line of code connects to MongoDB.
import pymongo
import time
from pprint import pprint
from pymongo import MongoClient

client = MongoClient()
Select database and collection
In MongoDB's flexible schema, databases, collections and fields are created on the fly; they come into existence when first referenced.
db = client.test
collection = db.post
Insert data into MongoDB database
A collection can store any kind of JSON-like document, but it is best practice to store the same kind of data in each collection.
collection.save({
    'title': 'article title',
    'date': int(time.time()),
    'content': 'article content'
})

collection.save({
    'title': 'article title',
    'date': int(time.time()),
    'content': 'article content',
    'author': 'name'
})
Notice how the time is stored: in Python, time.time() returns a floating-point number representing seconds. Wrapping it in int() simply discards the fractional part, which is all we want here.
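For example (plain Python, no MongoDB required):

```python
import time

t = time.time()
print(t)       # e.g. 1700000000.123456 -- float seconds since the epoch
print(int(t))  # same timestamp with the fractional part discarded
```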
Modify the data
Use the $set operator to modify a document.
collection.update({'title' : 'article title'},{'$set' : {'author' : 'James'}})
Using find to query data
Query by field value. Find documents with author 'James'.
result = collection.find({"author": "James"})
for x in result:
    pprint(x)
Find a range with conditions. The query below finds documents older than three hours.
result = collection.find({"date" : { '$lt' : int(time.time()) - ( 3 * 60 * 60) }})
Find with a regular expression. The query below finds documents whose title starts with 'm'.
result = collection.find({ "title" : { '$regex' : '^m' }})
Sort results by date.
result = collection.find({'author' : 'James'}).sort("date", pymongo.DESCENDING)
Delete document
Delete uses the same query syntax as find; the method is remove.
collection.remove({"title" : { '$regex' : '^m' }}) | http://makble.com/python-mongodb-tutorial-with-examples | CC-MAIN-2022-40 | refinedweb | 296 | 50.63 |
Opened 6 years ago
Closed 6 years ago
Last modified 6 years ago
#17414 closed Bug (fixed)
intcomma raises ZeroDivisionError for locales whose NUMBER_GROUPING is unspecified
Description
When using locales that do not specify NUMBER_GROUPING (like 'fa', 'ja', 'th'), the template filter intcomma (from humanize) raises ZeroDivisionError.
Version: Django 1.4 pre-alpha SVN-17202
How to reproduce:
- Make sure that USE_L10N = True in settings.py
- Try this code in a shell:
from django.utils import translation
translation.activate('th')
t = "{% load humanize %}{{ 100|intcomma }}"
from django.template import Template, Context
Template(t).render(Context())
- You can change 'th' to 'ja', 'mn', 'ro' and many others.
Possible cause:
The issue is not directly related to contrib.humanize's intcomma. The exception occurs in the function format in django.utils.numberformat when Django tries to divide by the grouping parameter. This parameter is assigned in the function number_format in django.utils.formats from a call to get_format('NUMBER_GROUPING'), which returns 0 for many locales (e.g., th, ja, km, zh) that have no definition of NUMBER_GROUPING in their format files. Therefore, you get ZeroDivisionError.
How to fix:
I am not sure where to put the fixing code. These are the options:
- Use something like 3 as a default for NUMBER_GROUPING.
- Change the function format in django.utils.numberformat to make sure that grouping is non-zero (and if it is zero, use something like 3 as a default).
Since I am not sure where to put the fix, I have not tried to produce the patch. I can try to do that for the 2nd option if it makes sense.
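As a standalone illustration of the second option, here is a hypothetical helper (not the actual Django code or the committed patch) that groups digits while guarding against a zero grouping value:

```python
def group_digits(digits, grouping, thousand_sep=','):
    # Locales without NUMBER_GROUPING yield grouping == 0; returning the
    # digits untouched avoids the ZeroDivisionError described above.
    if not grouping:
        return digits
    groups = []
    while digits:
        groups.append(digits[-grouping:])
        digits = digits[:-grouping]
    return thousand_sep.join(reversed(groups))

print(group_digits('1234567', 3))  # 1,234,567
print(group_digits('1234567', 0))  # 1234567
```

With the guard in place, a zero grouping simply disables grouping instead of raising an exception.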
Attachments (1)
Change History (4)
comment:1 Changed 6 years ago by
comment:2 Changed 6 years ago by
Changed 6 years ago by
Fix possible ZeroDivisionError in numberformat
In [17267]: | https://code.djangoproject.com/ticket/17414 | CC-MAIN-2017-39 | refinedweb | 291 | 59.3 |
ASP.NET 2.0 Provider Toolkit
Getting Started with Visual Web Developer 2005 Express Edition.
Hello
I’ve been reading the article "Gridview Example:"
I went thru the code and see something new that I don’t get:
public class ProductDAL
{
public static List<Product> GetProducts()
{
Can you please tell me what "List<Product>" is, and why do we need to use it?
thank you
Steven:
That’s a generic, a new feature of .Net 2.0.
if you simply do a search for c# generics (or vb.net, they aren’t language specific), you’ll get a lot of hits.
I like to think of things in simple terms. Basically, generics let you build generic classes 😉 Ok, simple but not useful.
There’s a class in the framework called List<T> and when you declare a new type of List<T> you substitude T with whatever class you want. like List<int> or List<string> or List<Pruduct>. For that instance, where ever the class <T> exists, it’ll get replaced with whatever class you pass in.
so if you have a function like
public void Add(T objectToAdd)
{
    someCollection.Add(objectToAdd);
}
it’ll actually turn, at compile time, into
public void Add(string objectToAdd) // or whatever type you pass in for <T>
The problem with not having generics is that the only two solutions are to create a class that accepts everything (object) or a custom collection for each possible type.
With the arraylist, which accepts an object, you can do:
ArrayList arr = new ArrayList();
arr.Add("asds");
arr.Add(231);
arr.Add(new DataSet());
which is powerful, but oftentimes not needed, and can result in (a) performance issues and (b) runtime errors
With a generic you can do:
List<string> strings = new List<string>();
strings.Add("hello");
strings.Add(1232);
but it won’t compile, because the specific instance only accepts strings. It essentially moved potential run-time errors into compile-time erros, which are 1000x better.
Do a search and you’ll get a lot more details.
Hi, it is the ‘new’ feature in C# 2.0 called generics. A List is a collection, can store pure objects or if you want to maintain strong typing you make sure List collection will take only Product type objects. And it is also a prerequisite when working with Object DataSources etc.
e.g., the GetProducts() method will return a collection of Product-typed objects, so there is no casting needed when you consume it, unlike in C# 1.0 times, where a collection typically stored only object-typed values.
"List<Product>" is a representation stating that the list is strongly typed (generic). This is part of the whole new generics feature (yep, "generics" doesn't sound quite right, but it is what it is). You can read up more about generics on MSDN, but basically put: it creates strongly typed data so that, at compile time, you are notified when you try to assign data that does not match the type of the variable you are trying to populate, thus resulting in fewer errors when an application is actually deployed.
Downloads a python project from github and allows importing it from anywhere. Very useful when the repo is not a package.
Sometimes it is useful to be able to import a project from github. If the project is configured as a python package, it could be installed with pip and git. But still, a lot of projects are not using setuptools, which makes it difficult to use them from python easily. Some people could be using git submodules, but that also requires adding an __init__.py file in the project root.
With packyou it is possible to import any pure python project from github just with a simple import statement like:
.. code:: python
from packyou.github.username.repository_name import external_github_module
::
pip install packyou
Suppose you want to use something from the sqlmap project. Since the sqlmap project is not yet a python package, you can import anything from sqlmap like this:
.. code:: python
from packyou.github.sqlmapproject.sqlmap.lib.utils.hash import mysql_passwd

mysql_passwd(password='testpass', uppercase=True)
# '*00E247AC5F9AF26AE0194B41E1E769DEE1429A29'
There are many ways to gain control over a compromised system. A common way is to gain interactive shell access, which enables you to try to gain full control of the operating system. However, most basic firewalls block direct remote connections. One of the methods to bypass this is to use reverse shells.
A reverse shell is a program that executes local cmd.exe (for Windows) or bash/zsh (for Unix-like systems) commands and sends the output to a remote machine. With a reverse shell, the target machine initiates the connection to the attacker's machine, while the attacker's machine listens for incoming connections on a specified port; this bypasses most firewall restrictions.
Do you want to implement such code in Python? Let's do it!
The basic idea is that the attacker's machine will keep listening for connections; once a client (or target machine) connects, the server sends shell commands to the target machine and expects their output.
Related: How to Use Hash Algorithms in Python using hashlib.
First, let's start off by the server (attacker's code):
import socket

SERVER_HOST = "0.0.0.0"
SERVER_PORT = 5003
# send 1024 (1kb) a time (as buffer size)
BUFFER_SIZE = 1024

# create a socket object
s = socket.socket()
We then specified some variables and initiated the TCP socket. Notice I used 5003 as the TCP port, feel free to choose any port above 1024, just make sure to use it on both the server's and client's code.
Now let's bind that socket we just created to our IP address and port:
# bind the socket to all IP addresses of this host
s.bind((SERVER_HOST, SERVER_PORT))
Listening for connections:
s.listen(5)
print(f"Listening as {SERVER_HOST}:{SERVER_PORT} ...")
If any client attempts to connect to the server, we need to accept it:
# accept any connections attempted
client_socket, client_address = s.accept()
print(f"{client_address[0]}:{client_address[1]} Connected!")
The accept() function waits for an incoming connection and returns a new socket representing the connection (client_socket), along with the address (IP and port) of the client.
The code below will be executed only once a client is connected to the server. Let's try to send a welcome message, just to show how you can send messages over sockets:
# just sending a message, for demonstration purposes
message = "Hello and Welcome".encode()
client_socket.send(message)
Note that we need to encode the message to bytes before sending, and we must send the message using the client_socket and not the server socket "s".
Now let's start our main loop, which is sending shell commands and retrieving the results and printing them:
while True:
    # get the command from prompt
    command = input("Enter the command you wanna execute:")
    # send the command to the client
    client_socket.send(command.encode())
    if command.lower() == "exit":
        # if the command is exit, just break out of the loop
        break
    # retrieve command results
    results = client_socket.recv(BUFFER_SIZE).decode()
    # print them
    print(results)

# close connection to the client
client_socket.close()
# close server connection
s.close()
All we are doing here is prompting the attacker for the desired command; we encode and send the command to the client, and after that we receive the output of the command executed on the client (we'll see how in the client's code).
If the command is "exit", just exit out of the loop and close the connections.
Let's see the code of the client now, open up a new file and write:
import socket
import subprocess

SERVER_HOST = "192.168.1.103"
SERVER_PORT = 5003
BUFFER_SIZE = 1024
For demonstration purposes, I will connect to my local machine on the same network, whose IP address is "192.168.1.103". You can test this on the same machine by using localhost or 127.0.0.1 as the host on both sides.
Let's create the socket and connect to the server:
# create the socket object
s = socket.socket()
# connect to the server
s.connect((SERVER_HOST, SERVER_PORT))
Remember, the server sends a greeting message after the connection is established, let's receive it and print it:
# receive the greeting message
message = s.recv(BUFFER_SIZE).decode()
print("Server:", message)
Moving on to the main loop: we first receive the command from the server, execute it and send the result back. Here is the code for that:
while True:
    # receive the command from the server
    command = s.recv(BUFFER_SIZE).decode()
    if command.lower() == "exit":
        # if the command is exit, just break out of the loop
        break
    # execute the command and retrieve the results
    output = subprocess.getoutput(command)
    # send the results back to the server
    s.send(output.encode())

# close client connection
s.close()
subprocess' getoutput(command) function returns the output of executing the command in the shell, just as we desire!
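For example, getoutput() runs a command through the shell and returns whatever it prints:

```python
import subprocess

# getoutput runs the command through the shell and captures its output as a string
out = subprocess.getoutput("echo hello")
print(out)  # hello
```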
Okay, we're done writing the code for both sides. I'm gonna run the server code on a Linux machine and the client code on a Windows machine.
Note that you need to run the server before the client.
Here is a screenshot from when I tried to execute the "dir" command (which is a Windows command) remotely from my Linux box after running the client:
Awesome, isn't it? You can execute any shell command available in that operating system.
Here are some ideas to extend this code:
To conclude, a reverse shell isn't generally meant to be malicious code; it can be used for legitimate purposes. For example, you can use it to manage your servers remotely.
DISCLAIMER: We are not responsible for any misuse of the code provided in this tutorial, use it on your own responsibility.
Happy Coding ♥
I have a strange thing going on with my login scripts. The Z: drive is mapped to a personal share on the various file servers. In Active Directory the profile setting is set to connect Z: to the file path for the individual's home share.
\\file server\users\username
The login script uses a simple
net use z: /home
The problem is many of the users end up with the Z: drive being mapped to the parent user folder and not the individual's folder. All the other folders give an "access denied" error, so that is at least good.
What is even stranger is that it actually works for some people, and I cannot figure out why there is a discrepancy. This is a really simple setup and script; there should be no room for problems. I have found this same problem when mapping to a DFS namespace: it will map to the parent folder, not the individual's.
Anyone with any ideas?
Oct 9, 2013 at 14:24 UTC
Setting the mapping in the user's profile should automatically map the drive to the correct folder, you shouldn't need the login script mapping, that may be overriding it.
5 Replies
Oct 9, 2013 at 14:30 UTC
Should that not be \\fileserver\users\%username%
Oct 9, 2013 at 14:38 UTC
If you have AD, you can set this home directory in the profile setting and it will be mapped correctly.
Oct 9, 2013 at 14:46 UTC
Agree with Michael8545: set the home directory via ADUC. This will allow the folder to be created automatically when you create new users, and proper permissions should be set.
Oct 9, 2013 at 15:10 UTC
You guys were right. Those were the settings when I got on staff, so I never changed them. After a few tests it looks like there was some sort of weird conflict.
Thanks everyone!
29 August 2008 19:26 [Source: ICIS news]
HOUSTON (ICIS news)--DuPont has named Carl Lukach as president of DuPont East Asia and Karen Fletcher as vice president of Investor Relations, the company said on Friday.
The appointments are both effective on 1 January 2009.
Lukach, 53, currently vice president of Investor Relations, will lead the company's activities in East Asia.
Lukach has served in several business leadership positions during his 27-year career, including six years in Asia Pacific from 1989-95.
Fletcher, 48, currently director of Investor Relations, has held a range of business leadership roles since joining the company in 1982, including in the nonwovens and titanium technologies businesses.
She has been in her current role since 2007.
immutable: dataModels[d].immutalbe || false,:smile:
leveldown is the raw LevelDB backend with C++ compiling.
levelup with indexedDB based storage, so it works in the browser.
leveldown and rocksdb depend on native compilation and only work in NodeJS
undefined values being included in a row? I've read through and can't see anything of use.
Cannot find name SQLResultSet when I import
import { getMode } from '@nano-sql/adapter-sqlite-cordova';. All of my packages are up to date (cordova and nano-sql, I mean). Is there a typing missing? When I click on that error it jumps into the index.d.ts of the specific adapter, and on line 6 it is marked red.
Ok, so what is the correct way now? My scenario: I created a cordova app with Vue, the vue cordova plugin and vuetify. I test my App on the Desktop browser and when I'm finished with everything I test it on my phone. For this I use NanoSQL to have a TEMP database which looks similar to the SQLite DB file that I have for my phone. On Desktop, I want to see the TEMP Database and on my phone, I want to use the real SQLite DB. So as you said, I just need to use createDatabase for Desktop with dummy tables and for the real world App I can use nSQL().connect with empty tables, or is this really not possible?
So for this, I need to have my queries doubled? One for Desktop to test my app and one for the Real world? @ClickSimply
createDatabase call so that NanoSQL knows the shape of the data in the tables and what tables are on device. It’s not SQLite where the schema is included in the database and follows it everywhere. In NanoSQL the schema is provided by you in the createDatabase call.
Ok, I will test this. Thx. I only need the "dummy tables" for the browser to test. Because at the end it will be an app for Android and for this I have a dedicated sqlite file (my-app.db). And I just needed the tables for testing in the browser. It will never run there. But on my mobile phone it also showed me the dummy tables, not the correct ones from my db file. Neither for createDatabase nor connect. This is why I was wondering.
But as I said, I will test what you told me and would like to come back to you to clarify when I was wrong. Thx :)
.map(o => o.filedata)on the results to get this outcome. | https://gitter.im/nano-sql/community | CC-MAIN-2020-45 | refinedweb | 427 | 75.1 |
ORIGINAL DRAFT
It’s hard not to be sensitive to the stock market this year. It seems appropriate, therefore, that we take the time to implement a simple chart component that handles stock history information. Traditional charts show daily high and low stock prices, along with the open and close prices. A second chart shows the volume of shares traded on each day. We’ll provide both these charts and use a table model that handles stock information you can freely download from the Yahoo financial web site.
Figure 1: JStock charts showing both price and volume history for IBM over a one year period.
Let’s briefly cover how you can get compatible stock data. If you visit, you’ll see something that looks like Figure 2. From there, you can pick a stock, IBM for example, and get the Daily quotes. Don’t forget to select a suitable range of dates, typically at least a year of daily quotes.
Figure 2: Historical Quotes page on Yahoo Finance.
When you press the "Get Historical Data" button, you’ll get an HTML page with data represented in a tabular format. At the bottom of that page is a link called "Download Spreadsheet", which lets you save the information in a CSV (Comma Separated Value) file. This is the format we’ll use in our model.
By convention, I’ve named CSV files with the ticker symbol and a ".CSV" extension. The test program for JStock makes this assumption, so you’ll want to do the same thing, at least to make sure this works, putting data files in the same directory as the class files to run JStockTest. Naming them by the ticker symbol makes it easy to ensure you have unique and suitably descriptive file names.
The comma-separated files include date, open, high, low, close and volume fields. We’ll implement a TableModel that uses the same column order and declare some constants (static final variables) that allow us to retrieve these values more easily. Because we use a TableModel, we can view this data as either a JTable or in the form of stock charts with JStock.

Figure 3: JStock classes. Both StockPriceChart and StockVolumeChart extend StockChart, which provides most of the basic chart functionality.
The classes in the JStock project are diagrammed in Figure 3. The JStock class provides a view that includes both the Price and Volume charts, both of which display charts based on the StockModel provided. The StockModel class provides a parseFile method that lets you read CSV files at your discretion. The StockChart class is abstract and provides most of the basic chart functionality inherited by StockPriceChart and StockVolumeChart. The later two classes provide more specific functionality.
We only have room to look at a few listings, but you can find the all the classes online at. The JStock class itself merely places StockPriceChart and StockVolumeChart instances in a BorderLayout, so we can safely skip over it without loosing much. The StockModel is also quite simple, storing each line of data as a set of values in a list, each of which is held in a list of rows.
The only two methods of interest are parseFile and parseLine, which read the CSV file into ArrayList instances. The TableModel methods are relatively straight forward. You’ll find it easy to decipher the code, so we won’t cover them in detail. We’ll put our focus on the chart classes themselves; StockChart, StockPriceChart and StockVolumeChart. That’s where most of the interesting work takes place.
Listing 1 shows the code for StockChart. At the top of the class are a few declarations; a static array of month labels, along with instance variables for the inside Insets, the StockModel, and a few values we’ll need along the way for hi, lo data values, min, max chart values, as well as the chart data range, number of divisions on the vertical axis and how large each unit is. These values are initialized in the constructor and used to draw chart elements.
Each chart is divided into five areas. The central area with grid line and plotted values, the top label, bottom X axis and Y axis labels on both the left and right. One tricky operation is figuring out how to label the Y axis values. To do this, we initialize a few variables. To set the hi and lo value we call calculateMinMax, which is an abstract method implemented by subclasses.
The range value is initialized by calling the calculateRange method, which figures out the widest range between the hi and lo values. You’ll notice that we use the floor of the low value and ceiling of the high value to round down and up, respectively. The unit value is the size of each range of data between grid lines. We initialize the variable by calling calculateUnit, which returns an integer based on the size of the range. Since values may range from volumes in the millions or prices in the tens or hundreds, we handle a broad range of possibilities. Note that calculateUnit uses a local range value, a double, to avoid rounding errors.
With the range and unit values all set, the divs (divisions) value is easy to calculate. We want to know how many divisions there are on the chart. We add two to account for values below or above the range. The min value is the floor of the lowest value divided by the unit size and remultiplied to units. In other words, we round down to the best lowest unit value for the bottom of the chart. The max value is then the number of divisions multiplied by the unit size, with the min value added to push values up from the bottom of the chart.
One of the operations we’ll do often is normalize the plotted values to fit within the display range. The normalize method makes this easy, calculating the result for a given chartable value, accounting for the height of the display area. We provide accessors for the hi and lo values, which are either the price or volume minimum and maximum values. These are reported by the label at the top of the chart.
The next set of methods abstract access to the model to make it possible to get values without long calls or casting. The getValue method is called by the others and implicitly addresses the current model. The getDate, getOpen, getHigh, getLow, getClose and getVolume methods each return appropriate values. A Date object for getDate, a long value for getVolume and a double for the rest.
The drawText method lets us draw text in a given rectangle and justify it on the left, right or centered. We use the SwingConstants interface to set the state for justification. The next pair of methods handle the horizontal axis. The names may be a little confusing because the drawHorzGridLines draws lines for the horizontal grid rather than horizontal lines. It all makes sense if you think about it.
Horizontal lines and labels are plotted at the first of the month intervals, so the code walks the Date field and watches for month changes using the month and next variables. We can’t count on the first of the month being a trading/weekday, so we need to account for that. The month label is fetched from the months array. The drawHorzGridLines adds two lines at the top and bottom of the grid to frame the region. Otherwise, it’s all pretty straight forward.
Drawing vertical grid lines and labels depends on the range, min, max, divs and unit values calculated in the constructor. The vertical axis lines are drawn at each unit interval, scaled to the height of the drawing area. The same is true of the labels, though we delegate formatting to an abstract formatLabel method implemented by subclasses. Labels are justified, based on whether they’re on the right or left of the chart.
You’ll see all the abstract method declarations a the bottom of Listing 1. The only one we didn’t mention is the drawChartValues method, which actually plots the values in the drawing area at the center of the chart, surrounded by labels on all four sides. This is where we use the Insets set in the constructor. The inside variable is used to avoid conflicts with the standard insets used by the border classes. Because the StockChart subclasses are only used in the JStock class, I’ve made no attempt to support standard borders.
Listing 2 shows the StockPriceChart class. We subclass StockChart and implement the three mandatory abstract methods, but the class is otherwise uncomplicated. We use a DecimalFormat instance to format the high and low values displayed in the top label. The constructor initializes the insets and sets a preferred size. The preferred size is three times the number of data points because we need three pixels to paint the ‘candlesticks’ in the price chart. We add the inset values to account for them in the preferred size.
The calculateMinMax method is simple. It captures the hi and lo values by checking for smaller an larger values, initializing to the highest and lowest possible values to ensure we pick up the differences. After the loop is executed through each data point, the hi and lo variables are properly set.
The formatLabel method is used by the drawVertAxisLabels method in the superclass and does little in the StockPriceChart other than return a string with the price value. Note that the value is rounded to an integer by the superclass, so we don’t need to handle any decimal points in these labels. The drawTopLabel method uses a StringBuffer to format the label at the top of the display area, which includes the hi and lo values, formatted by our DecimalFormat instance.
The paintComponent method delegates most of the drawing to the methods declared in the abstract superclass. I’ve left paintComponent in the subclasses to support more customization, though the two chart subclasses have pretty much identical code in this method. The order in which we draw the elements is largely arbitrary because there is very little overlap to watch for, other than a need to draw the grid lines before the data.
The drawChartValues method does the actual plotting, accounting the normalized values that scale to the drawing area. The ‘candlestick’ regions include a thin high-low line in the middle and a thicker open-close line in the middle. This lets you see the stock’s price range for a given day at a glance. Otherwise, there’s not much to it.
Listing 3 shows the code for StockVolumeChart, which is similar to Listing 2 in structure and content. The DecimalFormat is configured to handle comma-delimited volume values, which tend to be in the millions and are easier to read this way. The same initialization takes place in the constructor, with the preferred height set to a smaller value. The calculateMinMax calls getVolume instead of getHigh and getLow.
The formatLabel method handles the axis labels and represents values in millions, using an ‘M’ suffix. The drawTopLabel method shows hi and lo values in the same way as it’s twin in the StockPriceChart class, handling Volume values instead of Price. I won’t cover paintComponent. As I mentioned, this is virtually identical to the StockPriceChart implementation. The drawChartValues method handles plotting volume as solid lines from the bottom. These are two pixels wide, with a blank pixel to the right.
There’s nothing overly complicated in JStock, though this implementation illustrates some useful techniques, including abstraction, good separation of responsibility and effective use of a data model. If you follow the stock market, you might feel inclined to plot your favorite stocks using this component, or a variation on the same theme. Sophisticated traders are used to much more comprehensive analysis tools, but sometimes a simple component is all your users really need.
Listing 1
import java.awt.*; import java.util.*; import javax.swing.*; public abstract class StockChart extends JPanel implements SwingConstants { protected static final String[] months = { "J", "F", "M", "A", "M", "J", "J", "A", "S", "O", "N", "D" }; protected Insets inside; protected StockModel model; protected double hi, lo; protected double min, max; protected int range, divs, unit; public StockChart(StockModel model) { this.model = model; calculateMinMax(); range = calculateRange(); unit = calculateUnit(); divs = (int)(range / unit) + 2; min = (int)Math.floor(lo / unit) * unit; max = divs * unit + min; } protected int calculateRange() { return (int)(Math.ceil(hi) - Math.floor(lo)); } protected int calculateUnit() { double range = calculateRange(); if (range > 10000000) return 10000000; if (range > 1000000) return 1000000; if (range > 100000) return 100000; if (range > 10000) return 10000; if (range > 1000) return 1000; if (range > 100) return 100; if (range > 10) return 10; return 1; } protected int normalize(double value, double height) { double range = max - min; double factor = ((value - min) / range); return (int)(height - factor * height); } public double getHigh() { return hi; } public double getLow() { return lo; } protected Object getValue(int row, int col) { return model.getValueAt(row, col); } public Date getDate(int row) { return (Date)getValue(row, StockModel.DATE); } public double getOpen(int row) { Object val = getValue(row, StockModel.OPEN); return ((Double)val).doubleValue(); } public double getHigh(int row) { Object val = getValue(row, StockModel.HIGH); return ((Double)val).doubleValue(); } public double getLow(int row) { Object val = getValue(row, StockModel.LOW); return ((Double)val).doubleValue(); } public double getClose(int row) { Object val = getValue(row, StockModel.CLOSE); return ((Double)val).doubleValue(); } public long getVolume(int row) { Object val = getValue(row, StockModel.VOLUME); return ((Long)val).longValue(); } protected void 
drawText( Graphics g, int x, int y, int w, int h, String text, int justify) { FontMetrics metrics = g.getFontMetrics(); int width = metrics.stringWidth(text); if (justify == LEFT) x += 3; if (justify == CENTER) x += (w - width) / 2; if (justify == RIGHT) x += w - width - 3; y += (h / 2) - (metrics.getHeight() / 2) + metrics.getAscent(); g.drawString(text, x, y); } public void drawHorzGridLines(; g.drawLine(x + i * 3, y, x + i * 3, y + h); } } g.drawLine(x, y, x, y + h); g.drawLine(x + w, y, x + w, y + h); } public void drawHorzAxisLabels(; String label = months[month]; if (month == 0) { label = "" + calendar.get(Calendar.YEAR); label = label.substring(label.length() - 2); } drawText(g, x + i * 3 - 15, y, 30, h, label, CENTER); } } } public void drawVertGridLines( Graphics g, int x, int y, int w, int h) { int incr = (int)(h / divs); for(int i = 0; i < h; i += incr) { g.drawLine(x, y + h - i, x + w, y + h - i); } g.drawLine(x, y, x + w, y); g.drawLine(x, y + h, x + w, y + h); } protected void drawVertAxisLabels( Graphics g, int x, int y, int w, int h, int justify) { int incr = (int)(h / divs); int count = 0; for(int i = 0; i < h; i += incr) { drawText(g, x, y + h - i - 10, w, 20, formatLabel((int)(count * unit + min)), justify); count++; } } protected abstract String formatLabel(int value); protected abstract void calculateMinMax(); protected abstract void drawChartValues( Graphics g, int x, int y, int w, int h); }
Listing 2
import java.awt.*; import java.text.*; import javax.swing.*; public class StockPriceChart extends StockChart { protected static final DecimalFormat form = new DecimalFormat("$###.##"); public StockPriceChart(StockModel model) { super(model); setBackground(Color.white); int count = model.getRowCount(); inside = new Insets(20, 30, 20, 30); setPreferredSize(new Dimension( count * 3 + (inside.left + inside.right), 280)); } protected void calculateMinMax() { hi = Double.MIN_VALUE; lo = Double.MAX_VALUE; int count = model.getRowCount(); for(int i = 0; i < count; i++) { double high = getHigh(i); double low = getLow(i); if (high > hi) hi = high; if (low < lo) lo = low; } } protected String formatLabel(int value) { return "" + value; } protected void drawTopLabel( Graphics g, int x, int y, int w, int h) { StringBuffer buffer = new StringBuffer(); buffer.append("Price (");(0, 0,) { int count = model.getRowCount(); for(int i = count - 1; i >= 0; i--) { int xx = x + i * 3; int y1 = y + normalize(getHigh(i), h); int y2 = y + normalize(getLow(i), h); int y3 = y + normalize(getOpen(i), h); int y4 = y + normalize(getClose(i), h); g.setColor(Color.blue); g.drawLine(xx + 1, y1, xx + 1, y2); g.setColor(Color.blue); g.drawLine(xx, y3, xx, y4); g.drawLine(xx + 1, y3, xx + 1, y4); g.drawLine(xx + 2, y3, xx + 2, y4); } } }
Listing 3
import java.awt.*; import java.text.*; import javax.swing.*; public class StockVolumeChart extends StockChart { protected static final DecimalFormat form = new DecimalFormat("#,###,###,###"); public StockVolumeChart(StockModel model) { super(model); setBackground(Color.white); int count = model.getRowCount(); inside = new Insets(20, 30, 20, 30); setPreferredSize(new Dimension( count * 3 + (inside.left + inside.right), 120)); } protected void calculateMinMax() { hi = Long.MIN_VALUE; lo = Long.MAX_VALUE; int count = model.getRowCount(); for(int i = count - 1; i >= 0; i--) { long vol = getVolume(i); if (vol > hi) hi = vol; if (vol < lo) lo = vol; } } protected String formatLabel(int value) { return "" + (value / 1000000) + "M"; } protected void drawTopLabel( Graphics g, int x, int y, int w, int h) { StringBuffer buffer = new StringBuffer(); buffer.append("Volume (");(x, y,) { g.setColor(Color.blue); int count = model.getRowCount(); for(int i = count - 1; i >= 0; i--) { int xx = x + i * 3; int yy = y + normalize(getVolume(i), h); g.drawLine(xx, y + h, xx, yy); g.drawLine(xx + 1, y + h, xx + 1, yy); } } } | http://www.claudeduguay.com/articles/stock/JStockArticle.html | CC-MAIN-2019-30 | refinedweb | 2,899 | 63.9 |
what do this error mean?
Code:error C2084: function 'int __thiscall dice::numSides(void)' already has a body
This is a discussion on Help me please within the C++ Programming forums, part of the General Programming Boards category; what do this error mean? Code: error C2084: function 'int __thiscall dice::numSides(void)' already has a body...
what do this error mean?
Code:error C2084: function 'int __thiscall dice::numSides(void)' already has a body
You probably have two functin declarations or definitions of the numsides() in your dice class
You probably declare the same function in two different places. Post your code.
ok, my teacher had us split the dice.h into dice.h and dice.cpp, and include the dice.cpp in with the dice.h, I think that might be my problem, is there a way make this work without merging them back together
Your function schould be declared in dice.h like this
void DoSomething();
and defined in your dice.cpp
void DoSomething()
{
bla bla;
}
include your dice.h in the top of your dice.cpp like this
#include "dice.h"
yeah, I did that
How about if you help us to help you and post some code
dice.h
dice.cppdice.cppCode:#ifndef _DICE_H #define _DICE_H class dice{ public: dice(int sides); int roll(); int numSides(); int numRolls(); private: int mySides; int myRollCount; }; #endif
Code://dice.cpp #include <stdlib.h> #include <time.h> #include <limits.h> #include "dice.h" dice::dice (int sides) { long time = clock(); srand(time % INT_MAX); mySides = sides; } int dice::roll() { return (rand() % mySides) + 1; } int dice::numSides() { return mySides; } int dice::numRolls() { return myRollCount; }
I got it going, thanks for the help | http://cboard.cprogramming.com/cplusplus-programming/15130-help-me-please.html | CC-MAIN-2014-23 | refinedweb | 280 | 77.53 |
Create line chart with Visual Studio 2015 and Qt 5.9
Hello,
I am trying to create a line chart in Visual Studio 2015 by using Qt 5.9.
I already set up the .pro file, but still get a problem with QtLib.
In main.cpp
In .pro file
And the error
I appreciate any help from all of you
Thank you
does using the macro instead help:
QT_CHARTS_USE_NAMESPACE int main(int argc, char *argv[])
Do the example chart projects work for you? If they do that will be a simple test of finding differences of broken vs working...
Where I do use Charts obejcts, I also use a namespace on that class (as you are main):
using namespace QtCharts; class ...
Again, comparing the example apps to mine worked a treat for me.
- jsulm Moderators
@Kanguru Can you post the compiler/linker output? It looks like your app is not linked against QtChart lib. | https://forum.qt.io/topic/84245/create-line-chart-with-visual-studio-2015-and-qt-5-9 | CC-MAIN-2018-39 | refinedweb | 154 | 81.73 |
joe kim wrote:
> Hey Guys,
>
> I've been spending some time getting situated with Abdera. I have a
> two thoughts that I would be willing to drive implementation on. I am
> still getting started, so feel free to provide feedback or redirect
> me. I am still an atom novice compared to you guys.
>
> In general, I feel like Abdera could be simplified. Java frameworks
> are becoming extreme in terms of providing levels of indirection. You
> guys may have seen this call stack:
>
>
>
>
> 1. Build a pojo module free of any dependencies to represent the Atom
> data model
>
-1. The current interface+impl mechanism provides a great deal of
flexibility in the implementation and gives us built-in serialization to
and from XML using Axiom that is very high performance and efficient.
> Pojo's have a programming model with a very low cost to entry. If the
> data model was implemented as pojo's then there would be much less
> added programming model complexity for tasks that focus in on data
> manipulation.
>
> For example take a look at how an object is created.
>
> To create a feed you do:
>
> Feed feed = Factory.INSTANCE.newFeed();
>
You can also do:
Feed feed = new FOMFeed();
> In the Factory class, INSTANCE is defined as:
>
> public static final Factory INSTANCE = ServiceUtil.newFactoryInstance();
>
> ServiceUtil creates the factory with:
>
> public static Factory newFactoryInstance() {
> return (Factory) newInstance(
> CONFIG_FACTORY,
> ConfigProperties.getDefaultFactory());
> }
>
> ConfigProperties will then try to read from a configuration file or
> return a default value. But, apparently to use a different factory
> you have to modify a file. Wouldn't it be much nicer if you could just
> do this:
>
The ConfigProperties is only used to register the default
factory/parser/etc implementation. If you want to use alternative
implementations, you could use
Factory factory = new MyFactory();
Or, you could also...
Factory factory = new FOMFactory();.
> Feed f = new Feed();
>
> The difference between the two ways of instantiating an Object becomes
> more apparent to the developer when he/she needs to make some changes.
> Let's look at extensions. From the javadoc:
>
> * There are four ways of supporting extension elements.
> *
> * 1. Implement your own Factory (hard)
> * 2. Subclass the default Axiom-based Factory (also somewhat difficult)
> * 3. Implement and register an ExtensionFactory (wonderfully simple)
> * 4. Use the Feed Object Model's dynamic support for extensions (also
> very simple)
>
> Being mentally challenged as I am, I do not really understand what
> that means. My conclusion is that I would have to do some extra
> reading to understand the options and that translates to overhead with
> regard to my extension. If all I wanted to do was add an extra field,
> Java's core object oriented syntax suits me just fine.
>
> class PersonalFeed extends Feed {
> private boolean markedAsRead = false;
> public boolean isMarkedAsRead(){ return markedAsRead ; }
> public void setMarkedAsRead(boolean m){ markedAsRead = m };
> }
>
We can already do this.
class MyFeed extends FOMFeed {
...
}
> I would really like to see a pojo module that represents the Atom model.
>
I don't see this as buying us anything relative to what we already have
and would likely degrade the performance advantages of the Axiom-based
impl we already have in place.
> 2. Use TestNG as a testing framework.
>
> I did some research on Java testing frameworks and it looks like
> TestNG is the most capable of JUnit 3.8, JUnit 4, and TestNG. It will
> allow us to continue using JUnit 3.8 test cases and provides a much
> more flexible test configuration. I think org.apache.examples.simple
> would be a good starting place for creating test cases.
>
-0.5. JUnit has much broader support and, so far, has done everything
we've needed it to do. Unless there is something that TestNG offers
that we absolutely need and cannot get with JUnit, I don't see the
advantage.
> Any thoughts?
>
> Cheers,
> Joe
>
- James | http://mail-archives.apache.org/mod_mbox/abdera-dev/200606.mbox/%3C44A5A7ED.30706@gmail.com%3E | CC-MAIN-2017-04 | refinedweb | 638 | 57.06 |
dconf 0.1.0
dconf is a simple package which allow to retrieve, parse and manage configuration is the most agnostic way possible.
Dconf #
Dart package inspired by viper in golang to manage configuration
Why Dconf ? #
When building a modern application, you don’t want to worry about configuration file formats; you want to focus on building awesome software
Dconf provide :
- Find, load, and unmarshal a configuration file in JSON or YAML formats.
- Provide an alias system to easily rename parameters without breaking existing code.
- Automatic binding from environment variable
Putting values #
You can put values in the configuration by simply create a configuration holder object
import 'package:dconf/dconf.dart'; void main() { var conf = new Config(); conf["http.address"] = "localhost"; conf["http.port"] = "8080"; }
Reading Config Files #
Dconf requires minimal configuration so it knows where to look for config files. Dconf can search multiple paths, but currently a single Dconf instance only supports a single configuration file. Dconf does not default to any configuration search paths leaving defaults decision to an application.
Here is an example of how to use Dconf to search for and read a configuration file. None of the specific paths are required, but at least one path should be provided where a configuration file is expected.
import 'package:dconf/dconf.dart'; void main() { var load = new IOLoader(); load.addPath("./conf"); // Add lookup path load.addPath("."); var conf = load.load(); }
Register alias #
Aliases permit a single value to be referenced by multiple keys
import 'package:dconf/dconf.dart'; void main() { var conf = new Config(); conf["http.address"] = "localhost"; conf["http.port"] = "8080"; conf.alias("server", "http"); print(conf["server.port"]); // "8080" } | https://pub.dev/packages/dconf | CC-MAIN-2020-34 | refinedweb | 275 | 50.53 |
This is the second installment from the Programming Visual Basic .NET chapter on ADO.NET, focusing on connecting to an OLE DB data source and reading data into a data set.
OLE DB is a specification for wrapping data sources in a COM-based API so that data sources can be accessed in a polymorphic way. The concept is the same as ADO.NET's concept of managed providers. OLE DB predates ADO.NET and will eventually be superseded by it. However, over the years, OLE DB providers have been written for many data sources, including Oracle, Microsoft Access, Microsoft Exchange, and others, whereas currently only one product--SQL Server--is natively supported by an ADO.NET managed provider. To provide immediate support in ADO.NET for a wide range of data sources, Microsoft has supplied an ADO.NET managed provider for OLE DB. That means that ADO.NET can work with any data source for which there is an OLE DB data provider. Furthermore, because there is an OLE DB provider that wraps ODBC (an even older data-access technology), ADO.NET can work with virtually all legacy data, regardless of the source.
Connecting to an OLE DB data source is similar to connecting to SQL Server, with a few differences: the OleDbConnection class (from the System.Data.OleDb namespace) is used instead of the SqlConnection class, and the connection string is slightly different. When using the OleDbConnection class, the connection string must specify the OLE DB provider that is to be used as well as additional information that tells the OLE DB provider where the actual data is. For example, the following code opens a connection to the Northwind sample database in Microsoft Access:
' Open a connection to the database.
Dim strConnection As String = _
"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" _
& "C:\Program Files\Microsoft Office\Office\Samples\Northwind.mdb"
Dim cn As OleDbConnection = New OleDbConnection(strConnection)
cn.Open( )
Similarly, this code opens a connection to an Oracle database:
' Open a connection to the database.
Dim strConnection As String = _
"Provider=MSDAORA.1;User ID=MyID;Password=MyPassword;" _
& "Data Source=MyDatabaseService.MyDomain.com"
Dim cn As OleDbConnection = New OleDbConnection(strConnection)
cn.Open( )
The values of each setting in the connection string, and even the set of settings that are allowed in the connection string, are dependent on the specific OLE DB provider being used. Refer to the documentation for the specific OLE DB provider for more information.
Table 8-2 shows the provider names for several of the most common OLE DB providers.
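For example, data reachable only through ODBC can be opened with MSDASQL, the OLE DB provider for ODBC. This is a sketch only; the DSN name and the credentials are placeholders for whatever the ODBC data source actually defines:

```vb
' Open a connection through the OLE DB provider for ODBC.
Dim strConnection As String = _
    "Provider=MSDASQL.1;DSN=MyLegacyDSN;" _
    & "User ID=MyID;Password=MyPassword"
Dim cn As OleDbConnection = New OleDbConnection(strConnection)
cn.Open( )
```

From this point on, the OleDbConnection object is used exactly as in the Access and Oracle examples shown earlier; only the connection string differs.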
The DataSet class is ADO.NET's highly flexible, general-purpose mechanism for reading and updating data. Example 8-1 shows how to issue a SQL SELECT statement against the SQL Server Northwind sample database to retrieve and display the names of companies located in London. The resulting display is shown in Figure 8-1.
Example 8-1: Retrieving data from SQL Server using a SQL SELECT statement
' Open a connection to the database.
Dim strConnection As String = _
    "Data Source=localhost; Initial Catalog=Northwind;" _
    & "Integrated Security=True"
Dim cn As SqlConnection = New SqlConnection(strConnection)
cn.Open( )

' Set up a data set command object.
Dim strSelect As String = "SELECT * FROM Customers WHERE City = 'London'"
Dim dscmd As New SqlDataAdapter(strSelect, cn)

' Load a data set.
Dim ds As New DataSet( )
dscmd.Fill(ds, "LondonCustomers")

' Close the connection.
cn.Close( )

' Do something with the data set.
Dim dt As DataTable = ds.Tables.Item("LondonCustomers")
Dim rowCustomer As DataRow
For Each rowCustomer In dt.Rows
    Console.WriteLine(rowCustomer.Item("CompanyName"))
Next
The code in Example 8-1 performs the following steps to obtain data from the database:
1. Opens a connection to the database using a SqlConnection object.

2. Sets up a SqlDataAdapter object in preparation for filling a DataSet. A SQL SELECT command string and a Connection object are passed to the SqlDataAdapter object's constructor.

3. Instantiates a DataSet object and fills it by calling the SqlDataAdapter object's Fill method, which copies the results into a table named "LondonCustomers".

4. Closes the connection.

5. Retrieves the "LondonCustomers" table from the DataSet and iterates through its rows, displaying the CompanyName column of each one.

To locate specific rows within a DataTable, call the DataTable object's Select method, which returns an array of matching DataRow objects. The method's filterExpression parameter specifies which rows to return and is similar to a WHERE clause in an SQL statement. An optional sort parameter specifies the order in which the rows are returned and is similar to an ORDER BY clause in an SQL statement. A further optional parameter restricts the result to rows in particular states and accepts any combination of these values from the DataViewRowState enumeration:

CurrentRows
Deleted
ModifiedCurrent
ModifiedOriginal
New
None
OriginalRows
Unchanged
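The DataTable class's Select method, which returns an array of DataRow objects matching a filter, puts these pieces together. The following is a sketch, assuming that dt holds the LondonCustomers table from Example 8-1; the filter and sort expressions are illustrative, not required values:

```vb
' Return current rows whose contact is an owner, ordered by name.
Dim matches() As DataRow = dt.Select( _
    "ContactTitle = 'Owner'", _
    "CompanyName ASC", _
    DataViewRowState.CurrentRows)
Dim match As DataRow
For Each match In matches
    Console.WriteLine(match("CompanyName"))
Next
```

Because Select returns an ordinary DataRow array rather than a new table, changes made through the returned rows affect the underlying DataTable.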
The DataRow class has an Item property that provides access to the value in each column of a row. For example, this code iterates through all the columns of a row, displaying the value from each column (assume that row holds a reference to a DataRow object):
' Iterate through the column values.
Dim n As Integer
For n = 0 To row.Table.Columns.Count - 1
    Console.WriteLine(row(n))
Next
Note the expression used to find the number of columns: row.Table.Columns.Count. The DataRow object's Table property holds a reference to the DataTable object of which the row is a part. As will be discussed shortly, the Table object's Columns property maintains a collection of column definitions for the table. The Count property of this collection gives the number of columns in the table and therefore in each row.
The DataRow object's Item property is overloaded to allow a specific column value to be accessed by column name. The following code assumes that the DataRow row contains a column named "Description". The code displays the value of this column in this row:
Console.WriteLine(row("Description"))
The DataTable object's Columns property holds a ColumnsCollection object that in turn holds the definitions for the columns in the table. The following code iterates through the columns in the table and displays their names:
' Iterate through the columns.
Dim column As DataColumn
For Each column In dt.Columns
    Console.WriteLine(column.ColumnName)
Next
This code does the same thing, using a numeric index on the ColumnsCollection object:
' Iterate through the columns.
Dim n As Integer
For n = 0 To dt.Columns.Count - 1
Console.WriteLine(dt.Columns(n).ColumnName)
The ColumnsCollection object can also be indexed by column name. For example, if DataTable
dt contains a column named "Description", this code gets a reference to the associated DataColumn object:
Dim column As DataColumn = dt.Columns("Description")
To change data in a DataSet, first navigate to a row of interest and then assign new values to one or more of its columns. For example, the following line of code assumes that
row is a DataRow object that contains a column named "Description". The code sets the value of the column in this row to be "Milk and cheese":
row("Description") = "Milk and cheese"
Adding a new row to a table in a DataSet is a three-step process:
For example, assuming that
dt is a DataTable object, and that the table has columns named "CategoryName" and "Description", this code adds a new row to the table:
' Add a row.
Dim row As DataRow = dt.NewRow( )
row("CategoryName") = "Software"
row("Description") = "Fine code and binaries"
dt.Rows.Add(row)
The DataRow object referenced by
row in this code can be indexed by the names "CategoryName" and "Description" because the DataRow object was created by the DataTable object's NewRow method and so has the same schema as the table. Note that the NewRow method does not add the row to the table. Adding the new row to the table must be done explicitly by calling the DataRowCollection class's Add method through the DataTable class's Rows property.
Deleting a row from a table is a one-liner. Assuming that
row is a reference to a DataRow, this line deletes the row from its table:
row.Delete( )
When changes are made to a row, the DataRow object keeps track of more than just the new column values. It also keeps track of the row's original column values and the fact that the row has been changed. The Item property of the DataRow object is overloaded to allow you to specify the desired version of the data that you wish to retrieve. The syntax of this overload is:
Public Overloads ReadOnly Property Item( _
ByVal columnName As String, _
ByVal version As System.Data.DataRowVersion _
) As Object
The parameters are:
Current
Default
Original
Proposed
For example, after making some changes in DataRow
row, the following line displays the original version of the row's Description column:
Console.WriteLine(row("Description", DataRowVersion.Original))
The current value of the row would be displayed using any of the following lines:
Console.WriteLine(row("Description", DataRowVersion.Current))
Console.WriteLine(row("Description", DataRowVersion.Default))
Console.WriteLine(row("Description"))
Calling the DataSet object's AcceptChanges method commits outstanding changes. Calling the DataSet object's RejectChanges method rolls records back to their original versions.
TIP: The code shown in this section affects only the DataSet object, not the data source. To propagate these changes, additions, and deletions back to the data source, use the Update method of the SqlDataAdapter class, as described in the next section, "Writing Updates Back to the Data Source."
If there are relations defined between the DataTables in the DataSet, it may be necessary to call the DataRow object's BeginEdit method before making changes.:
"SELECT * FROM
Categories". This initializes the value of the SqlDataAdapter object's SelectCommand property.
UPDATE,
INSERT, and
DELETEcommands from the SqlCommandBuilder object.. | http://www.onlamp.com/lpt/a/1473 | CC-MAIN-2014-10 | refinedweb | 1,482 | 56.45 |
Hi! response for the new user
undefined method `last_read_at' for nil:NilClass for Chatrooms#show
how to solve?
Hey Nikola, looks like you don't have a @chatroom_user record set. That was one of the things we discovered was missing, so you'll want to make sure that your user's got that record when they join. We made a couple tweaks to this to fix a couple bugs on Github, so you might want to check those out:-...
Hey Chris,
I couldn't find good resources on how to test actioncable with rspec + capybara. Don't you know some great articles/repos I could check out?
Unfortunately a lot of it is still up in the air directly from Rails.... Hoping to see this start being more available soon.
Thanks Chris for the quick answer. I know that it's not released yet, but I thought something could be done with acceptance test like this:. But for some reason (likely actioncable related) I can't make it work. Do you have any idea?
If I get some time this weekend I'll give it a try and see. I would expect that to work, but there might be some gotchas.
Does anyone have an update on the rspec / actioncable issue? We are slowly starting to implement rspec into our workflow as our app gets bigger.
Hi,
I implemented the unread messages feature, however, when messages are sent in the chatroom and a user has not joined the room prior to that, it sends this error once the join button is clicked.
By joining the chatroom, I mean clicking the join button, which then adds the room to the list of joined rooms of the user before he enters the room.
This is in the rooms/views/show.html.erb...
Thank you!
It's missing setting the last_read_at when you join a new channel. Should be in the chatroom_users controller.
Hi Chris,
When showing the list of chatrooms (chatrooms#index), I would like to display the usernames of the users in that room along with the name of the room.
Do you think this would be a good use case for a array/jsonb column type in the chatrooms, in order to store information about the users subscribed to that chatroom?
That would avoid extra table joins on the users table, but would have the downside of needing to update the chatrooms when a user updates his username.
The alternative would be to go with a regular table join on the users table when fetching the chatrooms.
Thanks for your feedback on this.
Hey Stephane, I would do a join table for this because as things get more complex you'll want to probably support extra features like different roles for users in the room (like Slack allows you to invite guests to specific rooms) and the best way to do that would be with the join table. They're still fast to query and can store all the additional data you might want on it.
Hi Chris, thank you for your response. That makes more sense to go with a Join, I agree with you, thanks for taking the time to answer!
Hi Stephene,
If I understand, your chat room displays the names of the users who are currently in that chatroom chatting? Did you figure out how to get it working?
-Monroe
Hi Chris,
I just finished episode 7 of Group Chat with ActionCable. When I send a message, the "unread messages" div appears in the browser window of the sender as well as the recipient's window. Any ideas why it's appearing in the message sender's window?
barrooms/show
<% unread_messages = false %>
<div data-
<% @messages.each do |message| %>
<% if !unread_messages && @barroom_user.last_read_at < message.created_at %>
<% unread_messages = true %>
<div class="strike">
Unread Messages
</div>
<% end %>
<%= render message %>
<% end %>
</div>
barrooms_controller
def show
@messages = @barroom.messages.order(created_at: :desc).limit(100).reverse
@barroom_user = current_user.barroom_users.find_by(barroom_id: @barroom.id)
end
channels/barrooms.coffee
App.barrooms = App.cable.subscriptions.create "BarroomsChannel",
connected: ->
# Called when the subscription is ready for use on the server
disconnected: ->
# Called when the subscription has been terminated by the server
received: (data) ->
active_barroom = $("[data-behavior='messages'][data-barroom-id='#{data.barroom_id}']")
if active_barroom.length > 0
if document.hidden
if $(".strike").length == 0
active_barroom.append("<div class="strike">Unread Meassages</div>")
if Notification.permission == "granted"
new Notification(data.username, {body: data.body})
else
App.last_read.update(data.barroom_id)
active_barroom.append("<div>#{data.username}: #{data.body}</div>")
else
$("[data-behavior='barroom-link'][data-barroom-id='#{data.barroom_id}']").css("font-weight", "bold")
send_message: (barroom_id, message) ->
@perform "send_message", {barroom_id: barroom_id, body: message}
Rex,
Are you still seeking an answer to this? If so, let me know, and I'll see if we can help you. We just got our chatroom working properly.
-Monroe
Great series. The best on ActionCable I've seen so far and I've been trying to learn it for a couple of weeks.
Hi,
I have appended a chatroom to the chatroom list.
on recieved: (data) I managed to subscibe to all channels again in Users.coffee with
received: (data) ->
chatroom_list = $("[data-behavior='chatrooms']")
# Insert the chateroom
chatroom_list.append("< li>< a< strong<#{data.chatroom_name}< /a>< /li>")
App.cable.subscriptions.create "ChatroomsChannel"
How do I subscribe only to the the recieved chatroom?
Excelent Series!!... everything works perfect!
Hi Chris!
We got it working perfectly! Thanks so much! Can't wait to show you what we're putting together in a few months when we launch :D
One question: when a chatroom switches to bold, it indicates new unread messages. When I click on that chatroom, it loads the chatroom, but then it takes about 3 - 5 seconds before the new messages themselves load.
Do you have any suggestions on how to remove that load delay?
Thanks!
-Monroe
Join 22,346+ developers who get early access to new screencasts, articles, guides, updates, and more. | https://gorails.com/forum/group-chat-with-actioncable-part-7-gorails | CC-MAIN-2019-35 | refinedweb | 986 | 66.54 |
Linux Tools Project/Releng
{{#eclipseproject:technology.linux-distros}}
Contents
- 1 p2 Repositories
- 2 Hudson builds
- 3 Release HOWTO
- 3.1 As code is being worked on
- 3.2 ASAP when a release date is known
- 3.3 3 weeks before the planned release date or RC4 milestone
- 3.4 2 weeks before the planned release date or RC4 milestone
- 3.5 10 days before the planned release date or RC4 milestone
- 3.6 A few days before the release or RC4 milestone
- 3.7 Release day
- 3.8 Day after the release
- 4 JAR Signing
- 5 Simultaneous Release Inclusion
- 6 Builds and how they get places
- 7 New and Noteworthy
- 8 Accepting Patches
- 9 Adding a new component
- 10 Building locally
- 11 Adding a new test plugin to those that are run during the automated build
- 12 Creating Source Tarballs
- 13 eclipse-build tarballs
p2 Repositories
- Nightly (what Git master targets):
- Juno nightly (what Git branches "stable-1.0 through stable-1.2 target):
- Old builds towards Indigo train (like a staging area):
- Releases (latest is always):
- 0.1:
- 0.2: (shows as Access Forbidden in a browser due to no site.xml; using p2 Update Manager works fine)
- 0.2.1: (shows as Access Forbidden in a browser due to no site.xml; using p2 Update Manager works fine)
- 0.3:
- 0.4:
- 0.4.1:
- 0.5:
- 0.5.1:
- 0.6:
- 0.6.1:
- 0.7:
- 0.8:
- 0.8.1:
- 0.9:
- 0.9.1:
- 0.9.2:
- 1.0.0:
- 1.1.0:
- 1.1.1:
- 1.2.0:
- 1.2.1:
- 2.0:
- 2.1:
Hudson builds
Hudson runs builds of our project every 6 hours (0 */6 * * *) if something has changed in Git.
For builds towards the next Kepler service release (SR2), ensure the code is already on the stable-2.2 Git branch. For builds towards our Kepler contributions, ensure the code is on the master branch.
As of 2013-11-15 we have two active Hudson build jobs: (built from stable-2.2) (built from master branch)
Release HOWTO
As code is being worked on
- Write Linux_Tools_Project/Releng#New_and_Noteworthy items as new noteworthy stuff is committed. Put this into new-{next release} in our website git location (ssh://your_committer_id@git.eclipse.org/gitroot/).
- Ensure the project plan (project-info/plan.xml in website repository) is kept up to date with release deliverables and dates
ASAP when a release date is known
- Let project committers and contributors know the cut-off date for accepting patches from non-committers. Reminders on the dates from the table should be send to the mailing list too.
Proposed freeze periods:
3 weeks before the planned release date or RC4 milestone
- Let the project know that the date has now arrived that new contributions from non-committers cannot be accepted for this release.
- Go over IP Contribution Review to ensure patches have iplog+ where necessary (note: I've gone over everything earlier than bug #315815 and concluded we don't need to do anything -- Andrew Overholt)
- Submit IP log with the IP log tool.
- Prepare the release review "docuware". Base it on the previous release's *.mediawiki in our website repository's 'doc' sub-directory. Further details regarding release "docuware" can be found here: Development_Resources/HOWTO/Release_Reviews. Since the release review docuware will point to the N&N, ensure that is finished.
- Create a branch (ex. stable-2.1) from master and push it to the main git repository
2 weeks before the planned release date or RC4 milestone
- Alter the Hudson job configuration for maintenance releases (e.g. for releases based on Kepler) to pull from the correct git branch
- On the stable-x branch
- modify releng/org.eclipse.linuxtools.releng-site/pom.xml so the build output is written to updates-nightly-<release> (e.g. kepler) instead of updates-nightly (reserved for the bleeding-edge master)
- modify the top-level pom.xml so the mirror-repo-name property has a value of update-<version> (e.g. update-0.10) instead of updates-nightly (reserved for the bleeding-edge master)
- On master, update the version number in the pom.xml files (ex. mvn3 org.eclipse.tycho:tycho-versions-plugin:0.17.0:set-version -DnewVersion=${to-be-released-version + 1}-SNAPSHOT; commit and push the resulting changes)
- Check if all versions were updated correctly. Usually maven updates more files than we want. We need in this step to update the versions from the pom.xml file from the root directory and from pom.xml files located in 1st level of directories. We don't need to update the version in plugins and features in this step.
- Ensure the repos listed in our parent pom.xml contain the dependencies of the correct versions against which we should be building/testing <--- this is important!
- Commit any work not targetted at the release to master
- Commit changes specifically targetted towards the release only on the release branch (ex. stable-0.9)
- Commit changes targetted at both the release and trunk on master and then EGit/User_Guide#Cherry_Picking the commit(s) from master to the branch
10 days before the planned release date or RC4 milestone
- After IP log has been cleared, save a copy in the website repository (the legal team usually emails a PDF or HTML snapshot to the IP log approval requestor).
- Use Mylyn WikiText to transform the now-complete release review docuware from .mediawiki to .html
- To do that, create a new eclipse project, import the .mediawiki file, right click it and select WikiText->Generate HTML option.
- Save both the generated HTML and MediaWiki files in the repository alongside previous versions.
- Email our PMC to request approval for release. A previous request looks like this. This is only required for a major (x.0) or minor ({*.x) release.
- Include links to the HTML for the release documentation, IP log and N&N
- Await PMC approval on tools-pmc mailing list. For the Tools PMC, there is no single approval. What happens is that one or more committers will +1 the request and if no one disputes, then consider these +1s as approvals.
- Once the PMC has approved the release, email the resulting release review docuware HTML, the IP log snapshot, and a link to the PMC approval email to emo@eclipse.org. If the emo has opened a release tracker bug, you may consider the release approved if the bug has been closed and you have sent the appropriate documentation. In most cases, you will already have supplied the docuware and ip log to the bug.
- The EMO may need a week to review the documents, so this step should attempt to finish at least one week to the release day. The emo team usually start the review process at Wednesdays. So it is a good idea to finish this step before the Wednesday that it at least one week from the release date. If this won't be possible, ask them about deadlines.
A few days before the release or RC4 milestone
- All component leads should now be satisfied that the branch is stable and all unit tests should be passing
- Remove the "-SNAPSHOT" from pom.xml versions (mvn3 org.eclipse.tycho:tycho-versions-plugin:set-version -DnewVersion=${to-be-released-version} ... commit and push the resulting changes)
- The maven command is removing -SNAPSHOT from places that it shouldn't. So it is necessary to check if everything is correct before committing. -SNAPSHOT should be removed from the root directory and from pom.xml files located in 1st level of directories. -SNAPSHOTS from plugins and features shouldn't be removed. The maven command will erroneously match plug-ins and features that have the same release number as the overall release and this will end up stripping the -SNAPSHOT and more importantly, the .qualifier from the version numbers. You can spot this by looking at the build for features/plug-ins that are missing the date stamp.
- Ensure all p2.inf files found in features, point to the Linux Tools stable update site. To do a mass change, one could do: find . -name p2.inf | xargs sed -i -e 's/updates-nightly/update/g'. The master branch will point to updates-nightly by default. A branch already used for a stable release will likely already be pointed to the correct update site and no action needs to be taken. Look in an existing feature such as autotools/org.eclipse.linuxtools.autotools-feature for confirmation.
- In the case of a release which will be contributed to a simultaneous release, this step needs to be done in advance of the RC4 contribution build.
- Ensure "-P build-server" is in the Hudson job configuration's maven arguments line to get a signed build
- With the version set properly in the repository's pom.xml files, push a build in Hudson. Note that if the release is also targeted for a simultaneous release, then the last build submitted for RC4 must be the release build.
-
- Perform a quick smoke test to ensure the build is acceptable. Zero code changes should have happened since the SHA-1 that was used for the previous build.
- Ensure Equinox/p2/p2.mirrorsURL has been automatically set in artifacts.xml (in artifacts.jar) and that the p2.index file exists.
- If things are acceptable with the signed build, tag the Git repo:
- use a tag of the format vMajor.Minor.Micro (ex. v0.7.1)
- use the -a flag on the tag
- specify --tags on the git push command to ensure that the tag is visible to others
- do not tag until the IP Log is completed and the repository has been tested
- if you tag too early, you can tag again using -a -f to force the re-tag, however, if you re-tag, you need to send a note out to indicate this has happened
- once complete, an announcement note should be sent out to tell the list that the tag has been created and for what commit SHA-1 the tag is for.
- Lock the Hudson job that was used for the release build to prevent automatic deletion (ex. build #263) and add a description (ex. "0.7.1 release")
- Save the archive of the entire build somewhere before it gets over-written (it will not be stored across the next build)
- ex. wget --no-check-certificate*zip*/archive.zip
- Get source tarballs:
- Go to
- Download the .tar.bz2 file for the matching tag or last commit
- rename the tarball to linuxtools-${version}.tar.bz2
- upload the source tarball to{version}-sources/, copying and modifying index.php from a previous release
- chmod 755 the source tarball
Release day
- Put an archive of the entire build into the downloads directory (~/downloads/technology/linuxtools) (Jeff Johnston, Andrew Overholt, and Alex Kurtakov are currently the only committers with access to this area)
- Unzip the archive.zip saved after last successful build
- cd to sub-directory containing actualy repository and re-zip contents to linuxtools-RELEASE.zip in the downloads directory
- Run md5sum linuxtools-RELEASE-incubation.zip >linuxtools-RELEASE.zip.md5.
- move the previous linuxtools-RELEASE.zip and md5 to /home/data/httpd/archive.eclipse.org/linuxtools
- chgrp technology.linux-distros linuxtools-RELEASE.zip*
- chmod 755 linuxtools-RELEASE.zip*
- Make any final changes to the release's Linux_Tools_Project/Releng#New_and_Noteworthy
- Update linuxtools/new to reflect the release
- Keep a new-{version-number} with the updated n&n for the release and edit the new/index.php file to point to the new-{version-number}/index.html file.
- Update the link on downloads.php to point to the correct listing of source tarballs for this release
- Update the "Unit test results" link on downloads.php to point to the Hudson job that was used
- Add a news item to the main wiki page
- Ensure download links -- including the zip of the repository -- are correct
- Re
- Copy our p2 repository to a versioned copy to archive it (ex. cp -rp updates-nightly-juno update-0.5)
- Announce release on mailing list, newsgroup and blog(s)
- Add this release to list of versions in bugzilla (in portal)
- Add release.1 to list of target milestones in bugzilla or add a new major release if the next minor release won't occur (in portal)
- Mark release as complete in list of releases in portal
- Add next version to list of releases in portal
- Add next version to list of target milestones in bugzilla (in portal)
Day after the release
- Relax :)
- Decide whether or not older releases should be moved to archive.eclipse.org (take note of p2.mirrorsURL modification)
- Switch the Hudson job back to use the master or stable-*** branch
- Start on the next release
JAR Signing
Our Tycho builds use the Dash signing/re-packing plugin written by Jesse Mcconnell and Dave Carver. This takes care of signing and re-packing the p2 repository. If this plugin were to stop working for us or if we had to manually sign a build, this is how to do it:
- ensure the build is not already signed
- take a zip of the p2 repository (artifacts.jar, content.jar, features/, plugins/) and put it in /home/data/httpd/download-staging.priv/commonBuild/ on dev.eclipse.org
- call the signer: /usr/bin/sign /home/data/httpd/download-staging.priv/commonBuild/(filename of zip)
- output of /usr/bin/sign --help will guide you with what log to tail, etc. to see when it's finished
- move the signed zip out of the staging area and scp it to your local machine
- use the p2.process.artifacts ant task to re-pack the archive
- ant -f build.xml where build.xml contains something like:
<?xml version="1.0" encoding="UTF-8"?> <project name="process signed zip" default="processZip"> <target name="processZip"> <p2.process.artifacts </target> </project>
- verify you can install from the re-packed repository and that the plugins are all signed (the little certificate icons under Help->About->Installation Details->Plug-ins tab should not be broken)
- upload the resulting re-packed repository to a download area somewhere
Simultaneous Release Inclusion
As of Helios, Linux Tools is a part of the annual Eclipse simultaneous release. The simultaneous release aggregator takes content from the p2 repos that are listed in our b3aggrcon files Kepler, Juno). Builds that we would like to promote to the simultaneous release must:
- be signed
- not necessarily be in category.xml
- exist in the p2 repository listed in the linuxtools.b3aggrcon file with the exact same feature versions/qualifiers
Note also that categories for the main simultaneous release are different from our p2 repository categories (which are set in our category.xml). Categories for the release train are defined in separate files (Indigo, Juno, Kepler and the order of the features must match that in our contribution files (Indigo, Juno, Kepler). Use the b3 aggregator editor to make any adjustments here and read the relevant documentation!
The builds must remain in the p2 repo until we change the feature versions/qualifiers in the b3aggrcon file. This will prevent future aggregation runs from failing. More information can be found here:
- Indigo/Contributing_to_Indigo_Build <-- This one contains the CVS location for org.eclipse.indigo.build where the linuxtools.b3aggrcon file lives and will need to be updated to new qualifiers [and versions] when we want to include a new release in Indigo (for SR1 and SR2)
- Juno/Contributing_to_Juno_Build <-- This one contains the git location for org.eclipse.simrel.build where the linuxtools.b3aggrcon file lives in the Juno_maintenance branch and will need to be updated to new qualifiers [and versions] when we want to include a new release in Juno (for SR2, etc.)
- Simrel/Contributing_to_Simrel_Aggregation_Build <-- This one contains the git location for org.eclipse.simrel.build where the linuxtools.b3aggrcon file lives in the master branch and will need to be updated to new qualifiers [and versions] when we want to include a new release in Kepler (for M6, M7, RC1, RC2, SR0, SR1, SR2, etc.)
- Indigo/Simultaneous_Release_Plan <-- Indigo Dates
- Juno/Simultaneous_Release_Plan <-- Juno Dates
- Kepler/Simultaneous_Release_Plan <-- Kepler Dates
- Luna/Simultaneous_Release_Plan <-- Luna Dates
- Calendar of dates!
- Eclipse_b3/aggregator/manual
The cross-project-issues-dev mailing list must be monitored by project leads and release engineers. This mailing list can help with any problems with the simultaneous release as well as with EPP packages. David Williams leads the simultaneous release creation and Markus Knauer coordinates the EPP packages including "ours" (Indigo).
Note that during the "quiet week" between RC4 and the final release, our release bits must be put into their final place but not be made "visible". Read the Final Daze document for guidelines and pointers to FAQs on how to make things invisible, etc.
Builds and how they get places
New and Noteworthy
New and noteworthy (N&N) items should be written as soon as possible after the item has been committed. This will prevent a frantic scramble at the end of the release cycle. For example, for the 0.3.0 release, we have in GIT (ssh://youruserid@git.eclipse.org/gitroot/). Please add new items under the various headings. Feel free to add new headings as necessary. The parenthesized numbers in the pseudo-index at the top of the file indicate the number of N&N items in that section. There is a template to be used for each release at new-template. Copy this for releases and be sure all recent releases are listed in the table and list at the top. Be sure to add details such as the release version and date, bug count (with a link to the query), a few sentences documenting non-committer contributions, etc.
N&N images
Screenshots should be cropped to show only the pertinent region of the screen and so the N&N page doesn't appear too wide. Use the gimp to add drop shadows (pick the default values for radius and opacity). Save images as PNGs.
Accepting Patches
Patches contributed by non-committers must have the iplog flag set to +1 on the attachment. The flag should be set on the patch attachment itself and not the bug.
Adding a new component
Our build process is pom.xml-driven but we have a mirrored feature structure for use by p2 repos. In order for a new component to be built, it needs to fit into the hierarchy somewhere. When adding a new top-level sub-project feature (sub-features of existing features should get added to their containing feature), follow these steps:
Code-level checklist
- Create your feature and containing plugins
- Name the feature(s) org.eclipse.linuxtools.mycoolstuff and put it in Git as org.eclipse.linuxtools.mycoolstuff-feature (replacing "mycoolstuff", obviously)
- Name the plugin(s) org.eclipse.linuxtools.mycoolstuff.core, org.eclipse.linuxtools.mycoolstuff.ui, etc.
- Do not put _ characters in the bundle ID (this breaks the Maven signing plugin we're using)
- Ensure your packages are all in the org.eclipse.linuxtools namespace (and in the .mycoolstuff.core, .mycoolstuff.ui packages where appropriate)
- Ensure your strings are externalized
- Ensure your feature and plugin provider fields are set to "Eclipse Linux Tools" (no quotes)
- Either copy over existing pom.xml files and manually edit them or generate using Tycho and manually edit
- Copy over existing p2.inf file from any other feature and add p2.inf to your build.properties binary files
- Create your JUnit test plugins
- Name your test plugin the same as your functional plugin but with a ".tests" tacked onto the end
- Ensure your test bundle's pom.xml looks like an existing test bundle's pom.xml
- Enable API Tools on non-example/test/feature plugins
Git-level checklist
- If this is a new sub-project, create a directory to contain it, like "oprofile" or "autotools"
- Check all of your new stuff into Git master
- Add any new top-level feature to the top-level pom.xml in your Git clone
- If your sub-project has dependencies outside the existing ones (BIRT, EMF, DTP, CDT, GEF), notify the mailing list and project leads
- Hopefully this will have been caught in the CQ (legal review)
- Ensure your BREEs are correct in your plugin MANIFEST.MF files
- Ensure the version on your feature ends with ".qualifier" (without the quotation marks)
- Ensure the versions in your MANIFEST.MF and feature.xml files match those in your pom.xml files
- Add new features to our p2 repo's category.xml file
- Ensure your pom.xml files have the same source plugin and feature bits as the others
- Run a full local build from the top-level of your git clone with mvn -fae clean install to ensure the build still works
Building locally
- ensure you have a recent Maven 3 release on your path
- clone our Git repository (perhaps read our Git instructions)
- cd into your clone and run mvn -fae clean install
- You can add the parameter "-Dmaven.test.skip=true" for debug purposes, but remember that no contribution that breaks the unit tests will be accepted.
- Some components have dependencies between then, for example Valgrind plug-ins. In that case, always build the whole Valgrind suite instead of one particular plug-in.
- follow this guide for debugging tests being run by Tycho
Adding a new test plugin to those that are run during the automated build
- Create test plugin(s) (ex. org.eclipse.linuxtools.mycoolfeature.ui.tests)
- Copy an existing test plugin's pom.xml (this is used when the automated build is run)
- Add your test plugin to the parent pom.xml
- Check your plugin(s) into Git
- Verify that your tests are built/run with a local build (see instructions on this page)
The next time a build happens, your test plugin(s) will be built and run. If you need a build pushed sooner than the next 6 hour mark when our scheduled builds happen, speak with the project leads via linuxtools-dev@eclipse.org or #eclipse-linux on Freenode.
Creating Source Tarballs
You may need to create tarballs of some of the sub-projects found in Linux Tools. To create a source tarball of a subproject,
- in your git repository, change into the sub-project (e.g. gprof, autotools)
- there, run: mvn assembly:single (note requires maven3)
- the src tarball will be found in the target directory
- rename as desired since the tarball will be given the same name each time
To get a snapshot for a particular commit:
- clone the Linux Tools git repository
- git checkout -b LOCAL_BRANCH_NAME commit-hash-number
- follow the steps above to get a src tarball of a particular sub-project
eclipse-build tarballs
Tarballs are organized into directories based on version (ex. Indigo tarballs go here). Older releases should be moved here. Don't forget to update md5sums.txt and sha1sums.txt!
- Tag the Git hash
- Generate with buildEclipseBuildSource.sh (ensure proper tag is used!): ./buildEclipseBuildSource.sh -eclipseBuildTag 0.4
- Upload: scp to downloads/technology/linuxtools/eclipse-build/whereyouwantittogo and chgrp technology.linux-distros; chmod 775 fileYouJustUploaded; md5sum fileYouJustUploaded >> md5sums.txt; sha1sum fileYouJustUploaded >> sha1sums.txt. For scp access to download.eclipse, a committer must be in special groups in the Eclipse Foundation systems. If you require access, please check with Andrew Overholt or Alex Kurtakov and then contact the Eclipse webmaster via a bug. | http://wiki.eclipse.org/index.php?title=Linux_Tools_Project/Releng&oldid=353323 | CC-MAIN-2016-07 | refinedweb | 3,889 | 55.54 |
pytest plugin for generating HTML reports
pytest-html is a plugin for pytest that generates a HTML report for the test results.
Requirements
You will need the following prerequisites in order to use pytest-html:
- Python 2.7, 3.6, PyPy, or PyPy3
Installation
To install pytest-html:
$ pip install pytest-html
Then run your tests with:
$ pytest --html=report.html
ANSI codes
Note that ANSI code support depends on the ansi2html package. Due to the use of a less permissive license, this package is not included as a dependency. If you have this package installed, then ANSI codes will be converted to HTML in your report.
Creating a self-contained report
In order to respect the Content Security Policy (CSP), several assets such as CSS and images are stored separately by default. You can alternatively create a self-contained report, which can be more convenient when sharing your results. This can be done in the following way:
$ pytest --html=report.html --self-contained-html
Images added as files or links are going to be linked as external resources, meaning that the standalone report HTML-file may not display these images as expected.
The plugin will issue a warning when adding files or links to the standalone report.
Enhancing reports
Environment
The Environment section is provided by the pytest-metadata, plugin, and can be accessed
via the
pytest_configure hook:
def pytest_configure(config): config._metadata['foo'] = 'bar'
Extra content
You can add details to the HTML reports by creating an ‘extra’ list on the report object. Here are the types of extra content that can be added:
Note: When adding an image from file, the path can be either absolute or relative.
Note: When using --self-contained-html, images added as files or links may not work as expected, see section Creating a self-contained report for more info.
There are also convenient types for several image formats:
The following example adds the various types of extras using a
pytest_runtest_makereport hook, which can be implemented in a plugin or
conftest.py file:
import pytest @pytest.mark.hookwrapper def pytest_runtest_makereport(item, call): pytest_html = item.config.pluginmanager.getplugin('html') outcome = yield report = outcome.get_result() extra = getattr(report, 'extra', []) if report.when == 'call': # always add url to report extra.append(pytest_html.extras.url('')) xfail = hasattr(report, 'wasxfail') if (report.skipped and xfail) or (report.failed and not xfail): # only add additional html on failure extra.append(pytest_html.extras.html('<div>Additional HTML</div>')) report.extra = extra
You can also specify the
name argument for all types other than
html which will change the title of the
created hyper link:
extra.append(pytest_html.extras.text('some string', name='Different title'))
Modifying the results table
You can modify the columns by implementing custom hooks for the header and
rows. The following example
conftest.py adds a description column with
the test function docstring, adds a sortable time column, and removes the links
column:
from datetime import datetime from py.xml import html import pytest @pytest.mark.optionalhook def pytest_html_results_table_header(cells): cells.insert(2, html.th('Description')) cells.insert(1, html.th('Time', class_='sortable time', col='time')) cells.pop() @pytest.mark.optionalhook def pytest_html_results_table_row(report, cells): cells.insert(2, html.td(report.description)) cells.insert(1, html.td(datetime.utcnow(), class_='col-time')) cells.pop() @pytest.mark.hookwrapper def pytest_runtest_makereport(item, call): outcome = yield report = outcome.get_result() report.description = str(item.function.__doc__)
You can also remove results by implementing the
pytest_html_results_table_row hook and removing all cells. The
following example removes all passed results from the report:
import pytest @pytest.mark.optionalhook def pytest_html_results_table_row(report, cells): if report.passed: del cells[:]
The log output and additional HTML can be modified by implementing the
pytest_html_results_html hook. The following example replaces all
additional HTML and log output with a notice that the log is empty:
import pytest @pytest.mark.optionalhook def pytest_html_results_table_html(report, data): if report.passed: del data[:] data.append(html.div('No log output captured.', class_='empty log'))
Contributing
Fork the repository and submit PRs with bug fixes and enhancements, contributions are very welcome.
Tests can be run locally with tox, for example to execute tests for Python 2.7 and 3.6 execute:
tox -e py27,py36
Resources
Release History
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/pytest-html/ | CC-MAIN-2017-47 | refinedweb | 726 | 50.02 |
21 May 2013 10:58 [Source: ICIS news]
GUANGZHOU (ICIS)--Increasing number of traders in east and north China are now trying to sell polypropylene (PP) to the southern part of the country, an industry source said on Tuesday.
The price of PP in south ?xml:namespace>
Furthermore, the source said at the Chinaplas conference that the freight rate from east to south or from north to south was no more than CNY300/tonne. Hence, there are margins.
Industry sources said the gap is wide partly because there are no methanol-to-olefins (MTO) or coal-to-olefins (CTO) plants which produce cheaper PP in south
Chinaplas, Asia’s largest plastics raw material and machinery exhibition, runs on 20-23 | http://www.icis.com/Articles/2013/05/21/9670524/chinaplas-13-more-traders-in-east-north-china-to-sell-pp-to.html | CC-MAIN-2014-35 | refinedweb | 120 | 74.53 |
CodePlexProject Hosting for Open Source Software
Hi i have a little question about module initializations.
I have a module called NewsModule for displaying... news !
So this module needs an UserId used to retrieve user associated news from a WCF webservice.
My question : How and what is the best practice to inform my NewsModule that it will display news of a specific UserId ?
Thanks by advance
I currently have a solution that includes a set of Prism projects and a web project. The web project hosts a login page, the containing page for my Prism shell, and the WCF services used to deliver data to the Silverlight modules. These services can
use the same session object established by the login action, so the job of determining which data to return belongs to the service, which to me is as it should be.
Use something like HttpContext.Current.Session["UserId"] (which was created at login) to inform your queries and you are good to go.
I have not yet used a multi-targeted module to access my services (from a WPF app, for example) so I am not sure what devils lie in the details of identifying a user who did not login on the same web service where the WCF services are hosted, but if you
are Silverlight-in-browser-at-a-specific-URL, this scenario is architecturally sound and simple (in my humble opinion).
Rock on.
Hi
If I understand your scenario correctly, you have a
NewsModule, which probably has a NewsView (or something of the sort), with its Presenter (could also be ViewModel orController).
As I picture it, the Presenter could perform the request to the WCF service getting the
UserID from someone.
I do not know your exact scenario, but you could get the
UserID in different ways.
For example, if you have a single
NewsView which only shows news related to the logged in user, you could get the ID from your Login/Authentication Service (if you have it).
Another example can be seen in the
Prism-v2 StockTrader Reference Implementation solution that
comes with the Guidance. You can check the NewsModule in it. The implementation in the RI shows news based on the selected item of a ListView.
Please let me know if this helps.
Damian Schenkelman
i use regular module initialize model to load on demand modules, but i'm facing to a problem :
When my app is launched, this one ask a username et password, i call a web service to retrieve user informations.
So i display a module that load on demand 6 modules (and one of it is NewsModule) the main module represent an user dashboard. So in module news i don't know the user id when this one become loaded.
Generaly in CAL samples modules simply load static data in xml file, etc.
In my case in NewsModule i need to be informed that this one needs to load news data with distant access with a specific Id.
So my question is what the best practice to access to this from NewsModule
NewsModule is also used for display friend dashboard...
Actualy i do that (but i'm not happy of it):
In main module container:
Container.RegisterType<User>("ConnectedUser", new ContainerControlledLifetimeManager());
User u = Container.Resolve<User>("ConnectedUser");
u.FirstName = "Toto";
u.LastName = "Tata";
u.Id= 34;
And in child module :
public class NewsModule : IModule
{
private readonly IRegionManager _regionManager;
private readonly IUnityContainer _container;
private readonly IEventAggregator _eventAggregator;
public NewsModule (IRegionManager regionManager, IUnityContainer container, IEventAggregator eventAggregator)
{
_container = container;
_eventAggregator = eventAggregator;
_regionManager = regionManager;
}
#region IModule Members
public void Initialize()
{
NewsListViewViewModel vm = _container.Resolve<NewsListViewViewModel>();
NewsListView view = _container.Resolve<NewsListView>();
view.DataContext = vm;
User u = _container.Resolve<User>("ConnectedUser");
_regionManager.AddToRegion("MainRegion", view);
}
#endregion
}
So i think that it's not that you specify but i don't understand how inject an instance of my user object in my NewsModule
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | https://compositewpf.codeplex.com/discussions/59699 | CC-MAIN-2017-13 | refinedweb | 682 | 52.7 |
. Here are nine exciting new features that will ship with Java 9.
1. The Java Platform module system
The defining feature for Java 9 is an all-new module system. When codebases grow larger, the odds of creating complicated, tangled “spaghetti code” increase exponentially. There are two fundamental problems: It is hard to truly encapsulate code, and there is no notion of explicit dependencies between different parts (JAR files) of a system. Every public class can be accessed by any other public class on the classpath, leading to inadvertent usage of classes that weren't meant to be public API. Furthermore, the classpath itself is problematic: How do you know whether all the required JARs are there, or if there are duplicate entries? The module system addresses both issues.
Modular JAR files contain an additional module descriptor. In this module descriptor, dependencies on other modules are expressed through`requires` statements. Additionally, `exports` statements control which packages are accessible to other modules. All non-exported packages are encapsulated in the module by default. Here's an example of a module descriptor, which lives in `module-info.java`:
module blog { exports com.pluralsight.blog; requires cms; }
We can visualize the modules as follows:
Note that both modules contain packages that are encapsulated because they're not exported (visualized with the orange shield). Nobody can accidentally use classes from those packages. The Java platform itself has been modularized using its own module system as well. By encapsulating JDK internal classes, the platform is more secure and evolving it becomes much easier.
When starting a modular application, the JVM verifies whether all modules can be resolved based on the `requires` statements—a big step up from the brittle classpath. Modules allow you to better structure your application with strong enforcement of encapsulation and explicit dependencies. You can learn more about working with modules in Java 9 with this course.
2. Linking
When you have modules with explicit dependencies, and a modularized JDK, new possibilities arise. Your application modules now state their dependencies on other application modules and on the modules it uses from the JDK. Why not use that information to create a minimal runtime environment, containing just those modules necessary to run your application? That's made possible with the new jlink tool in Java 9. Instead of shipping your app with a fully loaded JDK installation, you can create a minimal runtime image optimized for your application.
3. JShell: the interactive Java REPL
Many languages already feature an interactive Read-Eval-Print-Loop, and Java now joins this club. You can launch jshell from the console and directly start typing and executing Java code. The immediate feedback of jshell makes it a great tool to explore APIs and try out language features.
Testing a Java regular expression is a great example of how jshell can make your life easier. The interactive shell also makes for a great teaching environment and productivity boost, which you can learn more about in this webinar. No longer do you have to explain what this `public static void main(String[] args)` nonsense is all about when teaching people how to code Java.
4. Improved Javadoc
Sometimes it's the little things that can make a big difference. Did you use Google all the time to find the right Javadoc pages, just like me? That's no longer necessary. Javadoc now includes search right in the API documentation itself. As an added bonus, the Javadoc output is now HTML5 compliant. Also, you'll notice that every Javadoc page includes information on which JDK module the class or interface comes from.
5. Collection factory methods
Often you want to create a collection (e.g., a List or Set) in your code and directly populate it with some elements. That leads to repetitive code where you instantiate the collection, followed by several `add` calls. With Java 9, several so-called collection factory methods have been added:
Set<Integer> ints = Set.of(1, 2, 3); List<String> strings = List.of("first", "second");
Besides being shorter and nicer to read, these methods also relieve you from having to pick a specific collection implementation. In fact, the collection implementations returned from the factory methods are highly optimized for the number of elements you put in. That's possible because they're immutable: adding items to these collections after creation results in an `UnsupportedOperationException`.
6. Stream API improvements
The Streams API is arguably one of the best improvements to the Java standard library in a long time. It allows you to create declarative pipelines of transformations on collections. With Java 9, this only gets better. There are four new methods added to the Stream interface: dropWhile, takeWhile, ofNullable. The iterate method gets a new overload, allowing you to provide a Predicate on when to stop iterating:
IntStream.iterate(1, i -> i < 100, i -> i + 1).forEach(System.out::println);
The second argument is a lambda that returns true until the current element in the IntStream becomes 100. This simple example therefore prints the integers 1 until 99 on the console.
Besides these additions on Stream itself, the integration between Optional and Stream has been improved. It's now possible to turn an Optional object into a (possibly empty) Stream with the new `stream` method on Optional:
Stream<Integer> s = Optional.of(1).stream();
Turning an Optional into a Stream is especially useful when composing complex Stream pipelines.
7. Private interface methods
Java 8 brought us default methods on interfaces. An interface can now also contain behavior instead of only method signatures. But what happens if you have several default methods on an interface with code that does almost the same thing? Normally, you'd refactor those methods to call a private method containing the shared functionality. But default methods can't be private. Creating another default method with the shared code is not a solution, because this helper method becomes part of the public API. With Java 9, you can add private helper methods to interfaces to solve this problem:
public interface MyInterface { void normalInterfaceMethod(); default void interfaceMethodWithDefault() { init(); } default void anotherDefaultMethod() { init(); } // This method is not part of the public API exposed by MyInterface private void init() { System.out.println("Initializing"); } }
If you're evolving APIs with default methods, private interface methods can be helpful in structuring their implementation.
8. HTTP/2
A new way of performing HTTP calls arrives with Java 9. This much overdue replacement for the old `HttpURLConnection` API also supports WebSockets and HTTP/2 out of the box. One caveat: The new HttpClient API is delivered as a so-called _incubator module_ in Java 9. This means the API isn't guaranteed to be 100% final yet. Still, with the arrival of Java 9 you can already start using this API:
HttpClient client = HttpClient.newHttpClient(); HttpRequest req = HttpRequest.newBuilder(URI.create("")) .header("User-Agent","Java") .GET() .build(); HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandler.asString());
Besides this simple request/response model, HttpClient provides new APIs to deal with HTTP/2 features such as streams and server push.
9. Multi-release JARs
The last feature we're highlighting is especially good news for library maintainers. When a new version of Java comes out, it takes years for all users of your library to switch to this new version. That means the library has to be backward compatible with the oldest version of Java you want to support (e.g., Java 6 or 7 in many cases). That effectively means you won't get to use the new features of Java 9 in your library for a long time. Fortunately, the multi-release JAR feature allows you to create alternate versions of classes that are only used when running the library on a specific Java version:
multirelease.jar ├── META-INF │ └── versions │ └── 9 │ └── multirelease │ └── Helper.class ├── multirelease ├── Helper.class └── Main.class.
As you can see, Java 9 offers a wide array of new features, both small and large. Ready to get started?
Learn more: Java 9 Modularity: First Look | https://www.pluralsight.com/blog/software-development/java-9-new-features | CC-MAIN-2017-47 | refinedweb | 1,338 | 56.05 |
Language Feature Highlight: Local Type Inference in C# 3.0 and Visual Basic 9.0
The focus of this article will be on highlighting the local type inference feature that has been added to C# 3.0 and Visual Basic 9.0 languages. You'll touch on what it is, the syntax behind it, and why it is relevant to understand. You'll also touch on some examples of invalid uses because it can be just as helpful to examine what it is not to get a grasp on the concept.
Local Type Inference Defined
Local type inference is a language feature that allows you to define variables and use them without worrying about their true type. Local type inference is also interchangeably known as implicitly typed local variables. The burden is put on the respective language compiler to determine the type of a variable by inferring it from the expression assigned to the variable. The result is type safety while allowing you to write more relaxed code, which is required to support Language Integrated Query (LINQ).
Based on the description and a first glance of code, it is very easy to mistake type inference to be similar to defining everything as a type object or use of variants, which is heavily used in Visual Basic 6.0. This is entirely untrue and not what type inference is about. Type inferred variables are strongly typed. The type cannot be changed once it is assigned as could be done with a variant type so it does not involve any casting operations or the resulting performance implications. A strong type is assigned, but simply done so by the compiler based on the results of the expression assigned to the variable. The net effect is the true type isn't as readily apparent when reading code, but the Visual Studio IDE will tell you the type assigned along with the GetType() method will return a strong type at runtime.
There may be temptation over time to get lazy and let the compiler do the work for you by using type inference across the board. However, this is where the local part of local type inference comes into play. Type inference can only be used within a local scope where its type can be inferred by the expression assignment. Type inference cannot be applied to any of the following:
- Cannot be a part of a member property declaration on a class, struct, or interface
- Cannot be used in a parameter list on a method
- Cannot be a return type for a method
- Cannot be defined without a right hand assignment expression
- Cannot reassign to be a different type once type has been inferred
Local Type Inference in C# 3.0
C# 3.0 implements local type inference through the var keyword in place of a specific type in a variable declaration.
The sample code below demonstrates the syntax for local type inference in C#. I created a new Windows console project to hold the code. Visual Studio 2008 Beta 2 was used to create the examples contained within.
namespace CodeGuru.TypeInference{ class Program { static void Main(string[] args) { int a = 5; var b = a; // int var x = 5.5M; // double var s = "string"; // string var(); } })
It can be just as useful at times to look at examples where something does not apply. The following sample C# code demonstrates situations in which local type inference cannot be used. The code that is listed below will result in six different compile errors based on invalid usage and intentionally will not compile.
namespace CodeGuru.TypeInference{ class Program { var test = "invalid use"; // invalid in member declaration // Invalid as parameter public void TryAsParameter(var parm) { } // Invalid as return type public var TryAsReturnType() { return "invalid use"; } public void TryInvalidLocalUse() { var local1; // must be initialized var local2 = null; // can't infer type from null var local3 = 5; // valid use local3 = "change type"; // can't change type } }}
Page 1 of 2
| http://www.developer.com/net/article.php/3710921/Language-Feature-Highlight-Local-Type-Inference-in-C-30-and-Visual-Basic-90.htm | CC-MAIN-2014-15 | refinedweb | 659 | 58.21 |
Multi-step-Time-series-predicting using RNN LSTM
Household Power Consumption Prediction using RNN-LSTM
Power outages cause huge economic losses, so it is very important to be able to predict power consumption.
Given the rise of smart electricity meters and the wide adoption of electricity generation technology like solar panels, there is a wealth of electricity usage data available.
Problem Statement :
Given the power consumption data for the previous week, we have to predict the power consumption for the next week.
Watch Full Video:
Download dataset:
Details:
Dataset Description:
The data was collected between December 2006 and November 2010 and observations of power consumption within the household were collected every minute.
It is a multivariate series comprised of seven variables:
- Global_active_power: total active power consumed by the household (kilowatts)
- Global_reactive_power: total reactive power consumed by the household (kilowatts)
- Voltage: average voltage (volts)
- Global_intensity: average current intensity (amperes)
- Sub_metering_1: active energy for the kitchen (watt-hours)
- Sub_metering_2: active energy for the laundry (watt-hours)
- Sub_metering_3: active energy for the electric water heater and air conditioner (watt-hours)
This data represents a multivariate time series of power-related variables that in turn could be used to model and even forecast future electricity consumption
Time-series prediction plays a major role in machine learning, yet it is often neglected. There are plenty of algorithms we could use for these problems, from classical statistical approaches (such as those in Statsmodels) to econometric models. Today we will take a look at how to apply deep learning algorithms to predict time-series data.
Why use a Deep Learning Algorithm?
With data volumes growing enormously day by day, we shouldn't confine ourselves to only the standard ML algorithms. Deep learning algorithms help us handle large volumes of data without losing the key insights, and tuning the model in the right way gives us the maximum yield, i.e., in our case, maximum accuracy 😊. The model also learns through its own neural network architecture whether its predictions are getting better or worse.
For this time-series forecasting task we will use the Long Short-Term Memory unit (LSTM).
Recurrent Neural Network (RNN)
To understand an LSTM network, we need to understand a Recurrent Neural Network first. This kind of network is used to recognize patterns where past results influence the present result. A typical example of RNN usage is time-series modeling, in which the order of the data is extremely important. In this network architecture, the neuron uses as input not only the regular input (the previous layer's output), but also its own previous state.
It is important to notice that H represents the neuron state. Therefore, when in state H_1, the neuron uses as input the parameter X_1 and H_0 (its previous state). The main problem of this model is memory loss: the network's older states are quickly forgotten. In sequences where we need to remember beyond the immediate past, RNNs fail to remember.
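To make the recurrence concrete, here is a minimal NumPy sketch of a single vanilla RNN step; the tanh activation and the weight shapes are the usual textbook choices, not taken from the model we build later:

```python
import numpy as np

def rnn_step(x_t, h_prev, Wx, Wh, b):
    """One vanilla RNN step: the new state H_t depends on the current
    input X_t and on the previous state H_{t-1}."""
    return np.tanh(x_t @ Wx + h_prev @ Wh + b)

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
Wx = rng.normal(size=(n_in, n_hidden))
Wh = rng.normal(size=(n_hidden, n_hidden))
b = np.zeros(n_hidden)

h = np.zeros(n_hidden)                  # H_0, the initial state
for x_t in rng.normal(size=(5, n_in)):  # a toy sequence of 5 time steps
    h = rnn_step(x_t, h, Wx, Wh, b)     # each H_t feeds into the next step
print(h.shape)  # (4,)
```

Because the state is repeatedly squashed through tanh and mixed with new inputs, information from early steps fades quickly, which is exactly the memory-loss problem described above.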
The Long Short-Term Memory unit (LSTM) was created to overcome the limitations of the Recurrent Neural Network (RNN). Long time-series datasets can make training an RNN architecture slow, and simply restricting the data volume means a loss of information: in any time-series dataset we need to capture the previous trends and the seasonality of the overall data to make the right predictions.
Before going into a brief explanation of the LSTM cell, let us first see what the LSTM cell looks like:
The architecture may look a little complicated at first glance, but it is pretty neat and easily understandable if we break it into parts.
Let's first start by understanding our inputs and outputs. On the left-hand side of the diagram, the typical inputs are Ct-1, the previous cell state; ht-1, the output from the previous cell; and Xt, the input of the present cell. The outputs of the cell are Ct and ht, the corresponding cell state and output of the present cell. The first step of an LSTM is the forget gate layer (f), where we determine what we are going to forget from the previous cell state. It takes the inputs ht-1 and Xt, applies a linear transformation with some weights and bias terms, and passes the result into a sigmoid function. As we know, the output of a sigmoid function is always between 0 and 1: here 0 means forget it and 1 means keep it.
Forget gate layer => f = sigmoid( Weights (ht-1, Xt) + bias )
The second step is a two-part process, and it is where the actual processing of new information happens. In the first part we take the same inputs as before, ht-1 and Xt, apply a linear transformation with some weights and biases, and pass the result to a sigmoid function. In the second part we apply another linear transformation to ht-1 and Xt with its own weights and biases, but this time pass it through a hyperbolic tangent function (tanh). At the end of this step, we get a vector of candidate values for the present cell.
First part => I = sigmoid( Weights (ht-1,Xt) + bias)
Second part => II = tanh( Weights (ht-1,Xt) + bias)
The third step is the update step, which derives the new cell state Ct from the previous steps. We multiply the previous cell state by the forget gate output and add the element-wise product of the two vectors from the second step, which forms the new cell state Ct of the present cell at time t.
Update layer => Ct = f * Ct-1 + I * II
The final step produces the other main output of the cell. We apply a linear transformation to the previous output ht-1 and the present input Xt with some weight and bias terms, and pass it through a sigmoid layer. Finally, we multiply this output by the new cell state Ct passed through a hyperbolic tangent function, which gives us the present output ht.
Final layer => o = sigmoid( Weights (ht-1, Xt) + bias )
ht = o * tanh(Ct)
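Putting the four steps together, here is a minimal NumPy sketch of one LSTM forward step. The weight layout (one matrix per gate acting on the concatenated [ht-1, Xt]) is one common convention for illustration, not the exact parameterization a framework like Keras uses internally:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM forward step following the equations above."""
    z = np.concatenate([h_prev, x_t])   # [ht-1, Xt]
    f = sigmoid(W['f'] @ z + b['f'])    # forget gate layer
    i = sigmoid(W['i'] @ z + b['i'])    # first part (I)
    g = np.tanh(W['g'] @ z + b['g'])    # second part (II), candidate values
    o = sigmoid(W['o'] @ z + b['o'])    # output gate
    c_t = f * c_prev + i * g            # update: Ct = f * Ct-1 + I * II
    h_t = o * np.tanh(c_t)              # output: ht = o * tanh(Ct)
    return h_t, c_t

rng = np.random.default_rng(1)
n_in, n_hid = 2, 3
W = {k: rng.normal(size=(n_hid, n_hid + n_in)) for k in 'figo'}
b = {k: np.zeros(n_hid) for k in 'figo'}
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
print(h.shape, c.shape)  # (3,) (3,)
```

In practice we will let Keras handle all of this internally, but seeing the gates spelled out makes the diagram much easier to read.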
Now we have a clear understanding of the step-by-step dissection of the LSTM layer. Let's see how to apply our LSTM cell to time-series data.
How to? Let’s Begin
Importing Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from numpy import nan
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
#Reading the dataset
data = pd.read_csv('household_power_consumption.txt', sep = ';', parse_dates = True, low_memory = False)
#printing top rows
data.head()
#concatenating the date and time columns into a 'date_time' column
data['date_time'] = data['Date'].str.cat(data['Time'], sep= ' ')
data.drop(['Date', 'Time'], inplace= True, axis = 1)
data.head()
data.set_index(['date_time'], inplace=True)
data.head()
Next, we can mark all missing values, indicated with a '?' character, with a NaN value, which is a float.
#replacing each '?' character with a NaN value
data.replace('?', nan, inplace=True)
#This will allow us to work with the data as one array of floating point values rather than mixed types (less efficient)
data = data.astype('float')
#information of the dataset
data.info()
#checking the null values
np.isnan(data).sum()
Global_active_power      25979
Global_reactive_power    25979
Voltage                  25979
Global_intensity         25979
Sub_metering_1           25979
Sub_metering_2           25979
Sub_metering_3           25979
dtype: int64
We will fill each missing value with the value at the same time one day earlier:

def fill_missing(data):
    one_day = 24*60
    for row in range(data.shape[0]):
        for col in range(data.shape[1]):
            if np.isnan(data[row, col]):
                data[row, col] = data[row-one_day, col]
fill_missing(data.values)
#checking the nan values
np.isnan(data).sum()
Global_active_power      0
Global_reactive_power    0
Voltage                  0
Global_intensity         0
Sub_metering_1           0
Sub_metering_2           0
Sub_metering_3           0
dtype: int64
#printing the shape of the data
data.shape
(2075259, 7)
Here, we can observe that we have 2075259 datapoints and 7 features
data.head()
Prepare power consumption for each day
We can now save the cleaned-up version of the dataset to a new file; in this case we will just change the file extension to .csv and save the dataset as 'cleaned_data.csv'.
#conversion of dataframe to .csv
data.to_csv('cleaned_data.csv')
#reading the dataset
dataset = pd.read_csv('cleaned_data.csv', parse_dates = True, index_col = 'date_time', low_memory = False)
#printing the top rows
dataset.head()
#printing the bottom rows
dataset.tail()
Exploratory Data Analysis
#Downsampling the data into day-wise bins and summing the values of the timestamps falling into each bin
data = dataset.resample('D').sum()
#data after resampling into day-wise bins
data.head()
Plotting all the features at various time stamps
fig, ax = plt.subplots(figsize=(18,18))
for i in range(len(data.columns)):
    plt.subplot(len(data.columns), 1, i+1)
    name = data.columns[i]
    plt.plot(data[name])
    plt.title(name, y=0, loc = 'right')
    plt.yticks([])
plt.show()
fig.tight_layout()
Exploring Active power consumption for each year
#we have considered 4 years here
years = ['2007', '2008', '2009', '2010']
Year-wise plotting of the feature Global_active_power
fig, ax = plt.subplots(figsize=(18,18))
for i in range(len(years)):
    plt.subplot(len(years), 1, i+1)
    year = years[i]
    active_power_data = data[str(year)]
    plt.plot(active_power_data['Global_active_power'])
    plt.title(str(year), y = 0, loc = 'left')
plt.show()
fig.tight_layout()
#for year 2006 data['2006']
Power consumption distribution with histogram

Year-wise histogram plot of feature Global_active_power
fig, ax = plt.subplots(figsize=(18,18))
for i in range(len(years)):
    plt.subplot(len(years), 1, i+1)
    year = years[i]
    active_power_data = data[str(year)]
    active_power_data['Global_active_power'].hist(bins = 200)
    plt.title(str(year), y = 0, loc = 'left')
plt.show()
fig.tight_layout()
Histogram plot for all features
fig, ax = plt.subplots(figsize=(18,18))
for i in range(len(data.columns)):
    plt.subplot(len(data.columns), 1, i+1)
    name = data.columns[i]
    data[name].hist(bins=200)
    plt.title(name, y=0, loc = 'right')
    plt.yticks([])
plt.show()
fig.tight_layout()
Plot power consumption histogram for each month of 2007
months = [i for i in range(1,13)]
fig, ax = plt.subplots(figsize=(18,18))
for i in range(len(months)):
    ax = plt.subplot(len(months), 1, i+1)
    month = '2007-' + str(months[i])
    active_power_data = dataset[month]
    active_power_data['Global_active_power'].hist(bins = 100)
    ax.set_xlim(0,5)
    plt.title(month, y = 0, loc = 'right')
plt.show()
fig.tight_layout()
Observation :
1. From the above diagram we can say that power consumption in the months of Nov, Dec, Jan, Feb and Mar is higher, as there is a long tail compared to other months.
2. It also suggests that heating systems are used during the winter season and not in summer.
3. The above distribution is highly concentrated between 0.3W and 1.3W.
Active Power Uses Prediction
What can we predict
- Forecast hourly consumption for the next day.
- Forecast daily consumption for the next week.
- Forecast daily consumption for the next month.
- Forecast monthly consumption for the next year.
Modeling Methods
There are many modeling methods and few of those are as follows
- Naive Methods -> Naive methods would include methods that make very simple, but often very effective assumptions.
- Classical Linear Methods -> Classical linear methods include techniques that are very effective for univariate time series forecasting
- Machine Learning Methods -> Machine learning methods require that the problem be framed as a supervised learning problem.
- K-nearest neighbors.
- SVM
- Decision trees
- Random forest
- Gradient boosting machines
- Deep Learning Methods -> Combinations of CNN, LSTM and ConvLSTM have proven effective on time series classification tasks
- CNN
- LSTM
- CNN – LSTM
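Before reaching for the deep learning methods, it helps to see what the naive methods above look like. Here is a minimal sketch of two persistence baselines on made-up numbers; any trained model should beat these:

```python
import numpy as np

# One observed week of daily consumption totals (toy numbers)
last_week = np.array([1209.2, 3390.5, 2203.8, 1666.2, 2225.7, 1723.3, 2341.3])

# Daily persistence: forecast every day of next week as the last observed day
daily_naive = np.repeat(last_week[-1], 7)

# Weekly persistence: forecast next week as a copy of the last observed week
weekly_naive = last_week.copy()

print(daily_naive)
print(weekly_naive)
```

Despite their simplicity, these baselines are the usual yardstick for the week-ahead forecasting problem framed below.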
Problem Framing:
Given recent power consumption, what is the expected power consumption for the week ahead?
This requires that a predictive model forecast the total active power for each day over the next seven days
A model of this type could be helpful within the household in planning expenditures. It could also be helpful on the supply side for planning electricity demand for a specific household.
Input -> Predict
[Week1] -> Week2
[Week2] -> Week3
[Week3] -> Week4
#top rows
data.head()

#printing last rows
data.tail()

#here we are splitting the dataset
#dataset upto end of 2009 is in train dataset and remaining we keep in test dataset
data_train = data.loc[:'2009-12-31', :]['Global_active_power']
data_train.head()

date_time
2006-12-16    1209.176
2006-12-17    3390.460
2006-12-18    2203.826
2006-12-19    1666.194
2006-12-20    2225.748
Freq: D, Name: Global_active_power, dtype: float64

data_test = data['2010']['Global_active_power']
data_test.head()

date_time
2010-01-01    1224.252
2010-01-02    1693.778
2010-01-03    1298.728
2010-01-04    1687.440
2010-01-05    1320.158
Freq: D, Name: Global_active_power, dtype: float64
data_train.shape
(1112,)
data_test.shape
(345,)
Observation :
- We have 1112 datapoints in train dataset and 345 datapoints in test dataset
Prepare training data
#training data
data_train.head(14)

date_time
2006-12-16    1209.176
2006-12-17    3390.460
2006-12-18    2203.826
2006-12-19    1666.194
2006-12-20    2225.748
2006-12-21    1723.288
2006-12-22    2341.338
2006-12-23    4773.386
2006-12-24    2550.012
2006-12-25    2743.120
2006-12-26    3934.110
2006-12-27    1528.760
2006-12-28    2072.638
2006-12-29    3174.392
Freq: D, Name: Global_active_power, dtype: float64
#converting the data into numpy array
data_train = np.array(data_train)

#we are splitting the data weekly wise (7 days)
X_train, y_train = [], []
for i in range(7, len(data_train)-7):
    X_train.append(data_train[i-7:i])
    y_train.append(data_train[i:i+7])

#converting list to numpy array
X_train, y_train = np.array(X_train), np.array(y_train)

#shape of train and test dataset
X_train.shape, y_train.shape

((1098, 7), (1098, 7))

#printing the ytrain value
pd.DataFrame(y_train).head()

#Normalising the dataset between 0 and 1
x_scaler = MinMaxScaler()
X_train = x_scaler.fit_transform(X_train)

#Normalising the dataset
y_scaler = MinMaxScaler()
y_train = y_scaler.fit_transform(y_train)

pd.DataFrame(X_train).head()

#converting to 3 dimensions
X_train = X_train.reshape(1098, 7, 1)
X_train.shape
(1098, 7, 1)
Build LSTM Model
#building sequential model using Keras
reg = Sequential()
reg.add(LSTM(units = 200, activation = 'relu', input_shape=(7,1)))
reg.add(Dense(7))

#here we have considered loss as mean square error and optimizer as adam
reg.compile(loss='mse', optimizer='adam')

#training the model
reg.fit(X_train, y_train, epochs = 100)
Train on 1098 samples Epoch 1/100 1098/1098 [==============================] - 2s 2ms/sample - loss: 0.0626 Epoch 2/100 1098/1098 [==============================] - 0s 296us/sample - . . . . . Epoch 99/100 1098/1098 [==============================] - 0s 270us/sample - loss: 0.0228 Epoch 100/100 1098/1098 [==============================] - 0s 269us/sample - loss: 0.0228
<tensorflow.python.keras.callbacks.History at 0x19ba56fc668>
Observation:
- We are done with training; the final loss we got is about 0.0228
Prepare test dataset and test LSTM model
#testing dataset
data_test = np.array(data_test)

#here we are splitting the data weekly wise (7 days)
X_test, y_test = [], []
for i in range(7, len(data_test)-7):
    X_test.append(data_test[i-7:i])
    y_test.append(data_test[i:i+7])

X_test, y_test = np.array(X_test), np.array(y_test)

X_test = x_scaler.transform(X_test)
y_test = y_scaler.transform(y_test)

#converting to 3 dimensions
X_test = X_test.reshape(331,7,1)
X_test.shape
(331, 7, 1)
y_pred = reg.predict(X_test)

#bringing y_pred values to their original form by using inverse transform
y_pred = y_scaler.inverse_transform(y_pred)
y_pred
array([[1508.9413 , 1476.1537 , 1487.5676 , ..., 1484.8464 , 1459.3864 , 1551.5675 ], [1158.2788 , 1287.0326 , 1346.428 , ..., 1430.5685 , 1420.6346 , 1472.5759 ], [1571.7665 , 1507.0337 , 1516.5574 , ..., 1432.5813 , 1393.9161 , 1504.1714 ], ..., [ 952.85785, 852.4236 , 933.62585, ..., 800.12006, 831.2844 , 1005.20844], [1579.4896 , 1353.6078 , 1278.9501 , ..., 981.4198 , 967.6466 , 1146.7898 ], [1629.0509 , 1392.7751 , 1288.7218 , ..., 1052.977 , 1070.8586 , 1243.1346 ]], dtype=float32)
y_true = y_scaler.inverse_transform(y_test)
y_true
array([[ 555.664, 1593.318, 1504.82 , ..., 0. , 1995.796, 2116.224], [1593.318, 1504.82 , 1383.18 , ..., 1995.796, 2116.224, 2196.76 ], [1504.82 , 1383.18 , 0. , ..., 2116.224, 2196.76 , 2150.112], ..., [1892.998, 1645.424, 1439.426, ..., 1973.382, 1109.574, 529.698], [1645.424, 1439.426, 2035.418, ..., 1109.574, 529.698, 1612.092], [1439.426, 2035.418, 1973.382, ..., 529.698, 1612.092, 1579.692]])
Evaluate the model
Here, we use mean squared error (reported as its root, RMSE) as the metric, since this is a regression problem.
def evaluate_model(y_true, y_predicted):
    scores = []
    #calculate scores for each day
    for i in range(y_true.shape[1]):
        mse = mean_squared_error(y_true[:, i], y_predicted[:, i])
        rmse = np.sqrt(mse)
        scores.append(rmse)
    #calculate score for whole prediction
    total_score = 0
    for row in range(y_true.shape[0]):
        for col in range(y_predicted.shape[1]):
            total_score = total_score + (y_true[row, col] - y_predicted[row, col])**2
    total_score = np.sqrt(total_score/(y_true.shape[0]*y_predicted.shape[1]))
    return total_score, scores
evaluate_model(y_true, y_pred)
(579.2827596682928, [598.0411885086157, 592.5770673397814, 576.1153945912635, 563.9396525162248, 576.5479538079353, 570.7699415990154, 576.2430188855649])
#standard deviation
np.std(y_true[0])
710.0253857243853
Conclusions:
- From the above experiment, we got a root mean square error of around 598 watts.
- In order to check whether our model is performing well or badly, we need to look at the standard deviation, which here is about 710 watts.
- Here the root mean square error is less than the standard deviation. Hence, we can say that our model is performing well.
Hello, so I am new to quaternions and I've been trying hard to understand them... I want to smoothly rotate a cube by 90 degrees, so I did the following:
using UnityEngine; using System.Collections;
public class TriggerRotator : MonoBehaviour {
private GameObject cube;
private bool startRotation;
float counter = 0;
public Quaternion from;
public Quaternion to;
// Use this for initialization
void Awake() {
cube = GameObject.FindGameObjectWithTag("Plateform");
//from = new Quaternion(0, 0, 0, 1);
//to = new Quaternion(0, 0, 0, 1);
//from.SetEulerAngles(0, 0, 0);
//to.SetEulerAngles(1f, 0, 0);
from = new Quaternion(0, 0, 0, 1);
to = new Quaternion(1.0f, 0, 0, 1);
//from.SetEulerAngles(0, 0, 0);
//to.SetEulerAngles(1f, 0, 0);
//this
}
// Update is called once per frame
void Update () {
Debug.Log(from.eulerAngles);
if(startRotation)
{
cube.transform.rotation = Quaternion.Slerp(from, to, counter);
counter += 0.01f;
}
}
void OnTriggerEnter(Collider other)
{
if(other.gameObject.tag=="Player")
{
Debug.Log("entered");
startRotation = true;
// cube.transform.Rotate(40, 0, 0);
}
}
}
The code is running but the cube reaches only 89.981 degrees. How can I make it a sharp 90 degrees? Thanks for all.
does counter reach exactly one ?
also, i'm not sure that's how you want to construct the quaternion.
that form initializes the x,y,z,w components of the quaternion,
which are not the same as axis, angle.
per the docs, "don't modify this unless you know quaternions inside out".
i'd suggest constructing your quaternions with Quaternion.AngleAxis or Quaternion.Euler.
Answer by Bunny83 · Jan 12, 2017 at 06:03 AM
Your quaternion is not normalized. As elenzil said you usually use either the Euler, AngleAxis, FromToRotation or LookRotation method to construct a quaternion. However if you want to create it manually you should know how it works under the hood.
So if you want to rotate around a given vector "v" by the angle of "a" you would do:
float radians = a * Mathf.Deg2Rad;
v = v.normalized * Mathf.Sin(radians / 2);
Quaternion q = new Quaternion(v.x, v.y, v.z, Mathf.Cos(radians / 2));
This does exactly the same as Quaternion.AngleAxis(a, v);
Quaternion.AngleAxis(a, v);
So as an example with numbers if you want to rotate around the x axis by an angle of 90° you would get
new Quaternion(0.70710678f, 0f, 0f, 0.70710678f);
0.70710678f is sin(45°) or "1 / sqrt(2)"
0.70710678f
As you can see the resulting quaternion is always normalized as it's "length" is 1.0 at all times. In this example since "0.70710678f" is the inverse of the square root of 2 when you square it you get "0.5". 0.5 + 0 + 0 + 0.5 == 1.0.
1.0
Hello, thanks for replying. In the case of Quaternion.AngleAxis(a, v) I looked up the Unity Scripting API and got this: public static Quaternion AngleAxis(float angle, Vector3 axis); so it takes an angle (perhaps 90 degrees in my case) and a Vector3 axis (which might be the x, y or z axis). My question now is: how do I rotate it over a certain amount of time? (for example rotate 90 degrees in 10 seconds)
your original use of slerp() was fine.
all you need to do is construct from and to using AngleAxis() or one of the other methods Bunny83 mentioned. I'd recommend AngleAxis() or Euler(), whichever looks more comfortable to you.
from
to
Yes, like elenzil said you can use slerp just fine as long as you use proper absolute rotations. Another way is to do relative rotations. when you deal with Transforms Unity has several helpers which do that for you like Rotate or RotateAround. However you can also do it "manually" by rotating only a fraction each frame
void Update()
{
Quaternion q = Quaternion.AngleAxis(90f * Time.deltaTime / 10f, yourAxis);
cube.transform.rotation = q * cube.transform.rotation;
}
This would rotate the cube by 90° around "yourAxis" within 10 seconds. Of course this would never "stop" rotating as you do a relative rotation each frame at a rotational speed of 9° per second.
Technically it's also possible to use Vector3.RotateTowards and rotate the current object space to a target space. However keep in mind that you need at least two linear independent vectors to specify a coordinate space. Usually Unity uses the forward vector "z" and the up vector "y" (like the parameters for LookRotation).
There's no "best way" to do rotations as it depends entirely on your usecase.
I'm sorry for disturbing , I also just tried it out
if(startRotation)
{
cube.transform.rotation = Quaternion.AngleAxis(90, Vector3.up);
}
it gave me a 90.000001
90.000001 is about as close to 90 as you can expect to get. it's a fundamental aspect of doing math with floating-point numbers.
:) So you complain about an error of "one millionth" of a degree? Unity stores rotations as quaternions and not in euler angles representation. Euler angles give you way too many problems. The (euler) angles you see in the inspector are calculated from the internal quaternion.
The QColormap class maps device independent QColors to device dependent pixel values. More...
#include <QColormap>
The QColormap class maps device independent QColors to device dependent pixel values.
This enum describes how QColormap maps device independent RGB values to device dependent pixel values.
Constructs a copy of another colormap.
Destroys the colormap.
Returns a QColor for the pixel.
See also pixel().
Returns a vector of colors which represents the device's colormap for Indexed and Gray modes. This function returns an empty vector for Direct mode.
See also size().
Returns the depth of the device.
See also size().
This function is only available on Windows.
Returns a handle to the HPALETTE used by this colormap. If no HPALETTE is being used, this function returns zero.
Returns the colormap for the specified screen. If screen is -1, this function returns the colormap for the default screen.
Returns the mode of this colormap.
See also QColormap::Mode.
Returns a device dependent pixel value for the color.
See also colorAt().
Returns the size of the colormap for Indexed and Gray modes; Returns -1 for Direct mode.
See also colormap().
Assigns the given colormap to this color map and returns a reference to this color map.
This function was introduced in Qt 4.2. | http://doc.qt.nokia.com/4.5-snapshot/qcolormap.html#colorAt | crawl-003 | refinedweb | 208 | 63.15 |
Does anyone know the API command for returning an object's namespace/class? In my example I want to feed in an element from Revit, such as a piece of furniture, and have my Python return Autodesk.Revit.DB.FamilyInstance. In a different example a wall would return Autodesk.Revit.DB.Wall. I can get the built-in category but seem to be going round in circles trying to access the class, when I'm sure it's fairly straightforward.
Not sure if this is what you wanted but Clockwork and Blackbox have nodes that seem to do what I think you wanted. (And yes I have walls to be used as canvas in my current file
)
Dynamo and other packages give you the built-in categories, but I don't want the Revit category, rather its namespace within the API. In your example with walls I would like to be seeing the full class Autodesk.Revit.DB.Wall rather than just Wall. The reason I need the whole namespace is because these are categorised differently to Revit's built-in categories. As an example, a chair created as a family in the furniture category will show as Furniture within the built-in categories, whereas for this instance I would expect to see Autodesk.Revit.DB.FamilyInstance as the class/namespace. Hope that clears things up slightly.
Maybe this helps:
Marcel
Mark, you’ll want to use Element.TypeName from Julien Benoit’s excellent package SteamNodes. If you’re dealing with a list of elements, you’ll have to use it with List.Map - otherwise it’ll just tell you the namespace of the list.
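For readers following along, the underlying idea can be sketched in plain Python. The Revit/.NET equivalent inside a Dynamo Python Script node (assumed here, untested) would be `UnwrapElement(element).GetType().FullName`, which yields strings like "Autodesk.Revit.DB.Wall":

```python
from collections import OrderedDict

def full_type_name(obj):
    """Module-qualified class name, the Python analogue of .NET's Type.FullName."""
    cls = type(obj)
    return cls.__module__ + "." + cls.__name__

print(full_type_name(OrderedDict()))  # collections.OrderedDict
print(full_type_name([]))             # builtins.list
```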
Andreas, perfect, that's what I'm after. Always seem to forget about unwrapping elements in Python! This leads on to my second question: is there a current node that lists out the current Revit project's units? So it would tell me Length is mm, Area is m², etc. Or should I look at this in Python also? They need to make searching for usable nodes more intuitive through people's libraries.
Mark, since I was not aware of such a node (and because I'll be needing it for unit conversions when migrating Clockwork to Dynamo 0.8) I went ahead and added that functionality to Clockwork - just download the latest version (0.75.33):
| https://forum.dynamobim.com/t/revit-api-and-python/1570 | CC-MAIN-2020-50 | refinedweb | 389 | 72.56 |
Hello, my name is Kostas and I am an 18-year-old student at Ionian University (computer science). My hope is that someday I will become a good game/graphics programmer. After 7 hours of work I finished my first game, tic tac toe (2 players, no AI), using C++ (IDE: Visual Studio 2012 Professional, since it's free for students) and SDL. I will post my C++ file and hopefully you could do a review of my code and suggest what I should do better... Your suggestions are necessary so I can become better! Thanks a lot for your time. link here : . For some reason I couldn't attach the cpp file so I had to upload it ?!
Code review for my first game ?
#1 Members - Reputation: 124
Posted 25 September 2013 - 02:16 PM
#2 Crossbones+ - Reputation: 1653
Posted 25 September 2013 - 03:17 PM
You have several typos or spelling mistakes in code or comments; these are confusing or annoying to a maintainer but not really a problem (most notably SCREEN_WITDH)
There are a lot of global variables, some of which are not very descriptive or helpful, e.g. event / gameEvent. You might reduce the global variables by combining some of them into structs and/or putting them into arrays or some other container. It's also possible that some of them don't need to be global at all, and can be passed as parameters instead and/or held in a local somewhere.
Your variable and function names are not very descriptive and a maintenance programmer will need to refer to their declaration comments a lot. "state" might be better as "board_state", then we know what it's the state of. (conceptually, every variable is a state of something). Some function names don't start with a verb, for example "position" or "score".
"using namespace std" - I could possibly get into a Holy War over this, but I prefer not to use this nowadays.
Function position() ... this function needs the most work.
1. It is trying to do at least two things at once - determine the mouse click target and draw things on the screen. Logically, I'd try to separate these two operations
2. It does many rectangle-tests for the mouse-position, it looks like these could easily be refactored into an array or something else.
3. There is a great deal of repeated code
I suggest you might refactor position() into several stages, possibly different functions:
a. Determine whether the mouse-button has been clicked on a square, and which one
b. Check whether the square is a legal move (i.e. not already taken)
c. Draw the mark in the box
Obviously I can't see what's in your background image, but I suspect that the board has a regular grid, so you might be able to use a bit of arithmetic to work out the boxes' coordinates programmatically and not have to hard-code the locations of 9 squares.
However, if your tic-tac-toe grid is highly irregular, you could make an array of structs to store their coords thus:
struct myrect { int x1,y1,x2,y2; };
myrect rects[9] = {
{30,25,210,165} , ...
}
Or whatever the correct syntax is!
#3 Members - Reputation: 124
Posted 25 September 2013 - 04:25 PM
Thank you very much for your feedback and i am looking forward to correct all the things you said me. Also about the std i agree that is essential but since the program wasnt big enough i prefered not to use it. I will create my next game now , propably a blackjack and then a pong maybe ? Anyway when i finish it , i will post it here and hopefully i my code will be better eventually. Thank again and i appreciate your help! | http://www.gamedev.net/topic/648253-code-review-for-my-first-game/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024 | CC-MAIN-2014-52 | refinedweb | 633 | 69.21 |
This course will introduce you to the interfaces and features of Microsoft Office 2010 Word, Excel, PowerPoint, Outlook, and Access. You will learn about the features that are shared between all products in the Office suite, as well as the new features that are product specific.
Import them into your blocklist, and re-check once a month that they are still valid. The zip files contain two XML files: the first with the proxy-domain URLs and the second with the proxy URLs. When you import those pre-configured XML lists, you teach your ISA Server to block the most common anonymizer and proxy services for your clients. But the tricks used to surf freely are many: a service such as Ultrasurf may have 100 servers offering proxy service today, and with those lists you block all of them, but in a month Ultrasurf publishes another 3 servers, and your blocking becomes vulnerable again.

Security is a continuous process.

So I suggest you install a virtual machine with the most common proxy and anonymizer clients and, once a month as part of your periodic security checks, test those clients to verify that you are still blocking them.

I'm searching for something like an online service that offers an updated list of these services, so the list can be refreshed frequently; if I find one, I'll post it here :)

Is that clear? :)

There you can find the most up-to-date list of potential proxies, but you have to accept the risk of it, because it is a very particular situation: you make yourself as secure as possible by blocking the most up-to-date proxy list available on the web, while understanding that for now no simple or automatic detection method exists.
Please read this document too, and
use it as an updated blacklist. With the proper Linux commands you can create an updated .txt list, and this can be scheduled with a script.
Use the CURL command to copy the web page, then GREP, CUT, and SORT to create the IP blacklist. Repeat for each page of proxies:
curl > proxy1.html
grep whois\.cgi\?domain\= proxy1.html | cut -d \= -f 3 | cut -d \" -f 1 | sort | uniq > proxy.txt
• Alter accordingly for different sites and when site alters page formatting.
proposal 2.
Detecting a regular expression of glype server proxy request
Example Glype URL:
Format:
{hostname}/browse.php?u={o
Regular Expression to Match:
(browse\.php\?u=).+(&b).*
proposal 3
• The format of the proxy server URL can be turned into a Snort IDS rule
• Example rule for a Glype Proxy:
alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg: “GlypeProxy detected”;
pcre:”/(browse\.php\?u=).+
regards
what do you think?
I am from Saudi Arabia... this link is blocked by my ISP, so I can't access these sites.
you can use a proxy to reach them.
:(
maybe someone on this site can help you;
let me know
Ultrasurf's signature is 140300000101 , and installs by .exe
Blocking the signature in ISA server will help by following the link below: | https://www.experts-exchange.com/questions/26636161/PROXY.html | CC-MAIN-2018-22 | refinedweb | 506 | 56.79 |
Home » Support » Index of All Documentation » How-Tos » How-Tos for Other Libraries »
Wing IDE is an integrated development environment that can be used to write, test, and debug Python code that is written for Twisted..
Installing Twisted
The Twisted website provides complete instructions for installing and using Twisted.
Debugging in Wing IDE
To debug Twisted code launched from within Wing IDE, create a file with the following contents and set it as your main debug file by adding it to your project and then using the Set Main Debug File item in the Debug menu:
from twisted.scripts.twistd import run import os try: os.unlink('twistd.pid') except OSError: pass run()
Then go into the File Properties for this file (by right clicking on it) and set Run Arguments to something like:
-n -y name.tac
The -n option tells Twisted not to daemonize, which would cause the debugger to fail because sub-processes are not automatically debugged. The -y option serves to point Twisted at your .tac file (replace name.tac with the correct name of your file instead).
You can also launch Twisted code from outside of Wing using the module wingdbstub.py that comes with Wing. This is described in Debugging Externally Launched Code in the manual.
Related Documents
Wing IDE provides many other options and tools. For more information:
- Wing IDE Reference Manual, which describes Wing IDE in detail.
- Twisted home page, which provides links to documentation.
- Wing IDE Quickstart Guide which contains additional basic information about getting started with Wing IDE. | https://wingware.com/doc/howtos/twisted | CC-MAIN-2014-15 | refinedweb | 258 | 64.3 |
UnsatisfiedLinkError exception. Why am I getting this?
tyler jones
Ranch Hand
Joined: Dec 01, 2000
Posts: 101
posted
Jun 17, 2002 07:25:00
My applet is running in the background of a page. Then all of a sudden, this message will pop up ...
java/lang/UnsatisfiedLinkError Exception was not handled.
I know it's an exception that I need to catch, but I can't figure out where it's occuring in my code or even why it's occuring. Does anyone see a reason that this would be happening? Thanks.
import java.io.*;
import java.util.*;
import java.awt.*;
import java.net.*;
import java.applet.*;
import java.text.*;
import java.awt.event.*;
import netscape.javascript.JSObject;

public class AlarmContainer extends java.applet.Applet {
    private int index;
    int background[] = new int[3];
    private String tmpStr = "";
    private java.util.Date d = new java.util.Date();
    private DateFormat df = new SimpleDateFormat("MM/dd/yy H:mm");
    private GregorianCalendar today = new GregorianCalendar();
    private int day = today.get( today.DAY_OF_MONTH );
    private int month = today.get( today.MONTH )+1;
    private int year = today.get( today.YEAR );
    private String sSoundFile;
    private String messageID;
    private int IDCount;
    Thread t;
    JSObject win;
    public Long l;

    public void init() {
        win = JSObject.getWindow(this);
        setSize(1, 1);
        sSoundFile = getParameter("snd");
        StringTokenizer bgColor = new StringTokenizer(getParameter("bgColor"), ",");
        StringTokenizer Times = new StringTokenizer(getParameter("Times"), "|");
        StringTokenizer IDS = new StringTokenizer(getParameter("IDS"), "|");
        StringTokenizer alarmTimes = new StringTokenizer(getParameter("alertTimes"), "|");
        IDCount = IDS.countTokens();
        //populate array of alarm clocks with time values passed in
        if ( Times.countTokens() > 0 ) {
            AlarmClock c[] = new AlarmClock[Times.countTokens()];
            for ( int i = 0; i < c.length; i++ ) {
                tmpStr = Times.nextToken();
                if ( tmpStr.trim().length() > 0 ) {
                    try {
                        d = df.parse( month + "/" + day + "/" + year + " " + tmpStr );
                        messageID = (IDCount >= i+1) ? IDS.nextToken() : "0";
                        c[i] = new AlarmClock( d, alarmTimes.nextToken(), messageID, sSoundFile );
                    } catch (ParseException pe) {}
                }
            }
        }
        //create array of rgb values for background color
        for ( int i = 0; i < 3; i++ ) {
            if ( bgColor.hasMoreTokens() )
                background[i] = Integer.parseInt(bgColor.nextToken());
            else
                background[i] = 0;
        }
        setBackground(new Color(background[0], background[1], background[2]));
    }

    public void start() { }

    public void stop() {
        if ( t != null ) {
            t.destroy();
            t = null;
        }
    }

    public void paint(Graphics g) { }

    private class AlarmClock implements Runnable {
        private String sounds;
        private AudioClip audio;
        private java.util.Date activityDate = new java.util.Date();
        private int MessageID;

        public AlarmClock( java.util.Date timeToSet, String alertTimeMessage, String ID, String soundFile) {
            MessageID = Integer.parseInt(ID);
            activityDate = timeToSet;
            sounds = soundFile;
            t = new Thread(this);
            t.setDaemon(true);
            t.start();
        }

        public void run() {
            boolean tryHit = false;
            while ( !tryHit ) {
                try {
                    if ( new java.util.Date().getTime() >= activityDate.getTime() ) {
                        tryHit = true;
                        ringAlarm( Integer.toString(MessageID) );
                    }
                    Thread.sleep(1000);
                } catch (Exception e ) {
                    tryHit = true;
                }
            }
        }

        public void ringAlarm( String mID ) {
            String [] stringArgs = new String[1];
            stringArgs[0] = mID;
            win.call("PopUpAlarm", stringArgs);
            try {
                if (audio != null) {
                    audio.stop();
                    audio = null;
                }
                String url = sounds;
                if (url.length() > 0) {
                    audio = getAudioClip(new URL(getDocumentBase(), url));
                    audio.play();
                }
            } catch(Exception e) {}
        }
    } //end of inner class
} //end of outer AlarmContainer class
Rakesh Ray
Ranch Hand
Joined: Jul 25, 2001
Posts: 51
posted
Jun 17, 2002 07:38:00
Look at the Java console and it will tell you where this is coming from.
My best guess would be: if you have compiled your applet code with Java 2 (because you are using some method exclusively available in 1.2) and your browser's JVM is 1.?, then you would get this problem.
tyler jones
Ranch Hand
Joined: Dec 01, 2000
Posts: 101
posted
Jun 17, 2002 07:43:00
I've written my applet to be 1.1 compliant though so that I wouldn't have issues with anyone who didn't have anything higher installed. I don't know what part of my code isn't 1.1 compliant. Do you see anywhere where it isn't? Thanks.
Dirk Schreckmann
Sheriff
Joined: Dec 10, 2001
Posts: 7023
posted
Jun 17, 2002 18:41:00
Even if you aren't including any post-1.1 classes, you may still need to compile with the target switch like this:
javac -target 1.1 Whatever.java
This is probably only necessary if you're using Sun's Java SDK 1.4 or newer (I'm not familiar with any third party Java compilers).
For more information on javac, take a look at The javac Documentation.
[ How To Ask Good Questions ] [ JavaRanch FAQ Wiki ] [ JavaRanch Radio ]
Rakesh Ray
Ranch Hand
Joined: Jul 25, 2001
Posts: 51
posted
Jun 18, 2002 06:57:00
Try to use the printStackTrace() method in your exception handling; it will tell you about the origin of the problem.
I agree. Here's the link:
Exception Logging to Database using Enterprise Library 5.0
This tutorial on exception logging to a SQL Server database using Microsoft Enterprise Library 5.0 will get you started in 15 minutes flat. Use it to help yourself whenever you want to configure exception logging; please find the sample application attached.
Follow the steps to configure Exception logging to database by defining Policies for your application using Microsoft Enterprise Library 5.0 (further referred to as EL)
1. Download and Install Enterprise Library 5.0 (further referred to as EL)
or from
2. Run the script LoggingDatabase.sql, located on your machine at \Documents\EntLib50Src\Blocks\Logging\Src\DatabaseTraceListener\Scripts. Map the path relevant to your system configuration. This will create the required database called "Logging" and the necessary 3 tables and 4 stored procedures.
3. In your application, right-click on the web.config and select the EL config settings console. When fully configured, the settings look like the attached image; please take time to match it with yours.
4. Click on Database Settings to expand it, then on the + sign to add a database connection string. Enter the connection string of the database you just configured and give it a name, say "ExcepConnString". For the database provider, select System.Data.SqlClient.
5. From the main menu, click on Blocks and add the Exception Handling Settings block.
6. Name the policy Policy1, and similarly add another called Policy2.
7. In the handlers, enter Policy1 and Policy2 in the Title section respectively, so that you can track in the database which policy an exception corresponds to.
Please be aware that for each Exception Type you should have handlers defined.
8. From the main menu, again add the Logging Settings block.
9. In Logging Target Listeners, click the + sign to add a Database Trace Listener.
10. In Database Instance, select the connection string name defined above from the dropdown.
11. Select Text Formatter as the formatter.
12. In the Categories section of the Logging Settings block, under the General tab, add the listener name and select Database Trace Listener from the dropdown.
13. Click Save and you are done configuring web.config. If you get an error while configuring this, I will not be of any help to you; better to exit without saving and start all over again!
Configuring things this way lets you log exceptions according to their type. If your application has a multi-tier architecture and you want a different exception category and severity for each layer, you can define separate policies per layer. This helps you drill down to where an exception originated, which makes diagnosis easy.
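For example, a sketch of that per-layer idea (the policy name "DataLayerPolicy" here is illustrative and must match a policy you define in the console):

```csharp
// Data layer: handle with a hypothetical data-layer policy.
try
{
    // data-access code ...
}
catch (Exception ex)
{
    // HandleException returns true if the configured
    // post-handling action says the exception should propagate.
    if (ExceptionPolicy.HandleException(ex, "DataLayerPolicy"))
        throw;
}
```

Each layer catches, logs against its own policy, and optionally rethrows so the next layer up can apply its policy in turn.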
Now add references to your project: browse to C:\Program Files\Microsoft Enterprise Library 5.0\Bin and choose the relevant DLLs. For database logging, choose all the assemblies with "Data" in their name; for caching, choose the caching DLLs.
Add the namespaces to your code-behind:
using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling;
using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging;
And the code that will create the exception:
protected void Page_Load(object sender, EventArgs e)
{
throwex();
}
public void throwex()
{
try
{
int i = 5;
int j = 0;
int z = i / j;
}
catch (Exception ex)
{
bool rethrow = ExceptionPolicy.HandleException(ex, "Policy1");
//if (rethrow)
//{
// throw;
//}
}
finally
{
//Code ---
}
}
Troubleshooting:
1. If it doesn't work, do not give up. EL's config interface is very confusing, and it may take you 1000 tries before it actually lets you log exceptions!
2. Check the connection string. In my case, I found that the password was not getting copied from Server Explorer; if I copy, I expect the same to be pasted, not just a part of it! For me this was the issue, and it was difficult to track because you get no error for it.
3. Also, I found that for each Exception Type you should have handlers defined, do not forget this.
A sample application is attached; configure the database, change the connection string, and start using it.
Feel free to ask if you encounter any problem.
SQL Server 2000 included some XML features out of the box. Key among these features was the ability to return results as XML using the FOR XML clause. SQL Server 2005’s functionality is markedly different. In SQL Server 2005, XML is a genuine data type, which means that you can use XML as a column in tables and views, in T-SQL statements, or as parameters of stored procedures. You can now store, query, and manage XML documents directly in the database.
SQL Server 2005's XML functionality is markedly different from what SQL Server 2000 provides.
More importantly, you can also now specify the schema to which your XML must conform. Aside from providing a mechanism to validate your XML in the database, this also allows you to describe complex types of data to be stored and to have an engine that enforces those rules.
Using the XML Datatype
The XML datatype is not substantially different from any other datatype in SQL Server. It can be used anywhere you would ordinarily use any SQL datatype. For example, the following creates an XML variable and fills it with XML:
DECLARE @doc xml SELECT @doc = '<Team name="Braves" />'
Although literal XML is useful, you can also fill an XML variable using a query and SQL Server's FOR XML syntax:
SELECT @doc = (SELECT * FROM Person.Contact FOR XML AUTO)
The XML datatype is not limited to use as a variable. You can also use the XML data type in table columns. You can assign default values and the NOT NULL constraint is supported:
CREATE TABLE Team ( TeamID int identity not null, TeamDoc xml DEFAULT '<Team />' NOT NULL )
Inserting XML data into tables is just a matter of specifying the XML to add in the form of a string:
-- Insert a couple of records
INSERT INTO Team (TeamDoc) VALUES
('<Team name="Braves">
    <Players>
      <Pitcher name="John Smoltz" role="Closer"/>
    </Players>
  </Team>');
INSERT INTO Team (TeamDoc) VALUES
('<Team name="Red Sox">
    <Players>
      <Pitcher name="Pedro Martinez" role="Starter"/>
    </Players>
  </Team>');
When creating instances of XML in SQL Server 2005, the only conversion is from a string to XML. Similarly, going in the reverse direction, you can only convert to a string. Converting to and from text and ntext is not allowed.
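A short sketch of those conversion rules (the variable names are illustrative):

```sql
DECLARE @s nvarchar(max)
DECLARE @doc xml

SET @s = '<Team name="Braves" />'
SET @doc = CAST(@s AS xml)            -- string to XML: allowed

SELECT CAST(@doc AS nvarchar(max))    -- XML back to a string: allowed

-- CAST(@doc AS text) or CAST(@doc AS ntext) raises an error
```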
Limitations of the XML Data Type.
XML Type Methods
Up to this point, the examples have shown the XML datatype being used as just a blob of data, but this is where the real power of the XML data type shows itself. The XML data type supports several methods that can be called using the UDT dot syntax (myXml.operation()). The supported operations are listed in Table 1.
You can use XML as a column in tables and views, in T-SQL statements, or as parameters of stored procedures.
For the following sections, you will use a table called Team that contains a row for every team name. In each row, there is a TeamDoc row that contains XML about the team:
CREATE TABLE Team ( TeamID int identity not null, TeamDoc xml DEFAULT '<Team />' NOT NULL )
In the examples, assume that the following XML document exists in the Braves row of the table:
<Team name="Braves"> <Players> <Pitcher name="John Smoltz" role="Closer"/> <Pitcher name="Russ Ortiz" role="Starter" /> <ThirdBase name="Chipper Jones" role="Starter" bats="switch"/> </Players> </Team>
Query Method
The query method allows you to specify an XQuery or XPath expression to evaluate. The result of the query method is an XML data type object. The specific syntax of the query method is:
query(XQuery)
The first parameter is always an XQuery expression. The following example uses a query to return an XML document with information about each team’s pitcher:
SELECT TeamDoc.query('/Team/Players/Pitcher') FROM Team
This produces the following results:
---------------------------------------------- <Pitcher name="John Smoltz" role="Closer" /> <Pitcher name="Russ Ortiz" role="Starter" /> (1 row(s) affected)
The query method allows you to find and return nodes lists that match the XQuery expression you specify. The real power of the query method comes from the XQuery syntax, which is covered in detail later in this article.
Exist Method
The exist method is similar to the query method except that it is used to determine whether a query yields any results. The syntax for the exist method is:
exist(XQuery)
When you use the exist method, it evaluates the query and returns the value of 1 if the query yields any results. For example, this query finds the rows in the team table where the TeamDoc field has starting pitchers:
-- Simple Exist clause SELECT Count(*) FROM Team WHERE TeamDoc.exist( '/Team/Players/Pitcher[@role="Starter"]') = 1
Value Method
There are times when you do not want to interpret a whole query’s result just to get a scalar value: this is where the value method is helpful. The value method is used to query the XML and return an atomic value. The syntax for the value method is:
value(XQuery, datatype)
Use the value method when you want to get a single scalar value from the XML. You must specify the XQuery statement and the datatype you want it to return and you can return any datatype except the XML datatype. For example, if you want to get the name of the first pitcher on every team, you can write the query like this:
-- Do a Query to get an individual value SELECT TeamDoc.value('(/Team/Players/Pitcher/@name)[1]', 'nvarchar(max)') as FirstPitcher FROM Team
This query results in the scalar value of the first pitcher for each team returned in the result:
FirstPitcher ------------------------------ John Smoltz (1 row(s) affected)
The difference between the query and value methods is that the query method returns an XML datatype that contains the results of the query, and the value method returns a non-XML datatype with the results of the query. The value method can only return a single (or scalar) value. You will get an error if you try to create an XQuery expression that returns more than one value using the value method.
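A sketch of that restriction, assuming the Team table from earlier:

```sql
-- This should fail to compile: the path is not guaranteed
-- to select a single value
-- SELECT TeamDoc.value('/Team/Players/Pitcher/@name',
--                      'nvarchar(max)') FROM Team

-- This works: the (...)[1] wrapper guarantees a singleton
SELECT TeamDoc.value('(/Team/Players/Pitcher/@name)[1]',
                     'nvarchar(max)') FROM Team
```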
Modify Method
Although the XQuery standard does not provide a mechanism for updating XML, SQL Server 2005 supports a way of modifying parts of an XML object in place. This means that you do not have to retrieve an entire XML document just to make changes. To modify a document in place, you use a combination of the modify method and SQL Server 2005’s new XML Data Modification Language (XML DML).
The syntax for the Modify method is:
modify(<XMLDML>)
The Modify method takes only a single parameter, the XML DML statement. XML DML is similar, but not identical, to SQL’s insert, update and delete syntax. For example, you can modify the XML by using the insert DML statement:
SET @doc.modify(' insert <Pitcher name="Jaret Wright"/> as last into (/Team/Players)[1] ')
You can do the same thing to modify an XML column by calling modify in an UPDATE statement:
-- Modify an XML doc without replacing -- it completely! UPDATE Team SET TeamDoc.modify(' insert <Pitcher name="Jaret Wright"/> as last into (/Team/Players)[1] ') WHERE TeamDoc.exist('/Team[@name="Braves"]') = 1
Notice that the SET clause in this UPDATE statement does not follow the SET x = y pattern that you may be used to from writing SQL in the past. That syntax assumes that you will provide a complete new value to replace the old, which, in the case of XML, means a completely new document to replace the old. When using the XML type, the modify method changes the original document in place. There’s no need for generating a completely new and separate document, or of SQL Server attempting to replace an entire document with every change. The SET syntax in the example reflects the more efficient approach of updating a document in place.
There are three XML DML statements: insert, update, and delete. Not surprisingly, they are used to insert, update, and delete parts of an XML object. Each of these syntaxes are similar to SQL, but with some definite differences. Let’s look at the syntax for each statement separately.
Here is the syntax for the insert statement:
insert InsertExpression ( {{as first | as last} into | after | before} LocationExpression )
Immediately following the insert statement is the XML that you want to insert (InsertExpression). Next you specify how you want the XML inserted. Your choices are into, after, or before. The before and after clauses instruct the database to insert the InsertExpression as a sibling to the LocationExpression. The use of before or after specifies whether to insert it before or after the LocationExpression:
SET @doc.modify(' insert <Pitcher role="Starter" name="Jaret Wright"/> before (/Team/Players/Pitcher)[1] ')
The into clause inserts the InsertExpression as a child of the LocationExpression. The optional clauses as first and as last specify the position of the insertion within the children:
-- Insertion within Team SET @doc.modify(' insert <Pitcher role="Starter" name="Jaret Wright"/> into (/Team/Players)[1] ') -- Insertion within Team, specifying it should -- be inserted as the last element SET @doc.modify(' insert <Pitcher role="Starter" name="Jaret Wright"/> as last into (/Team/Players)[1] ')
The syntax for the delete statement is very straightforward:
delete LocationExpression
The LocationExpression specifies what to delete from the XML data. For example, to delete all the Pitchers:
SET @doc.modify('delete /Team/Players/Pitcher')
Because the query specifies all pitcher elements, they will all be deleted. If you want to delete just a single element, you can specify identifying attributes. To delete just the pitcher named John Smoltz, you write the delete statement like so:
SET @doc.modify('delete /Team/Players/Pitcher[@name="John Smoltz"]')
You can also tell the delete statement to remove an individual attribute. For example, to delete the role attribute for the pitcher named John Smoltz the XML DML looks like this:
SET @doc.modify('delete /Team/Players/Pitcher[@name="John Smoltz"]/@role')
Lastly, the replace value statement describes changes to make to the XML data. The syntax of the replace value statement is:
replace value of OriginalExpression with ( ReplacementValue | IfExpression )
The replace value statement is used to change discrete values in the XML. The only discrete values possible are the literal contents of a tag or the value of an attribute. The OriginalExpression must resolve to a single node or attribute. The ReplacementValue is usually a literal value to replace. Replacing the literal contents of a node requires the XQuery expression using the text() function to specify that you want to replace the text of a node. For example, to replace the inner text for a pitcher, you write the modify like this:
DECLARE @doc xml
SELECT @doc = '
<Team name="Braves">
  <Players>
    <Pitcher name="John Smoltz" role="Closer">
      With team since 1989
    </Pitcher>
  </Players>
</Team>'
SET @doc.modify('
  replace value of
    (/Team/Players/Pitcher[@name="John Smoltz"]/text())[1]
  with "May start in 2005"
')
Modifying an attribute is straightforward: you just need the XQuery expression to resolve to a single attribute. For example, to replace the value of the role attribute for the pitcher named John Smoltz with Starter, do this:
SET @doc.modify(' replace value of ( /Team/Players/Pitcher[ @name="John Smoltz"]/@role)[1] with "Starter" ')
The replace value syntax also supports conditional replacement by using the if…then…else syntax within the with clause of the replace value statement. For example, to change John Smoltz's role to Starter if he is currently a Closer, and to Closer otherwise, you could write the code:
SET @doc.modify(' replace value of ( /Team/Players/Pitcher[ @name="John Smoltz"]/@role)[1] with ( if ( /Team/Players/Pitcher[ @name="John Smoltz"]/@role = "Closer" ) then "Starter" else "Closer" ) ')
Nodes Method
The purpose of the nodes method is to allow normalizing of a set of nodes returned by a query into a set of rows in a table-like result set. The syntax of the nodes method is:
nodes (XQuery) Table(Column)
The XQuery is the expression that picks the nodes to be exposed as a result set. The Table and Column are used to specify names in the result set. Note that you can only have one column and that it is automatically of type XML. For example, to query to get each of the pitchers, write the code like this:
DECLARE @doc xml
SELECT @doc = '
<Team name="Braves">
  <Players>
    <Pitcher name="John Smoltz" role="Closer" />
    <Pitcher name="Russ Ortiz" role="Starter" />
  </Players>
</Team>'
SELECT Team.player.query('.') as Pitcher
FROM @doc.nodes('/Team/Players/Pitcher') Team(player)
This results in a single result set containing rows for each of the Pitchers’ elements:
Pitcher -------------------------------------------- <Pitcher name="John Smoltz" role="Closer" /> <Pitcher name="Russ Ortiz" role="Starter" /> (2 row(s) affected)
Notice that you used the query method to return these nodes in the result. The reason for this is the results of a nodes method may only be referred to by the XML methods (query, modify, delete, and update) or IS NULL and IS NOT NULL statements.
Gone are the days of needing to pull the entire XML document out of the database as a string, parsing it, making changes, and replacing the entire document.
More typically, you may use the nodes method to break XML apart into a more useful result. For instance, you could get the player nodes using the nodes method, and then use the value method to retrieve the individual values as scalar data:
SELECT Team.player.value( './@name', 'nvarchar(10)') as Name, Team.player.value( './@role', 'nvarchar(10)') as PlayerRole FROM @doc.nodes('/Team/Players/Pitcher') Team(player)
This results in the following result set:
Name PlayerRole --------------- --------------- John Smoltz Closer Russ Ortiz Starter (2 row(s) affected)
XML Indexes
As you might expect, the speed of searches based on XML data in the database varies depending on how indexes are set up. For XML data, there are special indexes called XML indexes. These indexes have a subset of the full configurability that standard indexes have, but they treat XML in a way that speeds up searches through XML data.
Requirements
There are some limits regarding indexes on XML columns:
- The only indexes that can be created for XML columns are XML indexes.
- You can only add XML indexes to tables, views, table-valued variables with XML columns, or XML variables.
- An XML index only supports indexing a single XML column.
- Once XML indexes exist on a table, you cannot modify the primary key. If you need to do so, you must drop all XML indexes first.
Index Types
If your table meets the requirements, you can create indexes on the XML data in your tables.
The first of these index types is the primary XML index. As its name suggests, there can be only one primary XML index on any table. You use the CREATE PRIMARY XML INDEX command to create it:
CREATE PRIMARY XML INDEX IXML_Teams ON Team (TeamDoc)
The primary XML index on a column creates a lookup based on each node of the XML. Although this allows for speedy retrieval of individual nodes, other types of queries benefit from their own indexes.
SQL Server 2005 supports three types of secondary XML indexes: PATH, PROPERTY, and VALUE. These secondary indexes are built on the primary index and are used to tune specific types of queries. The secondary index types are listed in Table 2.
Use the CREATE XML INDEX syntax to create a secondary index. After you specify the table and column name, add the USING XML INDEX clause.
CREATE XML INDEX IXML_Team_Path ON Team (TeamDoc) USING XML INDEX IXML_Teams FOR PATH
The USING XML INDEX clause takes the name of the primary XML index on which to build the secondary index, and an index type (PATH, PROPERTY, or VALUE). For example, to create the PROPERTY and VALUE secondary indexes:
CREATE XML INDEX IXML_Team_Prop ON Team (TeamDoc) USING XML INDEX IXML_Teams FOR PROPERTY CREATE XML INDEX IXML_Team_Value ON Team (TeamDoc) USING XML INDEX IXML_Teams FOR VALUE
Index Maintenance
Maintaining and modifying XML indexes is similar to maintaining standard indexes. You use the ALTER INDEX and DROP INDEX syntax for modifying XML indexes. These commands are the same commands you use for standard indexes:
ALTER INDEX IXML_Teams ON Team REBUILD
DROP INDEX IXML_Teams ON Team
Typed XML
The SQL language (and SQL database servers, by extension) represents a type system for storing information. You define data types in databases all the time; that is called schema. You define types (tables and views) with certain attributes (columns and data types in those columns), relationships between types (foreign keys), and rules about the data that can be stored in types (constraints and triggers). The same thing happens in XML. In many situations, you want to dictate rules about what can be stored in XML data types. In SQL Server 2005, you can register XML schemas with the database. These schemas can be used to specify what XML can be used in a particular situation. In particular, schemas allow you to extend the type system by using XML schemas to specify complex data types.
In SQL Server 2005, you can register XML schemas with the database.
SQL Server 2005 allows both the use of XML columns and variables with generic XML (as you have seen earlier in this article) and the use of XML columns and variables that are typed with XML schema. When you use the XML data type with schema information, any XML inserted into a typed XML column is validated against the schema. In this way, the database ensures that the data stored is not only well-formed, but also conforms to the schema.
Using Typed XML
The first step in using typed XML is registering a schema. This is done by using the new CREATE XML SCHEMA COLLECTION statement. This new statement allows you to store schemas for XML that are used to validate XML stores:
CREATE XML SCHEMA COLLECTION BaseballSchema AS '
<xsd:schema xmlns:
  <xsd:element name="Team">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="Pitcher">
          <xsd:complexType>
            <xsd:attribute name="name" type="xsd:string" />
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
      <xsd:attribute name="name" type="xsd:string" />
    </xsd:complexType>
  </xsd:element>
</xsd:schema>'
The CREATE XML SCHEMA COLLECTION statement creates a collection of schemas, any of which are used to validate XML typed with the name of the collection. This example shows a new schema called BaseballSchema being added to the database. The schema is entered as a string value. Once you have a schema registered, you can use the schema in new instances of the XML data type:
DECLARE @team xml(BaseballSchema) SET @team = '<Team name="Braves"> <Pitcher name="John Smoltz" /> </Team>' SELECT @team
Like you did earlier in the article, you can create a variable of XML type, but because you want to dictate the type of XML, use the parenthetical syntax specifying the schema name you registered. You can then store XML in the variable, but only a specific type of XML: one that conforms to the BaseballSchema. What if the XML being stored does not conform to BaseballSchema? In this example, you try to store a piece of XML that has an attribute (role) that the schema does not define:
SET @team = '<Team name="Braves"> <Pitcher name="John Smoltz" role="Closer" /> </Team>'
This fails because the XML stored in the @team variable does not conform to the specific schema type. Executing the SET statement just shown yields a specific XML validation error:
XML Validation: Undefined or prohibited attribute specified: 'role'
There may be other reasons that specifying a schema is helpful, such as aiding in queries.
As you might expect, using typed XML is straightforward in table creation:
CREATE TABLE Team ( TeamID int identity not null, TeamDoc xml(BaseballSchema) )
Much like the XML variable example above, you can create new rows with the typed XML data, like so:
INSERT INTO Team (TeamDoc) VALUES ('<Team name="Braves"> <Pitcher name="John Smoltz" /> </Team>')
When the insertion happens, SQL Server 2005 validates the XML against the schema specified in the table declaration (BaseballSchema). The following update violates that schema, again by specifying the undefined role attribute, so it will fail:
UPDATE Team SET TeamDoc = '<Team name="Braves"> <Pitcher name="John Smoltz" role="Closer" /> </Team>'
Being able to specify the types of XML allowed in a particular case helps extend the type system to include complex types that standard SQL does not allow. Because typed XML lets you specify these complex types as type information, you need a way to manage the schemas so that these types can be extended and changed as the data matures.
Managing XML Schema
What happens when a schema changes? During development, this is likely to happen quite a bit. For example, if you wanted to add the new role attribute to the Pitcher type as defined in BaseballSchema, you would have to drop the entire schema collection:
DROP XML SCHEMA COLLECTION BaseballSchema
But this schema is in use in the Team table, so SQL Server 2005 does not allow you to drop the schema collection:
Specified collection 'BaseballSchema' cannot be dropped because it is used by object 'Team'.
Instead, you can alter the table to drop the column referencing the schema, drop the schema itself, then re-create the schema with the new attribute, and finally, modify the table to add back the column that you dropped, this time referencing the new schema version, as seen in Listing 1.
If you were allowed to change the schema in place, you would have to re-validate all the data in the database, so SQL Server 2005’s approach of just not allowing such a change seems reasonable. There is no good solution when you need to change the schema of existing typed XML.
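Listing 1 is not reproduced in this excerpt; a minimal sketch of the drop-and-recreate sequence, assuming the Team table and schema shown earlier, might look like this:

```sql
-- 1. Drop the column that references the schema
ALTER TABLE Team DROP COLUMN TeamDoc

-- 2. Now the schema collection can be dropped
DROP XML SCHEMA COLLECTION BaseballSchema

-- 3. Re-create it, this time allowing the role attribute
CREATE XML SCHEMA COLLECTION BaseballSchema AS '
<xsd:schema xmlns:
  <xsd:element name="Team">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="Pitcher">
          <xsd:complexType>
            <xsd:attribute name="name" type="xsd:string" />
            <xsd:attribute name="role" type="xsd:string" />
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
      <xsd:attribute name="name" type="xsd:string" />
    </xsd:complexType>
  </xsd:element>
</xsd:schema>'

-- 4. Add the column back, typed to the new schema version
ALTER TABLE Team ADD TeamDoc xml(BaseballSchema)
```

Note that dropping the column discards the existing XML data; in practice you would copy it aside first.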
You can extend the schema collection by adding new schemas to them to allow new types of typed XML. This is done with the ALTER XML SCHEMA COLLECTION statement, as seen in Listing 2.
The ALTER XML SCHEMA COLLECTION statement allows you to alter a schema collection to create new top-level types to be used in the same schema collection. By using the ALTER XML SCHEMA COLLECTION syntax and adding the new Score element type, you extend the types that are allowed within the BaseballSchema. Once you add this schema, you can use the new schema types:
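Listing 2 is likewise not reproduced here; a sketch of adding the Score element to the collection might look like this (the attribute types are assumptions):

```sql
ALTER XML SCHEMA COLLECTION BaseballSchema ADD '
<xsd:schema xmlns:
  <xsd:element name="Score">
    <xsd:complexType>
      <xsd:attribute name="HomeTeam"  type="xsd:string" />
      <xsd:attribute name="AwayTeam"  type="xsd:string" />
      <xsd:attribute name="HomeScore" type="xsd:int" />
      <xsd:attribute name="AwayScore" type="xsd:int" />
    </xsd:complexType>
  </xsd:element>
</xsd:schema>'
```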
DECLARE @team xml(BaseballSchema) SET @team = '<Score HomeTeam="Braves" AwayTeam="RedSox" HomeScore="5" AwayScore="4" />'
As mentioned earlier, because the XML is really typed to the schema collection rather than a single schema, this is perfectly acceptable. You should note that the XML data type schema name is still the same (BaseballSchema), but you can use the new types in the XML.
XQuery
Now that XML is in the database, you will want a smart way to query it. Earlier, you saw that you could use the query method of the XML datatype to search through instances of XML. In those examples, you used an XML language called XPath to search the document. XPath is a good starting point for asking simple questions of XML documents, but has limitations about asking robust questions. To this end, the World Wide Web Consortium has a working group that created a true query language for XML called XML Query (or XQuery for short).
XQuery is a language for querying XML documents. The topic of XQuery is big enough for its own book, but I’ll give you some basic information to get you started.
XQuery is an expression language, which means that every XQuery expression must evaluate to a result. Therefore, it is valid to write any expression that evaluates to a result. That makes these valid XQuery expressions even though they do not do anything to search XML:
SELECT @doc.query('"Hello"') SELECT @doc.query('15 * 10')
XQuery has adopted most of the XPath expression language to describe paths within XML documents, so plain XPath expressions are valid XQuery. For example, to get the list of players, you used:
SELECT @doc.query('/Team/Players')
But to really see the power of XQuery, you want to use it to answer questions. For example, you may want to create a set of XML nodes called <Player/> that contain starting pitchers for the team:
SELECT @doc.query('
  for $b in /Team/Players/Pitcher[@role="Starter"]
  return (<Player name="{data($b/@name)}">
          </Player>)'
)
This XQuery expression uses several different pieces of XQuery to create the expected results. It uses the for...in keyword to say, go through all the nodes that the path expression (/Team/Players/Pitcher[@role = "Starter"]) returns (which is stored in a variable called $b), and create a <Player/> tag for each of the nodes. XQuery inserts a name attribute into the Player tag and calculates the name attribute’s value by getting the name attribute from the node ($b). This query results in an XML fragment that looks like this:
<Player name="Russ Ortiz" /> <Player name="John Thomson" /> <Player name="Mike Hampton" /> <Player name="Horacio Ramirez" />
Although this query is not rocket science, it does hint at some of the power of XQuery. Now that you’ve got a quick introduction, let’s look at some of the components of an XQuery expression.
The syntax of the XQuery language is made up of several parts:
- Prolog
- Iteration
- Path Expressions
- Conditional Expressions
- Quantified Expressions
XQuery Prolog
The prolog is a place to specify any namespaces to declare for the XQuery expression. The entire prolog prefaces the body of the query. For example, you can declare a namespace for the expression, like so:
SELECT @doc.query(' declare namespace T=""; return /T:Team/T:Players ') as Result
The purpose of declaring the namespace is to allow you to alias it across the query. You can define multiple namespaces within the prolog. In addition, the prolog can also contain a definition for the default namespace. For example:
SELECT @doc.query(' declare default element namespace = ""; return /Team/Players ') as Result
Other than namespaces, you can also include XML schema imports in the prolog. If you are working with typed XML, their schemas are automatically imported by the engine. For additional schemas, you can use the import schema syntax:
SELECT @doc.query(' import schema ""; return /Team/Players ') as Result
Path Expressions
XQuery adopts most of the XPath expression language to accomplish specifying paths. Therefore most of the XPath expression language is perfectly appropriate for use in XQuery. In the simple case, you can use XPath raw as the query like this:
SELECT @doc.query('/Team/Players')
You can use axis specifiers like you would in XPath as well:
SELECT @doc.query('/Team/child::Players')
Or you can use it to retrieve values from a document:
SELECT @doc.value('(/Team/@name)[1]', 'varchar(50)')
Lastly, most of XPath’s function library is part of XQuery. Therefore you can use calculations that you are already familiar with from XPath. For example, to get the count of players on the team, you can do this:
SELECT @doc.query('count(/Team/Players/*)')
Conditional Expressions
XQuery includes the if...then...else construct to allow for conditional expressions. This construct allows you to make tests to allow branching based on specific results. For example, if you want to return a result of whether the roster is full or not, you can test the team by using a conditional expression:
SELECT @doc.query('if (count(/Team/Players/*) < 25) then "Need Players" else "Roster Full"')
You can nest conditional expressions to do more elaborate testing. For example, you can add a test to see if any players exists, then you can report that you have No Players, instead of just saying Need Players:
SELECT @doc.query('if (count(/Team/Players/*) < 25) then if (count(/Team/Players/*) = 0) then "No Players" else "Need Players" else "Roster Full"')
Conditional expressions also allow you to compound several tests together using and, or, and parentheses. This next example shows testing for both 25- and 40-man rosters:
SELECT @doc.query(' if ((count(/Team/Players/*) = 25) or (count(/Team/Players/*) = 40)) then "Roster Full" else "Need Players"')
Quantified Expressions
There are times where you need to test for whether all or some results match some specific criteria. XQuery allows for this with Quantified Expressions. Quantified Expressions use the following syntax:
( some | every ) <variable> in <Expression> satisfies <Expression>
With this syntax you can test a set of nodes based on a criteria. In this example, you are testing to see if all of the players are starters or not:
SELECT @doc.query(' if (every $player in /Team/Player/* satisfies $player/@role="Starter") then "We have all Starters" else "We have Starters and others" ')
You can use the some clause to specify that at least one needs to pass the test instead of requiring all to pass the test (as seen with the use of every above). You can see this work in the example below:
SELECT @doc.query(' if (some $player in /Team/Player/* satisfies $player/@role="Starter") then "We have some Starters" else "We no starters" ')
Iteration
One of the most common uses for XQuery is to iterate through all the results of a node test and perform some work. To do this, XQuery supports a set of clauses that they shorten to FLWOR (pronounced flower). FLWOR stands for the different pieces of the iteration syntax. FLWOR stands for FOR, LET, WHERE, ORDER BY, RETURN.
As you saw in the earlier example, you can use the most common parts of this syntax (the for and return clauses) to create results that concatenate results. For example, here you want to create a node for each player on the team:
SELECT @doc.query(' for $b in /Team/Players/Pitcher[@ </Player>)')
This creates a variable called $b for each pitcher that is a starter. Then it returns a Player element with the name of the player embedded in it.
You can further enhance the query by using the where clause to specify a condition that each node must pass to become part of the node set. For example, you can make sure that every player has a name by doing this:
SELECT @doc.query(' for $b in /Team/Players/Pitcher[@ </Player>)')
Lastly, you can further improve this query by sorting the pitchers by name using the order by clause:
SELECT @doc.query(' for $b in /Team/Players/Pitcher[@ </Player>)')
You may have noticed that I skipped the let clause. In SQL Server 2005-the let clause is unsupported.
XQuery Extension Functions
To enable better integration with the database engine, Microsoft includes two functions that extend the functionality of XQuery by giving access to columns and local variables in the SELECT clause. These functions are called sql:column and sql:variable. For example, to include the TeamID in the <Player/> elements you are creating, you can call sql:column with the name of the column in the SELECT statement:
SELECT TeamID, TeamDoc.query(' for $b in /Team/Players/Pitcher where count($b/@name) > 0 order by ($b/@name) return (<Player team="{sql:column("TeamID")}" name="{$b/@name}"> </Player>)') FROM Team
In addition, you can use the sql:variable function to have access to local variables. For example, if you create a local variable with the date of the roster, you can insert it into the output XML by using the sql:variable clause:
DECLARE @today datetime SET @today = '12/31/2004' SELECT TeamID, TeamDoc.query(' for $b in /Team/Players/Pitcher where count($b/@name) > 0 order by ($b/@name) return (<Player dt="{sql:variable("@today")}" name="{$b/@name}"> </Player>)') FROM Team
For more information on XQuery:
- The XQuery page on w3c.org’s Web site ()
- The XQuery specification ()
Conclusion
Storing XML has become a mainstay of many software architectures these days. SQL Server 2005 follows this trend by including a real XML data type, but XML is not just structured storage in this case. It represents a way to extend the SQL type system with a well known and open type system: XML Schema. By treating XML as a mature type, the database can deal with XML in an efficient way.
Gone are the days of needing to pull the entire XML document out of the database as a string, parsing it, making changes, and replacing the entire document. SQL Server 2005 lets you do searches, additions, changes and deletions of parts of a document in-place. This represents a great leap in ease and performance when using XML in the database. | https://www.codemag.com/article/0605081 | CC-MAIN-2019-13 | refinedweb | 5,460 | 58.01 |
Related
Join 1M+ other developers and:
- Get help and share knowledge in Q&A
- Get courses & tools that help you grow as a developer or small business owner
Question
Kubernetes cluster not working, status shows down
I have followed and to setup cicd and a loadbalancer configuration. However my cluster is clearly not working as it’s show status down. Could someone enlighten me on what I could be doing wrong? Below is some output of the services and pods.
$ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE do-kubernetes-sample-app ClusterIP 10.245.218.135 <none> 80/TCP 10d kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 11d sample-load-balancer LoadBalancer 10.245.8.9 XXX.XXX.XXX.XXX 80:32454/TCP 38h $ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE default do-kubernetes-sample-app-6fbf58f5bb-hfhpw 1/1 Running 0 19h default do-kubernetes-sample-app-79665478bf-b8dm5 0/1 InvalidImageName 0 9h kube-system cilium-operator-6b899cc7db-946pr 1/1 Running 19 10d kube-system cilium-vtvw8 1/1 Running 6 10d kube-system coredns-78dc9d6fc7-8lx2p 1/1 Running 2 10d kube-system coredns-78dc9d6fc7-zz6jk 1/1 Running 2 10d kube-system csi-do-node-6cx7t 2/2 Running 2 10d kube-system do-node-agent-zmffw 1/1 Running 1 10d kube-system kube-proxy-j8j9s 1/1 Running 1 10d $ kubectl get pods --field-selector=status.phase=Running NAME READY STATUS RESTARTS AGE do-kubernetes-sample-app-6fbf58f5bb-hfhpw 1/1 Running 0 19h
These answers are provided by our Community. If you find them useful, show some love by clicking the heart. If you run into issues leave a comment, or add your own answer to help others.×
Hi there!
When you say it shows as down, are you talking about the application itself? the LB? or the cluster/nodes?
Can you provide error logs or open up a support case so I can take a deeper look?
Regards,
John Kwiatkoski
Senior Developer Support Engineer
I’m getting a similar issue? My load balancer says that my nodes have a status of down.
I’m getting same error too
Same here. Load balancer says all the nodes are down. kubectl works fine, and all the pods are running as usual. Our website just can’t be reached from the outside anymore.
Any Updates on how to fix it, I have the same issue. | https://www.digitalocean.com/community/questions/kubernetes-cluster-not-working-status-shows-down?comment=81353 | CC-MAIN-2021-49 | refinedweb | 412 | 55.54 |
Hi everyone. I am currently working on this problem.
================
Machine learning and data mining has become a very important part of computer science as huge datasets are becoming more and more prevalent.
This program is designed to help a computer “discover” the appropriate hit and stay percentages for blackjack. To implement this create an array of numbers that has indexes up to 21, and assign each location a starting value of .5. This indicates at the beginning that a computer would have a 50/50 chance of choosing to hit at that sum value of cards.
Decide on a way to draw cards (many times you can use a previously constructed deck class) and after dealing two cards, have the computer decide to hit/stay based on the percentage that it has stored at the index for the card sum. If the action does not result in going over 21, then reinforces that percentage (by adding a small amount). If you go over 21 then you should decrease the possibility of that action in the future.
After getting the program to work, run it for 100 trials and discuss the results, then run it for 200, 1000, 2000. Talk about how the increased number of trials give you better results, and what the computer “learned” from the trials.
============
import java.util.ArrayList; public class SimpleDeck { private int numberOfDecks = 40; private ArrayList<Integer> deck = new ArrayList<Integer>(52*numberOfDecks); public SimpleDeck() { for (int i = 0; i < 52*numberOfDecks; i++) { int value = (i+1) % 13; if ((value > 10) || (value == 0)) value = 10; deck.add(value); //values for cards ACE=1, JACK=QUEEN=KING=10 } } public int getACard() { if (deck.size() == 0) return 0; //if out of cards, return 0 int choice = (int)(deck.size() * Math.random()); int cardValue = deck.get(choice); //get card value deck.remove(choice); //remove chosen card from deck return cardValue; } public static void main() { int x; SimpleDeck d = new SimpleDeck(); //creates deck(s) do { x = d.getACard(); //pick a random card from deck(s) System.out.println(x); } while (x != 0); } }
I have this base code. Can someone help me to solve the question?
THanks ! | http://www.javaprogrammingforums.com/whats-wrong-my-code/8307-urgent-plz-help-me-black-jack-program.html | CC-MAIN-2013-48 | refinedweb | 357 | 63.29 |
prompter_mh is a dart CLI package for asking questions and retrieving the results. Flip on to the Example tab to see how it is used.
A Map<String, dynamic> of is available as result on your instance of the prompter. The String is the key specified to the ask method whiles dynamic is the value desired.
Supported prompt types
MIT LICENSE
example/main.dart
import 'package:prompter_mh/prompter_mh.dart'; void main() { // Instantiate a prompter instance Prompter prompter = Prompter(); // Ask some questions prompter.ask( 'color', 'What Color Do You Want?', 'multiple', [ Option(label: 'I want red', value: '#f44141'), Option(label: 'I want blue', value: '#4158f4'), Option(label: 'I want green', value: '#42f47d'), Option(label: 'I want yellow', value: '#f4f441'), ] ); prompter.ask( 'likeBanana', 'Do you like banana?', 'binary', null, ); prompter.ask( 'name', 'What is your name?', 'text', null, ); print(prompter.results); }
Add this to your package's pubspec.yaml file:
dependencies: prompter_mh: ^0h/prompter_mh.dart';
We analyzed this package on Apr 4, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter, other
Primary library:
package:prompter_mh/prompter_mh.dartwith components:
io.
Document public APIs. (-1 points)
12 out of 12 API elements have no dartdoc comment.Providing good documentation for libraries, classes, functions, and other API elements improves code readability and helps developers find and use your API.. | https://pub.dartlang.org/packages/prompter_mh | CC-MAIN-2019-18 | refinedweb | 225 | 61.73 |
Hi,
Sorry if this is the wrong place to post this as I don't see a Django specific area.
I'm new to Python and Django. Spent a few weeks getting to know Python; went through "Python in a Day" and about half of "Python the Hard Way" before I dove into Django. I'm understanding a lot of things, but still plenty that is over my head.
Currently I've got a Blog app set up and my models.py completed. Problem is, the form fields are all about 200px wide. I'd like to customize the fields with custom widths and cols/rows for textareas.
Here's my current admin.py file:
from django.contrib import admin
from blog.models import Post
from django import forms
class PostAdminForm(forms.ModelForm):
title = forms.CharField(widget=forms.TextInput(attrs={'size':100}))
class PostAdmin(admin.ModelAdmin):
list_display = ('title', 'status', 'pub_date', 'image', 'summary')
list_filter = ['pub_date', 'status']
search_fields = ['title', 'summary', 'body']
date_hierarchy = 'pub_date'
admin.site.register(Post, PostAdmin)
You can see where I've tried to set a custom size for the title field. I get no errors or anything with this code yet the title field doesn't change in size. What am I missing?
I also found this bit of code which was from the same thread as I found the above. The poster indicated that it was "better" in some way but I can't seem to make it work either:
def get_form(self, request, obj=None):
form = super(EventAdmin,self).get_form(request, obj=None)
name = forms.CharField(widget=forms.TextInput(attrs={'size':
100, 'max_length':200}),required=True,)
form.base_fields['name']=name
return form
Does the rendered HTML show actual size attributes showing up?
Secondly, I believe that even with inline size attributes, CSS can override those by setting inputs to particular widths. So this might be a non-issue with the size but it would be good to know about the max-lengths for example, since form validators sometimes make use of them.
We use WTForms at work and I'm always wasting time trying to figure out how to add attributes and edge cases I need to them. Generated forms are my enemy. But often I find how a lot of things are defaulted by looking at the main forms classes, so you'd want to look at where Django imports "forms" and see what's defined in for example the widgets.
Good question. Not sure. Never thought to view source and check that.
Yes, that's what I found out browsing today trying to again to fix it. That CSS will override any size attribute. Seems like creating my own custom "static" CSS file and including that file into the admin.html template is the way to go for how to size these fields. I'm not exactly sure how to do that, but I'm going to give it a shot this evening.
Can you tell me where to find the main forms classes? Sorry, I'm pretty new to this and I'm not sure what you're referring to.
Thanks!
I don't know django but I'm referring to thisfrom django import forms
forms could either be a class or a module (.py file).
However someone in the house has the 2 scoops of Django book so I may run through it quickly to see if they mention forms, and if so, where they tend to sit.
Thanks. I've actually got that book somewhere I think. I forgot I had it. I'll check it also.
This topic is now closed. New replies are no longer allowed. | http://community.sitepoint.com/t/changing-widths-of-django-admin-forms/34855 | CC-MAIN-2015-18 | refinedweb | 606 | 75.3 |
What will you learn?
- The password validation process in Mac
- How to extract the password validation values
- Implementing the check in Python
- Understand why the values are as they are
- The importance of using a salt value with the password
- Learn why the hash function is iterated multiple times
The Mac password validation process
Every time you log into your Mac it needs to verify that you used the correct password before giving you access.
The validation process reads hash, salt and iteration values from storage and uses them to validate your password.
The 3 steps below helps you to locate your values and how the validation process is done.
Step 1: Locating and extracting the hash, salt and iteration values
You need to use a terminal to extract the values. By using the following command you should get it printed in a readable way.
sudo defaults read /var/db/dslocal/nodes/Default/users/<username>.plist ShadowHashData | tr -dc 0-9a-f | xxd -r -p | plutil -convert xml1 - -o -
Where you need to exchange <username> with your actual user name. The command will prompt you for admin password.
This should result in an output similar to this.
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" ""> <plist version="1.0"> <dict> <key>SALTED-SHA512-PBKDF2</key> <dict> <key>entropy</key> <data> 1meJW2W6Zugz3rKm/n0yysV+5kvTccA7EuGejmyIX8X/MFoPxmmbCf3BE62h 6wGyWk/TXR7pvXKg\njrWjZyI+Fc3aKfv1LNQ0/Qrod3lVJcWd9V6Ygt+MYU 8Eptv3uwDcYf6Z5UuF+Hg67rpoDAWhJrC1\nPEfL3vcN7IoBqC5NkIU= </data> <key>iterations</key> <integer>45454</integer> <key>salt</key> <data> 6VuJKkHVTdDelbNMPBxzw7INW2NkYlR/LoW4OL7kVAI= </data> </dict> </dict> </plist>
Step 2: Understand the output
The output consists of four pieces.
- Key value: SALTED-SHA512-PBKDF2
- Entropy: Base64 encoded data.
- Number of iteration: 45454
- Salt: Base64 encoded data
The Key value is the tells you which algorithm is used (SHA512) and how it is used (PBKDF2).
The entropy is the actual result of the validation algorithm determined by the key value . This “value” is not an encryption of the password, which means you cannot recover the password from that value, but you can validate if the password matches this value.
Confused? I know. But you will understand when we implement the solution
The number of iterations, here 45454, is the number of times the hash function is called. Also, why would you call the hash function multiple times? Follow along and you will see.
Finally, we have the salt value. That is to ensure that you cannot determine the password from the entropy value itself. This will also get explained with example below.
Step 3: Validating the password with Python
Before we explain the above, we need to be have Python do the check of the password.
import hashlib import base64 iterations = 45454 salt = base64.b64decode("6VuJKkHVTdDelbNMPBxzw7INW2NkYlR/LoW4OL7kVAI=".encode()) password = "password".encode() value = hashlib.pbkdf2_hmac('sha512', password, salt, iterations, 128) print(base64.b64encode(value))
Which will generate the following output
b'1meJW2W6Zugz3rKm/n0yysV+5kvTccA7EuGejmyIX8X/MFoPxmmbCf3BE62h6wGyWk/TXR7pvXKgjrWjZyI+Fc3aKfv1LNQ0/Qrod3lVJcWd9V6Ygt+MYU8Eptv3uwDcYf6Z5UuF+Hg67rpoDAWhJrC1PEfL3vcN7IoBqC5NkIU='
That matches the entropy content of the file.
So what happened in the above Python code?
We use the hashlib library to do all the work for us. It takes the algorithm (sha512), the password (Yes, I used the password ‘password’ in this example, you should not actually use that for anything you want to keep secret from the public), the salt and the number of iterations.
Now we are ready to explore the questions.
Why use a Hash value and not an encryption of the password?
If the password was encrypted, then an admin on your network would be able to decrypt it and misuse it.
Hence, to keep it safe from that, an iterated hash value of your password is used.
A hash function is a one-way function that can map any input to a fixed sized output. A hash function will have these important properties in regards to passwords.
- It will always map the same input to the same output. Hence, your password will always be mapped to the same value.
- A small change in the input will give a big change in output. Hence, if you change one character in the password (say, from ‘password’ to ‘passward’) the hash value will be totally different.
- It is not easy to find the given input to a hash value. Hence, it is not easily feasible to find your password given the hash value.
Why use multiple iterations of the hash function?
To slow it down.
Basically, the way your find passwords is by trying all possibilities. You try ‘a’ and map it to check if that gives the password. Then you try ‘b’ and see.
If that process is slow, you decrease the odds of someone finding your password.
To demonstrate this we can use the cProfile library to investigate the difference in run-time. First let us try it with the 45454 iterations in the hash function. = 45454 salt = base64.b64decode("6VuJKkHVTdDelbNMPBxzw7INW2NkYlR/LoW4OL7kVAI=".encode()) cProfile.run("crack_password(entropy, iterations, salt)")
This results in a run time of.
1 0.011 0.011 58.883 58.883 ShadowFile.py:6(crack_password)
About 1 minute.
If we change the number of iterations to 1. = 1 salt = base64.b64decode("6VuJKkHVTdDelbNMPBxzw7INW2NkYlR/LoW4OL7kVAI=".encode()) cProfile.run("crack_password(entropy, iterations, salt)")
I guess you are not surprised it takes less than 1 second.
1 0.002 0.002 0.010 0.010 ShadowFile.py:6(crack_password)
Hence, you can check way more passwords if only iterated 1 time.
Why use a Salt?
This is interesting.
Well, say that another user used the password ‘password’ and there was no salt.
import hashlib import base64 iterations = 45454 salt = base64.b64decode("".encode()) password = "password".encode() value = hashlib.pbkdf2_hmac('sha512', password, salt, iterations, 128) print(base64.b64encode(value))
b='
Then you would get the same hash value.
Hence, for each user password, there is a new random salt used.
How to proceed from here?
If you want to crack passwords, then I would recommend you use Hashcat. | https://www.learnpythonwithrune.org/understand-the-password-validation-in-mac-and-implement-the-validation-in-python-in-3-steps/ | CC-MAIN-2021-25 | refinedweb | 989 | 58.58 |
Jan 19, 2007 05:49 PM|kevinof|LINK
hi all,
I have an app (web site) that I have converted from asp 1.1 to 2.0. However its giving me undefined on datarow (type 'datarow' is not defined) , datagrid etd. To fix it I have to preface with data.datarow and data.datagrid and so on. Its not a big problem as I can do a global replace but I'd like to know why this is happening.
thanks -Kevin
All-Star
157774 Points
ASPInsiders
Moderator
Jan 19, 2007 09:28 PM|mbanavige|LINK
It looks like you're missing an import statement. For vb, you can add it to the page or to the project.
Imports System.Data
For C#, i believe you need to add it to every page as required:
using System.Data;Note: This applies just to the Data.Datarow. The Datagrid is actually not part of the Data namespace.
All-Star
157774 Points
ASPInsiders
Moderator
Jan 21, 2007 05:31 PM|mbanavige|LINK
if you are using C#, then remember that it is case-sensitive.
try changing: datarow
to: DataRow
3 replies
Last post Jan 21, 2007 05:31 PM by mbanavige | https://forums.asp.net/t/1065433.aspx?conversion+from+2003+to+2005+datarow+is+not+defined | CC-MAIN-2019-13 | refinedweb | 198 | 75.71 |
Wouter van Marle wrote ..
> Hi all,
>
> Is there a way to catch the html code as produced by the psp module?
>
> I mean: normally one does something like:
> page = psp.PSP(req, "template.psp")
> page.run({"spam": 1, "eggs": 5})
> and the page is converted to html, and displayed in the browser.
>
> Now is it possible to catch this html code in a string, instead of
> sending it to the browser? This as I'd like to use the psp templating
> for creating an e-mail body.
At first glance there seems to be no feature to do it. What you might
try (I haven't), is that presuming that all output is generated through
calls to req.write(), is to substitute the "req" object with a wrapper
class that provides its own req.write() which is actually the write()
method on a StringIO object.
class Wrapper:
def __init__(self,req,write):
self.__req = req
self.write = write
def __getattr__(self,name):
return getattr(self,__req,name)
s = StringIO.StringIO()
page = psp.PSP(Wrapper(req,s.write), "template.psp")
page.run({"spam": 1, "eggs": 5})
req.log_error(s.getvalue())
If you were also stashing stuff in "req" from the PSP page, would also
need a __setattr__() method in the wrapper.
Graham | http://modpython.org/pipermail/mod_python/2005-May/018125.html | CC-MAIN-2020-29 | refinedweb | 211 | 77.03 |
GUI Req: Local hotkeys
#21
Posted 27 March 2005 - 06:31 PM
#22
Posted 01 April 2005 - 01:22 AM
I extended the function HotKeySet a little bit to this:
HotKeySet( "key" [, "function" [, WindowHandle Or WindowTitle [, WindowText [,MustBeActive]]]] )
So you could it use like:
#include <GUIConstants.au3> Dim $cn = 0 $gui = GUICreate("Test") $label = GUICtrlCreateLabel("",10,10,250,20) $label2 = GUICtrlCreateLabel("info",10,50,250,20) GUISetState() ; HotKeySet("key" [, "function" [, WindowHandle Or WindowTitle [, WindowText [,MustBeActive]]]] ) ; ; HotKeySet("^s", "HotkeyCheck", $gui, "sssss", 1) -> OK, cause of special GUI-handle ; HotKeySet("^s", "HotkeyCheck", $gui) -> OK ; HotKeySet("^s", "HotkeyCheck", "Test, "",0) -> OK ; HotKeySet("^s", "HotkeyCheck", "Test, "sssss") -> ERROR ; HotKeySet("^s","HotkeyCheck",$gui,"",1) While 1 $msg = GUIGetMsg() If $msg = -3 Then ExitLoop WEnd GUIDelete() Exit Func HotkeyCheck() $cn = $cn + 1 GUICtrlSetData($label,"Script " & $cn & " times saved.") EndFunc
So what do you think about this?
So long...
Holger
#23
Posted 01 April 2005 - 03:31 AM
#24
Posted 01 April 2005 - 05:21 AM
I don't see what passing an argument does that using a global variable can't do just as well (That's not condoning the use of global variables, however).I don't see what passing an argument does that using a global variable can't do just as well (That's not condoning the use of global variables, however).
Sounds great, any chance we can pass arguments to the Function too?
What I think would be more useful is if the callback function could optionally be passed the hotkey combination that was pressed. What I mean by optional is, if the callback function has a signature like this:
Func Callback($variable) EndFunc
Then the hotkey pressed will be passed in and retrievable through $variable. If the signature is in the current form of:
Func Callback() EndFunc
Then no parameter can be passed so none is passed but the callback is still executed.
I do not like the idea of adding macro's to store this kind of information (A @HotKeySet macro or something similar). I think this sort of information should be passed to the function. There's tons of stuff I would like to see passed to GUI OnEvent callback functions, too, so this has more applications than just this one function.
So what I really am asking for is the capability for callback functions to be passed parameters containing useful (and logical) data for the event they are assigned to.
#25
Posted 01 April 2005 - 06:23 AM
AutoIt gui intuitions would grow in future and one window could easely have three or four menu rows and some controls like 10 or 20 and several child windows also with controls - it's just not posible today to add keyboard accelerator support just to a few child windows without a lot of roundups...
Holger I think that your solution is pointing in the right direction...
Kåre
Edited by kjactive, 01 April 2005 - 06:32 AM.
#26
Posted 01 April 2005 - 12:19 PM
So in my example it would passed: "Test" if I use the declaration with i.e.:
HotKeySet("^s","HotkeyCheck","Test","",1)
So I have to put it then in a separate function.
Instead of going direct to the first the line in the user function we could call the normal "user function parser" you know.
So long...
Holger
#27
Posted 07 May 2005 - 12:37 AM:
#include <GUIConstants.au3> #include 'HotKeySetLocal.au3' Run('notepad.exe') WinWait('Untitled - Notepad') $hwnd = WinGetHandle('Untitled - Notepad') _HotKeySetLocal($hwnd, '^1', 'Notepad') $gui = GuiCreate('press ctrl+t') $button = GuiCtrlCreateButton('toggle GUI hotkey', 20, 20, 100, 30) GuiSetState() $hk_GUIHotKey = _HotKeySetLocal($gui, '^t', 'ControlT') $i_HKOn = 1 While 1 _HotKeyCheckLocal() $msg = GUIGetMsg() If $msg = $GUI_EVENT_CLOSE Then Exit If $msg = $button Then If $i_HKOn Then tooltip('hot key off') _HotKeyToggleLocal($hk_GUIHotKey, 0) $i_HKOn = 0 Else tooltip('hot key on') _HotKeyToggleLocal($hk_GUIHotKey, 1) $i_HKOn = 1 EndIf EndIf WEnd Func Notepad() Send('This is Notepad. You pressed Ctrl+1.', 1) EndFunc Func ControlT() MsgBox(0, 'Message', 'You pressed Ctrl+T in the GUI window!') EndFunc
What do you think?
Attached Files
Edited by Saunders, 22 August 2005 - 05:02 AM.
#28
Posted 07 May 2005 - 01:38 AM
It's great that you made the UDFs for us. Thank you for that.It's great that you made the UDFs for us. Thank you for that.
Okay, so nobody's been to this thread in a while, but I'm resurrecting it anyway. We (Holger that is) have already half-solved the problem of sending parameters with hotkeys, with the @HotKeyPressed macro, but GUI/Window specific hotkeys still haven't been implemented (in beta or otherwise).:
[-code-]
What do you think?
Anyways, I hope you can make this work without UDFs.
Edited by SlimShady, 07 May 2005 - 01:39 AM.
#29
Posted 09 May 2005 - 12:43 AM
Yes, I can't wait until there is built-in support for these kinds of keys.Yes, I can't wait until there is built-in support for these kinds of keys.
It's great that you made the UDFs for us. Thank you for that.
Anyways, I hope you can make this work without UDFs.
#30
Posted 21 August 2005 - 11:09 PM
Sorry for the thread revivification, but I was needing this ability again and the workarounds are awfully slow. Has there been any progress in this area? Is it really very difficult to add these local hotkeys?
#31
Posted 21 August 2005 - 11:58 PM
LOL.
#32
Posted 22 August 2005 - 04:29 AM
#33
Posted 22 August 2005 - 04:38 AM
#34
Posted 22 August 2005 - 04:57 AM
Holger's post: Apr 1 2005, 04:19 AM
w0uter's reference: Apr 1 2005, 03:22 AM
I could see one number being off, but all three?
#35
Posted 22 August 2005 - 05:07 AM
Edit: So Saunders, you want the post showing up on your system as Mar 31 2005, 05:22 PM according to your profile.
Edited by LxP, 22 August 2005 - 05:12 AM.
#36
Posted 22 August 2005 - 05:27 AM
Wow, I managed to totally derail this thread with all my confusion.
Edit: Even so, I'm still confused as to why.
Edited by Saunders, 22 August 2005 - 05:27 AM.
#37
Posted 22 August 2005 - 12:58 PM
#38
Posted 23 August 2005 - 05:11 PM
I still really want this to come through. Hoping SOMEBODY out there on the developer side of things is looking at this and thinking, "Hey now.. that's an idea."
I'm actually half tempted to start a new topic, but I don't want to be too pushy.
*Edit: Views: 1219 (Just keeping track to see if anybody IS actually checking this thread)
Edited by Saunders, 24 August 2005 - 03:36 AM.
#39
Posted 24 August 2005 - 07:47 AM
I started something but since there was an UDF I didn't do anything on this.
I also didn't 'implement' to use the same HotKey on different windows.
Oh, I started a lot of things but nothing is finished
When I have time again I will take a look the next days what I did in march and tell you again how it goes on.
Regards
Holger
#40
Posted 29 November 2005 - 04:31 PM
0 user(s) are reading this topic
0 members, 0 guests, 0 anonymous users | http://www.autoitscript.com/forum/topic/7734-gui-req-local-hotkeys/page-2 | CC-MAIN-2015-06 | refinedweb | 1,227 | 69.21 |
Closed Bug 567945 Opened 12 years ago Closed 12 years ago
Android Agent needs to build in our Makesystem
Categories
(Testing :: General, defect)
Tracking
(Not tracked)
People
(Reporter: cmtalbert, Assigned: cmtalbert)
References
Details
Attachments
(2 files, 2 obsolete files)
The Android agent needs to build in our make system so that it is signed with the same key that the Android builds are signed with. This will enable the agent and Fennec to read each other's files, etc. Note that to build this, you'll need to have an Android build environment set up and you'll need to have pulled the android2 branch from Vlad's repo. Attaching two patches - the first one is the Makefile changes for this support (needs review). The second patch is the entire android sutagent codebase.
Attachment #447264 - Flags: review?(ted.mielczarek)
Well, the sutagent codebase is slightly more than 2 MB. You can get to a diff for it here:
not sure if this would be of use, but we have a /build/mobile/ directory. That is where devicemanager.py and the C++ proxy tests that blassey worked on live. Would it make sense to put this there instead of testing/sutagent?
(In reply to comment #2)
> not sure if this would be of use, but we have /build/mobile/ directory:
> That is where devicemanager.py and the c++ proxy tests that blassey worked on
> live. Would it make sense to put this there instead of testing/sutagent?

That'd be fine. It doesn't really matter much to me. Ted, thoughts?

Also, Ted, ping for review. Gracias.
Comment on attachment 447264 [details] [diff] [review]
Makefile changes for android agent.

>diff --git a/Makefile.in b/Makefile.in
>--- a/Makefile.in
>+++ b/Makefile.in
>@@ -78,7 +78,9 @@
>
> # test harnesses
> ifdef ENABLE_TESTS
>-tier_testharness_dirs += testing/xpcshell
>+tier_testharness_dirs += testing/xpcshell \
>+  testing/sutagent/android \
>+  $(NULL)
> endif

jhammel's patch in bug 516984 moves this whole block to toolkit/toolkit-tiers.mk (alongside testing/mochitest), although I think Joel is right and you should just put this in build/mobile anyway.

>diff --git a/testing/sutagent/android/Makefile.in b/testing/sutagent/android/Makefile.in
>new file mode 100644
>--- /dev/null
>+++ b/testing/sutagent/android/Makefile.in

>+JAVA=java
>+JAVAC=javac

I had mwu change this in his patch on bug 564327, so please follow his lead there.

>+
>+DX=$(ANDROID_SDK)/tools/dx
>+AAPT=$(ANDROID_SDK)/tools/aapt
>+APKBUILDER=$(ANDROID_SDK)/../../tools/apkbuilder
>+ZIPALIGN=$(ANDROID_SDK)/../../tools/zipalign
>+
>+JAVAFILES = \
>+  AlertLooperThread.java \
>+  ASMozStub.java \
>+  CmdWorkerThread.java \
>+  DataWorkerThread.java \
>+  DoAlert.java \
>+  DoCommand.java \
>+  Power.java \
>+  RedirOutputThread.java \
>+  RunCmdThread.java \
>+  RunDataThread.java \
>+  SUTAgentAndroid.java \
>+  SUTStartupIntentReceiver.java \
>+  WifiConfiguration.java \
>+  R.java \
>+  $(NULL)
Attachment #447264 - Flags: review?(ted.mielczarek) → review-
Assignee: nobody → ctalbert
(In reply to comment #4) >. Hey Ted, thanks for the review. I've got everything done except this bit. I ran into a problem with the crystax ndk (it's needed for --enable-tests on android) when trying to build against mozilla central. I'm running a build now against mozilla-central without the ndk to verify that the make changes themselves will work, and if so, I should have a new patch here. One more question, in build/Makefile.in I'm tying this in right after this line: with the following code: ifdef ANDROID DIRS += mobile/sutagent/android endif I originally was going to put this inside the large ifdef ENABLE_TESTS block at line 103 of that file, but each time I did that, it would fail to generate the directory tree for mobile/sutagent/android and would then fail to call my makefile for the agent. Is that expected? In the short term, the agent-tie-in will have to live after the PGO line because we can't build with --enable-tests on mozilla central (for android). But in the long term, it feels like it should live inside the ifdef ENABLE_TESTS block, but I don't understand why it doesn't allow me to change the DIRS attribute at that point in the makefile. Am I missing something?
You're not supposed to change DIRS after you include rules.mk. Also, I think we got rid of the ANDROID makefile variable, didn't we? You want to use ifeq (Android,$(OS_TARGET))
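Putting the two pieces of advice above together, the build/Makefile.in hook would look something like the following fragment (a sketch based on this thread — the directory path follows the build/mobile layout discussed earlier), placed before rules.mk is included:

```make
# In build/Makefile.in, before "include $(topsrcdir)/config/rules.mk":
# DIRS must not be modified after rules.mk has been included.
ifeq (Android,$(OS_TARGET))
DIRS += mobile/sutagent/android
endif
```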
This patch addresses your comments. I looked through the rules.mk Java stuff but it doesn't really set the things we need for Android. So, I made an config/android-common.mk. All the variables I could feasibly see us overwriting I made overwritable. That said, I'm no Java whiz, so make sure it's right. The one difference between the agent Makefile and the fennec Makefile was the classpath setting so I require that to be set prior to including the file. Let me know what you think. This builds atop m-c using the patch queue here:. You also need the agent code which I've uploaded to:
Attachment #447264 - Attachment is obsolete: true
Attachment #450808 - Flags: review?(ted.mielczarek)
Comment on attachment 450808 [details] [diff] [review] Addresses comments >diff --git a/build/mobile/sutagent/android/Makefile.in b/build/mobile/sutagent/android/Makefile.in >new file mode 100644 >--- /dev/null >+++ b/build/mobile/sutagent/android/Makefile.in >+DX=$(ANDROID_SDK)/tools/dx >+AAPT=$(ANDROID_SDK)/tools/aapt >+APKBUILDER=$(ANDROID_SDK)/../../tools/apkbuilder >+ZIPALIGN=$(ANDROID_SDK)/../../tools/zipalign This stuff can all go in android-common.mk. >+ifdef JARSIGNER >+ APKBUILDER_FLAGS += -u >+endif This too. >+JAVA_CLASSPATH = $(ANDROID_SDK)/android.jar:$(srcdir)/network-libs/commons-net-2.0.jar >+tools:: sutAgentAndroid.apk >+ >+# Note that we're going to set up a dependency directly between embed_android.dex and the java files >+# Instead of on the .class files, since more than one .class file might be produced per .java file >. >diff --git a/config/android-common.mk b/config/android-common.mk >new file mode 100644 >--- /dev/null >+++ b/config/android-common.mk >@@ -0,0 +1,65 @@ >+# ***** Android Java Make Settings. >+# >+# The Initial Developer of the Original Code is >+# Mozilla.org. I think this is supposed to be "The Mozilla. >+JAVAC_FLAGS = \ >+ -target $(JAVA_VERSION) \ >+ -classpath $(JAVA_CLASSPATH) \ >+ -bootclasspath $(JAVA_BOOTCLASSPATH) \ >+ -encoding ascii \ >+ -g \ >+ $(NULL) Two-space indent, not tabs. Thanks for factoring those common bits out. r=me with those fixes.
Attachment #450808 - Flags: review?(ted.mielczarek) → review+
(In reply to comment #8) > >. I tried this, but the embedding/android makefile has two interpolated java files that are specified here which are built by other rules and placed in the objdir and this rule just picks them up. I tried, factoring this out but I couldn't get the dependencies to generate those files properly in a "factored out" version. If you have an idea on a way to do this, then I'm happy to factor this into android-common.mk in a follow on. > Actually, that is exactly the behavior I wanted (took a fair bit of testing to figure this out). I wanted the failure to be as explicit as possible and localized as possible during the make process. That said, if it is more preferable to do this the way you suggest, I'm happy to change it in a follow-on patch. I made all the rest of the suggested fixes. I'll land this as soon as the tree reopens (or tomorrow, whichever happens first).
Attachment #450808 - Attachment is obsolete: true
Attachment #452433 - Flags: review+
Pushed Agent code as: changeset f8e71992be6b Pushed make system changes as changeset da2d8dc257d5 --> FIXED
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Bustage fix for android. The problem is that the buildstep that handles the embedding/android build sets the location of the jdk on the path. The buildstep for the main build of m-c does not set this path string, and therefore the build breaks. However, no amount of attempting to explicitly setting the path in the mozconfig makes this work. I've tried: export JAVA_HOME=<pathtojdk> export PATH=$JAVA_HOME/bin:$PATH and it didn't work I also tried: export JAVA_HOME=<pathtojdk> export PATH=<pathtojdk>/bin:$PATH And it still didn't work. So, I'm out of ideas and we need to get this to quit burning, so I'll check this in as a bustage fix on Android. Bear is going to set the path explicitly in his buildstep for the main android build and we'll be good to go tomorrow, when I'll re-enable this code. Also, I'm no configure expert, but it looks like this code: should handle this case and ensure that the java stuff is properly set. However, I think we pass that code because our *BUILD SYSTEM* understands where all the Java bits are. But the thing that is failing here is the call to the android SDK which is operating its own executable. It may be that using android-sdk/tools/dx simply requires us to set it on the system path. (I also wonder if make and dx are using two different shell environments (so, while the path is actually being set properly in make due to my mozconfig changes above, dx doesn't see that because it is running in its own shell environment where that isn't set). If they are indeed running in two shell environments the only way forward here will be to set the system path variable from buildbot before we call make.
Bear added the java path to the system path in buildbot for the main compile step on android. I've now re-enabled this patch, and we'll see how it goes, but I think it will pass this time. Changeset: 17fcd8fa7e2f | https://bugzilla.mozilla.org/show_bug.cgi?id=567945 | CC-MAIN-2022-27 | refinedweb | 1,590 | 56.76 |
03 June 2011 12:23 [Source: ICIS news]
SINGAPORE (ICIS)--Shell plans to shut its 800,000 tonne/year mixed feed cracker in
“There will be some new equipment arriving in early August so the cracker will need to be shut for a few weeks,” said one source.
A Shell spokesperson told ICIS that it was not the company’s policy to discuss maintenance schedules at its cracker at
The spokesperson also said the cracker is running and that the company will gradually increase its operating rates but did not give specific details. Market sources, however, said the cracker is operating at around 60% of capacity following its restart in mid | http://www.icis.com/Articles/2011/06/03/9466166/shell-plans-singapore-cracker-shutdown-in-august-sources.html | CC-MAIN-2014-35 | refinedweb | 113 | 52.73 |
I would like to make it so that when a character (picture or shape) moves it leaves a trail behind it that keeps getting more transparent. how would i start. If you jump the trail should follow up then down. Can anyone help me? Thanks
an idea:
say the shape is that of a star. you want more transparent stars to follow behind it. my idea is that you need to specify the number of stars first. say there are five stars: first (1), second (2), 3, 4, 5. when you initialize the app, each star has a position inside your screen. after this, every change that happens should only be reflected on star one changing its position. star two assumes the previous position of star one, star three assumes the previous position of star two, star four that of star three, and star five that of star four. hope this helps, and if you write the app, share your code please. I would like to see how it turned out.
ok I will. and is there a transparency changer in java? so the first star is solid (100) and 2 is 80 and 3 is 60 and 4 is 40 and 5 is 20. so it seems like the star is moving so fast that it leaves a trail. I would like to change the transparency of the star without actually changing the whole color.
Edited by sirlink99: n/a
Here is a small tutorial on transparency using the AlphaComposite class:
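Regarding the transparency question above: java.awt.Color also has a four-argument constructor whose last argument is an alpha value (0 = fully transparent, 255 = opaque), so each trailing copy can keep the same RGB and just use a smaller alpha. A small sketch — the class and method names here are made up for illustration:

```java
import java.awt.Color;

public class FadeColors {
    // Return a copy of base with the given alpha (0-255), leaving RGB unchanged.
    public static Color withAlpha(Color base, int alpha) {
        return new Color(base.getRed(), base.getGreen(), base.getBlue(), alpha);
    }

    // Alphas for a trail of n copies: the lead shape is opaque,
    // each follower is progressively more transparent.
    public static int[] trailAlphas(int n) {
        int[] alphas = new int[n];
        for (int i = 0; i < n; i++) {
            alphas[i] = 255 * (n - i) / n;   // e.g. n=5 -> 255, 204, 153, 102, 51
        }
        return alphas;
    }
}
```

In paint() you would then call g.setColor(withAlpha(baseColor, alphas[i])) for each trailing square instead of building separate grey colors.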
This is what I have so far. it just goes right, no up or down. not even a left. I'll keep working on it and I'll keep posting. I couldn't figure out how to use the brighter function. if anyone else has any solutions feel free to post. I just edited the code now the square bounces.
// The "Trail" class. import java.applet.*; import java.awt.*; import java.awt.event.*; public class Trail extends Applet implements KeyListener, Runnable { // Place instance variables here int s1x = 0, s2x = -5, s3x = -10, s4x = -15, s5x = -20; int s1y = 30; private Image dbImage; private Graphics dbg; int dx = 0; Thread thread; Color sq1, sq2, sq3, sq4, sq5; public void init () { resize (700,90); addKeyListener (this); // Place the body of the initialization method here } // init method public void start () { thread = new Thread (this); thread.start (); } public void stop () { thread.stop (); } public void run () { Thread.currentThread ().setPriority (Thread.MIN_PRIORITY); while (true) { // Delay for 20 milliseconds try { Thread.sleep (20); } catch (InterruptedException e) { } starPlacement (); if (s1x == 670 || s1x == 0){ dx = -dx; } repaint (); Thread.currentThread ().setPriority (Thread.MAX_PRIORITY); } } public void paint (Graphics g) { g.setColor (sq5); g.fillRect (s5x, s1y, 30, 30); g.setColor (sq4); g.fillRect (s4x, s1y, 30, 30); g.setColor (sq3); g.fillRect (s3x, s1y, 30, 30); g.setColor (sq2); g.fillRect (s2x, s1y, 30, 30); g.setColor (sq1); g.fillRect (s1x, s1y, 30, 30); // Place the body of the drawing method here } // paint method public void keyPressed (KeyEvent e) { int key = e.getKeyCode (); if (key == KeyEvent.VK_RIGHT) { dx = 5; } } public void keyReleased (KeyEvent e) { } public void keyTyped (KeyEvent e) { } public void starPlacement () { s5x = s4x; s4x = s3x; s3x = s2x; s2x = s1x; s1x += dx; sq1 = new Color (0, 0, 0); sq2 = new Color (50, 50, 50); sq3 = new Color (100, 100, 100); sq4 = new Color (150, 150, 150); sq5 = new Color (200, 200, 200); repaint (); } public void update (Graphics g) { // initialize doublebuffers dbImage = createImage (this.getSize ().width, this.getSize ().height); dbg = dbImage.getGraphics (); // save background dbg.setColor (getBackground ()); dbg.fillRect (0, 0, this.getSize ().width, this.getSize ().height); // draw 
foreground on background dbg.setColor (getForeground ()); paint (dbg); // Now indicate ready drawn picture Offscreen on the right screen g.drawImage (dbImage, 0, 0, this); } } // Trail class
Edited by sirlink99: new code | https://www.daniweb.com/programming/software-development/threads/343323/blur-effect | CC-MAIN-2018-13 | refinedweb | 674 | 76.42 |
This C Program finds sum of first 50 natural numbers using for loop.
Here is source code of the C program to find the sum of first 50 natural numbers using for loop. The C program is successfully compiled and run on a Linux system. The program output is also shown below.
/*
* C program to find the sum of first 50 natural numbers
* using for loop
*/
#include <stdio.h>
void main()
{
int num, sum = 0;
for (num = 1; num <= 50; num++)
{
sum = sum + num;
}
printf("Sum = %4d\n", sum);
}
$ cc pgm73.c
$ a.out
Sum = 1275
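The loop's result can be cross-checked against the closed form for the sum of the first n natural numbers, n(n+1)/2 — for n = 50 that is 50 × 51 / 2 = 1275. A small helper (valid while n*(n+1) fits in an int):

```c
#include <assert.h>

/* Sum of 1..n via Gauss's closed form. */
int sum_first_n(int n)
{
    return n * (n + 1) / 2;
}
```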
Memory errors are one of the most difficult classes of software errors to analyze and fix, because the source of memory corruption and the manifestation of the error are far apart, making it hard to correlate the cause and the effect. Moreover, the errors often occur in exceptional conditions, making it hard to reproduce them consistently. Typically, these errors result from complex interactions between different components of your program, third-party libraries, and the operating system. It is extremely hard to predict and comprehend these possibilities and scenarios just by inspecting source code. Often, it requires a considerable amount of debugging and investigation to understand mistakes in your program logic and design that are causing memory errors.
IBM® Rational® Purify® is an advanced memory-debugging tool that helps you quickly and accurately locate the cause of memory corruption errors. To use the tool, you first use Purify to instrument your program. Then, when you run the instrumented program, Purify scrutinizes memory accesses and manipulations by your program and identifies memory errors that are about to happen. This considerably reduces debugging time and complexity.
You can learn about various types of memory errors and how to use Purify to detect them by reading this IBM developerWorks article: Navigating "C" in a "leaky" boat? Try Purify. If you are familiar with Purify and memory errors, you can either skim or skip that article.
Purify has several unique and advanced features to assist you in debugging memory errors. In this article, you will first learn about application programming interfaces (APIs) in Purify and how to use them from the debugger. Then you will learn about APIs specific to memory watch points.
Application programming interface
Purify provides various APIs that you can invoke from your program or debugger
to further assist you in debugging memory problems. For example, you can use the
purify_what_colors function to find out the status of a
memory range:
int purify_what_colors (char *addr, unsigned int size);
Purify keeps track of the status of every byte of memory used by your program and uses four colors to represent the status: red, yellow, green, and blue. Initially, all memory is red, which represents unallocated and uninitialized memory. After you allocate memory, it becomes yellow, which represents allocated but uninitialized memory. After you initialize a memory location, it becomes green, representing allocated and initialized memory. And when you free memory, it becomes blue, representing previously allocated but then freed memory. This lifecycle is shown in Figure 1.
- It is legal to read from or write to memory marked green.
- It is legal to write to yellow memory, but illegal to read from it. Purify reports an Uninitialized Memory Read (UMR) error when you do.
- It is illegal to read or write blue and red memory.
While debugging a memory corruption, you can call the
purify_what_colors API from the debugger to see the
Purify color state for an interesting memory location. This example shows the
color state for the bytes of an eight-byte buffer, where only the first two have been
initialized:
(gdb) print purify_what_colors(buf, sizeof(buf)+1)
color codes of 9 bytes at 0xffffe820: GGYYYYYYR
The API prints out the memory state of
sizeof(buf)+1 bytes, starting at memory
address
buf. The memory state of each byte of memory is
represented by one of these letters R, Y, G, or B.
These letters correspond to the colors red, yellow, green, and blue, respectively.
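The example above could come from code like the following plain-C sketch (the Purify call itself is omitted so the snippet compiles anywhere; this version uses a heap buffer for illustration): only the first two bytes of an 8-byte buffer are written, so Purify would show two green bytes, six yellow ones, and a red byte just past the end — the GGYYYYYYR pattern.

```c
#include <stdlib.h>

/* Writes only the first two bytes of an 8-byte heap buffer and
 * returns how many bytes were initialized.  Under Purify,
 * purify_what_colors(buf, 9) would report GGYYYYYYR for this buffer:
 * two green bytes (initialized), six yellow (allocated but
 * uninitialized), one red (past the end of the allocation). */
int init_first_two(char **out)
{
    char *buf = malloc(8);   /* all 8 bytes turn yellow */
    if (buf == NULL)
        return -1;
    buf[0] = 'a';            /* byte 0 turns green */
    buf[1] = 'b';            /* byte 1 turns green */
    /* reading buf[2] here would be a UMR: it is still yellow */
    *out = buf;              /* caller must free() it (turning it blue) */
    return 2;
}
```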
Figure 1. Lifecycle of memory locations
Using Rational Purify with the debugger
Most of the time, Purify provides enough information about a memory error for you to identify the cause and fix it. But sometimes, you may need to use that information as a starting point and debug the program to find the cause. For example, let's say that Purify reports a UMR error that surprises you because you do see a statement in your program that initializes that memory. Clearly, there exists a control path in which either the initialization statement is not executed or, in the case of a pointer to a memory buffer, the pointer is being reassigned to another memory buffer that may not have been initialized yet.
In such situations, you need to debug your program to find the exact cause. The good news is that instead of debugging your program, you can debug the Purify-instrumented program. Purify issues memory error reports just before the error is about to happen. That allows you to examine and analyze relevant variables and memory contents in a debugger. Purify also provides APIs for examining the status of memory locations.
There are two ways that you can engage a debugger for a Purify-instrumented program:
- First, start the instrumented program under the debugger and put a breakpoint at the purify_stop_here Purify API function. The debugger will stop at every Purify error message:

(gdb) break purify_stop_here
(dbx) stop in purify_stop_here
(xdb) b purify_stop_here
- Alternatively you can configure just-in-time (JIT) debugging through the Options > JIT Debug menu in the Purify GUI (see Figure 2), and select the error types of interest. Whenever a Purify error of the selected type is reported, Purify invokes the debugger and attaches it to your running application.
Figure 2. Purify JIT debugger dialog
When you are inside a debugger, you can use various Purify API functions to investigate the status and type of various memory locations:
- purify_what_colors(char *addr, unsigned int size): Prints the memory state of size bytes starting at memory address addr, as explained previously in the Application programming interface section.
- purify_describe(void *addr): Prints specific details about the memory at location addr, including its location (stack, heap, text) and, if it is heap memory, the call chains of its allocation and free history.
- purify_assert_is_readable(const char *addr, int size): Simulates reading size bytes starting at address addr, generates any Purify errors that read would cause, and calls purify_stop_here upon error. Returns 0 if errors are detected, and returns 1 if no errors are detected.
- purify_assert_is_writable(const char *addr, int size): Simulates writing size bytes starting at address addr, generates any Purify errors that write would cause, and calls purify_stop_here upon error. Returns 0 if errors are detected, and returns 1 if no errors are detected.
While debugging, you may want to focus on one piece of code and therefore not be interested in Purify errors reported before program control reaches it. Purify provides APIs to turn error reporting off and on (note that memory-use monitoring is not turned off):
- Using a debugger, put a breakpoint at the main function (or at the program location where you want to turn off Purify error reporting), and run the instrumented program.
- When the debugger stops at that breakpoint, type this command:
(gdb) print purify_stop_checking()
- Put a breakpoint at the program location where you want to resume Purify error reporting, and continue running the program.
- When the debugger stops at that breakpoint, type this command:
(gdb) print purify_start_checking()
Memory watch points
Purify offers a wide set of memory watch point APIs that you can call from the debugger to assist you in debugging the memory corruption problems in your program. The code shown in Listing 1 shows both a memory leak and a dangling pointer.
Listing 1. Code with memory leak and dangling pointer (mem_errors.c)
 1 #include <stdio.h>
 2
 3 char *namestr;
 4
 5 void foo() {
 6     namestr = (char *) strdup("Rational PurifyPlus");
 7     printf("Product = %s\n", namestr);
 8     free(namestr); /* free the memory allocated by strdup */
 9 }
10
11 void main() {
12     namestr = (char *)malloc(20 * sizeof(char));
13     foo();
14     strcpy(namestr, "IBM");
15     printf("Company = %s\n", namestr);
16     free(namestr);
17 }
Interestingly, if you look at the method
main() or
foo() independently, both functions look correct. The
method
main() allocates memory, calls
foo(), uses the allocated memory, and then frees it and
exits. The method
foo() calls method
strdup(), which allocates memory, uses that memory, and
then frees it. However, it is the interaction of these two functions and
using a global pointer variable called namestr that causes both the leak
and the dangling pointer. When
strdup() is called in
foo(), the namestr variable value is
overwritten, thereby losing the pointer to the memory allocated in
main(), and that causes the leak. In
main, after returning from
foo(), namestr is actually a dangling pointer,
because
foo() has freed that memory before returning.
It is easy to spot the problems in this simple example by inspecting the code. But that is not possible in large programs with complex control flows where the problematic functions could be in different libraries. That is when Purify and its memory watch point API becomes handy.
Here is how you can purify this program:
$ purify cc -g mem_errors.c -o mem_errors.pure
When you run the purified program, Purify will report the following errors (also see Figure 3):
MLK (Memory Leak)for the memory allocated for namestr in the function
main
FMW (Free Memory Write)at the
strcpy()call in the function
main
FMR (Free Memory Read)at the
printf()call in the function
main
FUM (Freeing Unallocated Memory)at the
free()call in the function
main
Figure 3. Memory errors reported by Purify in mem_errors.c
For this small example, the programming mistake can be easily fixed by using
information provided along with Purify errors. But for a complex program, where a
function such as
foo() might be called from various
locations and possibly in a loop, debugging the program by using the Purify memory
watch point APIs will be very useful.
The watch point feature lets you ask Purify to pay special attention to an area of memory and issue a report any time that memory gets read (WPR: Watch Point Read), written (WPW: Watch Point Write), or freed (WPF: Watch Point Free). This way, you can answer questions such as, "Where in the program does this variable get written?" or "Where does this variable get used?" And for memory in the heap, "Which function frees this memory?"
Here is how you can purify your program and run it under a debugger
(
gdb is used here to illustrate the process, but you
can use any of your favorite debuggers):
$ purify cc -g mem_errors.c -o mem_errors.pure
$ gdb mem_errors.pure
Looking at Purify reporting memory leaks and FMR, FMW, or FUM, there are several interesting questions that you may want to ask:
- Given that the namestr variable is a pointer to memory allocated in main(), why does namestr start pointing to some other memory?
- Where exactly does namestr get written with another address value, and thereby lose the last pointer to memory allocated in main(), causing the leak?
- After the memory allocation in main(), where is the namestr variable used and overwritten?
You can answer these questions by putting a breakpoint just after the malloc call in the main() function (Line 13), and then using Purify to set a memory watch point on &namestr to track all write operations happening on the namestr variable. Whenever an address is written to the namestr variable, the Purify watch point will show a WPW (Watch Point Write) message in the Purify viewer:
$ gdb mem_errors.pure
(gdb) break 13
Breakpoint 1 at 0x10000aec: file mem_errors.c, line 13.
(gdb) run
Starting program: mem_errors.pure

Breakpoint 1, main () at mem_errors.c:13
13        foo();
(gdb) print purify_watch_n(&namestr, 4, "w")
$1 = 1
(gdb) continue
The
purify_watch_n() function takes the address of the
memory location to be watched (
&namestr), the
size (4 bytes, the size of a pointer) and the watch mode (r for read,
w for write, rw for read-write). Whenever a new address is stored in
namestr, Purify will show a WPW (Watch Point
Write) result in the viewer. On expanding, it looks like this message:
WPW: Watch point write:
  * This is occurring while in:
        foo  [mem_errors.c:6]
        main [mem_errors.c:13]
        __start [mem_errors.pure]
  * Watchpoint 1
  * Writing 4 bytes to 0x20103b38 in the initialized data section.
  * Value changing from 537934728 (0x20103b88, " \020;\210") to 537934968 (0x20103c78, " \020")
  * Address 0x20103b38 is global variable "namestr". This is defined in mem_errors.pure.
This message indicates that in the
foo() method at
Line 6, another address is stored in namestr. Even before freeing the
memory, the value of namestr has been changed, and that is why Purify
reports an MLK (memory leak) error. Let's now debug the cause of the FMR
(Free Memory Read) and FMW (Free Memory Write) errors. When reporting FMR or FMW,
Purify also specifies where the memory allocation happened. For this example,
Purify indicates that the FMW and FMR errors at Lines 14 and 15, respectively, in
method
main() happen because of accesses to
already-freed memory that was allocated by the
strdup()
call at Line 6 in method
foo(). Therefore, you need to
track all reads and writes on the memory block (and not the pointer) allocated at
Line 6. You can do that by putting a read-write watch point after the
strdup() call:
(gdb) break 7
Breakpoint 2 at 0x10000c38: file mem_errors.c, line 7.
(gdb) continue
Continuing.

Breakpoint 2, foo () at mem_errors.c:7
7         printf("Product = %s\n", namestr);
(gdb) print purify_watch_n(namestr, 20, "rw")
$2 = 2
Because this is a read-write watch point, any attempt to read or modify the
contents of the memory block that namestr points to would trigger a WPR or
WPW message, respectively.
Notice the difference between using
namestr here and
&namestr earlier. The earlier example was
watching the memory that held the
namestr pointer
itself; therefore, the address of the watched area is given by
&namestr. This second example is watching the
memory that namestr points to, instead.
A WPR shows at Line 7 at the
printf() call, and then a
WPF (Watch Point Free) shows on Line 8 at the
free()
call. Both of these are expected, but now, after reporting WPF, you should follow
the control path more carefully.
In fact, you can set a breakpoint at
purify_stop_here
(as explained earlier in the "Using Purify with the debugger"
section) for Purify to stop at every error (or message). Any access to this memory
thereafter should be an error, because the memory pointed to by namestr has
been freed. Therefore, stepping through the code at Line 14 generates a WPW message, because memory that has already been freed is being written to by a call to strcpy(). This WPW explains the FMW (Free Memory Write) that Purify reported:
(gdb) next
main () at mem_errors.c:14
14        strcpy(namestr, "IBM");
(gdb) next
15        printf("Company = %s\n", namestr);
Similarly, stepping through Line 15 generates a WPR message and, thus, an FMR error. The FUM error reported toward the end of the program at Line 16 is also obvious now, because the memory for namestr (the new value, allocated by strdup) was already freed at Line 8 in function foo (where a WPF message was reported).
Figure 4 shows a sample Purify window with all of these watch point errors. To summarize: Memory watch points help you in tracing the use of a given memory block. Using them along with the debugger helps you track memory use with the program execution which manifests a memory error.
Figure 4. Watch point messages reported by Purify
Purify watch points can report the following messages for the given memory address:
- Reads
- Writes
- Allocation
- De-allocation
- Coming into scope at function entry
- Going out of scope at function exit
There are several watch point API functions for convenience (Table 1). The
simplest is
purify_watch(addr), which sets a read-write
watch point on four bytes starting at the given address. The APIs that set watch
points return an integer value, which is the watch point number that was just set.
You can pass that integer to purify_watch_remove to remove it. All of the API convenience functions are equivalent to using
purify_watch_n with the appropriate address, size, and
type.
Table 1. APIs to set watch points
To get information about watch points and remove them, you can use the following APIs:
- purify_watch_info(), which shows all active Purify memory watch points
- purify_watch_remove(int watchpoint_no), which removes the watch point with the given number
- purify_watch_remove_all(), which removes all watch points
Using Purify APIs in your programs
Apart from using Purify APIs in the debugger, you can also embed them from programs for checking errors and reporting extra information. In that case, even if you run your purified program through an automated test suite; when an error occurs, it will dump extra messages into the Purify log that will help you identify the problem.
There are two ways of embedding Purifying APIs in your program:
- Using #ifdef guards
- Linking with Purify stubs
Using #ifdef guards
As shown in the example in Listing 2, you can guard Purify API calls in your
program by surrounding them with
#ifdef definition
guards. In this way, you don't need to change the source code to build
the purified executable program that exploits Purify APIs nor to build the
executable program that you ship as a product.
The example has an implementation of
strncpy, where
the source and destination strings are checked first, respectively, to be readable
and writable. If any of the tests fail, an appropriate message is
printed in the Purify console or log by calling
purify_printf. Then
purify_describe is called, which prints specific
details about the memory address, including its location (stack, heap, text)
and, for heap memory, the call chains at its allocation time and its
free() call history.
Finally,
purify_what_colors is called to print the
color of the memory buffer. The copy is performed only if no error is found.
Listing 2: Part of file
mystring.c that uses Purify API with guards
#ifdef PURIFY
#include <purify.h>
/*
 * The purify.h file has needed API declaration.
 */
#endif

void mystrncpy(char* dest, const char* src, int length)
{
#ifdef PURIFY
    if (!purify_assert_is_readable(src, length) ||
        !purify_assert_is_writable(dest, length)) {
        /* skip: report the bad buffer with purify_printf(),
         * purify_describe() and purify_what_colors() */
    } else {
#endif
        /*
         * skip: copy n bytes from src to dest only if safe
         */
#ifdef PURIFY
    }
#endif
}

int main()
{
    /* skip: main body that calls mystrncpy */
}
The
makefile shown in Listing 3 demonstrates how you
can turn on Purify API calls by using the
-DPURIFY flag
for building a purified version of the executable file (see the rule for
mystring.pure) and linking it with the Purify API
library.
Listing 3. Part of makefile that builds
mystring and
mystring.pure
#
# makefile to build mystring programs, and its purified versions
#

# ... skip ...

# Purify header and API lib locations
PURIFYINCLUDE = -I`purify -print-home-dir`
# For 64-bit program, replace lib32 by lib64 in following:
PURIFYAPILIB = `purify -print-home-dir`/lib32/libpurify_stubs.a

# ... skip ...

mystring : mystring.c
	$(CC) $(FLAGS) -o $@ $?

mystring.pure : mystring.c
	purify $(CC) $(FLAGS) -g -DPURIFY $(PURIFYINCLUDE) -o $@ $? \
	$(PURIFYAPILIB)

# ... skip ...
Linking with Purify stubs
The drawback of using a guard is that you have to recompile the whole program. In this example, only one C file is used, but in large systems, typically various libraries are built and finally linked to build the executable file. In such situations, if you want to purify your program, you must recompile all C files that use the guard.
An alternative is to always link your application with the
libpurify_stubs.a by changing the rule in Listing 3 to
build
mystring:
mystring : mystring.c
	$(CC) $(FLAGS) -o $@ $? $(PURIFYAPILIB)
The
libpurify_stubs.a is a small library that has empty stubs for all Purify API
functions. When you instrument your program, Purify provides the real definitions
for the API functions, and stubs are ignored.
You can surround multiple Purify APIs with
if(purify_is_running()) to keep Purify function calls
from slowing down your uninstrumented program (see Listing 4).
Listing 4. Part of file
mystring.c that uses Purify API without guards
#include <purify.h> /* * The purify.h file has needed API declaration. */ void mystrncpy(char* dest, const char* src, int length) { if (purify_is_running()) {); } } /* * skip: copy n bytes from src to dest only if safe */ } int main() { /* skip: main body that calls mystrncpy */ }
The tradeoffs between using
#ifdef and Purify stubs
are that the former requires you to recompile your program with the -DPURIFY
flag, while the latter involves a runtime cost of calling
purify_is_running (which is negligible), and linking
your program with Purify's empty stubs even in production code. Use the alternative that suits
your need.
Summary
In this article, you learned about the memory color concept in Purify, the APIs, and the memory watch points. You can use these APIs from the debugger, or you can embed them in your programs, instead. Either way, with the help of Purify APIs and memory watch points, you can debug memory errors in your programs more effectively.
Resources
Learn
- Read Goran Begic's article to get An introduction to runtime analysis with Rational PurifyPlus, (IBM® developerWorks®, November 2003).
- Learn about different types of memory errors, and how to use Rational Purify to detect them.
- Read Anandi Krishnamurthy's article on using PurifyPlus with IBM Rational Systems Developer.
- a trial version of IBM® Rational® PurifyPlus®.
- Download trial versions of other. | http://www.ibm.com/developerworks/rational/library/08/0205_gupta-gaurav/index.html | CC-MAIN-2014-52 | refinedweb | 3,504 | 50.87 |
In this video, we begin discussion of the tkinter module. The tkinter module is a wrapper around tk, which is a wrapper around tcl, which is what is used to create windows and graphical user interfaces. Here, we show how simple it is to create a very basic window in just 8 lines. We get a window that we can resize, minimize, maximize, and close! The tkinter module's purpose is to generate GUIs. Python is not very popularly used for this purpose, but it is more than capable of doing it.
Let's walk through each step to making a tkinter window:
Simple enough, just import everything from tkinter.
from tkinter import *
Here, we are creating our class, Window, and inheriting from the Frame class. Frame is a class from the tkinter module. (see Lib/tkinter/__init__)
Then we define the settings upon initialization. This is the master widget.
class Window(Frame): def __init__(self, master=None): Frame.__init__(self, master) self.master = master
The above is really all we need to do to get a window instance started.
Root window created. Here, that would be the only window, but you can later have windows within windows.
root = Tk()
Then we actually create the instance.
app = Window(root)
Finally, show it and begin the mainloop.
root.mainloop()
The above code put together should spawn you a window that looks like:
Pretty neat, huh? Obviously there is much more to cover. | https://pythonprogramming.net/python-3-tkinter-basics-tutorial/?completed=/parse-website-using-regular-expressions-urllib/ | CC-MAIN-2019-26 | refinedweb | 241 | 68.26 |
MPEG::MP3Play - Perl extension for playing back MPEG music
use MPEG::MP3Play; my $mp3 = new MPEG::MP3Play; $mp3->open ("test.mp3"); $mp3->play; $mp3->message_handler;
This Perl module enables you to playback MPEG music.
This README and the documention cover version 0.15 of the MPEG::MP3Play module.
Xaudio SDK
MPEG::MP3Play is build against the 3.0.8 and 3.2.1 versions of the Xaudio SDK and uses the async interface of the Xaudio library.
The SDK is not part of this distribution, so get and install it first ().
ATTENTION: Xaudio Version 3.2.x SUPPORT IS ACTUALLY BETA
Unfortunately Xaudio changed many internals of the API since version 3.0.0, and many of them are not documented. So I had to hack around, but everything seem to work now. Even so I think 3.2.x support is actually beta. If you have problems with this version, please send me an email (see bug report section below) and downgrade to 3.0.x if you can't get sleep ;)
For Linux Users:
Xaudio removed the 3.0.8 Linux version from their developer page. Please read and agree to the license restrictions under and download the package from here:
Perl
I built and tested this module using Perl 5.6.1, Perl 5.005_03, Perl 5.004_04. It should work also with Perl 5.004_05 and Perl 5.6.0, but I did not test this. If someone builds MPEG::MP3Play successfully with other versions of Perl, please drop me a note.
Optionally used Perl modules
samples/play.pl uses Term::ReadKey if it's installed. samples/handler.pl requires Term::ReadKey. samples/gtk*.pl require Gtk.
You can download MPEG::MP3Play from any CPAN mirror. You will find it in the following directories:
You'll also find recent information and download links on my homepage:
First, generate the Makefile:
perl Makefile.PL
You will be prompted for the location of the Xaudio SDK. The directory must contain the include and lib subdirectories, where the Xaudio header and library files are installed.
make make test cp /a/sample/mp3/file.mp3 test.mp3 ./runsample play.pl ./runsample handler.pl ./runsample gtk.pl ./runsample gtkhandler.pl ./runsample gtkinherit.pl ./runsample synopsis.pl make install
There are some small test scripts in the samples directory. You can run these scripts before 'make install' with the runsample script (or directly with 'perl', after running 'make install'). For runsample usage: see above.
All scripts expect a mp3 file 'test.mp3' in the actual directory.
Textmodus playback. Displays the timecode. Simple volume control with '+' and '-' keys.
Does nearly the same as play.pl, but uses the builtin message handler. You'll see, that this solution is much more elegant. It requires Term::ReadKey.
This script makes use of the debugging facility, the equalizer features and is best documented so far.
This script demonstrates the usage of MPEG::MP3Play with the Gtk module. It produces a simple window with a progress bar while playing back the test.mp3 file.
This script does the same as gtk.pl but uses the builtin message handler concept instead of implementing message handling by itself. Advantage of using the builtin message handler: no global variables are necessary anymore.
Because 'runsample' uses '
perl -w' you'll get a warning message here complaining about a subroutine redefinition. See the section USING THE BUILTIN MESSAGE HANDLER for a discussion about this.
This is 'gtkhandler.pl' but throwing no warnings, because it uses subclassing for implementing messages handlers.
Just proving it ;)
The concept of the Xaudio async API is based on forking an extra process (or thread) for the MPEG decoding and playing. The parent process controls this process by sending and recieving messages. This message passing is asynchronous.
This module interface provides methods for sending common messages to the MPEG process, eg. play, pause, stop. Also it implements a message handler to process the messages sent back. Eg. every message sent to the subprocess will be acknowledged by sending back an XA_MSG_NOTIFY_ACK message (or XA_MSG_NOTIFY_NACK on error). Error handling must be set up by handling this messages.
$mp3 = new MPEG::MP3Play ( [ debug => 'err' | 'all' ] );
This is the constructor of this class. It optionally takes the argument 'debug' to set a debugging level. If debugging is set to 'err', XA_MSG_NOTIFY_NACK messages will be carp'ed. Additionally XA_MSG_NOTIFY_ACK messages will be carp'ed if debugging is set to 'all'.
The debugging is implemented by the methods msg_notify_ack and msg_notify_nack and works only if you use the builtin message handler. You can overload them to set up a private error handling (see chapter USING THE BUILTIN MESSAGE HANDLER for details).
$mp3->debug ( 'err' | 'all' | 'none' | '' );
With this method you can set the debugging level at any time. If you pass an empty string or 'none' debugging will be disabled.
$xaudio_imp = $mp3->get_xaudio_implementation @xaudio_imp = $mp3->get_xaudio_implementation
Returns the internal major/minor/revision numbers of your Xaudio SDK implementation. Returns 0.0.0 if not supported by your Xaudio version.
Prints the implementation number to STDOUT.
The following methods control the audio playback. Internally they send messages to the Xaudio subsystem. This message passing is asynchronous. The result value of these methods indicates only if the message was sent, but not if it was successfully processed. Instead the Xaudio subsystem sends back acknowledge messages. See the chapter MESSAGE HANDLING for details and refer to the Xaudio documentation.
$sent = $mp3->open ($filename);
Opens the MPEG file $filename. No playback is started at this time.
$sent = $mp3->close;
Closes an opened file.
$sent = $mp3->exit;
The Xaudio thread or process will be canceled. Use this with care. If you attempt to read or send messages after using this, you'll get a broken pipe error.
Generally you need not to use $mp3->exit. The DESTROY method of MPEG::MP3play cleans up everything well.
$sent = $mp3->play;
Starts playing back an opened file. Must be called after $mp3->open.
$sent = $mp3->stop;
Stops playing back a playing file. The player rewinds to the beginning.
$sent = $mp3->pause;
Pauses. $mp3->play will go further at the actual position.
$sent = $mp3->seek ($offset, $range);
Sets the play position to a specific value. $offset is the position relative to $range. If $range is 100 and $offset is 50, it will be positioned in the middle of the song.
$sent = $mp3->volume ($pcm_level, $master_level, $balance);
Sets volume parameters. Works only if playing is active. $pcm_level is the level of the actual MPEG audio stream. $master_level is the master level of the sound subsystem. Both values must be set between 0 (silence) and 100 (ear breaking loud).
A $balance of 50 is the middle, smaller is more left, higher is more right.
You can supply undef for any parameter above and the corresponding value will not change.
$sent = $mp3->equalizer ( [ $left_eq_lref, $right_eq_lref ] )
Use this method to control the builtin equalizer codec. If you omit any parameters, the equalizer will be deactivated, which preserves CPU time.
The two array references for left and right channel must contain 32 integer elements between -128 and +127. The method will croak an exception if you pass illegal values.
$sent = $mp3->get_equalizer
This advises the Xaudio subsystem to send us a message back which contains the acual equalizer settings.
The corresponding message handler method to handle the message is named
msg_notify_codec_equalizer
See the chapter about the generic method handler for details about the message handling mechanism.
The passed message hash will contain a key named 'equalizer', which is a hash reference with the following content:
equalizer => { left => $left_eq_lref, right => $right_eq_lref }
The two lref's are arrays of 32 signed char values, see $self->equalizer.
$sent = $mp3->set_player_mode ( $flag, ... )
This method sets flags that modify the player's behavior. It expects a list of XA_PLAYER_MODE_* constants. Currently supported constants are:
XA_PLAYER_MODE_OUTPUT_AUTO_CLOSE_ON_STOP XA_PLAYER_MODE_OUTPUT_AUTO_CLOSE_ON_PAUSE
Refer to the Xaudio documentation for details about this flags.
You can import this constants to your namespace using the ':state' tag (see CONSTANTS section below).
$sent = $mp3->set_input_position_range ( $range )
This method sets the player's position range. This is used by the player to know how frequently to send back XA_MSG_NOTIFY_INPUT_POSITION message (see details about message handling below) to notify of the current stream's position. The default is 400, which means that the input stream has 400 discrete positions that can be notified.
Example: if you wish to display the current position in a display that is 200 pixels wide, you should set the position range to 200, so the player will not send unnecessary notifications.
There are two methods to retrieve messages from the Xaudio subsystem. You can use them to implement your own message handler. Alternatively you can use the builtin message handler, described in the next chapter. Using the builtin message handler is recommended. Your programm looks better if you use it. Also the debugging facilitites of MPEG::MP3Play only work in this case.
$msg_href = $mp3->get_message;
If there is a message in the players message queue, it will be returned as a hash reference immediately. This method will not block if there is no message. It will return undef instead.
$msg_href = $mp3->get_message_wait ( [$timeout] );
This method will wait max. $timeout microseconds, if there is no message in the queue. If $timeout is omitted it will block until the next message appears. The message will be returned as a hash reference.
The message hash
The returned messages are references to hashes. Please refer to the Xaudio SDK documentation for details. The message hashes are build 1:1 out of the structs (in fact a union) documented there, using _ as a seperator for nested structs.
(Simply use Data::Dumper to learn more about the message hashes, e.g. that the name of the internal message handler is stored as $msg_href->{_method_name} ;)
$sent = $mp3->set_notification_mask ($flag, ...);
By default all messages generated by the Xaudio subsystem are sent to you. This method sends a message to block or unblock certain types of notification messages. It expects a list of XA_NOTIFY_MASK_* constants corresponding to the messages you want to recieve. You can import this constants to your namespace using the ':mask' tag (see CONSTANTS section below).
Note:
If debugging is set to 'err' you cannot unset the XA_NOTIFY_MASK_NACK flag. If debugging ist set to 'all' also unsetting XA_NOTIFY_MASK_NACK is impossible.
$read_fd = $mp3->get_command_read_pipe;
This method returns the file descriptor of the internal message pipe as an integer. You can use this to monitor the message pipe for incoming messages, e.g. through Gdk in a Gtk application. See samples/gtk*.pl for an example how to use this feature.
You can implement your own message handler based upon the methods described above. In many cases its easier to use the builtin message handler.
$mp3->message_handler ( [$timeout] );
This method implements a message handler for all messages the Xaudio subsystem sends. It infinitely calls $mp3->get_message_wait and checks if a method according to the recieved message exists. If the method exists it will be invoked with the object instance and the recieved message as parameters. If no method exists, the message will be ignored.
The infinite message loop exits, if a message method returns false. So, all your message methods must return true, otherwise the message_handler will exit very soon ;)
The names of the message methods are derived from message names (a complete list of messages is part of the Xaudio SDK documentation). The prefix XA_ will be removed, the rest of the name will be converted to lower case.
Example: the message handler method for
XA_MSG_INPUT_POSITION
is
$mp3->msg_input_position ($msg_href)
The message handler is called with two parameters: the object instance $mp3 and the $msg_href returned by the get_message_wait method.
Redefining or Subclassing?
It's implicitly said above, but I want to mention it explicitly: you must define your message handlers in the MPEG::MP3Play package, because they are methods of the MPEG::MP3Play class. So say 'package MPEG::MP3Play' before writing your handlers.
Naturally you can subclass the MPEG::MP3Play module and implement your message handlers this way. See 'samples/gtkinherit.pl' as a sample for this.
The disadvantage of simply placing your message handler subroutines into the MPEG::MP3Play package is that '
perl -w' throws warning messages like
Subroutine msg_notify_player_state redefined
if you redefine methods that are already defined by MPEG::MP3Play. Real subclassing is much prettier but connected with a little more effort. It's up to you.
As a sample for the "dirty" approach see 'samples/gtkhandler.pl' It throws the message mentioned above.
Doing some work
If the parameter $timeout is set when calling $mp3->message_handler, $mp3->get_message_wait is called with this timeout value. Additionally the method $mp3->work ist invoked after waiting or processing messages, so you can implement some logic here to control the module. The work method should not spend much time, because it blocks the rest of the control process (not the MPEG audio stream, its processed in its own thread, respectively process).
If the work method returns false, the method handler exits.
$mp3->work;
See explantation in the paragraph above.
$mp3->process_messages_nowait;
This method processes all messages in the queue using the invocation mechanism described above. It returns immediately when there are no messages to process. You can use this as an input handler for the Gtk::Gdk->input_add call, see 'samples/gtkhandler.pl' for an example of this.
Often it is necessary that the message handlers can access some user data, e.g. to manipulate a Gtk widget. There are two methods to set and get user data. The user data will be stored in the MPEG::MP3Play object instance, so it can easily accessed where the instance handle is available.
$mp3->set_user_data ( $data );
This sets the user data of the $mp3 handle to $data. It is a good idea to set $data to a hash reference, so you can easily store a handful parameters.
Example:
$mp3->set_user_data ( { pbar_widget => $pbar, win_widget => $window, gdk_input_tag => $input_tag } );
$data = $mp3->get_user_data;
This returns the data previously set with $mp3->set_user_data or undef, if no user data was set before.
The module provides simple message handlers for some default behavior. You can overload them, if want to implement your own functionality.
If the current file reaches EOF this handler returns false, so the message handler will exit.
If debugging is set to 'all' this handler will print the acknowledged message using carp.
If debugging is set to 'err' or 'all' this handler will print the not acknowledged message plus an error string using carp.
There are many, many constants defined in the Xaudio header files. E.g. the message codes are defined there as constants. MPEG::MP3Play knows all defined constants, but does not export them to the callers namespace by default.
MPEG::MP3Play uses the standard Exporter mechanisms to export symbols to your namespace. There are some tags defined to group the symbols (see Exporter manpage on how to use them):
This exports all symbols you need to do message handling on your own, particularly all message codes are exported here. Refer to the source code for a complete listing.
XA_PLAYER_STATE_*, XA_INPUT_STATE_* and XA_OUTPUT_STATE_*. Use this to check the actual player state in a XA_MSG_NOTIFY_PLAYER_STATE message handler.
This are all notify mask constants. The're needed to specify a notification mask. (see set_notification_mask)
All symbols for Xaudio error handling, incl. success code. I never needed them so far.
Some symbols cannot be assigned to the tags above. They're collected here (look into the source for a complete list).
If you use the builtin message handler mechanism, you need not to import message symbols to your namespace. Alle message handlers are methods of the MPEG::MP3Play class, so they can access all symbols directly.
No import to your namespace at all is needed unless you want to use $mp3->set_notification_mask or $mp3->set_player_mode!
If you want to develop your own MP3 player using MPEG::MP3Play in conjunction with Gtk+ this is generally a really good idea. However, there is one small issue regarding this configuration.
First off you have to connect the Xaudio message queue to the Gtk message handler using something like this (see the example program 'gtkhandler.pl'):
my $input_fd = $mp3->get_command_read_pipe; my $input_tag = Gtk::Gdk->input_add ( $input_fd, 'read', sub { $mp3->process_messages_nowait } );
Through this the Xaudio process is directly connected by a pipe to Gtk+.
I don't know exactly what happens, but if you *first* call Gtk->init and then create a MPEG::MP3Play object you'll get some Gdk warning messages (BadIDChoice) or a 'Broken Pipe' error when your program exits.
Obviously Xaudio and Gtk+ disagree about the correct order of closing the pipe. You're welcome if you know a better explanation for this.
However, if you *first* create a MPEG::MP3Play object and then call Gtk->init everything works well (see the samples/gtk* programs).
- Win32 support - support of the full Xaudio API, with input/output modules, etc. - documentation: more details about the messages hashes
Ideas, code and any help are very appreciated.
- treble control through the equalizer is weak. I checked the sent data several times and cannot see any error on my side, maybe something with my sound setup is strange, or my ears are just broken :) Please tell me, if the treble control is OK for you, or not.
First check if you're using the most recent version of this module, maybe the bug you're about to report is already fixed. Also please read the documentation entirely.
If you find a bug please send me a report. I will fix this as soon as possible. You'll make my life easier if you provide the following information along with your bugreport:
- your OS and Perl version (please send me the output of 'perl -V') - exact version number of the Xaudio development kit you're using (including libc version, if this is relevant for your OS, e.g. Linux) - for bug reports regarding the GTK+ functionality I need the version number of your GTK+ library and the version number of your Perl Gtk module.
If you have a solution to fix the bug you're welcome to send me a unified context diff of your changes, so I can apply them to the trunk. You'll get a credit in the Changes file.
If you have problems with your soundsystem (you hear nothing, or the sound is chopped up) please try to compile the sample programs that are part of the Xaudio development kit. Do they work properly? If not, this is most likely a problem of your sound configuration and not a MPEG::MP3Play issue. Please check the Xaudio documentation in this case, before contacting me. Thanks.
I'm very interested to know, if someone write applications based on MPEG::MP3Play. So don't hesitate to send me an email, if you like (or not like ;) this module.
This section lists the environments where users reported me that this module functions well:
- Perl 5.005_03 and Perl 5.004_04, Linux 2.0.33 and Linux 2.2.10, Xaudio SDK 3.01 glibc6, gtk+ 1.2.3, Perl Gtk 0.5121 - FreeBSD 3.2 and 3.3. See README.FreeBSD for details about building MPEG::MP3Play for this platform. - Irix 6.x, Perl built with -n32
Joern Reder <joern@zyn.de>
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
The Xaudio SDK is copyright by MpegTV,LLC. Please refer to the LICENSE text published on.
perl(1). | http://search.cpan.org/~jred/MPEG-MP3Play-0.16/MP3Play.pm | CC-MAIN-2016-44 | refinedweb | 3,252 | 58.79 |
As you noticed from my last post regarding functional programming and unit testing, there is a bit to be discussed. Important to any programming language is not only the language, but the frameworks and tooling around it, such is the case with functional languages. Let's focus on the tooling around testing with functional languages.
What kind of options do we have? In the Haskell world just as the F# world, there are several tools at our disposal to do this.
Today we're going to focus on HUnit as part of developing an API in Haskell. Some of these lessons apply well to any functional language, but is told well using Haskell.
HUnit is a fairly simple and yet easy to use xUnit based testing framework for Haskell. It's so bare bones in fact that it only has two main assertion functions that people use, assertEqual and assertBool. The APIs are straight forward and easy to extend. I'll do that in a subsequent post to get some of the functionality on par with that of say xUnit.net.
Let's walk through an example of creating an API for performing calculations on a list. Since I have a background in quantitative methods, I'll start with some of those. The first function we need to create is the average function. This function takes a list and calculates an average over them. In order to do this, let's define a test to set the behavior.
Now that we've defined our criteria for success, now, let's turn our attention to implementing this function.
When running this test from the GHC Interactive (ghci.exe), I get the following results.
But, wait! What happens when we pass an empty list. That would cause an error due to a divide by zero exception. What we need to do is add another pattern to our average function to trap that and report a standard error. Let's define a test case for that.
Now to define that failure pattern in my average function. My test will already succeed because I'm not checking whether it is a divide by zero exception or something else, and I could filter that exception, but I'll do that in another post.
Running our tests again, we find that both of them now pass. Thinking to myself, I think I can generalize this a little bit. Say for example, I have a list of tuples or record types. I can't average them exactly as is, but instead, would have to provide a way to extract that value that I do care about. Let's define a test for that to take a list of tuples and grab the second value and average that one. I'll omit the rest of the file as it stays the same except for adding our test function to the main function's test list.
Now the code to implement this should be rather straight forward. I'll omit the rest of the file and just concentrate on the new averageBy function.
Instead of using the standard sum, I need to add a map projection to this. This allows me to add my own custom function to the mix. Once we get this code implemented, another test then suddenly passes. But once again, we forgot about the empty list yet again. Let's write a test for that case and make it fail.
Now the test succeeds, because once again, not checking whether it is a divide by zero exception or something else. But, let's put the guard in there to feel better about ourselves.
But looking at this code, I think it's time for a refactoring. The average and averageBy are very similar and could be generalized. Why? Because the averageBy takes a function, we can then supply a projection. Let's redo our average function to instead just use the averageBy function with an extra input.
We can use currying to our favor here to only supply the arguments we need to and leave the rest for the system to figure out. Running things once again, we see that all four tests pass nicely still. But, I'm still not satisfied. Why not? Because I don't like dealing with errors sometimes, and would like to give a safe alternative to the error prone average and averageBy functions. Let's use the Maybe type to define failure this time around using a new function called tryAverageBy.
And the implementation will then look like this to get it to pass.
And the dance continues until I have fully flushed out the tryAverageBy with both cases as well as the tryAverage functions. But it looks like I could generalize the averageBy function as well, to call our try instead to see whether to throw an error. We only want to write the algorithm once and use it over and over if we can. Maybe something like this might work.
When all is said and done, we then have 8 passing tests and a nicer code base because we took the time to refactor. Not that these implementations are perfect, but they show you the evolving code base of using HUnit and TDD within the Haskell environment.
Building our systems means caring about design, quality and correctness. When dealing with a language such as Haskell, where purity, polymorphism and an expressive type system helps us write code that is very modular, refactorable and testable. Along the way, there are tools to help such as HUnit and QuickCheck. Next time, I'll be covering type-based property checking using QuickCheck as well as how we can extend HUnit to fit a few more to suit our needs.
[Advertisement]
Pingback from Reflective Perspective - Chris Alcock » The Morning Brew #239
In the previous post, we talked about some of the basics of functional programming unit testing. That
Would you mind posting all of the code in runnable state ?
At the moment you have an error in :
HUnitExample.hs:7:18: Not in scope: `l'
and :
HUnitTests.hs:16:4: Not in scope: `handleJust'
HUnitTests.hs:16:15: Not in scope: `errorCalls'
HUnitTests.hs:18:8: Not in scope: `evaluate'
ususally people use literate Haskell to post code, others can copy and run
You need to make sure you have the following imports:
import Control.Exception
import Test.HUnit
It should build and run from there. Sorry about the confusion with the imports.
Matt
In the previous post, we talked about using QuickCheck as opposed to traditional xUnit framework testing
In our previous installment, we talked about bringing together the traditional xUnit tests and QuickCheck | http://codebetter.com/blogs/matthew.podwysocki/archive/2008/12/05/functional-programming-unit-testing.aspx | crawl-002 | refinedweb | 1,114 | 73.07 |
Description
The RSS feed does not validate.
Example
Details
This wiki.
Workaround
Discussion
The feed does work with my RSS reader, but may be broken with others. Its also possible that the validator is broken
- I don't really have much of a clue about RSS, but it seems that the namespace declaration
xmlns:wiki=""
is not really valid for the feed validator (at least there's no such URL). Maybe it should rather point to some existing location at moinmoin.wikiwikiweb.de (but then should rather be namespace "moin")? I had a brief look at a valid WikiPedia feed using XML Spy and this looks rather different (uses RSS 2.0). So maybe redesign of the RSS output should be put on the long-term to-do list (hopefully also get rid of the strange need to install PyXML just to cope with the xmlns issue?).
PyXML is required because Python built in xml is broken. It will be a good idea to check again the status with 2.4.2, and report the bug to the Python bug system.
Plan
Validate that it is really broken, not a false alarm.
- Priority:
- Assigned to: someone with a clue about RSS
- Status: | http://www.moinmo.in/MoinMoinBugs/RSS%20does%20not%20validate | crawl-003 | refinedweb | 202 | 73.37 |
11457/can-a-hyperledger-composer-work-without-fabric-installed
The peers communicate among them through the ...READ MORE
Yes, it is possible. Peers from different ...READ MORE
You can use Composer with the Bluemix ...READ MORE
New nodes doesn't depend on a single ...READ MORE
Summary: Both should provide similar reliability of ...READ MORE
To read and add data you can ...READ MORE
This will solve your problem
import org.apache.commons.codec.binary.Hex;
Transaction txn ...READ MORE
There are three ways you can do ...READ MORE
To deploy your network on multiple nodes, ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/11457/can-a-hyperledger-composer-work-without-fabric-installed | CC-MAIN-2022-40 | refinedweb | 122 | 62.54 |
Hi, On Tue, 8 Jul 2008, Anthony Liguori wrote: > Johannes Schindelin wrote: > > This still only supports the client side, and only the TCP version of > > it, since Windows does not have Unix sockets. > > > > Signed-off-by: Johannes Schindelin <address@hidden> > > --- > > > > This is only compile-tested, since I can only work in an emulated > > environment. > > > > Oh, and feel free to reorder nbd.h so that it has only one #ifndef..#endif. > > > > If I find some time next week, I might try to actually compile qemu-nbd and > > get it to run on Windows. > > > > Makefile | 1 + > > block-nbd.c | 11 ++++++++++- > > nbd.c | 36 +++++++++++++++++++++++++++++++++++- > > nbd.h | 6 ++++++ > > 4 files changed, 52 insertions(+), 2 deletions(-) > > > > diff --git a/Makefile b/Makefile > > index adb36c6..ef55952 100644 > > --- a/Makefile > > +++ b/Makefile > > @@ -70,6 +70,7 @@ endif > > > > ifdef CONFIG_WIN32 > > OBJS+=tap-win32.o > > +LIBS+= -lws2_32 > > endif > > > > AUDIO_OBJS = audio.o noaudio.o wavaudio.o mixeng.o > > diff --git a/block-nbd.c b/block-nbd.c > > index f350050..a2adbde 100644 > > --- a/block-nbd.c > > +++ b/block-nbd.c > > @@ -31,11 +31,17 @@ > > > > #include <sys/types.h> > > #include <unistd.h> > > +#ifdef _WIN32 > > +#include <windows.h> > > +#include <winsock2.h> > > +#include <ws2tcpip.h> > > +#else > > #include <sys/socket.h> > > #include <sys/un.h> > > #include <netinet/in.h> > > #include <arpa/inet.h> > > #include <pthread.h> > > +#endif > > > > qemu_socket.h already does this for you. Thanks! That was actually very useful advice! Note that I will likely have to do different things until later this week, but then I will definitely take this advice into account, and try to get qemu-nbd going (tcp-only) on Windows. Ciao, Dscho | http://lists.gnu.org/archive/html/qemu-devel/2008-07/msg00194.html | CC-MAIN-2015-22 | refinedweb | 264 | 72.42 |
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Tuesday 10 September 2002 08:04, David Abrahams wrote: > << > From: "Peter Bienstman" <Peter.Bienstman at rug.ac.be> > > Small question: is there a trick to pollute the global namespace with the > constants, such that one can write just 'red' in Python, rather than > 'choice.red'? > > I know it's less clean, but it's easier for the user and ties in better > with > Python's weak typing. > > > It doesn't tie in well with Python's "strong namespaceing". > There is no global namespace in Python. Do you mean that you'd like the > constants to be in the enclosing scope? Yep, point taken, that's what I mean. For backwards compatibility and ease of use, I'd like my users to be able to write: set_polarisation(TE) rather than set_polarisation(polarisation.TE) Is this possible? I'm perfectly fine with being burnt on the stake for being a namespace heretic ;-) Cheers, Peter -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.7 (GNU/Linux) iD8DBQE9fkfA4dgPAIjyquoRAuZBAKDyiOdbn6QLtJEBf1M1znLOe2AGugCfcOdC cWUZu8uT+Yr/uhSkLg0T6XY= =6vnf -----END PGP SIGNATURE----- | https://mail.python.org/pipermail/cplusplus-sig/2002-September/001728.html | CC-MAIN-2018-05 | refinedweb | 177 | 66.84 |
Hi All,
I know many threads are there for this query, but I am not able to find the exact solution to my problem.
I want to pull the files which starts with A5_Work_Orders_* from SFTP server and I want to pull them from /5060/A5toSAP folder on SFTP server.
In the directory I have given as ~/5060/A5SAP/
in the filename what details I should give so that it will pick all the files that starts with A5????
Also I have ticket the ASMA attributes and I am using the namespace - and I have ticked the filename parameter.
Now my another query is I need to archive the files on PI server, so I have ticked the option and I want to archive the file as it is (with the same name)
so I have added only the archive name - /A5/inbound/5060/A5toSAP/archive directoy. Do I need to enter any other details?
Please let me know on above points.
Thanks in advance. | https://answers.sap.com/questions/10561582/sftp-sender-channel-configuration-query.html | CC-MAIN-2021-21 | refinedweb | 165 | 76.66 |
Issues with drawing in a QGraphicsScene
I'm working on an animation application right now and I have this cursor that is represented by a QGraphics Object. I'm doing it in PyQt4, but the Qt code is pretty much 1:1 with the C++ version. Here is the code for drawing it:
@
def init(self, scene=None, parent=None):
# ...
points = [
QPoint(0, 0),
QPoint(10, 0),
QPoint(10, 2),
QPoint(5, 7),
QPoint(0, 2)
]
self.cursorPoly = QPolygonF(points)
@
@ def paint(self, painter, option, widget=None):
# Draw the cursor
painter.setPen(QPen(Qt.black))
painter.setBrush(QBrush(Qt.white))
painter.drawPolygon(self.cursorPoly)
# Draw the red line painter.setPen(QPen(Qt.red)) painter.setBrush(QBrush(Qt.NoBrush)) painter.drawLine(QPoint(5, 7), QPoint(5, FrameMark.MARK_HEIGHT))
@
When I have a QGraphicsView render the object in a scale of (1,1), it looks like this:
!!
This is what it looks like when the QGraphicsView is in a scale of (2,2):
!!
This is what I'm trying to achive:
!!
I'd prefer to do this without loading up the later image, though that is certainly possible. I also don't have any render hints set for the painter for the QGraphicsView.
- Asperamanca
If I understood you correctly, you want an item that ignores the scaling factor? Try setFlag(QGraphicsItem::ItemIgnoresTransformations) and see if that does what you need.
I should have clarified. The first and third images are completely different. The second is what I want it to look like, except that it's scalled up. The shapes of the while parts are different.
I think it has something to do with the painting coordniate system.
I added the line:
@painter.transte(QPointF(0.5, 0.5))@
and now it looks how I want it. I find this very odd. | https://forum.qt.io/topic/44343/issues-with-drawing-in-a-qgraphicsscene | CC-MAIN-2018-39 | refinedweb | 300 | 68.47 |
- `nafill()` now applies `fill=` to the front/back of the vector when `type="locf|nocb"`, #3594. Thanks to @ben519 for the feature request. It also now returns a named object based on the input names. Note that if you are considering joining and then using `nafill(..., type='locf|nocb')` afterwards, please review `roll=`/`rollends=` which should achieve the same result in one step more efficiently. `nafill()` is for when filling-while-joining (i.e. `roll=`/`rollends=`/`nomatch=`) cannot be applied.
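As a minimal sketch of how `fill=` interacts with `type="locf"` (illustrative data only):

```r
library(data.table)
x = c(NA, 1L, NA, 3L, NA)
nafill(x, type="locf")           # leading NA has nothing to carry forward
# [1] NA  1  1  3  3
nafill(x, type="locf", fill=0L)  # fill= now patches the front
# [1] 0 1 1 3 3
```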
- `mean(na.rm=TRUE)` by group is now GForce optimized, #4849. Thanks to the h2oai/db-benchmark project for spotting this issue. The 1 billion row example in the issue shows 48s reduced to 14s. The optimization also applies to type `integer64` resulting in a difference to the `bit64::mean.integer64` method: `data.table` returns a `double` result whereas `bit64` rounds the mean to the nearest integer.
- `fwrite()` now writes UTF-8 or native csv files by specifying the `encoding=` argument, #1770. Thanks to @shrektan for the request and the PR.
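For example (the file path is hypothetical; `"UTF-8"` and `"native"` are the two encodings the item refers to):

```r
library(data.table)
DT = data.table(city = c("Zürich", "São Paulo"))
# write UTF-8 output regardless of the session's native locale
fwrite(DT, "cities.csv", encoding = "UTF-8")
```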
- `data.table()` no longer fills empty vectors with `NA` with warning. Instead a 0-row `data.table` is returned, #3727. Since `data.table()` is used internally by `.()`, this brings the following examples in line with expectations in most cases. Thanks to @shrektan for the suggestion and PR.

```r
DT = data.table(A=1:3, B=letters[1:3])
DT[A>3, .(ITEM='A>3', A, B)]   # (1)
DT[A>3][, .(ITEM='A>3', A, B)] # (2)
# the above are now equivalent as expected and return:
Empty data.table (0 rows and 3 cols): ITEM,A,B
# Previously, (2) returned :
     ITEM     A      B
   <char> <int> <char>
1:    A>3    NA   <NA>
Warning messages:
1: In as.data.table.list(jval, .named = NULL) :
  Item 2 has 0 rows but longest item has 1; filled with NA
2: In as.data.table.list(jval, .named = NULL) :
  Item 3 has 0 rows but longest item has 1; filled with NA
```
- `%like%` on factors with a large number of levels is now faster, #4748. The example in the PR shows 2.37s reduced to 0.86s on a factor of length 100 million containing 1 million unique 10-character strings. Thanks to @statquant for reporting, and @shrektan for implementing.
- `keyby=` now accepts `TRUE`/`FALSE` together with `by=`, #4307. The primary motivation is benchmarking where `by=` vs `keyby=` is varied across a set of queries. Thanks to Jan Gorecki for the request and the PR.
- `fwrite()` gains a new `datatable.fwrite.sep` option to change the default separator, still `","` by default. Thanks to Tony Fischetti for the PR. As is good practice in R in general, we usually resist new global options for the reason that a user changing the option for their own code can inadvertently change the behaviour of any package using `data.table` too. However, in this case, the global option affects file output rather than code behaviour. In fact, the very reason the user may wish to change the default separator is that they know a different separator is more appropriate for their data being passed to the package using `fwrite` but cannot otherwise change the `fwrite` call within that package.
- `melt()` now supports `NA` entries when specifying a list of `measure.vars`, which translate into runs of missing values in the output. Useful for melting wide data with some missing columns, #4027. Thanks to @vspinu for reporting, and @tdhock for implementing.
- `melt()` now supports multiple output variable columns via the `variable_table` attribute of `measure.vars`, #3396 #2575 #2551 #4998. It should be a `data.table` with one row that describes each element of the `measure.vars` vector(s). These data/columns are copied to the output instead of the usual variable column. This is backwards compatible since the previous behavior (one output variable column) is used when there is no `variable_table`. New functions `measure()` and `measurev()` use either a separator or a regex to create a `measure.vars` list/vector with a `variable_table` attribute; useful for melting data that has several distinct pieces of information encoded in each column name. See the new `?measure` and the new section in the reshape vignette. Thanks to Matthias Gomolka, Ananda Mahto, Hugh Parsonage, and Mark Fairbanks for reporting, and to Toby Dylan Hocking for implementing. Thanks to @keatingw for testing before release and requesting that `measure()` accept single groups too #5065, and Toby for implementing.
- A new interface for programming on `data.table` has been added, closing #2655 and many other linked issues. It is built using base R's `substitute`-like interface via a new `env` argument to `[.data.table`. For details see the new vignette *programming on data.table*, and the new `?substitute2` manual page. Thanks to numerous users for filing requests, and Jan Gorecki for implementing.

```r
DT = data.table(x = 1:5, y = 5:1)

# parameters
in_col_name = "x"
fun = "sum"
fun_arg1 = "na.rm"
fun_arg1val = TRUE
out_col_name = "sum_x"

# parameterized query
#DT[, .(out_col_name = fun(in_col_name, fun_arg1=fun_arg1val))]

# desired query
DT[, .(sum_x = sum(x, na.rm=TRUE))]

# new interface
DT[, .(out_col_name = fun(in_col_name, fun_arg1=fun_arg1val)),
   env = list(
     in_col_name = "x",
     fun = "sum",
     fun_arg1 = "na.rm",
     fun_arg1val = TRUE,
     out_col_name = "sum_x"
   )]
```
- `DT[, if (...) .(a=1L) else .(a=1L, b=2L), by=group]` now returns a 1-column result with warning `j may not evaluate to the same number of columns for each group`, rather than error `'names' attribute [2] must be the same length as the vector`, #4274. Thanks to @robitalec for reporting, and Michael Chirico for the PR.
- Typo checking in `i`, available since 1.11.4, is extended to work in non-English sessions, #4989. Thanks to Michael Chirico for the PR.
- `fifelse()` now coerces logical `NA` to other types, and the `na` argument supports vectorized input, #4277 #4286 #4287. Thanks to @michaelchirico and @shrektan for reporting, and @shrektan for implementing.
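A small sketch of both behaviours (illustrative values; the vectorized `na=` is assumed to recycle per-element as the linked issues describe):

```r
library(data.table)
x = c(TRUE, FALSE, NA)
fifelse(x, 1L, NA)             # logical NA coerced to NA_integer_
# [1]  1 NA NA
fifelse(x, 1L, 0L, na = -(1:3))  # na= may now be a vector
# [1]  1  0 -3
```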
- `.datatable.aware` is now recognized in the calling environment in addition to the namespace of the calling package, dtplyr#184. Thanks to Hadley Wickham for the idea and PR.
- New convenience function `%plike%` maps to `like(..., perl=TRUE)`, #3702. `%plike%` uses Perl-compatible regular expressions (PCRE) which extend TRE, and may be more efficient in some cases. Thanks @KyleHaynes for the suggestion and PR.
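For instance, PCRE features such as lookarounds, which TRE lacks, become available (illustrative data):

```r
library(data.table)
x = c("foo1", "bar2")
x %plike% "(?<=foo)\\d"  # lookbehind: a digit preceded by "foo"
# [1]  TRUE FALSE
```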
- `fwrite()` now accepts `sep=""`, #4817. The motivation is an example where the result of `paste0()` needs to be written to file but `paste0()` takes 40 minutes due to constructing a very large number of unique long strings in R's global character cache. Allowing `fwrite(, sep="")` avoids the `paste0` and saves 40 mins. Thanks to Jan Gorecki for the request, and Ben Schwen for the PR.
- `data.table` printing now supports customizable methods for both columns and list column row items, part of #1523. `format_col` is S3-generic for customizing how to print whole columns and by default defers to the S3 `format` method for the column's class if one exists; e.g. `format.sfc` for geometry columns from the `sf` package, #2273. Similarly, `format_list_item` is S3-generic for customizing how to print each row of list columns (which lack a `format` method at a column level) and also by default defers to the S3 `format` method for that item's class if one exists. Thanks to @mllg who initially filed #3338 with the seed of the idea, @franknarf1 who earlier suggested the idea of providing custom formatters, @fparages who submitted a patch to improve the printing of timezones for #2842, @RichardRedding for pointing out an error relating to printing wide `expression` columns in #3011, @JoshOBrien for improving the output for geometry columns, and @MichaelChirico for implementing. See `?print.data.table` for examples.
- `tstrsplit(, type.convert=)` now accepts a named list of functions to apply to each part, #5094. Thanks to @Kamgang-B for the request and implementing.
- `as.data.table(DF, keep.rownames=key='keyCol')` now works, #4468. Thanks to Michael Chirico for the idea and the PR.
- `dcast()` now supports complex values in `value.var`, #4855. This extends earlier support for complex values in `formula`. Thanks Elio Campitelli for the request, and Michael Chirico for the PR.
- `melt()` was pseudo generic in that `melt(DT)` would dispatch to the `melt.data.table` method but `melt(not-DT)` would explicitly redirect to `reshape2`. Now `melt()` is a standard generic so that methods can be developed in other packages, #4864. Thanks to @odelmarcelle for suggesting and implementing.
- `DT(i, j, by, ...)` has been added, i.e. the functional form of a `data.table` query, #641 #4872. Thanks to Yike Lu and Elio Campitelli for filing requests, many others for comments and suggestions, and Matt Dowle for the PR. This enables the `data.table` general form query to be invoked on a `data.frame` without converting it to a `data.table` first. The class of the input object is retained. Thanks to Mark Fairbanks and Boniface Kamgang for testing and reporting problems that have been fixed before release, #5106 #5107.
- When `data.table` queries (either `[...]` or the functional form `|> DT(...)`) receive a `data.table`, the operations maintain `data.table`'s attributes such as its key and any indices. For example, if a `data.table` is reordered by `data.table`, or a key column has a value changed by `:=` in `data.table`, its key and indices will either be dropped or reordered appropriately. Some `data.table` operations automatically add and store an index on a `data.table` for reuse in future queries, if `options(datatable.auto.index=TRUE)`, which is `TRUE` by default. `data.table`s are also over-allocated, which means there are spare column pointer slots allocated in advance so that a `data.table` in the `.GlobalEnv` can have a column added to it truly by reference, like an in-memory database with multiple client sessions connecting to one server R process, as a `data.table` video has shown in the past. But because R and other packages don't maintain `data.table`'s attributes or over-allocation (e.g. a subset or reorder by R or another package will create invalid `data.table` attributes), `data.table` cannot use these attributes when it detects that base R or another package has touched the `data.table` in the meantime, even if the attributes may sometimes still be valid. So, please realize that `DT()` on a `data.table` should realize better speed and memory usage than `DT()` on a `data.frame`. `DT()` on a `data.frame` may still be useful to use `data.table`'s syntax (e.g. sub-queries within group: `|> DT(i, .SD[sub-query], by=grp)`) without needing to convert to a `data.table` first.
- `DT[i, nomatch=NULL]` where `i` contains row numbers now excludes `NA` and any outside the range [1,nrow], #3109 #3666. Before, `NA` rows were returned always for such values; i.e. `nomatch=0|NULL` was ignored. Thanks Michel Lang and Hadley Wickham for the requests, and Jan Gorecki for the PR. Using `nomatch=0` in this case when `i` is row numbers generates the warning `Please use nomatch=NULL instead of nomatch=0; see news item 5 in v1.12.0 (Jan 2019)`.

```r
DT = data.table(A=1:3)
DT[c(1L, NA, 3L, 5L)]                # default nomatch=NA
#        A
#    <int>
# 1:     1
# 2:    NA
# 3:     3
# 4:    NA
DT[c(1L, NA, 3L, 5L), nomatch=NULL]
#        A
#    <int>
# 1:     1
# 2:     3
```
- `DT[, head(.SD,n), by=grp]` and `tail` are now optimized when `n>1`, #5060 #523. `n==1` was already optimized. Thanks to Jan Gorecki and Michael Young for requesting, and Benjamin Schwendinger for the PR.
- `setcolorder()` gains `before=` and `after=`, #4385. Thanks to Matthias Gomolka for the request, and both Benjamin Schwendinger and Xianghui Dong for implementing.
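A quick sketch of the new arguments (assuming, as the names suggest, that they accept a column to anchor the move against):

```r
library(data.table)
DT = data.table(a=1, b=2, c=3)
setcolorder(DT, "c", before="b")  # move column c in front of b, by reference
names(DT)
# [1] "a" "c" "b"
```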
- `base::droplevels()` gains a fast method for `data.table`, #647. Thanks to Steve Lianoglou for requesting, Boniface Kamgang and Martin Binder for testing, and Jan Gorecki and Benjamin Schwendinger for the PR. `fdroplevels()` for use on vectors has also been added.
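A minimal sketch, assuming `fdroplevels()` mirrors `base::droplevels()` semantics on a factor vector:

```r
library(data.table)
f = factor(c("a", "b", "c"))[1:2]  # level "c" is now unused
levels(fdroplevels(f))
# [1] "a" "b"
```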
- `shift()` now also supports `type="cyclic"`, #4451. Arguments that are normally pushed out by `type="lag"` or `type="lead"` are instead re-introduced at the first/last positions. Thanks to @RicoDiel for requesting, and Benjamin Schwendinger for the PR.

```r
# Usage
shift(1:5, n=-1:1, type="cyclic")
# [[1]]
# [1] 2 3 4 5 1
#
# [[2]]
# [1] 1 2 3 4 5
#
# [[3]]
# [1] 5 1 2 3 4

# Benchmark
x = sample(1e9) # 3.7 GB
microbenchmark::microbenchmark(
  shift(x, 1, type="cyclic"),
  c(tail(x, 1), head(x, -1)),
  times = 10L,
  unit = "s"
)
# Unit: seconds
#                          expr  min   lq mean median   uq  max neval
#  shift(x, 1, type = "cyclic") 1.57 1.67 1.71   1.68 1.70 2.03    10
#    c(tail(x, 1), head(x, -1)) 6.96 7.16 7.49   7.32 7.64 8.60    10
```
- `fread()` now supports `"0"` and `"1"` in `na.strings`, #2927. Previously this was not permitted since `"0"` and `"1"` can be recognized as boolean values. Note that it is still not permitted to use `"0"` and `"1"` in `na.strings` in combination with `logical01=TRUE`. Thanks to @msgoussi for the request, and Benjamin Schwendinger for the PR.
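For example (illustrative input):

```r
library(data.table)
fread("a,b\n0,7\n2,0\n", na.strings="0")
# column a becomes c(NA, 2L) and column b becomes c(7L, NA)
```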
- `setkey()` now supports type `raw` as value columns (not as key columns), #5100. Thanks Hugh Parsonage for requesting, and Benjamin Schwendinger for the PR.
- `shift()` is now optimised by group, #1534. Thanks to Gerhard Nachtmann for requesting, and Benjamin Schwendinger for the PR.

```r
N = 1e7
DT = data.table(x=sample(N), y=sample(1e6, N, TRUE))
shift_no_opt = shift # different name not optimised as a way to compare
microbenchmark(
  DT[, c(NA, head(x,-1)), y],
  DT[, shift_no_opt(x, 1, type="lag"), y],
  DT[, shift(x, 1, type="lag"), y],
  times=10L, unit="s")
# Unit: seconds
#                                       expr     min      lq    mean  median      uq     max neval
#                DT[, c(NA, head(x, -1)), y]  8.7620  9.0240  9.1870  9.2800  9.3700  9.4110    10
#  DT[, shift_no_opt(x, 1, type = "lag"), y] 20.5500 20.9000 21.1600 21.3200 21.4400 21.5200    10
#         DT[, shift(x, 1, type = "lag"), y]  0.4865  0.5238  0.5463  0.5446  0.5725  0.5982    10
```

Example from stackoverflow:

```r
set.seed(1)
mg = data.table(expand.grid(year=2012:2016, id=1:1000), value=rnorm(5000))
microbenchmark(v1.9.4  = mg[, c(value[-1], NA), by=id],
               v1.9.6  = mg[, shift_no_opt(value, n=1, type="lead"), by=id],
               v1.14.4 = mg[, shift(value, n=1, type="lead"), by=id],
               unit="ms")
# Unit: milliseconds
#     expr     min      lq    mean  median      uq    max neval
#   v1.9.4  3.6600  3.8250  4.4930  4.1720  4.9490 11.700   100
#   v1.9.6 18.5400 19.1800 21.5100 20.6900 23.4200 29.040   100
#  v1.14.4  0.4826  0.5586  0.6586  0.6329  0.7348  1.318   100
```
- `rbind()` and `rbindlist()` now support `fill=TRUE` with `use.names=FALSE` instead of issuing the warning `use.names= cannot be FALSE when fill is TRUE. Setting use.names=TRUE.`

```r
DT1
#        A     B
#    <int> <int>
# 1:     1     5
# 2:     2     6
DT2
#      foo
#    <int>
# 1:     3
# 2:     4
rbind(DT1, DT2, fill=TRUE) # no change
#        A     B   foo
#    <int> <int> <int>
# 1:     1     5    NA
# 2:     2     6    NA
# 3:    NA    NA     3
# 4:    NA    NA     4
rbind(DT1, DT2, fill=TRUE, use.names=FALSE)
# was:
#        A     B   foo
#    <int> <int> <int>
# 1:     1     5    NA
# 2:     2     6    NA
# 3:    NA    NA     3
# 4:    NA    NA     4
# Warning message:
# In rbindlist(l, use.names, fill, idcol) :
#   use.names= cannot be FALSE when fill is TRUE. Setting use.names=TRUE.
# now:
#        A     B
#    <int> <int>
# 1:     1     5
# 2:     2     6
# 3:     3    NA
# 4:     4    NA
```
- `fread()` already made a good guess as to whether column names are present by comparing the type of the fields in row 1 to the type of the fields in the sample. This guess is now improved when a column contains a string in row 1 (i.e. a potential column name) but all blanks in the sample rows, #2526. Thanks @st-pasha for reporting, and @ben-schwen for the PR.
- `fread()` can now read `.zip` and `.tar` files directly, #3834. Moreover, if a compressed file name is missing its extension, `fread()` now attempts to infer the correct filetype from its magic bytes. Thanks to Michael Chirico for the idea, and Benjamin Schwendinger for the PR.
- `DT[, let(...)]` is a new alias for the functional form of `:=`; i.e. `DT[, ':='(...)]`, #3795. Thanks to Elio Campitelli for requesting, and Benjamin Schwendinger for the PR.

```r
DT = data.table(A=1:2)
DT[, let(B=3:4, C=letters[1:2])]
DT
#        A     B      C
#    <int> <int> <char>
# 1:     1     3      a
# 2:     2     4      b
```
- `weighted.mean()` is now optimised by group, #3977. Thanks to @renkun-ken for requesting, and Benjamin Schwendinger for the PR.
- `as.xts.data.table()` now supports non-numeric xts coredata matrices, #5268. Existing numeric-only functionality is supported by a new `numeric.only` parameter, which defaults to `TRUE` for backward compatibility and the most common use case. To convert non-numeric columns, set this parameter to `FALSE`. Conversions of `data.table` columns to a `matrix` now use `data.table::as.matrix`, with all its performance benefits. Thanks to @ethanbsmith for the report and fix.
- `unique.data.table()` gains `cols` to specify a subset of columns to include in the resulting `data.table`, #5243. This saves the memory overhead of subsetting unneeded columns, and provides a cleaner API for a common operation that previously needed more convoluted code. Thanks to @MichaelChirico for the suggestion & implementation.
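A sketch of the intent (assuming `cols=` names the columns kept in the result of the de-duplication by `by=`):

```r
library(data.table)
DT = data.table(id=c(1L, 1L, 2L), grp=c("x", "x", "y"), val=1:3)
# de-duplicate on id but only materialize the columns needed downstream
unique(DT, by="id", cols="val")
```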
- `:=` is now optimized by group, #1414. Thanks to Arun Srinivasan for suggesting, and Benjamin Schwendinger for the PR. Thanks to @clerousset, @dcaseykc, @OfekShilon, and @SeanShao98 for testing dev and filing detailed bug reports which were fixed before release and their tests added to the test suite.
- `.I` is now available in `by` for rowwise operations, #1732. Thanks to Rafael H. M. Pereira for requesting, and Benjamin Schwendinger for the PR.
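A sketch of the rowwise pattern this enables (hypothetical data; grouping by `.I` makes each row its own group):

```r
library(data.table)
DT = data.table(v1=1:3, v2=4:6)
DT[, .(row_max = max(unlist(.SD))), by=.I]
# one result row per input row; row_max is 4, 5, 6
```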
- New functions `yearmon()` and `yearqtr()` give a combined representation of `year()` and `month()`/`quarter()`. These and also `yday`, `wday`, `mday`, `week`, `month` and `year` are now optimized for memory and compute efficiency by removing the `POSIXlt` dependency, #649. Thanks to Matt Dowle for the request, and Benjamin Schwendinger for the PR.
- `by=.EACHI` when `i` is keyed but `on=` different columns than `i`'s key could create an invalidly keyed result, #4603 #4911. Thanks to @myoung3 and @adamaltmejd for reporting, and @ColeMiller1 for the PR. An invalid key is where a `data.table` is marked as sorted by the key columns but the data is not sorted by those columns, leading to incorrect results from subsequent queries.
- `print(DT, trunc.cols=TRUE)` and the corresponding `datatable.print.trunc.cols` option (new feature 3 in v1.13.0) could incorrectly display an extra column, #4266. Thanks to @tdhock for the bug report and @MichaelChirico for the PR.
- `fread(..., nrows=0L)` now works as intended and the same as `nrows=0`; i.e. returning the column names and typed empty columns determined by the large sample, #4686 #4029. Thanks to @hongyuanjia and @michaelpaulhirsch for reporting, and Benjamin Schwendinger for the PR.
- Passing `.SD` to `frankv()` with `ties.method='random'` or with `na.last=NA` failed with `.SD is locked`, #4429. Thanks @smarches for the report.
- Filtering a data.table using `which=NA` to return non-matching indices now works properly for non-optimized subsetting as well, closes #4411.
- When `j` returns an object whose class `"X"` inherits from `data.table`, i.e. class `c("X", "data.table", "data.frame")`, the derived class `"X"` is no longer incorrectly dropped from the class of the `data.table` returned, #4324. Thanks to @HJAllen for reporting and @shrektan for the PR.
- `as.data.table()` failed with `.subset2(x, i, exact = exact): attempt to select less than one element in get1index` when passed an object inheriting from `data.table` with a different `[[` method, such as the class `dfidx` from the `dfidx` package, #4526. Thanks @RicoDiel for the report, and Michael Chirico for the PR.
- `rbind()` and `rbindlist()` of length-0 ordered factors failed with `Internal error: savetl_init checks failed`, #4795 #4823. Thanks to @shrektan and @dbart79 for reporting, and @shrektan for fixing.
- `data.table(NULL)[, firstCol:=1L]` created `data.table(firstCol=1L)` ok but did not update the internal `row.names` attribute, causing `Error in '$<-.data.frame'(x, name, value) : replacement has 1 row, data has 0` when passed to packages like `ggplot` which use `DT` as if it is a `data.frame`, #4597. Thanks to Matthew Son for reporting, and Cole Miller for the PR.
- `X[Y, .SD, by=]` (joining and grouping in the same query) could segfault if i) `by=` is supplied custom data (i.e. not simple expressions of columns), and ii) some rows of `Y` do not match to any rows in `X`, #4892. Thanks to @Kodiologist for reporting, @ColeMiller1 for investigating, and @tlapak for the PR.
- Assigning a set of 2 or more all-`NA` values to a factor column could segfault, #4824. Thanks to @clerousset for reporting and @shrektan for fixing.
- `as.data.table(table(NULL))` now returns `data.table(NULL)` rather than error `attempt to set an attribute on NULL`, #4179. The result differs slightly from `as.data.frame(table(NULL))` (0-row, 1-column) because 0-column works better with other `data.table` functions like `rbindlist()`. Thanks to Michael Chirico for the report and fix.
- `melt()` with a list for `measure.vars` would output `variable` inconsistently between `na.rm=TRUE` and `FALSE`, #4455. Thanks to @tdhock for reporting and fixing.
- `by=...get()...` could fail with `object not found`, #4873 #4981. Thanks to @sindribaldur for reporting, and @OfekShilon for fixing.
- `print(x, col.names='none')` now removes the column names as intended for wide `data.table`s whose column names don't fit on a single line, #4270. Thanks to @tdhock for the report, and Michael Chirico for fixing.
- `DT[, min(colB), by=colA]` when `colB` is type `character` would miss blank strings (`""`) at the beginning of a group and return the smallest non-blank instead of blank, #4848. Thanks to Vadim Khotilovich for reporting and for the PR fixing it.
- Assigning a wrong-length or non-list vector to a list column could segfault, #4166 #4667 #4678 #4729. Thanks to @fklirono, Kun Ren, @kevinvzandvoort and @peterlittlejohn for reporting, and to Václav Tlapák for the PR.
- `as.data.table()` on `xts` objects containing a column named `x` would return an `index` of type plain `integer` rather than `POSIXct`, #4897. Thanks to Emil Sjørup for reporting, and Jan Gorecki for the PR.
- A fix to `as.Date(c("", ...))` in R 4.0.3, 17909, has been backported to `data.table::as.IDate()` so that it too now returns `NA` for the first item when it is blank, even in older versions of R back to 3.1.0, rather than the incorrect error `character string is not in a standard unambiguous format`, #4676. Thanks to Arun Srinivasan for reporting, and Michael Chirico both for the `data.table` PR and for submitting the patch to R that was accepted and included in R 4.0.3.
- `uniqueN(DT, by=character())` is now equivalent to `uniqueN(DT)` rather than internal error `'by' is either not integer or is length 0`, #4594. Thanks Marco Colombo for the report, and Michael Chirico for the PR. Similarly for `unique()`, `duplicated()` and `anyDuplicated()`.
- `melt()` on a `data.table` with `list` columns for `measure.vars` would silently ignore `na.rm=TRUE`, #5044. Now the same logic as `is.na()` from base R is used; i.e. if a list element is scalar `NA` then it is considered missing and removed. Thanks to Toby Dylan Hocking for the PRs.
- `fread(fill=TRUE)` could segfault if the input contained an improperly quoted character field, #4774 #5041. Thanks to @AndeolEvain and @e-nascimento for reporting and to Václav Tlapák for the PR.
- `fread(fill=TRUE, verbose=TRUE)` would segfault on the out-of-sample type bump verbose output if the input did not contain column names, #5046. Thanks to Václav Tlapák for the PR.
- `.SDcols=-V2:-V1` and `.SDcols=(-1)` could error with `xcolAns does not pass checks` and `argument specifying columns specify non existing column(s)`, #4231. Thanks to Jan Gorecki for reporting and the PR.
- `.SDcols=<logical vector>` is now documented in `?data.table` and it is now an error if the logical vector's length is not equal to the number of columns (consistent with `data.table`'s no-recycling policy; see new feature 1 in v1.12.2, Apr 2019), #4115. Thanks to @Henrik-P for reporting and Jan Gorecki for the PR.
- `melt()` now outputs scalar logical `NA` instead of `NULL` in rows corresponding to missing list columns, for consistency with non-list columns when using `na.rm=TRUE`, #5053. Thanks to Toby Dylan Hocking for the PR.
- `as.data.frame(DT)`, `setDF(DT)` and `as.list(DT)` now remove the `"index"` attribute which contains any indices (a.k.a. secondary keys), as they already did for other `data.table`-only attributes such as the primary key stored in the `"sorted"` attribute. When indices were left intact, a subsequent subset, assign, or reorder of the `data.frame` by `data.frame`-code in base R or other packages would not update the indices, causing incorrect results if then converted back to `data.table`, #4889. Thanks @OfekShilon for the report and the PR.
- `dplyr::arrange(DT)` uses `vctrs::vec_slice` which retains `data.table`'s class but uses C to bypass `data.table`'s `[` method dispatch and does not adjust `data.table`'s attributes containing the index row numbers, #5042. `data.table`'s long-standing `.internal.selfref` mechanism to detect such operations by other packages was not being checked by `data.table` when using indexes, causing `data.table` filters and joins to use invalid indexes and return incorrect results after a `dplyr::arrange(DT)`. Thanks to @Waldi73 for reporting; @avimallu, @tlapak, @MichaelChirico, @jangorecki and @hadley for investigating and suggestions; and @mattdowle for the PR. The intended way to use `data.table` is `data.table::setkey(DT, col1, col2, ...)` which reorders `DT` by reference in parallel, sets the primary key for automatic use by subsequent `data.table` queries, and permits rowname-like usage such as `DT["foo",]` which returns the now-contiguous-in-memory block of rows where the first column of `DT`'s key contains `"foo"`. Multi-column rownames (i.e. a primary key of more than one column) can be looked up using `DT[.("foo", 20210728L), ]`. Using `==` in `i` is also optimized to use the key or indices, if you prefer using column names explicitly. An alternative to `setkey(DT)` is returning a new ordered result using `DT[order(col1, col2, ...), ]`.
- A segfault occurred when `nrow/throttle < nthread`, #5077. With the default throttle of 1024 rows (see `?setDTthreads`), at least 64 threads would be needed to trigger the segfault since there needed to be more than 65,535 rows too. It occurred on a server with 256 logical cores where `data.table` uses 128 threads by default. Thanks to Bennet Becker for reporting, debugging at C level, and fixing. It also occurred when the throttle was increased so as to use fewer threads; e.g. at the limit `setDTthreads(throttle=nrow(DT))`.
- `fread(file=URL)` now works rather than error `does not exist or is non-readable`, #4952. `fread(URL)` and `fread(input=URL)` worked before and continue to work. Thanks to @pnacht for reporting and @ben-schwen for the PR.
- `fwrite(DF, row.names=TRUE)` where `DF` has specific integer rownames (e.g. using `rownames(DF) <- c(10L, 20L, 30L)`) would ignore the integer rownames and write the row numbers instead, #4957. Thanks to @dgarrimar for reporting and @ColeMiller1 for the PR. Further, when `quote='auto'` (the default) and the rownames are integers (either default or specific), they are no longer quoted.
- `test.data.table()` would fail on test 1894 if the variable `z` was defined by the user, #3705. The test suite already ran in its own separate environment. That environment's parent is no longer `.GlobalEnv`, to isolate it further. Thanks to Michael Chirico for reporting, and Matt Dowle for the PR.
- `fread(text="a,b,c")` (where the input data contains no `\n` but `text=` has been used) now works instead of error `file not found: a,b,c`, #4689. Thanks to @trainormg for reporting, and @ben-schwen for the PR.
- `na.omit(DT)` did not remove `NA` in `nanotime` columns, #4744. Thanks Jean-Mathieu Vermosen for reporting, and Michael Chirico for the PR.
- `DT[, min(intCol, na.rm=TRUE), by=grp]` would return `Inf` for any groups containing all NAs, with a type change from `integer` to `numeric` to hold the `Inf`, and with a warning. Similarly `max` would return `-Inf`. Now `NA` is returned for such all-NA groups, without warning or type change. This is almost surely less surprising, more convenient, consistent, and efficient. There was no user request for this, likely because our desire to be consistent with base R in this regard was known (`base::min(x, na.rm=TRUE)` returns `Inf` with warning for all-NA input). Matt Dowle made this change when reworking internals, #5105. The old behavior seemed so bad, and since there was a warning too, it seemed appropriate to treat it as a bug.

```r
DT
#         A     B
#    <char> <int>
# 1:      a     1
# 2:      a    NA
# 3:      b     2
# 4:      b    NA
DT[, min(B, na.rm=TRUE), by=A]  # no change in behavior (no all-NA groups yet)
#         A    V1
#    <char> <int>
# 1:      a     1
# 2:      b     2
DT[3, B:=NA]  # make an all-NA group
DT
#         A     B
#    <char> <int>
# 1:      a     1
# 2:      a    NA
# 3:      b    NA
# 4:      b    NA
DT[, min(B, na.rm=TRUE), by=A]  # old result
#         A    V1
#    <char> <num>    # V1's type changed to numeric (inconsistent)
# 1:      a     1
# 2:      b   Inf    # Inf surprising
# Warning message:   # warning inconvenient
# In gmin(B, na.rm = TRUE) :
#   No non-missing values found in at least one group. Coercing to numeric
#   type and returning 'Inf' for such groups to be consistent with base
DT[, min(B, na.rm=TRUE), by=A]  # new result
#         A    V1
#    <char> <int>    # V1's type remains integer (consistent)
# 1:      a     1
# 2:      b    NA    # NA because there are no non-NA, naturally
#                    # no inconvenient warning
```
On the same basis, the `min` and `max` methods for empty `IDate` input now return `NA_integer_` of class `IDate`, rather than `NA_double_` of class `IDate` together with base R's warning `no non-missing arguments to min; returning Inf`, #2256. The type change and warning would cause an error in grouping; see the example below. Since `NA` was returned before, it seems clear that still returning `NA`, but of the correct type and with no warning, is appropriate, backwards compatible, and a bug fix. Thanks to Frank Narf for reporting, and Matt Dowle for fixing.
```r
DT
#             d      g
#        <IDat> <char>
# 1: 2020-01-01      a
# 2: 2020-01-02      a
# 3: 2019-12-31      b
DT[, min(d[d>"2020-01-01"]), by=g]
# was:
# Error in `[.data.table`(DT, , min(d[d > "2020-01-01"]), by = g) :
#   Column 1 of result for group 2 is type 'double' but expecting type
#   'integer'. Column types must be consistent for each group.
# In addition: Warning message:
# In min.default(integer(0), na.rm = FALSE) :
#   no non-missing arguments to min; returning Inf
# now :
#         g         V1
#    <char>     <IDat>
# 1:      a 2020-01-02
# 2:      b       <NA>
```
`DT[, min(int64Col), by=grp]` (and `max`) would return incorrect results for `bit64::integer64` columns, #4444. Thanks to @go-see for reporting, and Michael Chirico for the PR.
`fread(dec=',')` was able to guess `sep=','` and return an incorrect result, #4483. Thanks to Michael Chirico for reporting and fixing. It was already an error to provide both `sep=','` and `dec=','` manually.
```r
fread('A|B|C\n1|0,4|a\n2|0,5|b\n', dec=',')   # no problem
#        A     B      C
#    <int> <num> <char>
# 1:     1   0.4      a
# 2:     2   0.5      b
fread('A|B,C\n1|0,4\n2|0,5\n', dec=',')
#       A|B     C   # old result guessed sep=',' despite dec=','
#    <char> <int>
# 1:    1|0     4
# 2:    2|0     5
#        A   B,C    # now detects sep='|' correctly
#    <int> <num>
# 1:     1   0.4
# 2:     2   0.5
```
`IDateTime()` ignored the `tz=` and `format=` arguments because `...` was not passed through to submethods, #2402. Thanks to Frank Narf for reporting, and Jens Peder Meldgaard for the PR.
```r
IDateTime("20171002095500", format="%Y%m%d%H%M%S")
# was :
# Error in charToDate(x) :
#   character string is not in a standard unambiguous format
# now :
#         idate    itime
#        <IDat>  <ITime>
# 1: 2017-10-02 09:55:00
```
`DT[i, sum(b), by=grp]` (and other optimized-by-group aggregates: `mean`, `var`, `sd`, `median`, `prod`, `min`, `max`, `first`, `head` and `tail`) could segfault if `i` contained row numbers and one or more were `NA`, #1994. Thanks to Arun Srinivasan for reporting, and Benjamin Schwendinger for the PR.
`identical(fread(text="A\n0.8060667366\n")$A, 0.8060667366)` is now `TRUE`, #4461. This is one of 13 numbers in the set of 100,000 between 0.80606 and 0.80607, in 0.0000000001 increments, that were not already identical. In all 13 cases, R's parser (same as `read.table`) and `fread` straddled the true value by a very similar small amount. `fread` now uses `/10^n` rather than `*10^-n` to match R identically in all cases. Thanks to Gabe Becker for requesting consistency, and Michael Chirico for the PR.
```r
for (i in 0:99999) {
  s = sprintf("0.80606%05d", i)
  r = eval(parse(text=s))
  f = fread(text=paste0("A\n",s,"\n"))$A
  if (!identical(r, f)) cat(s, sprintf("%1.18f", c(r, f, r)), "\n")
}
# input        eval & read.table    fread before         fread now
# 0.8060603509 0.806060350899999944 0.806060350900000055 0.806060350899999944
# 0.8060614740 0.806061473999999945 0.806061474000000056 0.806061473999999945
# 0.8060623757 0.806062375699999945 0.806062375700000056 0.806062375699999945
# 0.8060629084 0.806062908399999944 0.806062908400000055 0.806062908399999944
# 0.8060632774 0.806063277399999945 0.806063277400000056 0.806063277399999945
# 0.8060638101 0.806063810099999944 0.806063810100000055 0.806063810099999944
# 0.8060647118 0.806064711799999944 0.806064711800000055 0.806064711799999944
# 0.8060658349 0.806065834899999945 0.806065834900000056 0.806065834899999945
# 0.8060667366 0.806066736599999945 0.806066736600000056 0.806066736599999945
# 0.8060672693 0.806067269299999944 0.806067269300000055 0.806067269299999944
# 0.8060676383 0.806067638299999945 0.806067638300000056 0.806067638299999945
# 0.8060681710 0.806068170999999944 0.806068171000000055 0.806068170999999944
# 0.8060690727 0.806069072699999944 0.806069072700000055 0.806069072699999944
#
# remaining 99,987 of these 100,000 were already identical
```
`dcast(empty-DT)` now returns an empty `data.table` rather than the error `Cannot cast an empty data.table`, #1215. Thanks to Damian Betebenner for reporting, and Matt Dowle for fixing.
`DT[factor("id")]` now works rather than erroring with `i has evaluated to type integer. Expecting logical, integer or double`, #1632. `DT["id"]` has worked forever by automatically converting to `DT[.("id")]` for convenience, and joins have worked forever between char/fact, fact/char and fact/fact even when levels mismatch, so it was unfortunate that `DT[factor("id")]` managed to escape the simple automatic conversion to `DT[.(factor("id"))]`, which is now in place. Thanks to @aushev for reporting, and Matt Dowle for the fix.
All-NA character key columns could segfault, #5070. Thanks to @JorisChau for reporting and Benjamin Schwendinger for the fix.
In v1.13.2 a version of an old bug was reintroduced where during a grouping operation list columns could retain a pointer to the last group. This affected only attributes of list elements and only if those were updated during the grouping operation, #4963. Thanks to @fujiaxiang for reporting and @avimallu and Václav Tlapák for investigating and the PR.
`shift(xInt64, fill=0)` and `shift(xInt64, fill=as.integer64(0))` (but not `shift(xInt64, fill=0L)`) would error with `INTEGER() can only be applied to a 'integer', not a 'double'`, where `xInt64` is a `bit64::integer64` column, `0` is type `double` and `0L` is type `integer`, #4865. Thanks to @peterlittlejohn for reporting and Benjamin Schwendinger for the PR.
`DT[i, strCol:=classVal]` did not coerce using the `as.character` method for the class, resulting in either an unexpected string value or an error such as `To assign integer64 to a target of type character, please use as.character() for clarity`. Discovered during work on the previous issue, #5189.
`tables()` failed with `argument "..." is missing` when called from within a function taking `...`; e.g. `function(...) { tables() }`, #5197. Thanks @greg-minshall for the report and @michaelchirico for the fix.
`DT[, prod(int64Col), by=grp]` produced wrong results for `bit64::integer64` due to incorrect optimization, #5225. Thanks to Benjamin Schwendinger for reporting and fixing.
`fintersect(..., all=TRUE)` and `fsetdiff(..., all=TRUE)` could return incorrect results when the inputs had columns named `x` and `y`, #5255. Thanks @Fpadt for the report, and @ben-schwen for the fix.
`fwrite()` could produce non-ISO-compliant timestamps such as `2023-03-08T17:22:32.:00Z` when a time was under a whole second by less than the numerical tolerance of one microsecond, #5238. Thanks to @avraam-inside for the report and Václav Tlapák for the fix.
`merge.data.table()` silently ignored the `incomparables` argument, #2587. It is now implemented, and any other ignored arguments (e.g. misspellings) are now warned about. Thanks to @GBsuperman for the report and @ben-schwen for the fix.
`DT[, c('z','x') := {x=NULL; list(2,NULL)}]` now removes column `x` as expected rather than incorrectly assigning `2` to `x` as well as `z`, #5284. The `x=NULL` is superfluous while the `list(2,NULL)` is the final value of the `{}` whose items correspond to `c('z','x')`. Thanks @eutwt for the report, and @ben-schwen for the fix.
`as.data.frame(DT, row.names=)` no longer silently ignores `row.names`, #5319. Thanks to @dereckdemezquita for the fix and PR, and @ben-schwen for guidance.
New feature 29 in v1.12.4 (Oct 2019) introduced zero-copy coercion. Our thinking is that requiring you to get the type right in the case of `0` (type `double`) vs `0L` (type `integer`) is too inconvenient for you, the user. So such coercions happen in `data.table` automatically without warning. Thanks to zero-copy coercion there is no speed penalty, even when calling `set()` many times in a loop, so there's no speed penalty to warn you about either. However, we believe that assigning a character value such as `"2"` into an integer column is more likely to be a user mistake that you would like to be warned about. The type difference (character vs integer) may be the only clue that you have selected the wrong column, or typed the wrong variable to be assigned to that column. For this reason we view character to numeric-like coercion differently and will warn about it. If it is correct, then the warning is intended to nudge you to wrap the RHS with `as.<type>()` so that it is clear to readers of your code that a coercion from character to that type is intended. For example :
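A minimal sketch of the distinction described above; the table and column name are illustrative, and the exact warning wording may differ by version:

```r
library(data.table)
DT = data.table(id = 1:3)      # id is type integer
DT[2, id := 2]                 # double RHS: zero-copy coercion, silent
DT[3, id := "3"]               # character RHS: coerced, but with a warning
DT[3, id := as.integer("3")]   # wrapping with as.integer() makes the intent clear
DT$id                          # still type integer
```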
The `CsubsetDT` exported C function has been renamed to `DT_subsetDT`. This requires `R_GetCCallable("data.table", "CsubsetDT")` to be updated to `R_GetCCallable("data.table", "DT_subsetDT")`. Additionally, there is now a dedicated header file for data.table's C exports, `include/datatableAPI.h`, #4643, thanks to @eddelbuettel, which makes it easier to import data.table C functions.
In v1.12.4, fractional `fread(..., stringsAsFactors=)` was added. For example, if `stringsAsFactors=0.2`, any character column with fewer than 20% unique strings is cast as `factor`. This is now documented in `?fread` as well, #4706. Thanks to @markderry for the PR.
`cube(DT, by="a")` now gives a more helpful error that `j` is missing, #4282.
v1.13.0 (July 2020) fixed a segfault/corruption/error (depending on the version of R and circumstances) in `dcast()` when `fun.aggregate` returned `NA` (type `logical`) in an otherwise `character` result, #2394. This fix was the result of other internal rework and there was no news item at the time. A new test to cover this case has now been added. Thanks Vadim Khotilovich for reporting, and Michael Chirico for investigating, pinpointing when the fix occurred and adding the test.
`DT[subset]`, where `DT[(subset)]` or `DT[subset==TRUE]` was intended (i.e., subsetting by a logical column whose name conflicts with an existing function), now gives a friendlier error message, #5014. Thanks @michaelchirico for the suggestion and PR, and @ColeMiller1 for helping with the fix.
Grouping by a `list` column now has an improved error message stating that this is unsupported, #4308. Thanks @sindribaldur for filing, and @michaelchirico for the PR. Please add your vote and especially use cases to the #1597 feature request.
OpenBSD 6.9, released May 2021, uses a 16-year-old version of zlib (v1.2.3 from 2005) plus cherry-picked bug fixes (i.e. a semi-fork of zlib), which induces `Compress gzip error: -9` from `fwrite()`, #5048. Thanks to Philippe Chataignon for investigating and fixing. Matt asked on OpenBSD's mailing list if zlib could be upgraded to the 4-year-old zlib 1.2.11 but forgot his tin hat:
`?"."`, `?".."`, `?".("`, and `?".()"` now point to `?data.table`, #4385 #4407, to help users find the documentation for these convenience features available inside `DT[...]`. Recall that `.` is an alias for `list`, and `..var` tells `data.table` to look for `var` in the calling environment as opposed to a column of the table.
`DT[, lhs:=rhs]` and `set(DT, , lhs, rhs)` no longer raise a warning on zero-length `lhs`, #4086. Thanks to Jan Gorecki for the suggestion and PR. For example, `DT[, grep("foo", names(dt)) := NULL]` no longer warns if there are no column names containing `"foo"`.
`melt()`'s internal C code is now more memory efficient, #5054. Thanks to Toby Dylan Hocking for the PR.
`?merge` and `?setkey` have been updated to clarify that the row order is retained when `sort=FALSE`, and why `NA`s are always first when `sort=TRUE`, #2574 #2594. Thanks to Davor Josipovic and Markus Bonsch for the reports, and Jan Gorecki for the PR.
`datatable.[dll|so]` has changed name to `data_table.[dll|so]`, #4442. Thanks to Jan Gorecki for the PR. We had previously removed the `.` since `.` is not allowed by the following paragraph in the Writing R Extensions manual. Replacing the `.` with `_` instead now seems more consistent with its last sentence.
For nearly two years, since v1.12.4 (Oct 2019) (note 11 below in this NEWS file), using `options(datatable.nomatch=0)` has produced the following message:

> The option 'datatable.nomatch' is being used and is not set to the default NA. This option is still honored for now but will be deprecated in future. Please see NEWS for 1.12.4 for detailed information and motivation. To specify inner join, please specify `nomatch=NULL` explicitly in your calls rather than changing the default using this option.

The message is now upgraded to a warning that the option is now ignored.
Many thanks to Kurt Hornik for investigating the potential impact of a possible future change to `base::intersect()` on empty input, providing a patch so that `data.table` won't break if the change is made to R, and giving us plenty of notice, #5183.
The options `datatable.print.class` and `datatable.print.keys` are now `TRUE` by default. They have been available since v1.9.8 (Nov 2016) and v1.11.0 (May 2018) respectively.
In v1.13.0 (July 2020) native parsing of datetime was added to `fread` by Michael Chirico, which dramatically improved performance. Before then, datetime was read as type character by default, which was slow. Since v1.13.0, UTC-marked datetime (e.g. `2020-07-24T10:11:12.134Z` where the final `Z` is present) has been read automatically as `POSIXct`, and quickly. We provided the migration option `datatable.old.fread.datetime.character` to revert to the previous slow character behavior. We also added the `tz=` argument to control unmarked datetime; i.e. where the `Z` (or equivalent UTC postfix) is missing in the data. The default `tz=""` reads unmarked datetime as character as before, slowly. We gave you the ability to set `tz="UTC"` to turn on the new behavior and read unmarked datetime as UTC, quickly. R sessions that are running in UTC by setting the TZ environment variable, as is good practice and common in production, have also been reading unmarked datetime as UTC since v1.13.0, much faster. Note 1 of v1.13.0 (below in this file) ended: *In addition to convenience, fread is now significantly faster in the presence of dates, UTC-marked datetimes, and unmarked datetime when tz="UTC" is provided.*
At `rstudio::global(2021)`, Neal Richardson, Director of Engineering at Ursa Labs, compared Arrow CSV performance to `data.table` CSV performance, Bigger Data With Ease Using Apache Arrow. He opened by comparing to `data.table` as his main point. Arrow was presented as 3 times faster than `data.table`. He talked at length about this result. However, no reproducible code was provided, and we were not contacted in advance in case we had any comments. He mentioned the New York Taxi data in his talk, a dataset known to us as containing unmarked datetime. Rebuttal.
`tz=`'s default is now changed from `""` to `"UTC"`. If you have been using `tz=` explicitly then there should be no change. The change to read UTC-marked datetime as `POSIXct` rather than character already happened in v1.13.0. The change now is that unmarked datetimes are read as UTC too by default, without needing to set `tz="UTC"`. None of the 1,017 CRAN packages directly using `data.table` are affected. As before, the migration option `datatable.old.fread.datetime.character` can still be set to `TRUE` to revert to the old character behavior. This migration option is temporary and will be removed in the near future. The community was consulted in this tweet before release.
If `fread()` discards a single-line footer, the warning message which includes the discarded text now displays any non-ASCII characters correctly on Windows, #4747. Thanks to @shrektan for reporting and the PR.
`fintersect()` now retains the order of the first argument as reasonably expected, rather than retaining the order of the second argument, #4716. Thanks to Michel Lang for reporting, and Ben Schwen for the PR.
Compiling from source no longer requires `zlib` header files to be available, #4844. The output suggests installing `zlib` headers, and how (e.g. `zlib1g-dev` on Ubuntu), as before, but now proceeds with `gzip` compression disabled in `fwrite`. Upon calling `fwrite(DT, "file.csv.gz")` at runtime, an error message suggests reinstalling `data.table` with `zlib` headers available. This does not apply to users on Windows or Mac who install the pre-compiled binary package from CRAN.
r-datatable.com continues to be the short, canonical and long-standing URL which forwards to the current homepage. The homepage domain has changed a few times over the years, but those using r-datatable.com did not need to change their links. For example, we use r-datatable.com in messages (and translated messages) in preference to the word 'homepage', to save users time in searching for the current homepage. The web forwarding was provided by Domain Monster, but they do not support `https`, only `http`, despite the homepage being forwarded to being `https` for many years. Meanwhile, CRAN submission checks now require all URLs to be `https`, rejecting `http`. Therefore we have moved to gandi.net, who do support `https` web forwarding, and so the `https` address now forwards correctly. Thanks to Dirk Eddelbuettel for suggesting Gandi. Further, Gandi allows the web-forward to be marked 301 (permanent) or 302 (temporary). Since the very point of r-datatable.com is to be a forward, 302 is appropriate in this case. This enables us to link to it in DESCRIPTION, README, and this NEWS item. Otherwise, CRAN submission checks would require the 301 forward to be followed; i.e. the forward replaced with where it points to, and the package resubmitted. Thanks to Uwe Ligges for explaining this distinction.
Grouping could throw an error `Failed to allocate counts or TMP` with more than 1e9 rows, even with sufficient RAM, due to an integer overflow, #4295 #4818. Thanks to @renkun-ken and @jangorecki for reporting, and @shrektan for fixing.
`fwrite()`'s multithreaded `gzip` compression failed on Solaris with `Z_STREAM_ERROR`, #4099. Since this feature was released in Oct 2019 (see item 3 in v1.12.4 below in this news file) there have been no known problems with it on Linux, Windows or Mac. For Solaris, we have been successively adding more and more detailed tracing to the output in each release, culminating in tracing `zlib` internals at byte level by reading `zlib`'s source. The problem did not manifest itself on R-hub's Solaris instances, so we had to work via CRAN output. If `zlib`'s `z_stream` structure is declared inside a parallel region but before a parallel for, it appears that the particular OpenMP implementation used by CRAN's Solaris moves the structure to a new address on entering the parallel for. Ordinarily this memory move would not matter; however, `zlib` internals have a self-reference pointer to the parent and check that the pointers match. This mismatch caused the -2 (`Z_STREAM_ERROR`). Allocating an array of structures, one for each thread, before the parallel region avoids the memory move with no cost.

It should be carefully noted that we cannot be sure it really is a problem unique to CRAN's Solaris, even if it seems that way after one year of observations. For example, it could be compiler flags, or particular memory circumstances, either of which could occur on other operating systems too. However, we are unaware of why it would make sense for the OpenMP implementation to move the structure at that point. Any optimizations such as aligning the set of structures to cache line boundaries could be performed at the start of the parallel region, not after the parallel for. If anyone reading this knows more, please let us know.
`environments=FALSE` to our `all.equal` call. Then, about 4 hours after 1.13.4 was accepted, the `s` was dropped and we now need to resubmit with `environment=FALSE`. In any case, we have suggested that the default should be `FALSE` first to give packages some notice, as opposed to generating errors in the CRAN submissions process within hours. Then the default for `environment=` could be `TRUE` in 6 months' time, after packages have had some time to update in advance of the default change. Readers of this NEWS file will be familiar with `data.table`'s approach to change control and know that we do this ourselves.
`as.matrix(<empty DT>)` now retains the column type for the empty matrix result, #4762. Thus, for example, `min(DT[0])` where `DT`'s columns are numeric is now consistent with non-empty all-NA input and returns `Inf` with R's warning `no non-missing arguments to min; returning Inf`, rather than R's error `only defined on a data frame with all numeric[-alike] variables`. Thanks to @mb706 for reporting.
`fsort()` could crash when compiled using `clang-11` (Oct 2020), #4786. Multithreaded debugging revealed that threads are no longer assigned iterations monotonically by the dynamic schedule. Although never guaranteed by the OpenMP standard, in practice monotonicity could be relied on as far as we knew, until now. We rely on monotonicity in the `fsort` implementation. Happily, a schedule modifier `monotonic:dynamic` was added in OpenMP 4.5 (Nov 2015), which we now use if available (e.g. gcc 6+, clang 3.9+). If you have an old compiler which does not support OpenMP 4.5, it's probably the case that the unmodified dynamic schedule is monotonic anyway, so `fsort` now checks that threads are receiving iterations monotonically and emits a graceful error if not. It may be that `clang` prior to version 11, and `gcc` too, exhibit the same crash; it was just that `clang-11` was the first report. To know which version of OpenMP `data.table` is using, `getDTthreads(verbose=TRUE)` now reports the `YYYYMM` value of `_OPENMP`; e.g. 201511 corresponds to v4.5, and 201811 corresponds to v5.0. Oddly, the `x.y` version number is not provided by the OpenMP API. OpenMP 4.5 may be enabled in some compilers using `-fopenmp-version=45`. Otherwise, if you need to upgrade compiler, may be helpful.
Columns containing functions that don't inherit the class `'function'` would fail to group, #4814. Thanks @mb706 for reporting, @ecoRoland2 for helping investigate, and @Coorsaa for a follow-up example involving environments.
Continuous daily testing by CRAN using the latest daily R-devel revealed, within one day of the change to R-devel, that a future version of R would break one of our tests, #4769. The characters "-alike" were added into one of R's error messages, so our too-strict test, which expected the error `only defined on a data frame with all numeric variables`, would fail when it saw the new error message `only defined on a data frame with all numeric-alike variables`. We have relaxed the pattern the test looks for to `data.*frame.*numeric`, well in advance of the future version of R being released. Readers are reminded that CRAN is not just a host for packages: it is also a giant test suite for R-devel. For more information, behind the scenes of cran, 2016.
`as.Date.IDate` is no longer exported as a function, to solve a new error in R-devel: `S3 method lookup found 'as.Date.IDate' on search path`, #4777. The S3 method is still exported; i.e. `as.Date(x)` will still invoke the `as.Date.IDate` method when `x` is class `IDate`. The function had been exported, in addition to exporting the method, to solve a compatibility issue with `zoo` (and `xts` which uses `zoo`) because `zoo` exports `as.Date`, which masks `base::as.Date`. Happily, since zoo 1.8-1 (Jan 2018) made a change to its `as.IDate`, the workaround is no longer needed.
Thanks to @fredguinog for testing `fcase` in development before 1.13.0 was released and finding a segfault, #4378. It was found separately by the `rchk` tool (which uses static code analysis) in release procedures and fixed before `fcase` was released, but the reproducible example has now been added to the test suite for completeness. Thanks also to @shrektan for investigating, proposing a very similar fix at C level, and a different reproducible example which has also been added to the test suite.
`test.data.table()` could fail the 2nd time it is run by a user in the same R session on Windows, due to not resetting the locale properly after testing the Chinese translation, #4630. Thanks to Cole Miller for investigating and fixing.
A regression in v1.13.0 resulted in installation on Mac often failing with `shared object 'datatable.so' not found`, and FreeBSD always failing with `expr: illegal option -- l`, #4652 #4640 #4650. Thanks to many for assistance, including Simon Urbanek, Brian Ripley, Wes Morgan, and @ale07alvarez. There were no installation problems on Windows or Linux.
Operating on columns of type `list`, e.g. `dt[, listCol[[1]], by=id]`, suffered a performance regression in v1.13.0, #4646 #4658. Thanks to @fabiocs8 and @sandoronodi for the detailed reports, and to Cole Miller for substantial debugging, investigation and proposals at C level which enabled the root cause to be fixed. Related, and also fixed, was a segfault revealed by package POUMM, #4746, when grouping a list column where each item has an attribute; e.g., `coda::mcmc.list`. Detected thanks to CRAN's ASAN checks, and thanks to Venelin Mitov for assistance in tracing the memory fault. Thanks also to Hongyuan Jia and @ben-schwen for assistance in debugging the fix in dev to pass reverse dependency testing, which highlighted, before release, that package `eplusr` would fail. Its good usage has been added to `data.table`'s test suite.
`fread("1.2\n", colClasses='integer')` (note: no column names in the data) would segfault when creating a warning message, #4644. It now warns with `Attempt to override column 1 of inherent type 'float64' down to 'int32' ignored.` When column names are present, however, the warning message includes the name as before; i.e., `fread("A\n1.2\n", colClasses='integer')` produces `Attempt to override column 1 <<A>> of inherent type 'float64' down to 'int32' ignored.` Thanks to Kun Ren for reporting.
`dplyr::mutate(setDT(as.list(1:64)), V1=11)` threw error `can't set ALTREP truelength`, #4734. Thanks to @etryn for the reproducible example, and to Cole Miller for refinements.
`bit64` v4.0.2 and `bit` v4.0.3, both released on 30th July, correctly broke `data.table`'s tests. Like other packages on our `Suggests` list, we check that `data.table` works with `bit64` in our tests. The first break was because `all.equal` always returned `TRUE` in previous versions of `bit64`. Now that `all.equal` works for `integer64`, the incorrect test comparison was revealed. If you use `bit64`, or `nanotime` which uses `bit64`, it is highly recommended to upgrade to the latest `bit64` version. Thanks to Cole Miller for the PR to accommodate `bit64`'s update.
The second break, caused by `bit`, was the addition of a `copy` function. We did not ask, but the `bit` package kindly offered to change to a different name since `data.table::copy` is long-standing. `bit` v4.0.4, released 4th August, renamed `copy` to `copy_vector`. Otherwise, users of `data.table` would have needed to prefix every occurrence of `copy` with `data.table::copy` if they use `bit64` too, since `bit64` depends on (rather than importing) `bit`. Again, this impacted `data.table`'s tests, which mimic a user's environment; not `data.table` itself per se.
We have requested that CRAN policy be modified to require that reverse dependency testing include packages which `Suggest` the package. Had this been the case, reverse dependency testing of `bit64` would have caught the impact on `data.table` before release.
`?.NGRP` now displays the help page as intended, #4946. Thanks to @KyleHaynes for posting the issue, and Cole Miller for the fix. `.NGRP` is a symbol new in v1.13.0; see below in this file.
`test.data.table()` failed in non-English locales such as `LC_TIME=fr_FR.UTF-8` due to `Jan` vs `janv.` in tests 168 and 2042, #3450. Thanks to @shrektan for reporting, and @tdhock for making the tests locale-aware.
User-supplied `PKG_LIBS` and `PKG_CFLAGS` are now retained, and the suggestion in; i.e., `PKG_CPPFLAGS='-Xclang -fopenmp' PKG_LIBS=-lomp R CMD INSTALL data.table_<ver>.tar.gz` has a better chance of working on Mac.
`fread` now supports native parsing of `%Y-%m-%d`, and ISO 8601 `%Y-%m-%dT%H:%M:%OS%z`, #4464. Dates are returned as `data.table`'s `integer`-backed `IDate` class (see `?IDate`), and datetimes are returned as `POSIXct` provided either `Z` or the offset from `UTC` is present; e.g. `fwrite()` outputs UTC by default, including the final `Z`. Reminder that `IDate` inherits from R's `Date` and is identical other than it uses the `integer` type where (oddly) R uses the `double` type for dates (8 bytes instead of 4).
`fread()` gains a `tz` argument to control datetime values that are missing a Z or UTC-offset (now referred to as unmarked datetimes); e.g. as written by `write.csv`. By default, `tz=""` means, as in R, read the unmarked datetime in local time. Unless the timezone of the R session is UTC (e.g. the TZ environment variable is set to `"UTC"`, or `""` on non-Windows), unmarked datetime will then be read by `fread` as character, as before. If you have been using `colClasses="POSIXct"`, that will still work using R's `as.POSIXct()`, which will interpret the unmarked datetime in local time, as before, and still slowly. You can tell `fread` to read unmarked datetime as UTC, and quickly, by passing `tz="UTC"`, which may be appropriate in many circumstances. Note that the default behaviour of R to read and write csv using unmarked datetime can lead to different research results when the csv file has been saved in one timezone and read in another, due to observations being shifted to a different date. If you have been using `colClasses="POSIXct"` for UTC-marked datetime (e.g. as written by `fwrite` including the final `Z`) then it will automatically speed up with no changes needed.

Since this is a potentially breaking change, i.e. existing code may depend on dates and datetimes being read as type character as before, a temporary option is provided to restore the old behaviour: `options(datatable.old.fread.datetime.character=TRUE)`. However, in most cases, we expect existing code to still work with no changes.
The minor version number is bumped from 12 to 13, i.e. v1.13.0, where the `.0` conveys 'be-aware', as is common practice. As with any new feature, there may be bugs to fix and changes to defaults required in future. In addition to convenience, `fread` is now significantly faster in the presence of dates, UTC-marked datetimes, and unmarked datetime when `tz="UTC"` is provided.
`%chin%` and `chmatch(x, table)` are faster when `x` is length 1, `table` is long, and `x` occurs near the start of `table`. Thanks to Michael Chirico for the suggestion, #4117.
The `CsubsetDT` C function is now exported for use by other packages, #3751. Thanks to Leonardo Silvestri for the request and the PR. This uses R's `R_RegisterCCallable` and `R_GetCCallable` mechanism, R-exts §5.4.3 and `?cdt`. Note that the organization of our C interface will be changed in future.
`print.data.table()` gains a `trunc.cols` argument (and corresponding option `datatable.print.trunc.cols`, default `FALSE`), #1497, part of #1523. This prints only as many columns as fit in the console without wrapping to new lines (e.g., the first 5 of 80 columns), plus a message that states the count and names of the variables not shown. When `class=TRUE` the message also contains the classes of the variables. `data.table` has always automatically truncated rows of a table for efficiency (e.g. printing 10 rows instead of 10 million); in the future, we may do the same for columns (e.g., 10 columns instead of 20,000) by changing the default for this argument. Thanks to @nverno for the initial suggestion and to @TysonStanley for the PR.
`setnames(DT, new=new_names)` (i.e. an explicitly named `new=` argument) now works as expected rather than raising an error message requesting that `old=` be supplied too, #4041. Thanks @Kodiologist for the suggestion.
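For example (the table and column names here are illustrative):

```r
library(data.table)
DT = data.table(a = 1, b = 2)
setnames(DT, new = c("x", "y"))   # renames all columns; old= no longer required
names(DT)                         # "x" "y"
```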
`nafill` and `setnafill` gain a `nan` argument to say whether `NaN` should be considered the same as `NA` for filling purposes, #4020. Prior versions had an implicit value of `nan=NaN`; the default is now `nan=NA`, i.e., `NaN` is treated as if it's missing. Thanks @AnonymousBoba for the suggestion. Also, while `nafill` still respects `getOption('datatable.verbose')`, the `verbose` argument has been removed.
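A small sketch of the new default versus the previous implicit behavior:

```r
library(data.table)
x = c(1, NA, NaN, 4)
nafill(x, type = "locf")             # nan=NA (new default): NaN is filled too
nafill(x, type = "locf", nan = NaN)  # previous implicit behavior: NaN is kept
```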
New function `fcase(..., default)`, implemented in C by Morgan Jacob, #3823, is inspired by SQL `CASE WHEN`, which is a common tool in SQL for e.g. building labels or cutting age groups based on conditions. `fcase` is comparable to the R function `dplyr::case_when`; however, it evaluates its arguments in a lazy way (i.e. only when needed), as shown below. Please see `?fcase` for more details.
```r
# Lazy evaluation
x = 1:10
data.table::fcase(
  x < 5L, 1L,
  x >= 5L, 3L,
  x == 5L, stop("provided value is an unexpected one!")
)
# [1] 1 1 1 1 3 3 3 3 3 3

dplyr::case_when(
  x < 5L ~ 1L,
  x >= 5L ~ 3L,
  x == 5L ~ stop("provided value is an unexpected one!")
)
# Error in eval_tidy(pair$rhs, env = default_env) :
#   provided value is an unexpected one!

# Benchmark
x = sample(1:100, 3e7, replace = TRUE) # 114 MB
microbenchmark::microbenchmark(
  dplyr::case_when(
    x < 10L ~ 0L,
    x < 20L ~ 10L,
    x < 30L ~ 20L,
    x < 40L ~ 30L,
    x < 50L ~ 40L,
    x < 60L ~ 50L,
    x > 60L ~ 60L
  ),
  data.table::fcase(
    x < 10L, 0L,
    x < 20L, 10L,
    x < 30L, 20L,
    x < 40L, 30L,
    x < 50L, 40L,
    x < 60L, 50L,
    x > 60L, 60L
  ),
  times = 5L,
  unit = "s")
# Unit: seconds
#               expr   min    lq  mean median    uq   max neval
#   dplyr::case_when 11.57 11.71 12.22  11.82 12.00 14.02     5
#  data.table::fcase  1.49  1.55  1.67   1.71  1.73  1.86     5
```
.SDcols=is.numeric now works; i.e.,
.SDcols= accepts a function which is used to select the columns of
.SD, #3950. Any function (even ad hoc) that returns scalar
TRUE/
FALSE for each column will do; e.g.,
.SDcols=!is.character will return non-character columns (a la
Negate()). Note that
.SDcols=patterns(...) can still be used for filtering based on the column names.
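A minimal sketch of both forms:

```r
library(data.table)
DT = data.table(a = 1L, b = "x", c = 2.5, d = TRUE)
DT[, .SD, .SDcols = is.numeric]      # columns a and c
DT[, .SD, .SDcols = !is.character]   # columns a, c and d
```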
Compiler support for OpenMP is now detected during installation, which allows
data.table to compile from source (in single threaded mode) on macOS which, frustratingly, does not include OpenMP support by default, #2161, unlike Windows and Linux. A helpful message is emitted during installation from source, and on package startup as before. Many thanks to @jimhester for the PR.
rbindlist now supports columns of type
expression, #546. Thanks @jangorecki for the report.
The dimensions of objects in a
list column are now displayed, #3671. Thanks to @randomgambit for the request, and Tyson Barrett for the PR.
frank gains
ties.method='last', paralleling the same in
base::order which has been available since R 3.3.0 (April 2016), #1689. Thanks @abudis for the encouragement to accommodate this.
The
keep.rownames argument in
as.data.table.xts now accepts a string, which can be used for specifying the column name of the index of the xts input, #4232. Thanks to @shrektan for the request and the PR.
New symbol
.NGRP available in
j, #1206.
.GRP (the group number) was already available taking values from
1 to
.NGRP. The number of groups,
.NGRP, might be useful in
j to calculate a percentage of groups processed so far, or to do something different for the last or penultimate group, for example.
Added support for
round() and
trunc() to extend functionality of
ITime.
round() and
trunc() can be used with argument units: “hours” or “minutes”. Thanks to @JensPederM for the suggestion and PR.
A new throttle feature has been introduced to speed up small data tasks that are repeated in a loop, #3175 #3438 #3205 #3735 #3739 #4284 #4527 #4294 #1120. The default throttle of 1024 means that a single thread will be used when nrow<=1024, two threads when nrow<=2048, etc. To change the default, use
setDTthreads(throttle=). Or use the new environment variable
R_DATATABLE_THROTTLE. If you use
Sys.setenv() in a running R session to change this environment variable, be sure to run an empty
setDTthreads() call afterwards for the change to take effect; see
?setDTthreads. The word throttle is used to convey that the number of threads is restricted (throttled) for small data tasks. Reducing throttle to 1 will turn off throttling and should revert behaviour to past versions (i.e. using many threads even for small data). Increasing throttle to, say, 65536 will utilize multi-threading only for larger datasets. The value 1024 is a guess. We welcome feedback and test results indicating what the best default should be.
A NULL timezone on POSIXct was interpreted by
as.IDate and
as.ITime as UTC rather than the session’s default timezone (
tz=""), #4085.
DT[i] could segfault when
i is a zero-column
data.table, #4060. Thanks @shrektan for reporting and fixing.
Dispatch of
first and
last functions now properly works again for
xts objects, #4053. Thanks to @ethanbsmith for reporting.
If
.SD is returned as-is during grouping, it is now unlocked for downstream usage, part of #4159. Thanks also to @mllg for detecting a problem with the initial fix here during the dev release #4173.
GForce is deactivated for
[[ on non-atomic input, part of #4159. Thanks @hongyuanjia and @ColeMiller1 for helping debug an issue in dev with the original fix before release, #4612.
all.equal(DT, y) no longer errors when
y is not a data.table, #4042. Thanks to @d-sci for reporting and the PR.
A length 1
colClasses=NA_character_ would cause
fread to incorrectly coerce all columns to character, #4237.
An
fwrite error message could include a garbled number and cause test 1737.5 to fail, #3492. Thanks to @QuLogic for debugging the issue on ARMv7hl, and the PR fixing it.
fread improves handling of very small (<1e-300) or very large (>1e+300) floating point numbers on non-x86 architectures (specifically ppc64le and armv7hl). Thanks to @QuLogic for reporting and fixing, PR#4165.
When updating by reference, the use of
get could result in columns being re-ordered silently, #4089. Thanks to @dmongin for reporting and Cole Miller for the fix.
copy() now overallocates deeply nested lists of
data.tables, #4205. Thanks to @d-sci for reporting and the PR.
rbindlist no longer errors when coercing complex vectors to character vectors, #4202. Thanks to @sritchie73 for reporting and the PR.
A relatively rare case of segfault when combining non-equi joins with
by=.EACHI is now fixed, closes #4388.
Selecting key columns could incur a large speed penalty, #4498. Thanks to @Jesper on Stack Overflow for the report.
all.equal(DT1, DT2, ignore.row.order=TRUE) could return TRUE incorrectly in the presence of NAs, #4422.
Non-equi joins now automatically set
allow.cartesian=TRUE, #4489. Thanks to @Henrik-P for reporting.
X[Y, on=character(0)] and
merge(X, Y, by.x=character(0), by.y=character(0)) no longer crash, #4272. Thanks to @tlapak for the PR.
by=col1:col4 gave an incorrect result if
key(DT)==c("col1","col4"), #4285. Thanks to @cbilot for reporting, and Cole Miller for the PR.
Matrices resulting from logical operators or comparisons on
data.tables, e.g. in
dta == dtb, can no longer have their colnames changed by reference later, #4323. Thanks to @eyherabh for reporting and @tlapak for the PR.
The environment variable
R_DATATABLE_NUM_THREADS was being limited by
R_DATATABLE_NUM_PROCS_PERCENT (by default 50%), #4514. It is now consistent with
setDTthreads() and only limited by the full number of logical CPUs. For example, on a machine with 8 logical CPUs,
R_DATATABLE_NUM_THREADS=6 now results in 6 threads rather than 4 (50% of 8).
Retrospective license change permission was sought from and granted by 4 contributors who were missed in PR#2456, #4140. We had used GitHub’s contributor page which omits 3 of these due to invalid email addresses, unlike GitLab’s contributor page which includes the ids. The 4th omission was a PR to a script which should not have been excluded; a script is code too. We are sorry these contributors were not properly credited before. They have now been added to the contributors list as displayed on CRAN. All the contributors of code to data.table hold its copyright jointly; your contributions belong to you. You contributed to data.table when it had a particular license at that time, and you contributed on that basis. This is why in the last license change, all contributors of code were consulted and each had a veto.
as.IDate,
as.ITime,
second,
minute, and
hour now recognize UTC equivalents for speed: GMT, GMT-0, GMT+0, GMT0, Etc/GMT, and Etc/UTC, #4116.
set2key,
set2keyv, and
key2 have been removed, as they have been warning since v1.9.8 (Nov 2016) and halting with helpful message since v1.11.0 (May 2018). When they were introduced in version 1.9.4 (Oct 2014) they were marked as ‘experimental’ and quickly superseded by
setindex and
indices.
data.table now supports messaging in simplified Chinese (locale
zh_CN). This was the result of a monumental collaboration to translate
data.table’s roughly 1400 warnings, errors, and verbose messages (about 16,000 words/100,000 characters) over the course of two months from volunteer translators in at least 4 time zones, most of whom are first-time
data.table contributors and many of whom are first-time OSS contributors!
A big thanks goes out to @fengqifang, @hongyuanjia, @biobai, @zhiiiyang, @Leo-Lee15, @soappp9527, @amy17519, @Zachary-Wu, @caiquanyou, @dracodoc, @JulianYlli12, @renkun-ken, @Xueliang24, @koohoko, @KingdaShi, @gaospecial, @shrektan, @sunshine1126, @shawnchen1996, @yc0802, @HesperusArcher, and @Emberwhirl, all of whom took time from their busy schedules to translate and review others’ translations. Special thanks goes to @zhiiiyang and @hongyuanjia who went above and beyond in helping to push the project over the finish line, and to @GuangchuangYu who helped to organize the volunteer pool.
data.table joins
lubridate and
nlme as the only packages among the top 200 most-downloaded community packages on CRAN to offer non-English messaging, and data.table is the only package in the top 50 to offer complete support of all messaging. We hope this is a first step in broadening the reach and accessibility of the R ecosystem to more users globally and look forward to working with other maintainers looking to bolster the portability of their packages by offering advice on learnings from this undertaking.
We would be remiss not to mention the laudable lengths to which the R core team goes to maintain the much larger repository (about 6,000 messages in more than 10 languages) of translations for R itself.
We will evaluate the feasibility (in terms of maintenance difficulty and CRAN package size limits) of offering support for other languages in later releases.
fifelse and
fcase now notify users that S4 objects (except
nanotime) are not supported #4135. Thanks to @torema-ed for bringing it to our attention and Morgan Jacob for the PR.
frank(..., ties.method="random", na.last=NA) now returns the same random ordering that
base::rank does, #4243.
The error message when mistakenly using := in i instead of j has been much improved, #4227. Thanks to Hugh Parsonage for the detailed suggestion. For example:

```r
> DT = data.table(A=1:2)
> DT[B:=3]
Error: Operator := detected in i, the first argument inside DT[...], but is only valid in
  the second argument, j. Most often, this happens when forgetting the first comma
  (e.g. DT[newvar:=5] instead of DT[, new_var:=5]). Please double-check the syntax.
  Run traceback(), and debugger() to get a line number.
> DT[, B:=3]
> DT
       A     B
   <int> <num>
1:     1     3
2:     2     3
```
Added more explanation/examples to
?data.table for how to use
.BY, #1363.
Changes upstream in R have been accomodated; e.g.
c.POSIXct now raises
'origin' must be supplied which impacted
foverlaps, #4428.
data.table::update.dev.pkg() now unloads the
data.table namespace to alleviate a DLL lock issue on Windows, #4403. Thanks to @drag5 for reporting.
data.table packages binaries built by R version 3 (R3) should only be installed in R3, and similarly
data.table package binaries built by R4 should only be installed in R4. Otherwise,
package ‘data.table’ was built under R version... warning will occur which should not be ignored. This is due to a very welcome change to
rbind and
cbind in R 4.0.0 which enabled us to remove workarounds, see news item in v1.12.6 below in this file. To continue to support both R3 and R4,
data.table’s NAMESPACE file contains a condition on the R major version (3 or 4) and this is what gives rise to the requirement that the major version used to build
data.table must match the major version used to install it. Thanks to @vinhdizzo for reporting, #4528.
Internal function
shallow() no longer makes a deep copy of secondary indices. This eliminates a relatively small time and memory overhead when indices are present that added up significantly when performing many operations, such as joins, in a loop or when joining in
j by group, #4311. Many thanks to @renkun-ken for the report, and @tlapak for the investigation and PR.
The
datatable.old.unique.by.key option has been removed as per the 4 year schedule detailed in note 10 of v1.12.4 (Oct 2019), note 10 of v1.11.0 (May 2018), and note 1 of v1.9.8 (Nov 2016). It has been generating a helpful warning for 2 years, and helpful error for 1 year.
DT[, {...; .(A,B)}] (i.e. when .() is the final item of a multi-statement {...}) now auto-names the columns A and B (just like DT[, .(A,B)]) rather than V1 and V2, #2478 #609. Similarly, DT[, if (.N>1) .(B), by=A] now auto-names the column B rather than V1. Explicit names are unaffected; e.g. DT[, {... y= ...; .(A=C+y)}, by=...] named the column A before, and still does. Thanks also to @renkun-ken for his prompt and thorough testing which caught an issue not caught by the test suite or by revdep testing, related to NULL being the last item, #4061.
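A minimal sketch of the new auto-naming:

```r
library(data.table)
DT = data.table(A = c(1L, 1L, 2L), B = 1:3, C = 4:6)
DT[, {tot = sum(B); .(A, B)}]          # columns named A and B (previously V1, V2)
DT[, if (.N > 1) .(B), by = A]         # column named B (previously V1)
DT[, {y = 1L; .(A = C + y)}, by = B]   # explicit name A, unchanged behaviour
```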
frollapply could segfault and exceed R’s C protect limits, #3993. Thanks to @DavisVaughan for reporting and fixing.
DT[, sum(grp), by=grp] (i.e. aggregating the same column being grouped) could error with
object 'grp' not found, #3103. Thanks to @cbailiss for reporting.
Links in the manual were creating warnings when installing HTML, #4000. Thanks to Morgan Jacob.
Adjustments for R-devel (R 4.0.0) which now has reference counting turned on, #4058 #4093. This motivated early release to CRAN because every day CRAN tests every package using the previous day’s changes in R-devel; a much valued feature of the R ecosystem. It helps R-core if packages can pass changes in R-devel as soon as possible. Thanks to Luke Tierney for the notice, and for implementing reference counting which we look forward to very much.
C internals have been standardized to use
PRI[u|d]64 to print
[u]int64_t. This solves new warnings from
gcc-8 on Windows with
%lld, #4062, in many cases already working around
snprintf on Windows not supporting
%zu. Release procedures have been augmented to prevent any internal use of
llu,
lld,
zu or
zd.
test.data.table() gains
showProgress=interactive() to suppress the thousands of
Running test id <num> ... lines displayed by CRAN checks when there are warnings or errors.
shift() on a
nanotime with the default
fill=NA now fills a
nanotime missing value correctly, #3945. Thanks to @mschubmehl for reporting and fixing in PR #3942.
Compilation failed on CRAN’s MacOS due to an older version of
zlib.h/zconf.h which did not have
z_const defined, #3939. Other open-source projects unrelated to R have experienced this problem on MacOS too. We have followed the common practice of removing
z_const to support the older
zlib versions, and data.table’s release procedures have gained a
grep to ensure
z_const isn’t used again by accident in future. The library
zlib is used for
fwrite’s new feature of multithreaded compression on-the-fly; see item 3 of 1.12.4 below.
A runtime error in
fwrite’s compression, but only observed so far on Solaris 10 32bit with zlib 1.2.8 (Apr 2013), #3931:
Error -2: one or more threads failed to allocate buffers or there was a compression error. In case it happens again, this area has been made more robust and the error more detailed. As is often the case, investigating the Solaris problem revealed secondary issues in the same area of the code. In this case, some
%d in verbose output should have been
%lld. This obliquity that CRAN’s Solaris provides is greatly appreciated.
A leak could occur in the event of an unsupported column type error, or if working memory could only partially be allocated; #3940. Found thanks to
clang’s Leak Sanitizer (prompted by CRAN’s diligent use of latest tools), and two tests in the test suite which tested the unsupported-type error.
rleid() functions now support long vectors (length > 2 billion).
fread():
Now supports embedded NUL (\0) characters, #3400. Thanks to Marcus Davy for reporting with examples, Roy Storey for the initial PR, and Bingjie Qian for testing this feature on a very complicated real-world file.
colClasses now supports 'complex', 'raw', 'Date', 'POSIXct', and user-defined classes (so long as an as. method exists), #491 #1634 #2610. Any error during coercion results in a warning and the column is left as the default type (probably "character"). Thanks to @hughparsonage for the PR.
stringsAsFactors=0.10 will factorize any character column containing under 0.10*nrow unique strings, #2025. Thanks to @hughparsonage for the PR.
colClasses=list(numeric=20:30, numeric="ID") will apply the numeric type to column numbers 20:30 as before and now also to column name "ID"; i.e. all duplicate class names are now respected rather than only the first. This need may arise when specifying some columns by name and others by number, as in this example. Thanks to @hughparsonage for the PR.
yaml (default FALSE) and the ability to parse CSVY-formatted input files; i.e., csv files with metadata in a header formatted as YAML, #1701. See ?fread and files in /inst/tests/csvy/ for sample formats. Please provide feedback if you find this feature useful and would like extended capabilities. For now, consider it experimental, meaning the API/arguments may change. Thanks to @leeper at rio for the inspiration and @MichaelChirico for implementing.
select can now be used to specify types for just the columns selected, #1426. Just like colClasses it can be a named vector of colname=type pairs, or a named list of type=col(s) pairs. For example:
```r
fread(file, select=c(colD="character",    # returns 2 columns: colD,colA
                     colA="integer64"))
fread(file, select=list(character="colD", # returns 5 columns: colD,8,9,10,colA
                        integer=8:10,
                        character="colA"))
```
tmpdir= argument which is passed to tempfile() whenever a temporary file is needed. Thanks to @mschubmehl for the PR. As before, setting TMPDIR (to /dev/shm for example) before starting the R session still works too; see ?base::tempdir.
fwrite():
.gz files directly, #2016. Compression, like
fwrite(), is multithreaded and compresses each chunk on-the-fly (a full size intermediate file is not created). Use a “.gz” extension, or the new
compress= option. Many thanks to Philippe Chataignon for the significant PR. For example:
```r
DT = data.table(A=rep(1:2, 100e6), B=rep(1:4, 50e6))
fwrite(DT, "data.csv")     # 763MB; 1.3s
fwrite(DT, "data.csv.gz")  #   2MB; 1.6s
identical(fread("data.csv.gz"), DT)
```
Note that compression is handled using
zlib library. In the unlikely event of missing
zlib.h, on a machine that is compiling
data.table from sources, one may get
fwrite.c compilation error
zlib.h: No such file or directory. As of now, the easiest solution is to install missing library using
sudo apt install zlib1g-dev (Debian/Ubuntu). Installing R (
r-base-dev) depends on
zlib1g-dev so this should be rather uncommon. If it happens to you please upvote related issue #3872.
Gains
yaml argument matching that of
fread, #3534. See the item in
fread for a bit more detail; here, we’d like to reiterate that feedback is appreciated in the initial phase of rollout for this feature.
Gains
bom argument to add a byte order mark (BOM) at the beginning of the file to signal that the file is encoded in UTF-8, #3488. Thanks to Stefan Fleck for requesting and Philippe Chataignon for implementing.
Now supports type
complex, #3690.
Gains
scipen #2020, the number 1 most-requested feature #3189. The default is
getOption("scipen") so that
fwrite will now respect R’s option in the same way as
base::write.csv and
base::format, as expected. The parameter and option name have been kept the same as base R’s
scipen for consistency and to aid online search. It stands for ‘scientific penalty’; i.e., the number of characters to add to the width within which non-scientific number format is used if it will fit. A high penalty essentially turns off scientific format. We believe that common practice is to use a value of 999, however, if you do use 999, because your data might include very long numbers such as
10^300,
fwrite needs to account for the worst case field width in its buffer allocation per thread. This may impact space or time. If you experience slowdowns or unacceptable memory usage, please pass
verbose=TRUE to
fwrite, inspect the output, and report the issue. A workaround, until we can determine the best strategy, may be to pass a smaller value to
scipen, such as 50. We have observed that
fwrite(DT, scipen=50) appears to write
10^50 accurately, unlike base R. However, this may be a happy accident and not apply generally. Further work may be needed in this area.
```r
DT = data.table(a=0.0001, b=1000000)
fwrite(DT)
# a,b
# 1e-04,1e+06
fwrite(DT, scipen=1)
# a,b
# 0.0001,1e+06
fwrite(DT, scipen=2)
# a,b
# 0.0001,1000000
10^50
# [1] 1e+50
options(scipen=50)
10^50
# [1] 100000000000000007629769841091887003294964970946560
fwrite(data.table(A=10^50))
# A
# 100000000000000000000000000000000000000000000000000
```
Assigning to one item of a list column no longer requires the RHS to be wrapped with
list or
.(), #950.
```r
> DT = data.table(A=1:3, B=list(1:2,"foo",3:5))
> DT
       A      B
   <int> <list>
1:     1    1,2
2:     2    foo
3:     3  3,4,5
> # The following all accomplish the same assignment:
> DT[2, B:=letters[9:13]]          # was error, now works
> DT[2, B:=.(letters[9:13])]       # was error, now works
> DT[2, B:=.(list(letters[9:13]))] # .(list()) was needed, still works
> DT
       A         B
   <int>    <list>
1:     1       1,2
2:     2 i,j,k,l,m
3:     3     3,4,5
```
print.data.table() gains an option to display the timezone of
POSIXct columns when available, #2842. Thanks to Michael Chirico for reporting and Felipe Parages for the PR.
New functions
nafill and
setnafill, #854. Thanks to Matthieu Gomez for the request and Jan Gorecki for implementing.
```r
DT = setDT(lapply(1:100, function(i) sample(c(rnorm(9e6), rep(NA_real_, 1e6)))))
format(object.size(DT), units="GB")  ## 7.5 Gb
zoo::na.locf(DT, na.rm=FALSE)        ## zoo           53.518s
setDTthreads(1L)
nafill(DT, "locf")                   ## DT 1 thread    7.562s
setDTthreads(0L)
nafill(DT, "locf")                   ## DT 40 threads  0.605s
setnafill(DT, "locf")                ## DT in-place    0.367s
```
New variable
.Last.updated (similar to R’s
.Last.value) contains the number of rows affected by the most recent
:= or set(), #1885. For details see
?.Last.updated.
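A minimal sketch:

```r
library(data.table)
DT = data.table(a = 1:5)
DT[a > 3L, a := 0L]
.Last.updated   # 2: two rows were affected by the := above
set(DT, i = 1L, j = "a", value = 9L)
.Last.updated   # 1
```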
between() and
%between% are faster for
POSIXct, #3519, and now support
bit64’s
integer64 class and more robust coercion of types, #3517.
between() gains
check= which checks
any(lower>upper); it is off by default for speed, in particular for type character.
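A small sketch of both points (the POSIXct bounds here are arbitrary):

```r
library(data.table)
x = as.POSIXct("2020-01-01 00:00:00", tz = "UTC") + 0:4 * 3600
x %between% list(as.POSIXct("2020-01-01 01:00:00", tz = "UTC"),
                 as.POSIXct("2020-01-01 03:00:00", tz = "UTC"))
# FALSE TRUE TRUE TRUE FALSE (bounds are inclusive)
# check=TRUE errors when any(lower > upper):
tryCatch(between(1:5, 4L, 2L, check = TRUE), error = conditionMessage)
```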
.() alias, #2315. Thanks to @Henrik-P for the reports. There is now also support for
New convenience functions
%ilike% and
%flike% which map to new
like() arguments
ignore.case and
fixed respectively, #3333.
%ilike% is for case-insensitive pattern matching.
%flike% is for more efficient matching of fixed strings. Thanks to @andreasLD for providing most of the core code.
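A minimal sketch of the difference between the three operators:

```r
library(data.table)
DT = data.table(pkg = c("data.table", "datattable", "DATA.TABLE"))
DT[pkg %like%  "data.table"]  # first two rows: . is a regex wildcard
DT[pkg %flike% "data.table"]  # "data.table" only: fixed-string match
DT[pkg %ilike% "data"]        # all three rows: case-insensitive
```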
on=.NATURAL (or alternatively
X[on=Y] #3621) joins two tables on their common column names, so called natural join, #629. Thanks to David Kulp for request. As before, when
on= is not provided,
X must have a key and the key columns are used to join (like rownames, but multi-column and multi-type).
as.data.table gains
key argument mirroring its use in
setDT and
data.table, #890. As a byproduct, the arguments of
as.data.table.array have changed order, which could affect code relying on positional arguments to this method. Thanks @cooldome for the suggestion and @MichaelChirico for implementation.
merge.data.table is now exported, #2618. We realize that S3 methods should not ordinarily be exported. Rather, the method should be invoked via S3 dispatch. But users continue to request its export, perhaps because of intricacies relating to the fact that data.table inherits from data.frame, there are two arguments to
merge() but S3 dispatch applies just to the first, and a desire to explicitly call
data.table::merge.data.table from package code. Thanks to @AndreMikulec for the most recent request.
A new rolling function to calculate the rolling sum has been implemented and exported; see
?frollsum, #2778.
setkey to an existing index now uses the index, #2889. Thanks to @MichaelChirico for suggesting and @saraswatmks for the PR.
DT[order(col)[1:5], ...] (i.e. where
i is a compound expression involving
order()) is now optimized to use
data.table’s multithreaded
forder, #1921. This example is not a fully optimal top-N query since the full ordering is still computed. The improvement is that the call to
order() is computed faster for any
i expression using
order.
as.data.table now unpacks columns in a
data.frame which are themselves a
data.frame or
matrix. This need arises when parsing JSON, a corollary in #3369. Bug fix 19 in v1.12.2 (see below) added a helpful error (rather than segfault) to detect such invalid
data.table, and promised that
as.data.table() would unpack these columns in the next release (i.e. this release) so that the invalid
data.table is not created in the first place. Further,
setDT now warns if it observes such columns and suggests using
as.data.table instead, #3760.
CJ has been ported to C and parallelized, thanks to a PR by Michael Chirico, #3596. All types benefit, but, as in many
data.table operations, factors benefit more than character.
```r
# default 4 threads on a laptop with 16GB RAM and 8 logical CPU
ids = as.vector(outer(LETTERS, LETTERS, paste0))
system.time( CJ(ids, 1:500000) )   # 3.9GB; 340m rows
#  user  system elapsed (seconds)
# 3.000   0.817   3.798   # was
# 1.800   0.832   2.190   # now

ids = as.factor(ids)
system.time( CJ(ids, 1:500000) )   # 2.6GB; 340m rows
#  user  system elapsed (seconds)
# 1.779   0.534   2.293   # was
# 0.357   0.763   0.292   # now
```
New function
fcoalesce(...) has been written in C, and is multithreaded for
numeric and
factor. It replaces missing values according to a prioritized list of candidates (as per SQL COALESCE,
dplyr::coalesce, and
hutils::coalesce), #3424. It accepts any number of vectors in several forms. For example, given three vectors
x,
y, and
z, where each
NA in
x is to be replaced by the corresponding value in
y if that is non-NA, else the corresponding value in
z, the following equivalent forms are all accepted:
fcoalesce(x,y,z),
fcoalesce(x,list(y,z)), and
fcoalesce(list(x,y,z)). Being a new function, its behaviour is subject to change particularly for type
list, #3712.
```r
# default 4 threads on a laptop with 16GB RAM and 8 logical CPU
N = 100e6
x = replicate(5, {x=sample(N); x[sample(N, N/2)]=NA; x}, simplify=FALSE)  # 2GB
y1 = do.call(dplyr::coalesce, x)
y2 = do.call(hutils::coalesce, x)
y3 = do.call(data.table::fcoalesce, x)
#  user  system elapsed (seconds)
# 4.935   1.876   6.810   # dplyr::coalesce
# 3.122   0.831   3.956   # hutils::coalesce
# 0.915   0.099   0.379   # data.table::fcoalesce
identical(y1,y2) && identical(y1,y3)
# TRUE
```
Type
complex is now supported by
setkey,
setorder,
by=,
keyby=,
shift,
dcast,
frank,
rowid,
rleid,
CJ,
fcoalesce,
unique, and
uniqueN, #3690. Thanks to Gareth Ward and Elio Campitelli for their reports and input. Sorting
complex is achieved the same way as base R; i.e., first by the real part then by the imaginary part (as if the
complex column were two separate columns of
double). There is no plan to support joining/merging on
complex columns until a user demonstrates a need for that.
:=,
setkey,
[key]by= and
on= in verbose mode (
options(datatable.verbose=TRUE)) now detect any columns inheriting from
Date which are stored as 8 byte double, test if any fractions are present, and if not suggest using a 4 byte integer instead (such as
data.table::IDate) to save space and time, #1738. In future this could be upgraded to
message or
warning depending on feedback.
New function
fifelse(test, yes, no, na) has been implemented in C by Morgan Jacob, #3657 and #3753. It is comparable to
base::ifelse,
dplyr::if_else,
hutils::if_else, and (forthcoming)
vctrs::if_else(). It returns a vector of the same length as
test but unlike
base::ifelse the output type is consistent with those of
yes and
no. Please see
?data.table::fifelse for more details.
```r
# default 4 threads on a laptop with 16GB RAM and 8 logical CPU
x = sample(c(TRUE,FALSE), 3e8, replace=TRUE)  # 1GB
microbenchmark::microbenchmark(
  base::ifelse(x, 7L, 11L),
  dplyr::if_else(x, 7L, 11L),
  hutils::if_else(x, 7L, 11L),
  data.table::fifelse(x, 7L, 11L),
  times = 5L, unit="s"
)
# Unit: seconds
#                             expr  min  med  max neval
#         base::ifelse(x, 7L, 11L)  8.5  8.6  8.8     5
#       dplyr::if_else(x, 7L, 11L)  9.4  9.5  9.7     5
#      hutils::if_else(x, 7L, 11L)  2.6  2.6  2.7     5
#  data.table::fifelse(x, 7L, 11L)  1.5  1.5  1.6     5   # setDTthreads(1)
#  data.table::fifelse(x, 7L, 11L)  0.8  0.8  0.9     5   # setDTthreads(2)
#  data.table::fifelse(x, 7L, 11L)  0.4  0.4  0.5     5   # setDTthreads(4)
```
transpose gains
keep.names= and
make.names= arguments, #1886. Previously, column names were dropped and there was no way to keep them.
keep.names="rn" keeps the column names and puts them in the
"rn" column of the result. Similarly,
make.names="rn" uses column
"rn" as the column names of the result. Both arguments are
NULL by default for backwards compatibility. As these arguments are new, they are subject to change in future according to community feedback. Thanks to @ghost for the request.
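A minimal sketch of both arguments:

```r
library(data.table)
DT = data.table(x = 1:2, y = 3:4)
tDT = transpose(DT, keep.names = "rn")
tDT
#        rn    V1    V2
# 1:      x     1     3
# 2:      y     2     4
transpose(tDT, make.names = "rn")   # round-trips back to columns x and y
```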
Added a
data.table method for
utils::edit to ensure a
data.table is returned, for convenience, #593.
More efficient optimization of many columns in
j (e.g. from
.SD), #1470. Thanks @Jorges1000 for the report.
setnames(DT, old, new) now omits any
old==new to save redundant key and index name updates, #3783.
setnames(DT, new) (i.e. not providing
old) already omitted any column name updates where
names(DT)==new; e.g.
setnames(DT, gsub('^_', '', names(DT))) exits early if no columns start with
_.
[[ by group is now optimized for regular vectors (not type list), #3209. Thanks @renkun-ken for the suggestion.
[ by group was already optimized. Please file a feature request if you would like this optimization for list columns.
New function
frollapply for rolling computation of arbitrary R functions (caveat: input
x is coerced to numeric beforehand, and the function must return a scalar numeric value). The API is consistent to extant rolling functions
frollmean and
frollsum; note that it will generally be slower than those functions because (1) the known functions use our optimized internal C implementation and (2) there is no thread-safe API to R’s C
eval. Nevertheless
frollapply is faster than corresponding
base-only and
zoo versions:
```r
set.seed(108)
x = rnorm(1e6); n = 1e3
base_rollapply = function(x, n, FUN) {
  nx = length(x)
  ans = rep(NA_real_, nx)
  for (i in n:nx) ans[i] = FUN(x[(i-n+1):i])
  ans
}
system.time(base_rollapply(x, n, mean))
system.time(zoo::rollapplyr(x, n, function(x) mean(x), fill=NA))
system.time(zoo::rollmeanr(x, n, fill=NA))
system.time(frollapply(x, n, mean))
system.time(frollmean(x, n))

### fun              mean     sum  median
# base_rollapply    8.815   5.151  60.175
# zoo::rollapply   34.373  27.837  88.552
# zoo::roll[fun]    0.215   0.185      NA  ## median not fully supported
# frollapply        5.404   1.419  56.475
# froll[fun]        0.003   0.002      NA  ## median not yet supported
```
setnames() now accepts functions in
old= and
new=, #3703. Thanks @smingerson for the feature request and @shrektan for the PR.
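A small sketch of the function forms (per the feature described above):

```r
library(data.table)
DT = data.table(a = 1, b = 2, c = 3)
setnames(DT, toupper)                # function applied to all names: A, B, C
setnames(DT, c("A", "C"), tolower)   # function applied only to the given columns: a, B, c
```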
set() now uses zero-copy type coercion. Accordingly,
DT[..., integerColumn:=0] and
set(DT,i,j,0) no longer warn about the
0 (‘numeric’) needing to be
0L (‘integer’) because there is no longer any time or space used for this coercion. The old long warning was off-putting to new users (“what and why L?”), whereas advanced users appreciated the old warning so they could avoid the coercion. Although the time and space for one coercion in a single call is unmeasurably small, when placed in a loop the small overhead of any allocation on R’s heap could start to become noticeable (more so for
set() whose purpose is low-overhead looping). Further, when assigning a value across columns of varying types, it could be inconvenient to supply the correct type for every column. Hence, zero-copy coercion was introduced to satisfy all these requirements. A warning is still issued, as before, when fractional data is discarded; e.g. when 3.14 is assigned to an integer column. Zero-copy coercion applies to length>1 vectors as well as length-1 vectors.
- `:=` and `first`, `head` and `tail` by group no longer error in some cases, #2030 #3462. Thanks to @franknarf1 for reporting.
- `keyby=colName` could use the wrong index and return incorrect results if both `colName` and `colNameExtra` (where `colName` is a leading subset of characters of `colNameExtra`) are column names and an index exists on `colNameExtra`, #3498. Thanks to Xianying Tan for the detailed report and for pinpointing the source line at fault.
- A missing item in `j` such as `j=.(colA, )` now gives a helpful error (`Item 2 of the .() or list() passed to j is missing`) rather than the unhelpful error `argument "this_jsub" is missing, with no default` (v1.12.2) or `argument 2 is empty` (v1.12.0 and before), #3507. Thanks to @eddelbuettel for the report.
- `fwrite()` could crash when writing very long strings such as 30 million characters, #2974, and could be unstable in memory-constrained environments, #2612. Thanks to @logworthy and @zachokeeffe for reporting and Philippe Chataignon for fixing in PR #3288.
- `fread()` could crash if `quote=""` (i.e. ignore quotes), the last line is too short, and `fill=TRUE`, #3524. Thanks to Jiucang Hao for the report and reproducible example.
- Printing could occur unexpectedly when code is run with `source`, #2369. Thanks to @jan-glx for the report and reproducible example.
- Grouping by `NULL` on a zero-row `data.table` now behaves consistently with a non-zero-row `data.table`, #3530. Thanks to @SymbolixAU for the report and reproducible example.
- GForce optimization of `median` did not retain the class; e.g. `median` of `Date` or `POSIXct` would return a raw number rather than retain the date class, #3079. Thanks to @Henrik-P for reporting.
- `DT[, format(mean(date), "%b-%Y"), by=group]` could fail with `invalid 'trim' argument`, #1876. Thanks to Ross Holmberg for reporting.
- `externalVar=1:5; DT[, mean(externalVar), by=group]` could return incorrect results rather than a constant (`3` in this example) for each group, #875. GForce optimization was being applied incorrectly to the `mean` without realizing that `externalVar` was not a column.
- `test.data.table()` now passes in non-English R sessions, #630 #3039. Each test still checks that the number of warnings and/or errors produced is correct. However, a message is displayed suggesting to restart R with `LANGUAGE=en` in order to test that the text of the warning and/or error messages is as expected, too.
- Joining a double column in `i` containing, say, 1.3, with an integer column in `x` containing, say, 1, would result in the 1.3 matching to 1, #2592, and joining a factor column to an integer column would match the factor's integers rather than error. The type coercion logic has been revised and strengthened. Many thanks to @MarkusBonsch for reporting and fixing. Joining a character column in `i` to a factor column in `x` is now faster and retains the character column in the result rather than coercing it to factor. Joining an integer column in `i` to a double column in `x` now retains the integer type in the result rather than coercing the integers into the double type. Logical columns may now only be joined to logical columns, other than all-NA columns, which are coerced to the matching column's type. All coercions are reported in verbose mode: `options(datatable.verbose=TRUE)`.
- Attempting to recycle 2 or more items into an existing `list` column now gives the intended helpful error rather than `Internal error: recycle length error not caught earlier.`, #3543. Thanks to @MichaelChirico for finding and reporting.
- Subassigning using `$<-` to a `data.table` embedded in a list column of a single-row `data.table` could fail, #3474. Note that `$<-` is not recommended; please use `:=` instead, which already worked in this case. Thanks to Jakob Richter for reporting.
- `rbind` and `rbindlist` of zero-row items now retain (again) the unused levels of any (zero-length) factor columns, #3508. This was a regression in v1.12.2 just for zero-row items. Unused factor levels were already retained for items having `nrow>=1`. Thanks to Gregory Demin for reporting.
- `rbind` and `rbindlist` of an item containing an ordered factor with levels containing an `NA` (as opposed to an NA integer) could segfault, #3601. This was a regression in v1.12.2. Thanks to Damian Betebenner for reporting. Also fixed a related segfault when recycling a length-1 factor column, #3662.
- `example(":=", local=TRUE)` now works rather than erroring, #2972. Thanks @vlulla for the report.
- `rbind.data.frame` on `IDate` columns changed the column from `integer` to `double`, #2008. Thanks to @rmcgehee for reporting.
- `merge.data.table` now retains any custom classes of the first argument, #1378. Thanks to @michaelquinn32 for reopening.
- `c`, `seq` and `mean` of `ITime` objects now retain the `ITime` class via new `ITime` methods, #3628. Thanks @UweBlock for reporting. The `cut` and `split` methods for `ITime` have been removed since the default methods work, #3630.
- `as.data.table.array` now handles the case when some of the array's dimension names are `NULL`, #3636.
- Adding a `list` column using `cbind`, `as.data.table`, or `data.table` now works rather than treating the `list` as if it were a set of columns and introducing an invalid NA column name, #3471. However, please note that using `:=` to add columns is preferred.

  ```r
  cbind( data.table(1:2), list(c("a","b"),"a") )
  #       V1     V2     NA      # v1.12.2 and before
  #    <int> <char> <char>
  # 1:     1      a      a
  # 2:     2      b      a
  #
  #       V1     V2             # v1.12.4+
  #    <int> <list>
  # 1:     1    a,b
  # 2:     2      a
  ```
- Incorrect sorting/grouping results due to a bug in Intel's `icc` compiler 2019 (Version 19.0.4.243 Build 20190416) have been worked around thanks to a report and fix by Sebastian Freundt, #3647. Please run `data.table::test.data.table()`. If that passes, your installation does not have the problem.
- `column not found` could incorrectly occur in rare non-equi-join cases, #3635. Thanks to @UweBlock for the report.
- Slight fix to the logic for auto-naming the `by` clause, so that using a custom function like `evaluate` is now named `evaluate` instead of the name of its first symbolic argument, #3758.
- Column binding of a zero-column `data.table` now works as expected, #3334. Thanks to @kzenstratus for the report.
- `integer64` sum-by-group is now properly optimized, #1647, #3464. Thanks to @mlandry22-h2o for the report.
- From v1.12.0, `between()` and `%between%` interpret missing values in `lower=` or `upper=` as unlimited bounds. A new parameter `NAbounds` has been added to achieve the old behaviour of returning `NA`, #3522. Thanks @cguill95 for reporting. This is now consistent for character input, #3667 (thanks @AnonymousBoba), and the class `nanotime` is now supported too.
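A short sketch of the two behaviours (hypothetical vector; output comments omitted where version-dependent):

```r
library(data.table)
x = c(1, 5, NA)
between(x, lower = NA, upper = 3)                 # NA lower bound treated as unlimited
between(x, lower = NA, upper = 3, NAbounds = NA)  # old behaviour: NA when a bound is NA
```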
- An `integer64` value defined on a subset of rows of a new column would leave "gibberish" on the remaining rows, #3723. A bug in `rbindlist` with the same root cause was also fixed, #1459. Thanks @shrektan and @jangorecki for the reports.
- `groupingsets` functions now properly handle the alone special symbols when using an empty set to group by, #3653. Thanks to @Henrik-P for the report.
- A `data.table` created using `setDT()` on a `data.frame` containing identical columns referencing each other would cause `setkey()` to return incorrect results, #3496 and #3766. Thanks @kirillmayantsev and @alex46015 for reporting, and @jaapwalhout and @Atrebas for helping to debug and isolate the issue.
- `x[, round(.SD, 1)]` and similar operations on the whole of `.SD` could return a locked result, incorrectly preventing `:=` on the result, #2245. Thanks @grayskripko for raising.
- Using `get`/`mget` in `j` could cause `.SDcols` to be ignored or reordered, #1744, #1965, and #2036. Thanks @franknarf1, @MichaelChirico, and @TonyBonen for the reports.
- `DT[, i-1L, with=FALSE]` would misinterpret the minus sign and return an incorrect result, #2019. Thanks @cguill95 for the report.
- `DT[id==1, DT2[.SD, on="id"]]` (i.e. joining from `.SD` in `j`) could incorrectly fail in some cases due to `.SD` being locked, #1926, and when updating-on-join with factors, #3559 #2099. Thanks @franknarf1 and @Henrik-P for the reports and for diligently tracking use cases for almost 3 years!
- `as.IDate.POSIXct` returned `NA` for UTC times before Dec 1901 and after Jan 2038, #3780. Thanks @gschett for the report.
- `rbindlist` now returns correct id columns for lists with different-length vectors, #3785, #3786. Thanks to @shrektan for the report and fix.
- `DT[, !rep(FALSE, ncol(DT)), with=FALSE]` correctly returns the full table, #3013 and #2917. Thanks @alexnss and @DavidArenburg for the reports.
- `shift(x, 0:1, type='lead', give.names=TRUE)` uses `lead` in all returned column names, #3832. Thanks @daynefiler for the report.
- Subtracting two `POSIXt` objects by group could lead to incorrect results because the `base` method internally calls `difftime` with `units='auto'`; `data.table` does not notice if the chosen units differ by group, and only the last group's `units` attribute was retained, #3694 and #761. To surmount this, we now internally force `units='secs'` on all `POSIXt-POSIXt` calls (reported when `verbose=TRUE`); generally we recommend calling `difftime` directly instead. Thanks @oliver-oliver and @boethian for the reports.
- Using `get`/`mget` in `j` could cause `.SDcols` to be ignored or reordered, #1744, #1965, #2036, and #2946. Thanks @franknarf1, @MichaelChirico, @TonyBonen, and Steffen J. (StackOverflow) for the reports.
- `DT[...,by={...}]` now handles expressions in `{`, #3156. Thanks to @tdhock for the report.
- `:=` could change a `data.table` creation statement in the body of the function calling it, or a variable in calling scope, #3890. Many thanks to @kirillmayantsev for the detailed reports.
- Grouping could create a malformed factor and/or segfault when the factors returned by each group did not have identical levels, #2199 and #2522. Thanks to Václav Hausenblas, @franknarf1, @ben519, and @Henrik-P for reporting.
- `rbindlist` (and printing a `data.table` with over 100 rows, because that uses `rbindlist(head, tail)`) could error with `malformed factor` for unordered factor columns containing a used `NA_character_` level, #3915. This is an unusual input for unordered factors because `NA_integer_` is recommended by default in R. Thanks to @sindribaldur for reporting.
- Adding a `list` column containing an item of type `list` to a one-row `data.table` could fail, #3626. Thanks to Jakob Richter for reporting.
- `rbindlist`'s `use.names="check"` now emits its message for automatic column names (`"V[0-9]+"`) too, #3484. See news item 5 of v1.12.2 below.
- Adding a new column by reference using `set()` on a `data.table` loaded from a binary file now gives a more helpful error message, #2996. Thanks to Joseph Burling for reporting.

  ```
  This data.table has either been loaded from disk (e.g. using readRDS()/load()) or
  constructed manually (e.g. using structure()). Please run setDT() or alloc.col() on
  it first (to pre-allocate space for new columns) before adding new columns by
  reference to it.
  ```
- `setorder` on a superset of a keyed `data.table`'s key now retains its key, #3456. For example, if `a` is the key of `DT`, `setorder(DT, a, -v)` will leave `DT` keyed by `a`.
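A minimal sketch of the key-retention behaviour (hypothetical table):

```r
library(data.table)
DT = data.table(a = c(2, 1, 1), v = 1:3, key = "a")
setorder(DT, a, -v)  # ordering by a superset of the key
key(DT)              # "a" is retained
```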
- New option `options(datatable.quiet = TRUE)` turns off the package startup message, #3489. `suppressPackageStartupMessages()` continues to work too. Thanks to @leobarlach for the suggestion inspired by `options(tidyverse.quiet = TRUE)`. We don't know of a way to make a package respect the `quietly=` option of `library()` and `require()` because `quietly=` isn't passed through for use by the package's own `.onAttach`. If you can see how to do that, please submit a patch to R.
- When loading a `data.table` from disk (e.g. with `readRDS`), best practice is to run `setDT()` on the new object to ensure it is correctly allocated memory for new column pointers. Barring this, unexpected behavior can follow; for example, if you assign a new column to `DT` from a function `f`, the new columns will only be assigned within `f` and `DT` will be unchanged. The `verbose` messaging in this situation is now more helpful, #1729. Thanks @vspinu for sharing his experience to spur this.
- New vignette *Using `.SD` for Data Analysis*: a deep dive into use cases for the `.SD` variable, to help illuminate a topic which we've found to be a sticking point for beginning and intermediate `data.table` users, #3412.
- Added a note to `?frank` clarifying that ranking is done according to C sorting (i.e., like `forder`), #2328. Thanks to @cguill95 for the request.
- Historically, `dcast` and `melt` were built as enhancements to `reshape2`'s own `dcast`/`melt`. We removed the dependency on `reshape2` in v1.9.6 but maintained some backward compatibility. As that package has been superseded since December 2017, we will begin to formally complete the split from `reshape2` by removing some last vestiges. In particular, we now warn when redirecting to `reshape2` methods and will later error before ultimately completing the split; see #3549 and #3633. We thank the `reshape2` authors for their original inspiration for these functions, and @ProfFancyPants for testing and reporting regressions in dev which have been fixed before release.
- `DT[col]`, where `col` is a column containing row numbers of itself to select, now suggests the correct syntax (`DT[(col)]` or `DT[DT$col]`), #697. This expands the message introduced in #1884 for the case where `col` is type `logical` and `DT[col==TRUE]` is suggested.
- The `datatable.old.unique.by.key` option has been warning for 1 year that it is deprecated: `... Please stop using it and pass by=key(DT) instead for clarity ...`. This warning is now upgraded to an error as per the schedule in note 10 of v1.11.0 (May 2018) and note 1 of v1.9.8 (Nov 2016). In June 2020 the option will be removed.
- We intend to deprecate the `datatable.nomatch` option (see the linked discussion for more info). A message is now printed upon use of the option (once per session) as a first step. It asks you to please stop using the option and to pass `nomatch=NULL` explicitly if you require an inner join. Outer join (`nomatch=NA`) has always been the default because it is safer; it does not drop missing data silently. The problem is that the option is global; i.e., if a user changes the default using this option for their own use, that can change the behavior of joins inside packages that use `data.table` too. This is the only `data.table` option with this concern.
- The test suite of 9k tests now runs with three R options on: `warnPartialMatchArgs`, `warnPartialMatchAttr`, and `warnPartialMatchDollar`. This ensures that we don't rely on partial argument matching in internal code, for robustness and efficiency, and so that users can turn these options on for their code in production, #3664. Thanks to Vijay Lulla for the suggestion, and Michael Chirico for fixing 48 internal calls to `attr()` which were missing `exact=TRUE`, for example. Thanks to R-core for adding these options to R 2.6.0 (Oct 2007).
- `test.data.table()` could fail if the `datatable.integer64` user option was set, #3683. Thanks @xiaguoxin for reporting.
- The warning message when using `keyby=` together with `:=` is clearer, #2763. Thanks to @eliocamp.
- `first` and `last` gain an explicit `n=1L` argument so that it's clear the default is 1, and their almost-identical manual pages have been merged into one.
- Rolling functions (`?froll`) coerce `logical` input to `numeric` (instead of failing) to mimic the behavior of `integer` input.
- The warning message when using `strptime` in `j` has been improved, #2068. Thanks to @tdhock for the report.
- Added a note to `?setkey` clarifying that `setkey` always uses C-locale sorting (as has been noted in `?setorder`). Thanks @JBreidaks for the report in #2114.
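A small sketch of what C-locale sorting means in practice (hypothetical column; in the C locale all uppercase ASCII letters sort before lowercase):

```r
library(data.table)
DT = data.table(x = c("a", "B", "b", "A"))
setkey(DT, x)
DT$x  # C-locale order: "A" "B" "a" "b"
```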
- `hour()`/`minute()`/`second()` are much faster for `ITime` input, #3518.
- New alias `setalloccol` for `alloc.col`, #3475, for consistency with the `set*` prefix used by functions that operate in-place (like `setkey`, `setorder`, etc.). `alloc.col` is not going to be deprecated, but we recommend using `setalloccol`.
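A minimal sketch of the alias (hypothetical table; `2048L` is an arbitrary illustrative value):

```r
library(data.table)
DT = data.table(x = 1:3)
truelength(DT)          # pre-allocated column slots
setalloccol(DT, 2048L)  # same as alloc.col(DT, 2048L), named consistently with set*
```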
- `dcast` no longer emits a message when `value.var` is missing but `fun.aggregate` is explicitly set to `length` (since `value.var` is arbitrary in this case), #2980.
- Optimized `mean` of `integer` columns no longer warns about a coercion to numeric, #986. Thanks @dgrtwo for his YouTube tutorial at 3:01 where the warning occurs.
- Using the `first` and `last` functions on a `POSIXct` object no longer loads the `xts` namespace, #3857.
- `first` on an empty `data.table` now returns an empty `data.table`, #3858.
- Added some clarifying details about what happens when a shell command is used in `fread`, #3877. Thanks Brian for the StackOverflow question which highlighted the lack of explanation here.
- We continue to encourage packages to `Import` rather than `Depend` on `data.table`, #3076. To prevent the growth rate in new packages using `Depend`, we have requested that CRAN apply a small patch we provided to prevent new submissions using `Depend`. If this is accepted, the error under `--as-cran` will be as follows. The existing 73 packages using `Depend` will continue to pass OK until they next update, at which point they will be required to change from `Depend` to `Import`.

  ```
  R CMD check <pkg> --as-cran
  ...
  * checking package dependencies ... ERROR
    data.table should be in Imports not Depends. Please contact its maintainer for more information.
  ```
- `:=` no longer recycles length>1 RHS vectors. There was a warning when recycling left a remainder but no warning when the LHS length was an exact multiple of the RHS length (the same behaviour as base R). Consistent feedback for several years has been that recycling is more often a bug. In the rare cases where you need to recycle a length>1 vector, please use `rep()` explicitly. Single values are still recycled silently as before. Early warning was given in this tweet. The 774 CRAN and Bioconductor packages using `data.table` were tested and the maintainers of the 16 packages affected (2%) were consulted before going ahead, #3310. Upon agreement we went ahead. Many thanks to all those maintainers for already updating on CRAN, #3347.
- `foverlaps` now supports `type="equal"`, #3416 and part of #3002.
- The number of logical CPUs used by default has been reduced from 100% to 50%. The previous 100% default was reported to cause significant slowdowns when other non-trivial processes were also running, #3395 #3298. Two new optional environment variables (`R_DATATABLE_NUM_PROCS_PERCENT` & `R_DATATABLE_NUM_THREADS`) control this default. `setDTthreads()` gains `percent=`, and `?setDTthreads` has been significantly revised. The output of `getDTthreads(verbose=TRUE)` has been expanded. The environment variable `OMP_THREAD_LIMIT` is now respected (#3300) in addition to `OMP_NUM_THREADS` as before.
- `rbind` and `rbindlist` now retain the position of duplicate column names rather than grouping them together (#3373), fill length-0 columns (including `NULL`) with `NA` with warning (#1871), and recycle length-1 columns (#524). Thanks to Kun Ren for the requests, which arose when parsing JSON.
- `rbindlist`'s `use.names=` default has changed from `FALSE` to `"check"`. This emits a message if the column names of each item are not identical and then proceeds as if `use.names=FALSE` for backwards compatibility; i.e., bind by column position, not by column name. The `rbind` method for `data.table` already sets `use.names=TRUE`, so this change affects `rbindlist` only and not `rbind.data.table`. To stack differently named columns together silently (the previous default behavior of `rbindlist`), it is now necessary to specify `use.names=FALSE` for clarity to readers of your code. Thanks to Clayton Stanley who first raised the issue here. To aid pinpointing the calls to `rbindlist` that need attention, the message can be turned into an error using `options(datatable.rbindlist.check="error")`. This option also accepts `"warning"`, `"message"` and `"none"`. In this release the message is suppressed for default column names (`"V[0-9]+"`); the next release will emit the message for those too. In 6 months the default will be upgraded from message to warning. There are two slightly different messages. They are helpful, include context, and point to this news item:

  ```
  Column %d ['%s'] of item %d is missing in item %d. Use fill=TRUE to fill with NA (NULL for list columns), or use.names=FALSE to ignore column names. See news item 5 in v1.12.2 for options to control this message.
  Column %d ['%s'] of item %d appears in position %d in item %d. Set use.names=TRUE to match by column name, or use.names=FALSE to ignore column names. See news item 5 in v1.12.2 for options to control this message.
  ```
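A minimal sketch of the behaviour described in this item (hypothetical one-column tables; exact behaviour in later versions may differ per the schedule above):

```r
library(data.table)
rbindlist(list(data.table(a = 1), data.table(b = 2)))
# use.names="check": message about differing names, then binds by position

rbindlist(list(data.table(a = 1), data.table(b = 2)), use.names = FALSE)
# silent: explicit request to bind by position

options(datatable.rbindlist.check = "error")  # escalate the message to an error
```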
- `fread` gains `keepLeadingZeros`, #2999. By default `FALSE` so that, as before, a field containing `001` is interpreted as the integer 1; otherwise, as the character string `"001"`. The default may be changed using `options(datatable.keepLeadingZeros=TRUE)`. Many thanks to @marc-outins for the PR.
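A quick sketch of the new argument (hypothetical input):

```r
library(data.table)
fread("id\n001\n010")                           # id read as integer: 1, 10
fread("id\n001\n010", keepLeadingZeros = TRUE)  # id kept as character: "001", "010"
```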
- `rbindlist()` of a malformed factor which is missing a levels attribute now gives a helpful error rather than a cryptic error about `STRING_ELT`, #3315. Thanks to Michael Chirico for reporting.
- Forgetting `type=` in `shift(val, "lead")` would segfault, #3354. A helpful error is now produced to indicate that `"lead"` is being passed to `n=` rather than the intended `type=` argument. Thanks to @SymbolixAU for reporting.
- The default print output (top 5 and bottom 5 rows) when ncol>255 could display the columns in the wrong order, #3306. Thanks to Kun Ren for reporting.
- Grouping by unusual column names such as `by='string_with_\\'` and `keyby="x y"` could fail, #3319 #3378. Thanks to @HughParsonage for reporting and @MichaelChirico for the fixes.
- `foverlaps()` could return incorrect results for `POSIXct <= 1970-01-01`, #3349. Thanks to @lux5 for reporting.
- `dcast.data.table` now handles functions passed to `fun.aggregate=` via a variable; e.g., `funs <- list(sum, mean); dcast(..., fun.aggregate=funs)`, #1974 #1369 #2064 #2949. Thanks to @sunbee, @Ping2016, @smidelius and @d0rg0ld for reporting.
- Some non-equi-join cases could segfault, #3401. Thanks to @Gayyam for reporting.
- `dcast.data.table` could sort rows containing `NA` incorrectly, #2202. Thanks to @Galileo-Galilei for the report.
- Sorting, grouping and finding unique values of a numeric column containing at most one finite value (such as `c(Inf,0,-Inf)`) could return incorrect results, #3372 #3381; e.g., `data.table(A=c(Inf,0,-Inf), V=1:3)[,sum(V),by=A]` would treat the 3 rows as one group. This was a regression in 1.12.0. Thanks to Nicolas Ampuero for reporting.
- `:=` with a quoted expression and dot alias now works as expected, #3425. Thanks to @franknarf1 for raising and @jangorecki for the PR.
- A join's result could be incorrectly keyed when a single nomatch occurred at the very beginning while all other values matched, #3441. The incorrect key would cause incorrect results in subsequent queries. Thanks to @symbalex for reporting and @franknarf1 for pinpointing the root cause.
- `rbind` and `rbindlist(..., use.names=TRUE)` with over 255 columns could return the columns in a random order, #3373. The contents and name of each column were correct, but the order in which the columns appeared in the result might not have matched the original input.
- `rbind` and `rbindlist` now combine `integer64` columns together with non-`integer64` columns correctly (#1349), and support `raw` columns (#2819).
- `NULL` columns are caught and error appropriately rather than segfault in some cases, #2303 #2305. Thanks to Hugh Parsonage and @franknarf1 for reporting.
- `melt` would error with 'factor malformed' or segfault in the presence of duplicate column names, #1754. Many thanks to @franknarf1, William Marble, wligtenberg and Toby Dylan Hocking for reproducible examples. All examples have been added to the test suite.
- Removing a column from a null (0-column) data.table is now a (standard and simpler) warning rather than an error, #2335. It is no longer an error to add a column to a null (0-column) data.table.
- Non-UTF8 strings were not always sorted correctly on Windows (a regression in v1.12.0), #3397 #3451. Many thanks to @shrektan for reporting and fixing.
- `cbind` with a null (0-column) `data.table` now works as expected, #3445. Thanks to @mb706 for reporting.
- Subsetting does a better job of catching a malformed `data.table` with an error rather than a segfault. A column may not be NULL, nor may a column be an object which has columns (such as a `data.frame` or `matrix`). Thanks to a comment and reproducible example in #3369 from Drew Abbot which demonstrated the issue, which arose from parsing JSON. The next release will enable `as.data.table` to unpack columns which are `data.frame` to support this use case.
- When upgrading to 1.12.0, some Windows users might have seen `CdllVersion not found` in some circumstances. We found a way to catch that, so the helpful message now occurs for those upgrading from versions prior to 1.12.0 too, as well as those upgrading from 1.12.0 to a later version. See item 1 in the notes section of 1.12.0 below for more background.
- v1.12.0 checked itself on loading using `tools::checkMD5sums("data.table")`, but this check failed under the `packrat` package manager on Windows because `packrat` appears to modify the DESCRIPTION file of packages it has snapshot, #3329. This check is now removed. The `CdllVersion` check was introduced after the `checkMD5sums()` attempt and is better; e.g., reliable on all platforms.
- As promised in new feature 6 of v1.11.6, Sep 2018 (see below in this news file), the `datatable.CJ.names` option's default is now `TRUE`. In v1.13.0 it will be removed.
- Travis CI gains OSX using homebrew llvm for OpenMP support, #3326. Thanks @marcusklik for the PR.
- Calling `data.table:::print.data.table()` directly (i.e. bypassing method dispatch by using 3 colons) and passing it a 0-column `data.frame` (not `data.table`) now works, #3363. Thanks @heavywatal for the PR.
- v1.12.0 did not compile on Solaris 10 using Oracle Developer Studio 12.6, #3285. Many thanks to Prof Ripley for providing and testing a patch. For future reference and other package developers: a `const` variable should not be passed to OpenMP's `num_threads()` directive, otherwise `left operand must be modifiable lvalue` occurs. This appears to be a compiler bug, which is why the specific versions are mentioned in this note.
- `foverlaps` provides clearer error messages w.r.t. factor and POSIXct interval columns, #2645 #3007 #1143. Thanks to @sritchie73, @msummersgill and @DavidArenburg for the reports.
- `unique(DT)` checks up-front the types of all the columns and will fail if any column is type `list`, even though those `list` columns may not be needed to establish uniqueness. Use `unique(DT, by=...)` to specify columns that are not type `list`. v1.11.8 and before would also correctly fail with the same error, but not when uniqueness had been established in prior columns: it would stop early, not look at the `list` column, and return the correct result. Checking up-front was necessary for some internal optimizations and it's probably best to be explicit anyway. Thanks to James Lamb for reporting, #3332. The error message has been embellished:

  ```
  Column 2 of by= (2) is type 'list', not yet supported. Please use the by= argument to specify columns with types that are supported.
  ```
- Reminder that note 11 in v1.11.0 (May 2018) warned that `set2key()` and `key2()` will be removed in May 2019. They have been warning since v1.9.8 (Nov 2016) and their warnings were upgraded to errors in v1.11.0 (May 2018). When they were introduced in version 1.9.4 (Oct 2014) they were marked as 'experimental'.
- The `key(DT)<-` form of `setkey()` has been warning since at least 2012 to use `setkey()`. The warning is now stronger: `key(x)<-value is deprecated and not supported. Please change to use setkey().`. This warning will be upgraded to an error in one year.
- Thanks to Kun Ren, who 'goes-first' and runs data.table through his production systems before release. We are looking for a 'go-second' volunteer to do the same.
- A join with a fill value (which does not apply to `list` type and is different to `nomatch=0`) will fill with `0`, to save replacing `NA` with `0` afterwards, #857.
- Subsetting rows is now faster; e.g.:

  ```r
  DT = data.table(ID = sample(LETTERS,N,TRUE), V1 = sample(5,N,TRUE), V2 = runif(N))
  w = which(DT$V1 > 3)        # select 40% of rows
  #                             v1.12.0   v1.11.8
  system.time(DT[w])          #    0.8s      2.6s
  DT[, ID := as.factor(ID)]
  system.time(DT[w])          #    0.4s      2.3s
  ```
- If a subclass of `data.table` has attributes to retain on `[` subset, then the subclass should implement its own `[` method to manage those after calling `NextMethod()`, #3143.
- Thanks to Henri Ståhl and @kszela24 for continuous-integration improvements (the "Extra" badge), including setting `TMPDIR` to `/dev/shm` before starting R.
- `:=` is no longer an error when the `i` clause returns no rows to assign to anyway, #2829. Thanks to @cguill95 for reporting and to @MarkusBonsch for fixing.
- `fread()`'s `na.strings=` argument default is transitioning via R's `getOption()`:

  ```r
  "NA"                                           # old default
  getOption("datatable.fread.na.strings", "NA")  # this release; i.e. the same; no change yet
  getOption("datatable.fread.na.strings", "")    # future release
  ```

  This option controls how `,,` is read in character columns. It does not affect numeric columns, which read `,,` as `NA` regardless. We would like `,,` => `NA` for consistency with numeric types (printed as `<NA>` in a character column), and `,"",` => empty string to be the standard default. The use of R's `getOption()` allows users to move forward now, using `options(datatable.fread.na.strings="")`, or restore the old behaviour when the default's default is changed in future, using `options(datatable.fread.na.strings="NA")`.
- Similarly for `fread()` and `fwrite()`'s `logical01=` argument:

  ```r
  logical01 = FALSE                        # old default
  getOption("datatable.logical01", FALSE)  # this release; i.e. the same; no change yet
  getOption("datatable.logical01", TRUE)   # future release
  ```

- `fread()` now copes with embedded quotes in unquoted fields. For example:

  ```r
  txt = 'A,B\n1,hello\n2,"howdy" said Joe\n3,bonjour\n'
  cat(txt)
  # A,B
  # 1,hello
  # 2,"howdy" said Joe
  # 3,bonjour
  fread(txt)
  #        A                B
  #    <int>           <char>
  # 1:     1            hello
  # 2:     2 "howdy" said Joe
  # 3:     3          bonjour
  ```
- Column selection via the `..` prefix may be preferred to the `with=` parameter. If this is well received, the `..` prefix could be expanded to symbols appearing in `i=` and `by=` too, and over the next few years we will start to formally deprecate and remove the `with=` parameter. Note that column names should not now start with `..`. If `..var` is used in `j=` but `..var` exists as a column name, the column still takes precedence, for backwards compatibility. Over the next few years, data.table will start issuing warnings/errors when it sees column names starting with `..`. This affects one CRAN package out of 475 using data.table, so we do not believe this restriction to be unreasonable. Our main focus here, which we believe `..` achieves, is to resolve the more common ambiguity when `var` is in calling scope and `var` is a column name too. Further, we have not forgotten that in the past we recommended prefixing the variable in calling scope with `..` yourself. If you did that and `..var` exists in calling scope, that still works, provided neither `var` exists in calling scope nor `..var` exists as a column name. Please now remove the `..` prefix on `..var` in calling scope to tidy this up. In future data.table will start to warn/error on such usage.
- Fixed an issue when `.SDcols` is specified and `get()` appears in `j`. Thanks @renkun-ken for reporting and the PR, and @ProfFancyPants for reporting a regression introduced in the PR. Closes #2326 and #2338.
- `:=` assignment of one vector to two or more columns has been fixed in some cases (e.g. involving `'class'`, `"NA"` and `""`).
- `list` was over-aggressively applied even when `bquote` was used, #1912. Thanks @MichaelChirico for reporting/filing and @ecoRoland for suggesting and testing a fix.
- Fixed a crash when `parallel::mclapply` is used and data.table is merely loaded, #2418. Oddly, all tests including test 1705 (which tests `mclapply` with data.table) passed fine on CRAN. It appears to be specific to some versions of MacOS or some versions of libraries on MacOS, perhaps. Many thanks to Martin Morgan for reporting and confirming this fix works. Thanks also to @asenabouth, Joe Thorley and Danton Noriega for testing, debugging and confirming that automatic parallelism inside data.table (such as `fwrite`) works well even on these MacOS installations. See also news items below for 1.10.4-1 and 1.10.4-2.
- OpenMP on MacOS is now supported by CRAN and included in CRAN's package binaries for Mac. But installing v1.10.4-1 from source on MacOS failed when OpenMP was not enabled at compile time, #2409. Thanks to Liz Macfie and @fupangpangpang for reporting. The startup message when OpenMP is not enabled has been updated.
- Two rare potential memory faults fixed, thanks to CRAN's automated use of latest compiler tools; e.g. clang-5 and gcc-7.
- The `nanotime` v0.2.0 update (June 2017) changed from `integer64` to `S4` and broke `fwrite` of `nanotime` columns. Fixed to work with `nanotime` both before and after v0.2.0.
- Pass R-devel changes related to `deparse(,backtick=)` and `factor()`.
- Internal `NAMED()==2` is now `MAYBE_SHARED()`.
- When `fread()` and `print()` see that `integer64` columns are present but package `bit64` is not installed, the warning is now displayed as intended. Thanks to a question by Santosh on r-help, forwarded by Bill Dunlap.
- The `nanotime` writer in `fwrite()` type-punned using `*(long long *)&REAL(column)[i]` which, strictly, is undefined behaviour under C standards. It passed a plethora of tests on linux (gcc 5.4 and clang 3.8), win-builder and 6 out of 10 CRAN flavours using gcc. But it failed (wrong data written) with the newest version of clang (3.9.1) as used by CRAN on the failing flavours, and on solaris-sparc. Replaced with the union method and added a grep to CRAN_Release.cmd.
When
j is a symbol prefixed with
..it will be looked up in calling scope and its value taken to be column names or numbers.
When you see the
DT[...]. It is intended to be a convenient way to protect your code from accidentally picking up a column name. Similar to how
x. and
i. prefixes (analogous to SQL table aliases) can already be used to disambiguate the same column name present in both
x and
i. A symbol prefix rather than a
..prefix think one-level-up like the directory
..in all operating systems meaning the parent directory. In future the
..prefix could be made to work on all symbols apearing anywhere inside
..()function will be easier for us to optimize internally and more convenient if you have many variables in calling scope that you wish to use in your expressions safely. This feature was first raised in 2012 and long wished for, #633. It is experimental.
When
fread() or
print() see
integer64 columns are present,
bit64’s namespace is now automatically loaded for convenience.
fwrite() now supports the new
nanotime type by Dirk Eddelbuettel, #1982. Aside:
data.table already automatically supported
nanotime in grouping and joining operations via longstanding support of its underlying
integer64 type.
indices() gains a new argument
vectors, default
FALSE. This strsplits the index names by
__ for you, #1589.
Some long-standing potential instability has been discovered and resolved many thanks to a detailed report from Bill Dunlap and Michael Sannella. At C level any call of the form
setAttrib(x, install(), allocVector()) can be unstable in any R package. Despite
setAttrib() PROTECTing its inputs, the 3rd argument (
allocVector) can be executed first only for its result to to be released by
install()’s potential GC before reaching
setAttrib’s PROTECTion of its inputs. Fixed by either PROTECTing or pre-
install()ing. Added to CRAN_Release.cmd procedures: i)
greps to prevent usage of this idiom in future and ii) running data.table’s test suite with
gctorture(TRUE).
A new potential instability introduced in the last release (v1.10.0) in GForce optimized grouping has been fixed by reverting one change from malloc to R_alloc. Thanks again to Michael Sannella for the detailed report.
fwrite() could write floating point values incorrectly, #1968. A thread-local variable was incorrectly thread-global. This variable’s usage lifetime is only a few clock cycles so it needed large data and many threads for several threads to overlap their usage of it and cause the problem. Many thanks to @mgahan and @jmosser for finding and reporting.
fwrite()’s
..turbo option has been removed as the warning message warned. If you’ve found a problem, please report it.
No known issues have arisen due to
DT[,1] and
DT[,c("colA","colB")] now returning columns as introduced in v1.9.8. However, as we’ve moved forward by setting
options('datatable.WhenJisSymbolThenCallingScope'=TRUE) introduced then too, it has become clear a better solution is needed. All 340 CRAN and Bioconductor packages that use data.table have been checked with this option on. 331 lines would need to be changed in 59 packages. Their usage is elegant, correct and recommended, though. Examples are
DT[1, encoding] in quanteda and
DT[winner=="first", freq] in xgboost. These are looking up the columns
encoding and
freq respectively and returning them as vectors. But if, for some reason, those columns are removed from
DT and
encoding or
freq are still variables in calling scope, their values in calling scope would be returned. Which cannot be what was intended and could lead to silent bugs. That was the risk we were trying to avoid.
options('datatable.WhenJisSymbolThenCallingScope') is now removed. A migration timeline is no longer needed. The new strategy needs no code changes and has no breakage. It was proposed and discussed in point 2 here, as follows.
When
j is a symbol (as in the quanteda and xgboost examples above) it will continue to be looked up as a column name and returned as a vector, as has always been the case. If it’s not a column name however, it is now a helpful error explaining that data.table is different to data.frame and what to do instead (use
with=FALSE). The old behaviour of returning the symbol’s value in calling scope can never have been useful to anybody and therefore not depended on. Just as the
DT[,1] change could be made in v1.9.8, this change can be made now. This change increases robustness with no downside. Rerunning all 340 CRAN and Bioconductor package checks reveal 2 packages throwing the new error: partools and simcausal. Their maintainers have been informed that there is a likely bug on those lines due to data.table’s (now remedied) weakness. This is exactly what we wanted to reveal and improve.
..prefix or
As before, and as we can see is in common use in CRAN and Bioconductor packages using data.table,
DT[,myCols,with=FALSE] continues to lookup
myCols in calling scope and take its value as column names or numbers. You can move to the new experimental convenience feature
DT[, ..myCols] if you wish at leisure.
fwrite(..., quote='auto') already quoted a field if it contained a
sep or
\n, or
sep2[2] when
list columns are present. Now it also quotes a field if it contains a double quote (
qmethod tests did test escaping embedded double quotes, but only when
sep or
\n was present in the field as well to trigger the quoting of the field.
") as documented, #1925. Thanks to Aki Matsuo for reporting. Tests added. The
Fixed 3 test failures on Solaris only, #1934. Two were on both sparc and x86 and related to a
tzone attribute difference between
as.POSIXct and
as.POSIXlt even when passed the default
tz="". The third was on sparc only: a minor rounding issue in
fwrite() of 1e-305.
Regression crash fixed when 0’s occur at the end of a non-empty subset of an empty table, #1937. Thanks Arun for tracking down. Tests added. For example, subsetting the empty
DT=data.table(a=character()) with
DT[c(1,0)] should return a 1 row result with one
NA since 1 is past the end of
nrow(DT)==0, the same result as
DT[1].
Fixed newly reported crash that also occurred in old v1.9.6 when
by=.EACHI,
nomatch=0, the first item in
i has no match AND
j has a function call that is passed a key column, #1933. Many thanks to Reino Bruner for finding and reporting with a reproducible example. Tests added.
Fixed
fread() error occurring for a subset of Windows users:
showProgress is not type integer but type 'logical'., #1944 and #1111. Our tests cover this usage (it is just default usage), pass on AppVeyor (Windows), win-builder (Windows) and CRAN’s Windows so perhaps it only occurs on a specific and different version of Windows to all those. Thanks to @demydd for reporting. Fixed by using strictly
logical type at R level and
Rboolean at C level, consistently throughout.
Combining
on= (new in v1.9.6) with
by= or
keyby= gave incorrect results, #1943. Many thanks to Henrik-P for the detailed and reproducible report. Tests added.
New function
rleidv was ignoring its
cols argument, #1942. Thanks Josh O’Brien for reporting. Tests added.
It seems OpenMP is not available on CRAN’s Mac platform; NOTEs appeared in CRAN checks for v1.9.8. Moved
Rprintf from
init.c to
packageStartupMessage to avoid the NOTE as requested urgently by Professor Ripley. Also fixed the bad grammar of the message: ‘single threaded’ now ‘single-threaded’. If you have a Mac and run macOS or OS X on it (I run Ubuntu on mine) please contact CRAN maintainers and/or Apple if you’d like CRAN’s Mac binary to support OpenMP. Otherwise, please follow these instructions for OpenMP on Mac which people have reported success with.
Just to state explicitly: data.table does not now depend on or require OpenMP. If you don’t have it (as on CRAN’s Mac it appears but not in general on Mac) then data.table should build, run and pass all tests just fine.
There are now 5,910 raw tests as reported by
test.data.table(). Tests cover 91% of the 4k lines of R and 89% of the 7k lines of C. These stats are now known thanks to Jim Hester’s Covr package and Codecov.io. If anyone is looking for something to help with, creating tests to hit the missed lines shown by clicking the
R and
src folders at the bottom here would be very much appreciated.
The FAQ vignette has been revised given the changes in v1.9.8. In particular, the very first FAQ.
With hindsight, the last release v1.9.8 should have been named v1.10.0 to convey it wasn’t just a patch release from .6 to .8 owing to the ‘potentially breaking changes’ items. Thanks to @neomantic for correctly pointing out. The best we can do now is now bump to 1.10.0. | https://rdatatable.gitlab.io/data.table/news/index.html | CC-MAIN-2022-21 | refinedweb | 23,287 | 67.96 |
On Apr 09, 2011, at 06:23 PM, Éric Araujo wrote: > Glad to read my review helped :) Indeed, thanks. > I also think that “bundle” is a nice term to name what the docs > currently calls a distribution. At the very least, *bundle* isn't completely overloaded 10x over in Pythonland yet. :) >>>> Another example of version information is the sqlite3 [5]_ library >>> the sqlite3 module > > You overlooked that one. Got it. >>>> #. For modules which are also packages, the top level module >>> namespace >>>> SHOULD include the ``__version__`` attribute. >>> Just a remark: I don’t remember ever reading the term “top-level >> module >>> namespace”. It’s not hard to understand, but it might be helpful to >>> some readers if you add “(i.e. the :file:`{somepackage}/__init__.py` >>> file)”. (The brackets will cause the enclosed text to be marked up >> as >>> replaceable text, just a nicety.) >> How about just removing "top-level"? >> >> #.. > Tarek already ruled last summer that field names in setup.cfg have to > have their PEP 345 name. I proposed to merge author name and email into > the author field, and to have the description field always refer to a > file: author and author_email are still separate, and a new > description_from_file fields has been added. That’s why I think a new > field has to be defined. version-from should be a short enough name. I > also expect most people to use copy-paste or the interactive setup.cfg > creation helper, so field name length should not be that big of an issue. Perhaps. In lieu of a better idea it's fine for now. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: <> | https://mail.python.org/pipermail/distutils-sig/2011-April/017694.html | CC-MAIN-2018-09 | refinedweb | 288 | 74.08 |
parikksit.bBAN USER
How does this approach sound like?).
@Vandita.Chhabaria, Your solution is n^2, that's not very good for a larger data sets. Here's what you can change to make the solution better wrt to time complexity. I believe this can be done in O(n)
//Employee Class:
public class Employee{
public String fName;
public String movingFrom; //Building number;
public String movingTo; //Building number;
...
}
/* Here's how you'd add data to a HasMap:
Map<String, String> map = new HashMap<>();
//Concat strings for from and to, it'd look like this: '1,2', which means
// the employee is moving from building 1 to building 2
map.add(emp.fromBuilding +","+ emp.ToBuilding, emp.fName);
.... //Repeat for all employees.
//Here's how you'd do the matchup
pubic ArrayList<String> matchUp(HashMap map)
{
List<String> list = new ArrayList<>();
Iterator it = mp.entrySet().iterator();
while( it.hasNext() )
{
Map.Entry pair = (Map.Entry) it.next();
String temp = getReversed(pair.getKey());
if(map.containsKey(temp))
list.add(pair.value() + ", " + map.get(temp));
}
return list;
}
private String getReversed(String s)
{
String[] temp = s.split(",");
return temp[1] + "," + temp[0];
}
Number 1 : 25, represented as [2] -> [5] -> null
Number 2: 5, represented as [5] -> null
Bogus solution could be to convert the linkedlist into strings, then parse it back into integers. I'm sure this isn't what they're looking for.
public int multiplyLLNumbers(LinkedList one, LinkedList 2)
{
Node node = one.Head();
String temp = "";
while(onenode.next() != null)
{
temp += node.data.toString();
}
String temp2 = "";
node = two.Head();
while(two.next() != null)
{
temp2 += two.data.toString();
}
return (Integer.parseInt(temp1) * Integer.parseInt(temp2));
}
Go through all the nodes, check if left and right exist, further check if their values are either lesser or greater than the node. Call the function recursively. Here's a java solution:
public boolean isValidBT(Node node)
{
if(node == null)
return true;
if(node.hasLeft() && !(node.left.data < node.data))
return false;
if(node.hasRight() && !(node.right.data > node.data))
return false;
if( !isValidBST(node.left) || !isValidBST(node.right) )
return false;
return true;
}
I believe a value to the 'supervisor' or 'senior' in the employee would solve this problem. An interface that would have a few methods to 'forwardCall(Employee.Supervisor emp)' would make things look better.
public class Employee implements Call{
Employee supervisor;
String fName;
String lName;
...
public Employee(String fName, String lName, Employee supervisor)
{ this.fName = fName; this.lName = lName; this.supervisor = supervisor;}
public transferToSupervisor(Employee sup)
{
//Code to transfer call
}
public interface Call{
public void answerCall();
...
public void transferToSupervisor(Employee supervisor);
}
Not sure about this answer, feedback appreciated.
I guess this question needs more clarification, some example would be good as to what is expected.
A HashMap of MaxHeaps.
Map<String, MaxHeap> map;
where, key = artist name, MaxHeap.getRoot() will give you the song name.
The nodes will be made up of a simple datastructure with a key-value pair:
class SongNode{
int count;
String songName;
public String getRootName(){
return songName()
}
...
...
Somewhat on these lines it could be answered. Feedback appreciated. Thank you.
Description:
1. Find the indexes of all the 'letters' in the given string and only iterate over that, than the entire string, because u may have a string "557747373447abc4757838383", that'll save you added complexity.
2. Use 2 data structures, a Queue and a HashSet.
3. Queue will help you find more combinations, HashSet will help in finding if the combination has been reached before.
4. For each added element in Queue, remove it and make more combinations.
5. This solution also works for already uppercase string like "4AbD2f" as well as duplicates like "3bb"
public static void combinations()
{
String s = "000000abde0000";
//Boundary check 1, checks for string length
if(s.length() == 0)
{
System.out.println("String length too small");
return;
}
//Boundary check 2, checks if there are more than 0 'characters'
int count=0;
//Maintain an arraylist of indexes for characters, so we do not iterate over digits
//in cases like huge string with many digits and less characters
List<Integer> indexes = new ArrayList<Integer>();
for(int i = 0 ; i < s.length() ; i++)
{
if(Character.isLetter(s.charAt(i)))
{
count++;
indexes.add(i);
}
}
System.out.println(indexes);
if(count == 0)
{
System.out.println("No Combinations can be made");
return;
}
//Queue to scan elements over again
Queue<String> q = new LinkedList<>();
//Add the given string to hashset
Set<String> comb = new HashSet<String>();
comb.add(s);
q.add(s);
//Boundary check 3, checks if given string had any uppercases
s = s.toLowerCase();
if(!comb.contains(s))
{
comb.add(s);
q.add(s); //Any new combination must be added to the queue to check for more
}
String temp,lastGood;
Character c;
int iterCount = 0;
while(!q.isEmpty())
{
//Start with the first combination in the queue
temp = q.remove();
for(int i : indexes)//iterate over indexes we found than the entire string
{
iterCount++;
lastGood = temp; //copy of the original string
c = temp.charAt(i);
if(Character.isLowerCase(c))
{
temp = temp.substring(0, i) + Character.toUpperCase(c)+ temp.substring(i+1);
}
else
{
temp = temp.substring(0, i) + Character.toLowerCase(c)+ temp.substring(i+1);
}
//If the change just made is unique, add to the Set and Queue
if(!comb.contains(temp))
{
//System.out.println("Adding: " + temp + " to comb!"); //for debugging
comb.add(temp);
q.add(temp);
}
//restore lastGood and start over
temp = lastGood;
}
}
if(comb.size() == (Math.pow(2, count)))
{
System.out.println("All("+Math.pow(2, count)+") combinations have been found, over "+ iterCount + " loops!");
System.out.println(comb);
}
else
{
System.out.println("Something went wrong!");
System.out.println(comb);
}
}
I'm not yet sure how it works complexity wise, but looks like it's somewhere between n^(n-1), please help out here if you have an answer. Thank you!
If I've understood the problem correctly, we must release Q prisoners in Q days, that means 1 prisoner a day (at a time) could be a constraint.
Given this, I believe this is a divide and conquer technique.
So if we have 100 prisoners and wish to release 4, then we release the 4th prisoner first, followed be half of 4, 2nd prisoner, followed by half of 2, 1st prisoner, then similar approach on the right sub array.
This maximizes the profit or should I say minimizes the loss.
They've asked for an algorithm, so I'm assuming coding isn't required. I'll mention the logic.
Assumptions:
1. Meeting times start from 9 AM through 5 PM = 8 hours.
2. Minimum meeting time = 30 minutes.
3. From above 2, every conference room will have 16 time slots. (You can lower the intervals to 15 or 10 as you like, figure out the total slots accordingly).
4. So if every conference room were to be an array of 16 elements, then each element represents 30 minutes.
Let us say we have 100 conference rooms, then if we would like to book a meeting for time slot 10.00 AM to 10.30 AM, then we only look for the 2nd element of the array (array starts from 0-15) for all conference rooms or until we find an empty slot.
This solution is O(n) in time.
Detailed understanding of how the arrays are structured as per time slots:
[ 0 , 1 , 2 , 3 , .... ]
[ 9-9.30 , 9.30-10 ,10-10.30 , 10.30-11, .... ]
Here's my approach, I think we need some kind of a backtracking. Which is why I'm using recursion. I'm not very good at recursion, so please excuse me if the code isn't written optimally.
What we do is, we pass in the first index to the function Hop(), then we check if its zero, return false. If the number itself is the length of the array, return true. After the assertions, we simply grab the number at the index of the input array and make those many hops. If we get a false reply, we lower the index and continue. Its working for most samples provided in the above thread. Comments are welcome. Thanks!
private static boolean Hop(int value){
if(value >= input.length)
return true;
int hopValue = input[value];
if(hopValue == 0)
return false;
boolean waddup = Hop(hopValue + value);
if(waddup == false && hopValue>1)
waddup = Hop(--hopValue + value);
return waddup;
I am not sure if this is right, but, can we convert the 2d matrix into a 1d array? Then it could be worked as follows:
for (i=1; 1dArray.Length(); i++)
if (a[i] < ( a[i-1] && a[i+1] && a[i-4] && a[i+4]) then we found our element.
We can also use the above logic to skip a few lines of code. I just think it would again be n^2 because converting an NxN (say 4x4) matrix into a 1d would make new N=NxN i.e. N^2 (4^2 = 16 elements in 1d Array).... lol I don't know what I'm doing here. Please help me out a bit.
Isn't the solution kinda simple?
Just go through the array and find where a[ i ]<a[ i+1 ]
Wherever the condition is satisfied, index = ( a.length - i ) + 1
Or am I missing the basic concept of rotating a sorted array? -_-
I like the first answer to simply swap the value. Alternately, if we aren't even allowed to use a second variable, and assuming the singly linked list contains only numbers as data, we can use the typical math operations to swap the values.
a=21, b=5
1. a=a+b (a becomes 26)
2. b=a-b (b becomes 21)
3. a=a-b (a becomes 5)
How does this approach sound like?- parikksit.b March 09, 2017). | https://careercup.com/user?id=14959691 | CC-MAIN-2019-35 | refinedweb | 1,616 | 67.45 |
Hi all,
I would like to use the fmod library to decode streams (mp3, ogg vorbis) and send me the decoded PCM samples, because i want to use my own output (can be waveout, dsound, disk writer, etc etc).
I did that:
[code:2putd279]
FSOUND_SetOutput(FSOUND_OUTPUT_NOSOUND);
FSOUND_SetMixer(FSOUND_MIXER_QUALITY_MMXP5);
FSOUND_Init(44100, 32, FSOUND_INIT_GLOBALFOCUS);
DSP_Unit = FSOUND_DSP_Create(input_fmod_get_pcm_samples, FSOUND_DSP_DEFAULTPRIORITY_USER, (int)input);
if (!DSP_Unit)
return (EXIT_FAILURE);
FSOUND_DSP_SetActive(DSP_Unit, TRUE);
[/code:2putd279]
And I use the DSP callback to get the pcm samples. Is it the good way ?
But FMOD seems to have builtin timer, and doesnot decode stream as fast as possible. By example, if I want to decode 10 s of sound before hearing it (i have a 10s buffer), it will take 10s to decode (but my ahtlon 2000+ can do it in somes ms 😉 ).
Did you understand my pb ? Excuse my english …
Thanks in advance,
Kyser
- kyser asked 14 years ago
- You must login to post comments
You can speed up the decoding by using a stream callback on the stream you want to process, and then increase the frequency with FSOUND_SetFrequency.
- Adion answered 14 years ago | http://www.fmod.org/questions/question/forum-3764/ | CC-MAIN-2016-50 | refinedweb | 186 | 59.84 |
What is RMS ?
What is RMS ? hii,
What is RMS ?
hello,
The Record Management System (RMS) is a simple record-oriented database that allows a MIDlet to persistently store information and retrieve it later
J2ME RMS Sorting Example
J2ME RMS Sorting Example
This example simply shows how to use of RMS package. In this example we...{
private RecordStore record;
static
KeyPressed |
Co-ordinates MIDlet
|
J2ME Record Store
MIDlet | J2ME... | J2ME
Crlf |
J2ME Command Class |
J2ME Record
Store | J2ME Form...
| J2ME Timer MIDlet
| J2ME RMS
Sorting | J2ME
Read File | J2ME
J2ME RMS Read Write
J2ME RMS Read Write
...;Core J2ME Technology");
writeRecord("J2ME ...; rs = RecordStore.openRecordStore(REC_STORE, true
J2ME Record Store Example
J2ME Record Store Example
.... In J2ME a record store consists of a collection of records
and that records remain...;
Output of the Record Store Example..
Source Codeme
RMS & View
RMS & View DIffrence between RMS & VIEW Tutorial
as given below in the figure.
J2ME Record Store MIDlet....
J2ME Record Store Example
In this Midlet, we are going...
J2ME RMS Sorting Example
This example simply shows how
J2ME Audio Record
J2ME Audio Record
This example is used to record the audio sound and play the recorded
sound. In this example we are trying to capture the audio sound and encoded
;
EclipseME
EclipseME is an Eclipse plugin to help develop J2ME MIDlets...;
J2ME Java Editor
Extends Eclipse Java Editor support ing J2ME Polish directives, variables and styles.
Edit java files using
sorting student record - Java Beginners
recording ?
u want to store value in database or in file or opertinng run time.... Insert Record");
System.out.println("2. Delete Record");
System.out.println("3. Display Record");
System.out.println("4. Exit");
System.out.print
j2me - Java Beginners
j2me Hi,
I want to save list in record store.How can i do this. Hi Friend,
Please visit the following link:
Thanks
Java Xml Data Store
comes in by phone or in person, she needs to record their name, address, contact... be followed up and/or purchased.
You will need to store the data in a local binary... the implementation needs to change later on (perhaps they might decide to store the data
store
store i want to store data in (.dat) file format using swing and perform operation on that insertion,deletion and update
store
store hi i want store some information in my program and use them in other method. what shoud i do
How To Store Image Into MySQL Using Java
How To Store Image Into MySQL Using Java
In this section we will discuss about how to store an image into the database
using Java and MySQL.
This example explains you about all the steps that how to store image into
MySQL database
Jsp code for disabling record.
Jsp code for disabling record. I want a Jsp and servlet code for the mentioned scenario.
Q. A cross sign appears in front of each record, click to disable the record.System marks the record as disabled.The record
next and previous record in php
next and previous record in php How to display next and previous records in PHP
j2me application
j2me application code for mobile tracking system using j2me
how to edit a record in hibernate?
how to edit a record in hibernate? how to edit a record in hibernate?
Hi Friend,
Please visit the following link:
Hibernate Tutorials
Thanks
Hi Friend,
Please visit the following link:
Hibernate
Acess Record from database.
Acess Record from database. How to access records from database and how to display it on view page, with the help of hibernate
Batchwise Store
Batchwise Store i want to read the column from excel and store the data as batchwise into database
add record to database - JDBC
add record to database How to create program in java that can save record in database ? Hi friend,
import java.io.*;
import java.sql....);
if(i!=0){
out.println("The record has been inserted
How to Display next Record from Database "On the Click of Next Button" USING ARRAYLIST
How to Display next Record from Database "On the Click of Next Button" USING ARRAYLIST In this code how i will use arraylist(which store all my records in index form) to show next data on button click,so that it will goes
query to fetch the highest record in a table
query to fetch the highest record in a table Write a query to fetch the highest record in a table, based on a record, say salary field in the empsalary table
Record and Save Video using Java
Record and Save Video using Java How to record video(webcam) and save it using Java.??
Its really urgent
Mysql Last Record
Mysql Last Record
Mysql Last Record is used to the return the record using... in set (0.00 sec)
Query to view last record of Table named
record management application - Java Beginners
record management application write a small record management application for a school.Tasks will be Add record, Edit record,Delete record, List records. Each record contains: name(max 100 char), Age,Notes(No Max.Limit
Write a query to insert a record into a table
Write a query to insert a record into a table Write a query to insert a record into a table
Hi,
The query string is as follows-
Insert into employee values ('35','gyan','singh');
Thanks
hibernate record not showing in database - Hibernate
hibernate record not showing in database session =sessionFactory.openSession(); //inserting rocords in Echo Message table...)); //It showing on console Records inserted 21 But not showing in database
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/96274 | CC-MAIN-2015-22 | refinedweb | 943 | 71.14 |
{-# LANGUAGE ScopedTypeVariables
            ,MultiParamTypeClasses
            ,FunctionalDependencies
            ,FlexibleInstances
            ,BangPatterns
            ,FlexibleContexts #-}

{-
Copyright (C) 2007 John Goerzen <jgoerzen@complete.org>
All rights reserved.

For license and copyright information, see the file COPYRIGHT
-}

{- |
   Module     : Data.ListLike.Base
   Copyright  : Copyright (C) 2007 John Goerzen
   License    : BSD3

   Maintainer : John Lato <jwlato@gmail.com>
   Stability  : provisional
   Portability: portable

Generic operations over list-like structures

Written by John Goerzen, jgoerzen\@complete.org
-}

module Data.ListLike.Base
    (ListLike(..),
     InfiniteListLike(..),
     zip, zipWith, sequence_
    ) where

import Prelude hiding (length, head, last, null, tail, map, filter, concat,
                       any, lookup, init, all, foldl, foldr, foldl1, foldr1,
                       maximum, minimum, iterate, span, break, takeWhile,
                       dropWhile, reverse, zip, zipWith, sequence,
                       sequence_, mapM, mapM_, concatMap, and, or, sum,
                       product, repeat, replicate, cycle, take, drop,
                       splitAt, elem, notElem, unzip, lines, words,
                       unlines, unwords)
import qualified Data.List as L
import Data.ListLike.FoldableLL
import qualified Control.Monad as M
import Data.Monoid
import Data.Maybe

{- | The class implementing list-like functions.

It is worth noting that types such as 'Data.Map.Map' can be instances of
'ListLike'.  Due to their specific ways of operating, they may not behave
in the expected way in some cases.  For instance, 'cons' may not increase
the size of a map if the key you have given is already in the map; it will
just replace the value already there.

Implementors must define at least:

* singleton

* head

* tail

* null or genericLength
-}
class (FoldableLL full item, Monoid full) => ListLike full item | full -> item
    where

    ------------------------------ Creation

    {- | The empty list -}
    empty :: full
    empty = mempty

    {- | Creates a single-element list out of an element -}
    singleton :: item -> full

    ------------------------------ Basic Functions

    {- | Like (:) for lists: adds an element to the beginning of a list -}
    cons :: item -> full -> full
    cons item l = append (singleton item) l

    {- | Adds an element to the *end* of a 'ListLike'. -}
    snoc :: full -> item -> full
    snoc l item = append l (singleton item)

    {- | Combines two lists.  Like (++). -}
    append :: full -> full -> full
    append = mappend

    {- | Extracts the first element of a 'ListLike'. -}
    head :: full -> item

    {- | Extracts the last element of a 'ListLike'.
    -}
    last :: full -> item
    last l = case genericLength l of
                 (0::Integer) -> error "Called last on empty list"
                 1 -> head l
                 _ -> last (tail l)

    {- | Gives all elements after the head. -}
    tail :: full -> full

    {- | All elements of the list except the last one.  See also 'inits'. -}
    init :: full -> full
    init l
        | null l = error "init: empty list"
        | null xs = empty
        | otherwise = cons (head l) (init xs)
        where xs = tail l

    {- | Tests whether the list is empty. -}
    null :: full -> Bool
    null x = genericLength x == (0::Integer)

    {- | Length of the list.  See also 'genericLength'. -}
    length :: full -> Int
    length = genericLength

    ------------------------------ List Transformations

    {- | Apply a function to each element, returning any other valid
       'ListLike'.  'rigidMap' will always be at least as fast, if not
       faster, than this function and is recommended if it will work for
       your purposes.  See also 'mapM'. -}
    map :: ListLike full' item' => (item -> item') -> full -> full'
    map func inp
        | null inp = empty
        | otherwise = cons (func (head inp)) (map func (tail inp))

    {- | Like 'map', but without the possibility of changing the type of
       the item.  This can have performance benefits for things such as
       ByteStrings, since it will let the ByteString use its native
       low-level map implementation. -}
    rigidMap :: (item -> item) -> full -> full
    rigidMap = map

    {- | Reverse the elements in a list. -}
    reverse :: full -> full
    reverse l = rev l empty
        where rev rl a
                | null rl = a
                | otherwise = rev (tail rl) (cons (head rl) a)

    {- | Add an item between each element in the structure -}
    intersperse :: item -> full -> full
    intersperse sep l
        | null l = empty
        | null xs = singleton x
        | otherwise = cons x (cons sep (intersperse sep xs))
        where x = head l
              xs = tail l

    ------------------------------ Reducing Lists (folds)
    -- See also functions in FoldableLL

    ------------------------------ Special folds

    {- | Flatten the structure.
    -}
    concat :: (ListLike full' full, Monoid full) => full' -> full
    concat = fold

    {- | Map a function over the items and concatenate the results.
       See also 'rigidConcatMap'. -}
    concatMap :: (ListLike full' item') => (item -> full') -> full -> full'
    concatMap = foldMap

    {- | Like 'concatMap', but without the possibility of changing the type
       of the item.  This can have performance benefits for some things
       such as ByteString. -}
    rigidConcatMap :: (item -> full) -> full -> full
    rigidConcatMap = concatMap

    {- | True if any items satisfy the function -}
    any :: (item -> Bool) -> full -> Bool
    any p = getAny . foldMap (Any . p)

    {- | True if all items satisfy the function -}
    all :: (item -> Bool) -> full -> Bool
    all p = getAll . foldMap (All . p)

    {- | The maximum value of the list -}
    maximum :: Ord item => full -> item
    maximum = foldr1 max

    {- | The minimum value of the list -}
    minimum :: Ord item => full -> item
    minimum = foldr1 min

    ------------------------------ Infinite lists

    {- | Generate a structure with the specified length with every element
       set to the item passed in.  See also 'genericReplicate' -}
    replicate :: Int -> item -> full
    replicate = genericReplicate

    ------------------------------ Sublists

    {- | Takes the first n elements of the list.  See also 'genericTake'. -}
    take :: Int -> full -> full
    take = genericTake

    {- | Drops the first n elements of the list.  See also 'genericDrop' -}
    drop :: Int -> full -> full
    drop = genericDrop

    {- | Equivalent to @('take' n xs, 'drop' n xs)@.
       See also 'genericSplitAt'. -}
    splitAt :: Int -> full -> (full, full)
    splitAt = genericSplitAt

    {- | Returns all elements at start of list that satisfy the function. -}
    takeWhile :: (item -> Bool) -> full -> full
    takeWhile func l
        | null l = empty
        | func x = cons x (takeWhile func (tail l))
        | otherwise = empty
        where x = head l

    {- | Drops all elements from the start of the list that satisfy the
       function.
-} dropWhile :: (item -> Bool) -> full -> full dropWhile func l | null l = empty | func (head l) = dropWhile func (tail l) | otherwise = l {- | The equivalent of @('takeWhile' f xs, 'dropWhile' f xs)@ -} span :: (item -> Bool) -> full -> (full, full) span func l | null l = (empty, empty) | func x = (cons x ys, zs) | otherwise = (empty, l) where (ys, zs) = span func (tail l) x = head l {- | The equivalent of @'span' ('not' . f)@ -} break :: (item -> Bool) -> full -> (full, full) break p = span (not . p) {- | Split a list into sublists, each which contains equal arguments. For order-preserving types, concatenating these sublists will produce the original list. See also 'groupBy'. -} group :: (ListLike full' full, Eq item) => full -> full' group = groupBy (==) {- | All initial segments of the list, shortest first -} inits :: (ListLike full' full) => full -> full' inits l | null l = singleton empty | otherwise = append (singleton empty) (map (cons (head l)) theinits) where theinits = asTypeOf (inits (tail l)) [l] {- | All final segnemts, longest first -} tails :: ListLike full' full => full -> full' tails l | null l = singleton empty | otherwise = cons l (tails (tail l)) ------------------------------ Predicates {- | True when the first list is at the beginning of the second. -} isPrefixOf :: Eq item => full -> full -> Bool isPrefixOf needle haystack | null needle = True | null haystack = False | otherwise = (head needle) == (head haystack) && isPrefixOf (tail needle) (tail haystack) {- | True when the first list is at the beginning of the second. 
-} isSuffixOf :: Eq item => full -> full -> Bool isSuffixOf needle haystack = isPrefixOf (reverse needle) (reverse haystack) {- | True when the first list is wholly containted within the second -} isInfixOf :: Eq item => full -> full -> Bool isInfixOf needle haystack = any (isPrefixOf needle) thetails where thetails = asTypeOf (tails haystack) [haystack] ------------------------------ Searching {- | True if the item occurs in the list -} elem :: Eq item => item -> full -> Bool elem i = any (== i) {- | True if the item does not occur in the list -} notElem :: Eq item => item -> full -> Bool notElem i = all (/= i) {- | Take a function and return the first matching element, or Nothing if there is no such element. -} find :: (item -> Bool) -> full -> Maybe item find f l = case findIndex f l of Nothing -> Nothing Just x -> Just (index l x) {- | Returns only the elements that satisfy the function. -} filter :: (item -> Bool) -> full -> full filter func l | null l = empty | func (head l) = cons (head l) (filter func (tail l)) | otherwise = filter func (tail l) {- | Returns the lists that do and do not satisfy the function. Same as @('filter' p xs, 'filter' ('not' . p) xs)@ -} partition :: (item -> Bool) -> full -> (full, full) partition p xs = (filter p xs, filter (not . p) xs) ------------------------------ Indexing {- | The element at 0-based index i. Raises an exception if i is out of bounds. Like (!!) for lists. -} index :: full -> Int -> item index l i | null l = error "index: index not found" | i < 0 = error "index: index must be >= 0" | i == 0 = head l | otherwise = index (tail l) (i - 1) {- | Returns the index of the element, if it exists. -} elemIndex :: Eq item => item -> full -> Maybe Int elemIndex e l = findIndex (== e) l {- | Returns the indices of the matching elements. 
See also 'findIndices' -} elemIndices :: (Eq item, ListLike result Int) => item -> full -> result elemIndices i l = findIndices (== i) l {- | Take a function and return the index of the first matching element, or Nothing if no element matches -} findIndex :: (item -> Bool) -> full -> Maybe Int findIndex f = listToMaybe . findIndices f {- | Returns the indices of all elements satisfying the function -} findIndices :: (ListLike result Int) => (item -> Bool) -> full -> result findIndices p xs = map snd $ filter (p . fst) $ thezips where thezips = asTypeOf (zip xs [0..]) [(head xs, 0::Int)] ------------------------------ Monadic operations {- | Evaluate each action in the sequence and collect the results -} sequence :: (Monad m, ListLike fullinp (m item)) => fullinp -> m full sequence l = foldr func (return empty) l where func litem results = do x <- litem xs <- results return (cons x xs) {- | A map in monad space. Same as @'sequence' . 'map'@ See also 'rigidMapM' -} mapM :: (Monad m, ListLike full' item') => (item -> m item') -> full -> m full' mapM func l = sequence mapresult where mapresult = asTypeOf (map func l) [] {- | Like 'mapM', but without the possibility of changing the type of the item. This can have performance benefits with some types. -} rigidMapM :: Monad m => (item -> m item) -> full -> m full rigidMapM = mapM {- | A map in monad space, discarding results. Same as @'sequence_' . 'map'@ -} mapM_ :: (Monad m) => (item -> m b) -> full -> m () mapM_ func l = sequence_ mapresult where mapresult = asTypeOf (map func l) [] ------------------------------ "Set" operations {- | Removes duplicate elements from the list. See also 'nubBy' -} nub :: Eq item => full -> full nub = nubBy (==) {- | Removes the first instance of the element from the list. See also 'deleteBy' -} delete :: Eq item => item -> full -> full delete = deleteBy (==) {- | List difference. Removes from the first list the first instance of each element of the second list. 
See '(\\)' and 'deleteFirstsBy' -} deleteFirsts :: Eq item => full -> full -> full deleteFirsts = foldl (flip delete) {- | List union: the set of elements that occur in either list. Duplicate elements in the first list will remain duplicate. See also 'unionBy'. -} union :: Eq item => full -> full -> full union = unionBy (==) {- | List intersection: the set of elements that occur in both lists. See also 'intersectBy' -} intersect :: Eq item => full -> full -> full intersect = intersectBy (==) ------------------------------ Ordered lists {- | Sorts the list. On data types that do not preserve ordering, or enforce their own ordering, the result may not be what you expect. See also 'sortBy'. -} sort :: Ord item => full -> full sort = sortBy compare {- | Inserts the element at the last place where it is still less than or equal to the next element. On data types that do not preserve ordering, or enforce their own ordering, the result may not be what you expect. On types such as maps, this may result in changing an existing item. See also 'insertBy'. -} insert :: Ord item => item -> full -> full insert = insertBy compare ------------------------------ Conversions {- | Converts the structure to a list. This is logically equivolent to 'fromListLike', but may have a more optimized implementation. -} toList :: full -> [item] toList = fromListLike {- | Generates the structure from a list. -} fromList :: [item] -> full fromList [] = empty fromList (x:xs) = cons x (fromList xs) {- | Converts one ListLike to another. See also 'toList'. 
Default implementation is @fromListLike = map id@ -} fromListLike :: ListLike full' item => full -> full' fromListLike = map id ------------------------------ Generalized functions {- | Generic version of 'nub' -} nubBy :: (item -> item -> Bool) -> full -> full nubBy f l = nubBy' l (empty :: full) where nubBy' ys xs | null ys = empty | any (f (head ys)) xs = nubBy' (tail ys) xs | otherwise = let y = head ys in cons y (nubBy' (tail ys) (cons y xs)) {- nubBy f l | null l = empty | otherwise = cons (head l) (nubBy f (filter (\y -> not (f (head l) y)) (tail l))) -} {- | Generic version of 'deleteBy' -} deleteBy :: (item -> item -> Bool) -> item -> full -> full deleteBy func i l | null l = empty | otherwise = if func i (head l) then tail l else cons (head l) (deleteBy func i (tail l)) {- | Generic version of 'deleteFirsts' -} deleteFirstsBy :: (item -> item -> Bool) -> full -> full -> full deleteFirstsBy func = foldl (flip (deleteBy func)) {- | Generic version of 'union' -} unionBy :: (item -> item -> Bool) -> full -> full -> full unionBy func x y = append x $ foldl (flip (deleteBy func)) (nubBy func y) x {- | Generic version of 'intersect' -} intersectBy :: (item -> item -> Bool) -> full -> full -> full intersectBy func xs ys = filter (\x -> any (func x) ys) xs {- | Generic version of 'group'. 
-} groupBy :: (ListLike full' full, Eq item) => (item -> item -> Bool) -> full -> full' groupBy eq l | null l = empty | otherwise = cons (cons x ys) (groupBy eq zs) where (ys, zs) = span (eq x) xs x = head l xs = tail l {- | Sort function taking a custom comparison function -} sortBy :: (item -> item -> Ordering) -> full -> full sortBy cmp = foldr (insertBy cmp) empty {- | Like 'insert', but with a custom comparison function -} insertBy :: (item -> item -> Ordering) -> item -> full -> full insertBy cmp x ys | null ys = singleton x | otherwise = case cmp x (head ys) of GT -> cons (head ys) (insertBy cmp x (tail ys)) _ -> cons x ys ------------------------------ Generic Operations {- | Length of the list -} genericLength :: Num a => full -> a genericLength l = calclen 0 l where calclen !accum cl = if null cl then accum else calclen (accum + 1) (tail cl) {- | Generic version of 'take' -} genericTake :: Integral a => a -> full -> full genericTake n l | n <= 0 = empty | null l = empty | otherwise = cons (head l) (genericTake (n - 1) (tail l)) {- | Generic version of 'drop' -} genericDrop :: Integral a => a -> full -> full genericDrop n l | n <= 0 = l | null l = l | otherwise = genericDrop (n - 1) (tail l) {- | Generic version of 'splitAt' -} genericSplitAt :: Integral a => a -> full -> (full, full) genericSplitAt n l = (genericTake n l, genericDrop n l) {- | Generic version of 'replicate' -} genericReplicate :: Integral a => a -> item -> full genericReplicate count x | count <= 0 = empty | otherwise = map (\_ -> x) [1..count] {- instance (ListLike full item) => Monad full where m >>= k = foldr (append . k) empty m m >> k = foldr (append . (\_ -> k)) empty m return x = singleton x fail _ = empty instance (ListLike full item) => M.MonadPlus full where mzero = empty mplus = append -} {- | An extension to 'ListLike' for those data types that are capable of dealing with infinite lists. Some 'ListLike' functions are capable of working with finite or infinite lists. 
The functions here require infinite list capability in order to work at all. -} class (ListLike full item) => InfiniteListLike full item | full -> item where {- | An infinite list of repeated calls of the function to args -} iterate :: (item -> item) -> item -> full iterate f x = cons x (iterate f (f x)) {- | An infinite list where each element is the same -} repeat :: item -> full repeat x = xs where xs = cons x xs {- | Converts a finite list into a circular one -} cycle :: full -> full cycle xs | null xs = error "ListLike.cycle: empty list" | otherwise = xs' where xs' = append xs xs' -------------------------------------------------- -- This instance is here due to some default class functions instance ListLike [a] a where empty = [] singleton x = [x] cons x l = x : l snoc l x = l ++ [x] append = (++) head = L.head last = L.last tail = L.tail init = L.init null = L.null length = L.length map f = fromList . L.map f rigidMap = L.map reverse = L.reverse intersperse = L.intersperse toList = id fromList = id -- fromListLike = toList concat = L.concat . toList -- concatMap func = fromList . L.concatMap func rigidConcatMap = L.concatMap any = L.any all = L.all maximum = L.maximum minimum = L.minimum -- fold -- foldMap replicate = L.replicate take = L.take drop = L.drop splitAt = L.splitAt takeWhile = L.takeWhile dropWhile = L.dropWhile span = L.span break = L.break group = fromList . L.group inits = fromList . L.inits tails = fromList . L.tails isPrefixOf = L.isPrefixOf isSuffixOf = L.isSuffixOf isInfixOf = L.isInfixOf elem = L.elem notElem = L.notElem find = L.find filter = L.filter partition = L.partition index = (L.!!) elemIndex = L.elemIndex elemIndices item = fromList . L.elemIndices item findIndex = L.findIndex sequence = M.sequence . toList -- mapM = M.mapM mapM_ = M.mapM_ nub = L.nub delete = L.delete deleteFirsts = (L.\\) union = L.union intersect = L.intersect sort = L.sort groupBy func = fromList . 
L.groupBy func unionBy = L.unionBy intersectBy = L.intersectBy sortBy = L.sortBy insert = L.insert genericLength = L.genericLength -------------------------------------------------- -- These utils are here instead of in Utils.hs because they are needed -- by default class functions {- | Takes two lists and returns a list of corresponding pairs. -} zip :: (ListLike full item, ListLike fullb itemb, ListLike result (item, itemb)) => full -> fullb -> result zip = zipWith (\a b -> (a, b)) {- | Takes two lists and combines them with a custom combining function -} zipWith :: (ListLike full item, ListLike fullb itemb, ListLike result resultitem) => (item -> itemb -> resultitem) -> full -> fullb -> result zipWith f a b | null a = empty | null b = empty | otherwise = cons (f (head a) (head b)) (zipWith f (tail a) (tail b)) {- | Evaluate each action, ignoring the results -} sequence_ :: (Monad m, ListLike mfull (m item)) => mfull -> m () sequence_ l = foldr (>>) (return ()) l | http://hackage.haskell.org/package/ListLike-3.1.5/docs/src/Data-ListLike-Base.html | CC-MAIN-2014-52 | refinedweb | 2,714 | 62.88 |
XCTest-Gherkin
XCTest+Gherkin
At net-a-porter we have traditionally done our UI testing using Cucumber and Appium, which has worked fine and did the job. However, it has a few disadvantages; it requires knowing another language (in our case Ruby), it requires more moving parts on our CI stack (cucumber, node, appium, ruby, gems etc), it ran slowly, and it always seemed to lag a bit behind the latest Xcode tech. None of these by themselves are deal breakers but put together it all adds up to make UI testing more of a chore than we think it should be.
The goals of this project are to
- Increase speed and reduce tech overhead of writing UI tests, with the end goal of developers sitting with testers and writing UI tests when they write unit tests. These tests would be run by our CI on each merge so they have to be fast.
- Not lose any of the existing test coverage. We've been using Appium for a while so we've built up a good set of feature files that cover a big chunk of functionality which we don't want to lose.
Goal #1 is easy to achieve; we just use a technology built into Xcode so we have a common technology between the tests and the app, using a common language our developers and testers both know.
Goal #2 is tricker - we will need to keep our .feature files and move them over to the new system somehow. The structure of our tests should be as similar to Cucumber's structure as possible to reduce the learning curve; we're already asking to testers to learn a new language!
The solution was to extend
XCTestCase to allow Gherkin style syntax when writing tests, like this:
Features
import XCTest import XCTest_Gherkin class testAThingThatNeedsTesting: XCTestCase { func testBasicSteps() { Given("A situation that I want to start at") When("I do a thing") And("I do another thing") Then("This value should be 100") And("This condition should be met as well") } }
This is a valid test case that should run inside Xcode, with the failing line highlighted and the tests appearing in the test inspector pane. An important thing to keep is visibility of which test failed and why!
Step definitions
The next step is to write step definitions for each of these steps. Here's two of them:
class SomeStepDefinitions : StepDefiner { override func defineSteps() { step("A situation that I want to start at") { // Your setup code here } step("This value should be ([0-9]*)") { (matches: [String]) in let expectedValue = matches.first! let someValueFromTheUI = /* However you want to get this */ XCTAssertEqual(expectedValue, someValueFromTheUI) } } }
These steps match (via regular expressions, using case insensitive
NSRegularExpression) and return the capture groups (if there are any). The second step will capture the digits from the end of the test and compare it to the current state of the UI.
There are convenience versions of the step method which extract the first match for you:
step("This value should be ([0-9]*)") { (match: String) in XCTAssertEqual(expectedValue, match) } step("This value should be between ([0-9]*) and ([0-9]*)") { (match1: String, match2: String) in let someValue = /* However you want to get this */ XCTAssert(someValue > match1) XCTAssert(someValue < match2) }
Captured value types
In step definition with captured values you can use any type conforming to
MatchedStringRepresentable.
String,
Double,
Int and
Bool types already conform to this protocol. You can also match your custom types by conforming them to
CodableMatchedStringRepresentable. This requires type to implement only
Codable protocol methods,
MatchedStringRepresentable implementation is provided by the library.
struct Person: Codable, Equatable { let name: String } extension Person: CodableMatchedStringRepresentable { } step("User is logged in as (.+)") { (match: Person) in let loggedInUser = ... XCTAssertEqual(loggedInUser, match) } func testLoggedInUser() { let nick = Person(name: "Nick") Given("User is loggeed in as \(nick)") }
Named capture groups
On iOS 11 and macOS 10.13 you can use named capture groups to improve your console and activity logs. The name of the group will be transformed to human readable form and will replace the step expression substring that it captures. This is particularly useful when you use your custom types as step parameters as described in the previous section.
Without named capture groups such test
step("User is logged in as (.+)") { (match: Person) in ... } func testLoggedInUser() { let nick = Person(name: "Nick") Given("User is loggeed in as \(Person(name: "Nick"))") }
will produce following logs:
step User is loggeed in as {"name":"Nick"}
With named capture groups the step definition can look like this (notice that
match is now a
StepMatches<Person>)
step("User is logged in as (?<aRegisteredUser>.+)") { (match: StepMatches<Person>) in ... }
and the same test will produce logs:
step User is logged in as a registered user
In step implementation you will access matched values using the name of the group, i.e.
match["aRegisteredUser"]. You can access all matched values (including matched by unnamed groups) by their index, starting from 0, i.e.
match[0]. So you can have more than one named group and you can mix them with unnamed groups.
Examples and feature outlines
If you want to test the same situation with a set of data, Gherkin allows you to specify example input for your tests. We used this all over our previous tests so we needed to deal with it here too!
func testOutlineTests() { Examples( [ "name", "age" ], [ "Alice", "20" ], [ "Bob", "20" ] ) Outline { Given("I use the example name <name>") Then("The age should be <age>") } }
This will run the tests twice, once with the values
Alice,20 and once with the values
Bob,20.
The easiest way to use
Examples and
Outline functions is to call
Examples before
Outline. But in Gherkin feature files Examples always go after Scenario Outline. If you want to keep this order in native tests (and don't care about little bit funky Xcode indentation) you can provide examples after defining Outline via trailing closure or explicit
Examples parameter:
func testOutlineTests() { Outline({ Given("I use the example name <name>") Then("The age should be <age>") }) { [ [ "name" , "age", "height" ], [ "Alice", "20" , "170" ], [ "Bob" , "20" , "170" ] ] } // or Outline({ Given("I use the example name <name>") Then("The age should be <age>") }, examples: [ [ "name" , "age", "height" ], [ "Alice", "20" , "170" ], [ "Bob" , "20" , "170" ] ] ) }
Background
If you are repeating the same steps in each scenario you can move them to a
Background. A
Background is run before each scenario (effectively just before first scenario step is execuated) or outline pass (but after
setUp()). You can have as many steps in
Background as you want.
class OnboardingTests: XCTestCase { func Background() { Given("I launch the app") } func testOnboardingIsDisplayed() { Then("I see onboarding screen") } func testOnboardingIsDisplayedEachTime() { Examples([""], ["1"], ["2"]) Outline { Then("I see onboarding screen") And("I kill the app") } } }
Page Object
Built in
PageObject type can be used as a base type for your own page objects. It will assert that its
isPresented(), that you should override, returnes
true when instance of it is created. It aslo defines a
name property which by default is the name of the type without
PageObject suffix, if any.
PageObject also comes with some predefined steps, defined by
CommonPageObjectsStepDefiner, which validate that this page object is displayed, with formats
I see %@,
I should see %@ and
it is %@ with optional
the before page object name parameter.
Dealing with errors / debugging tests
Missing steps
If there isn't a step definition found for a step in your feature file then the extensions will output a list of all the available steps and then fail the test, something like:
steps ------------- /I have a working Gherkin environment/ (SanitySteps.swift:17) /I use the example name (?:Alice|Bob)/ (SanitySteps.swift:38) /The age should be ([0-9]*)/ (SanitySteps.swift:44) /This is another step/ (SanitySteps.swift:33) /This step should call another step/ (SanitySteps.swift:28) /This test should not ([a-zA-Z0-9]*)/ (SanitySteps.swift:23) ------------- XCTestCase+Gherkin.swift:165: error: -[XCTest_Gherkin_Tests.ExampleFeatures testBasicSteps] : failed - Step definition not found for 'I have a working Pickle environment'
Ambiguous steps
Sometimes, multiple steps might contain the same text. The library will match with what it thinks is the right step, but it might get it wrong. For example if you have these step definitions:
step("email button") { ... } step("I tap the email button") { ... }
When you try to run this Given
func testStepAnchorMatching() { Given("I tap the email button") }
it might match against the "email button" step, instead of the "I tap the email button" step. To fix this, there are two options.
- You can pass an exact string literal to the step definition instead of using the normal method, which treats everything as a regular expression.
step(exactly: "I tap the email button")
This will match only the exact text "I tap the email button". Any regular expression special characters in this string will be matched exactly.
- You can anchor the regular expression to the start and end of the string using
^and
$, like this:
step("^email button$") { ... } step("I tap the email button") { ... }
Now, "I tap the email button" doesn't match the first step.
This method is useful if you need to match ambiguous steps, but can't use approach (1) because you also need other features of regular expressions (i.e. pattern matching etc)
Screenshots
It's useful to have screenshots of failing UI tests, and this can be configured with the
XCTestCase.setAutomaticScreenshotsBehaviour([.onFailure, .beforeStep, .afterStep], quality: .medium, lifetime: .deleteOnSuccess)
Installation
CocoaPods
XCTest-Gherkin is available through CocoaPods. To install it, simply add the following line to your Podfile:
pod 'XCTest-Gherkin'
and run
pod install
Carthage
XCTest-Gherkin is also available through Carthage. To install it, simply add the following line to your Cartfile:
github "net-a-porter-mobile/XCTest-Gherkin" == 0.13.2
and run
carthage bootstrap --platform iOS. The generated framework is named
XCTest_Gherkin.framework.
Swift Package Manager
In your Xcode project add XCTest-Gherkin via the File -> Swift Packages -> Add package dependency... menu.
Note that Xcode 12 and Swift 5.3 is a minimum requirement for using XCTest-Gherkin in combination with Swift Package Manager.
Configuration
No configuration is needed.
Examples
There are working examples in the pod's Example project - just run the tests and see what happens!
Native feature file parsing
To help with moving from native feature files (we have lots of these from our previous test suite) to working in Swift, it would be handy to be able to parse the current feature files without having to modify them into their Swift counterparts.
This is also useful when they are first being written by product owners who know Given/When/Then syntax but aren't Swift developers :)
If you include the
Native subpod in your podfile
pod 'XCTest-Gherkin/Native'
you will also include the ability to parse true Gherkin syntax feature files and have the libary create runtime tests from them.
There is an example of this in the Example/ project as part of this pod. Look at the
ExampleNativeTest class - all you need to do is specify the containing folder and all the feature files in that folder will be read.
The advantages of this are obvious; you get to quickly run your existing feature files and can get up and running quickly. The disadvanages are beacuse the tests are generated at runtime they can't be run individually from inside Xcode so debugging is tricker. I would use this to start testing inside Xcode but if it gets hairy, convert that feature file into a native Swift test and debug from there.
Localisation of feature files
You can use feature files written in multiple languages. To set the language of a feature file put a
# language: en with appropriate language code at the first line of a feature file. By default English localisation is used. You can see all available localisations in
gherkin-languages.json file or from code using
NativeTestCase.availableLanguages property. Here is an example of a feature file in Russian:
# language: ru Функция: Разбор простого функционального файла Сценарий: Это очень простой пример успешного сценария Допустим Я имею рабочее окружение Gherkin Тогда этот тест не должен завершиться ошибкой
Disclaimer
The Gherkin syntax parser here isn't really production ready - it's certainly not a validator and will probably happily parse malformed Gherkin files quite happily. The feature files it's parsing are assumed to be fairly well constructed. The purpose of this subpod is to help migrate from old feature files into the Swift way of doing things so that's all it does. Feel free to submit pull requests if you want to change this :)
XCTest+Gherkin at net-a-porter
We use this extension along with KIF to do our UI tests. For unit tests we just use XCTest plain. KIF is working really well for us, and is far far faster than our previous test suite.
We put our calls to KIF inside our step definitions, which happens to closely mirror how we worked with our previous Cucumber implementation, making migrating even easier.
Author
Sam Dean, deanWombourne@gmail.com
License
See LICENSE for details - it's the Apache license.
Github
You may find interesting
Dependencies
Used By
Total: 0
Releases
Swift 5 -
Now specifying backwards compatibility with 4 and 4.2
Swift 5 -
Updates the code to compile on Swift 5.
Explicit string matching -
- Add
step(exactly: String)to explicitly exactly match a step instead of using regexes (fixes #142)
- Add regex options to step definitions (thanks @ilyapuchka)
- Backgrounds in native tests
- Add assertion when unknown example key is being used
Xcode 10 -
- Fix for name property on the PageObject
Xcode 10 -
- Xcode 10 support
- Support for named matches
- Add descriptions to feature files
- Improvements to logging
- Feature file localisation support
- Highlight correct line in feature files for failing tests
- Track unused steps
- Introduce PageObject
Arbitrary types in steps -
You can now pass in and match against arbitrary types in step definitions (thanks @ilyapuchka)
Screenshots -
- Automatically take screenshots of failing tests (thanks @ilyapuchka)
- More representative failures on test misconfigurations (thanks @ilyapuchka)
Hotfix -
- Show error step location as well as assertion failure location (thanks @ilyapuchka)
Hotfix -
- Make the
testproperty in step definitions point to the currently running test instance instead of always pointing to the first test which was run. (thanks to @ilyapuchka)
Swift 4 -
Xcode 9.1 -
- Remove error when compiling in Xcode 9.1 (Thanks @kerrmarin)
xcode8 + less crashy -
Quick hot fix for the Xcode 8 0.10.xx releases where we don't crash as much when enumerating all the classes to find steps.
Xcode9 -
- Fix for (another) crash enumerating all classes
- Better output from native scripts (thanks @smaljaar)
This uses xcode9 specific apis, so if you're still using xcode8 then please stick with 0.10.x or lower.
Less crashy -
- Fix for crash enumerating all classes to find steps
- Clearer failure message when step isn't found
- Allow Double and Bool as closure types in step definitions
- Allow mix of closure parameter types in step definitions with two matches
Xcode 8.3 -
Changes in 0.10.2
Fix for native tests not running (nice spot @smaljaar !)
Also, from 0.10.0 onwards
Now doesn't crash in Xcode 8.3, which is nice., 0.10.1 is correctly tagged, so that's nice too., now with a new NativeFeatureRunner by @jacdevos fixing and making the test debugger in Xcode much more useful :)
Also thanks to @smaljaar and @Labun for many other fixes and improvements :)
Swift 3 -
- Added Swift 3 and Xcode 8 support
- XCTestCase setUp and tearDown methods support for NativeTestCase scenarios
- Improved integration with Xcode Test Navigator
-
- Explicitly disable bitcode (thanks @kerrmarin)
- Better newline handling for features created on other systems (thanks @smaljaar)
-
- Added forms of the step definition method with single and double string match parameters
- Added ability to parse Background gherkin keyword (thanks to @smaljaar)
- Added ability to create a native test case from a file instead of a directory (thanks @Rabursky)
- Add ability to specify set up code for native tests (thanks @Rabursky)
-
- Fix for parsing native feature files with comments / whitespace (thanks to @smaljaar) | https://swiftpack.co/package/net-a-porter-mobile/XCTest-Gherkin | CC-MAIN-2020-50 | refinedweb | 2,680 | 57.61 |
If you are using RNGs (Random Number Generators) for cryptography then you need one that has been validated for sufficient randomness. For example the libsodium library.
But if you want a fast one for general purpose use, then xoshiro256++ is a fast one. It’s actually that short that I can list it here. Just call next() and mod it (%) with the highest value+1. E.G. for a dice roll,
int diceroll= (next() %6) +1;
This algorithm is very fast, typically on the order of nano-seconds 10-9 seconds.
#include <stdint.h> uint64_t rngstate[4]; static inline uint64_t rotl(const uint64_t x, int k) { return (x << k) | (x >> (64 - k)); } // Returns a Uint64 random number uint64_t next(void) { const uint64_t result = rotl(rngstate[0] + rngstate[3], 23) + rngstate[0]; const uint64_t t = rngstate[1] << 17; rngstate[2] ^= rngstate[0]; rngstate[3] ^= rngstate[1]; rngstate[1] ^= rngstate[2]; rngstate[0] ^= rngstate[3]; rngstate[2] ^= t; rngstate[3] = rotl(rngstate[3], 45); return result; }
(Visited 88 times, 1 visits today) | https://learncgames.com/a-fast-random-number-generator-in-c/ | CC-MAIN-2021-43 | refinedweb | 170 | 59.03 |
directory(3) BSD Library Functions Manual directory(3)
NAME
closedir, dirfd, opendir, readdir, readdir_r, rewinddir, seekdir, telldir -- directory operations
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <dirent.h> int closedir(DIR *dirp); int dirfd(DIR *dirp); DIR * opendir(const char *dirname); struct dirent * readdir(DIR *dirp); int readdir_r(DIR *restrict dirp, struct dirent *restrict entry, struct dirent **restrict result); void rewinddir(DIR *dirp); void seekdir(DIR *dirp, long loc); long telldir(DIR *dirp);
DESCRIPTION
The opendir() function opens the directory named by dirname, associates a directory stream with it, and returns a pointer to be used to identify the directory stream in subsequent operations. In the event of an error, NULL is returned and errno will be set to reflect if dirname cannot be accessed or if it cannot malloc(3) enough memory to hold the whole thing. The readdir() function returns a pointer to the next directory entry. It returns NULL upon reaching the end of the directory or on error. In the event of an error, errno will() func- tion returns 0 on success or an error number to indicate failure. The telldir() function returns the current location associated with the named directory stream. Values returned by telldir() are good only for the lifetime of the DIR pointer (e.g., on success, see open(2). On failure, -1 is returned and the global variable errno is set to indicate the error.);
LEGACY SYNOPSIS
#include <sys/types.h> #include <dirent.h> <sys/types.h> is necessary for these functions.
SEE ALSO
close(2), lseek(2), open(2), read(2), compat(5), dir(5)
HISTORY
The closedir(), dirfd(), opendir(), readdir(), rewinddir(), seekdir(), and telldir() functions appeared in 4.2BSD. BSD June 4, 1993 BSD
Mac OS X 10.9.1 - Generated Tue Jan 7 09:12:05 CST 2014 | http://www.manpagez.com/man/3/closedir/ | CC-MAIN-2018-09 | refinedweb | 300 | 62.78 |
Cookin' with Ruby on Rails - More Designing for Testability
Pages: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
CB: What we need to do here is check, before we try to destroy a record, that it doesn't have any child records. I'll use one of the standard Rails Callbacks:
before_destroy. If a
before_ Callback returns false, the associated action is canceled. So, first I'm going to add the call to the Callback...
before_destroy :check_for_children
And then I'll add the Callback method itself...
def check_for_children recipes = Recipe.find_all_by_category_id(self.id) if !recipes.empty? return false end end
So we end up with...
Figure 9
CB: And now, before we forget, let's make sure there aren't any recipe records in the database to cause us problems again. We could just rerun our recipe test and let the
teardown method clean things up. But now that we've seen what happens when we do things out of order, I think we ought to address the "order" problem. I put the
teardown in the Unit test for recipes because that's where the records were getting created. But our problem really isn't that recipe records exist in the table at the end of the recipe test. Our problem happens when recipes exist at the beginning of the category test. So, what say we move the code to remove them from the teardown method in recipe_test.rb to a setup method in category_test.rb ?
Paul: Sounds good to me.
CB: Yeah, me too. So, I'll copy the teardown method from recipe_test.rb to category_test.rb and rename it...
def setup recipes = Recipe.find(:all) recipes.each do |this_recipe| this_recipe.destroy end end
We need to add the recipes fixture...
fixtures :categories, :recipes
And then we'll rerun our category test,
ruby test\category_test.rb
and...
Figure 10
CB: Ta daaa!!!
Paul: Hold on, CB. We went down that road before. Why don't we just make sure our app's fixed this time?
CB: No problem, Paul. You still got Mongrel running? OK. Go ahead and browse to our category list page and try deleting one again.
Paul: OK. OK. It's not deleting the category, and it's not crashing, so it looks like we've got it taken care of. But now I'm wondering, if I've got to test it in the browser anyway, why spend the time to create the Unit test?
CB: That's a good question, Paul. We found out that we were missing a test by looking at the app from a higher level. And since we found the problem at the higher level, we wanted to go back there to make sure that what we'd done had actually fixed it. That's pretty typically going to be the case. But now we know that we've got a low-level test that catches the problem. So we don't have to test it again at the higher level. We can if we want to, and I probably will because I think of test suites like a layered defense, but the value of doing it again at that higher level will primarily be in making sure our test suite's not broken rather than making sure the app's not broken. It'll be easier to show you. If you're ready, I think we should get started on our Functional tests.
Paul: I'm ready. Let's do it.
CB: Cool. Let's take a look at the Functional test stub that Rails produced for us. Open up test\functional\category_controller_test.rb.
Figure 11
Paul: That looks a lot more complete than the Unit test scaffolds. More like the scaffolding Rails produces for the app itself.
CB: Yep. I think the reason Rails doesn't do more with the Unit test scaffold is, like I said before, the model is where we put validations and business logic. Rails can't predict a whole lot about what we'll want to do with those. But since it's generating the basic CRUD functionality in scaffold code for the controller, it can generate the tests for that. We've got test methods for all of the default methods in the controller, plus a setup method to prepare for each test method execution.
Paul: Yeah. I can see it's setting up four instance variables. But I only see one of them getting used in any of the methods. How're the other ones getting used?
| http://archive.oreilly.com/pub/a/ruby/2007/07/28/cookin-with-ruby-on-rails-july.html?page=3 | CC-MAIN-2016-44 | refinedweb | 759 | 83.86 |
Ximin Luo: > Hi all, > > I'm thinking of ways to minimise the work needed to fix the build path > issues that we're having. > > The present method is that dpkg-buildflags sets > "-fdebug-prefix-map=$PWD=." into various *FLAGS envvars, and Make (or > another build tool) will pass this to gcc. This works if and only if > the package uses dpkg-buildflags, which granted is most of them. > > However, I am not sure if this is the best approach. At present, the > output of dpkg-buildflags is itself dependent on the build-path, and > some packages *might* save this value somewhere in their output. (This > is the case for perl, although in perl's specific case this seems > fixable.) > > However the question is, do we want to do this every time an upstream > saves CFLAGS somewhere? > > It is after all a part of the "reasonable" build input - i.e. it is > reasonable that whatever is in CFLAGS will affect the build output, so > saving it in arbitrary ways isn't an "unreasonable" thing for upstream > packages to do. We will have to educate everyone "please filter out > debug-prefix-map from CFLAGS before you save it somewhere" and this > adds extra complexity to the whole ecosystem. > > So, here are some other approaches: > >.
Advertising
I think this breaks the mapping to the source file, for example in gdb. So I think this is not a good solution. >). > 2. Define another variable SOURCE_ROOT to be set to the top-level > source dir, and patch GCC to use this as the default value for > debug-prefix-map (and the analogue for other languages / tools). > > This would have the same concrete behaviour as the current situation, > but then we're defining yet another variable... but probably less > tools will need to support this than SOURCE_DATE_EPOCH. And as with > (1), this would not be necessary for the path-is-namespace languages. > > X
signature.asc
Description: OpenPGP digital signature
_______________________________________________ Reproducible-builds mailing list Reproducible-builds@lists.alioth.debian.org | https://www.mail-archive.com/reproducible-builds@lists.alioth.debian.org/msg06048.html | CC-MAIN-2017-30 | refinedweb | 334 | 63.49 |
I would like to parse boolean expressions in PHP. As in:
A and B or C and (D or F or not G)
The terms can be considered simple identifiers. They will have a little structure, but the parser doesn't need to worry about that. It should just recognize the keywords
and or not ( ). Everything else is a term.
I remember we wrote simple arithmetic expression evaluators at school, but I don't remember how it was done anymore. Nor do I know what keywords to look for in Google/SO.
A ready made library would be nice, but as I remember the algorithm was pretty simple so it might be fun and educational to re-implement it myself.
Recursive descent parsers are fun to write and easy to read. The first step is to write your grammar out.
Maybe this is the grammar you want.
expr = and_expr ('or' and_expr)* and_expr = not_expr ('and' not_expr)* not_expr = simple_expr | 'not' not_expr simple_expr = term | '(' expr ')'
Turning this into a recursive descent parser is super easy. Just write one function per nonterminal.
def expr(): x = and_expr() while peek() == 'or': consume('or') y = and_expr() x = OR(x, y) return x def and_expr(): x = not_expr() while peek() == 'and': consume('and') y = not_expr() x = AND(x, y) return x def not_expr(): if peek() == 'not': consume('not') x = not_expr() return NOT(x) else: return simple_expr() def simple_expr(): t = peek() if t == '(': consume('(') result = expr() consume(')') return result elif is_term(t): consume(t) return TERM(t) else: raise SyntaxError("expected term or (")
This isn't complete. You have to provide a little more code:
Input functions.
consume,
peek, and
is_term are functions you provide. They'll be easy to implement using regular expressions.
consume(s) reads the next token of input and throws an error if it doesn't match
s.
peek() simply returns a peek at the next token without consuming it.
is_term(s) returns true if
s is a term.
Output functions.
OR,
AND,
NOT, and
TERM are called each time a piece of the expression is successfully parsed. They can do whatever you want.
Wrapper function. Instead of just calling
expr directly, you'll want to write a little wrapper function that initializes the variables used by
consume and
peek, then calls
expr, and finally checks to make sure there's no leftover input that didn't get consumed.
Even with all this, it's still a tiny amount of code. In Python, the complete program is 84 lines, and that includes a few tests. | https://expressiontree-tutorial.net/knowledge-base/2093138/what-is-the-algorithm-for-parsing-expressions-in-infix-notation- | CC-MAIN-2019-30 | refinedweb | 419 | 63.9 |
When you defined an explicit constructor java wont create one. share|improve this answer answered Nov 30 '10 at 21:47 anon Ok others were faster... –anon Nov 30 '10 at 21:47 add a comment| up vote 1 down vote HourlyWorker's When I try Shipment extends BoxWeight, I get error"cannot find symbol - constructor BoxWeight()". public Employee(String firstName, String lastName) { ... } Define a new constructor, or call the constructor you've defined with the parameters you're missing. check over here
What are those "sticks" on Jyn Erso's back? Then it compiled and ran all right. How can I turn rolled oats into flour without a food processor? Joanne Campbell Ritchie Sheriff Posts: 51662 87 posted 5 years ago Thank you Joanne. try this
I put some of my code below. The error is telling you that it is attempting to find this parameterless constructor and can't find it because you haven't defined it. I hope this makes sense. Player player = new Player("nick","type"); share|improve this answer edited Jan 31 '13 at 22:11 PaulStock 6,87773448 answered Jan 31 '13 at 21:51 user1744361 14 add a comment| Your Answer draft
EDIT: Incidentally, why does the HourlyWorker have a private wage member? I'm too cold, turn up the temperature Are zipped EXE files harmless for Linux servers? don't know if this is a copy/paste error but well.. Java Constructor Example It is just a variable of the other class.
tony –tony Nov 30 '10 at 22:03 I'm glad it helped. –OscarRyz Nov 30 '10 at 22:13 add a comment| up vote 4 down vote In the HourlyWorker constructor Java Cannot Find Symbol Class more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed As already stated above, constructors do not specify a return type because they always return an instance of that class. 0 Discussion Starter joshmo 8 8 Years Ago Oh...I think I asked 4 years ago viewed 4843 times active 4 years ago Visit Chat Related 1287How do I call one constructor from another in Java?0Java compiler error: “cannot find symbol constructor ..”?0Java
My code so far is: #include
What is a real-world metaphor for irrational numbers? this website Why did Tarkin undertake this course of action at the end of Rogue One? Cannot Find Symbol In Java So for each of your classes, you must add an empty parameterless constructor. Cannot Find Symbol Method How to change the schema of stored procedure without recreating it Theorems demoted back to conjectures The difference between 'ping' and 'wget' in relation to hostname resolution Who is this six-armed
Luke_4 10 posts since Dec 2016 Community Member Sponsor What language is this program made in ? Not the answer you're looking for? How does ssh run a command? Again, I'm 100% green at this, so If not making any sense, tell me :) –Jarand Boge Jan 31 '13 at 22:34 @JarandBoge.. Java Constructors
Four Birds + One Did Donald Trump say that "global warming was a hoax invented by the Chinese"? And then use the parameterized constructor, passing those variables as parameter. Does gunlugger AP ammo affects all armor? this content Browse other questions tagged java or ask your own question.
Now, Im guessing this has something to do with the string parameters of the constructor, but I'm all out. How to put a diacritic on top of an i? Unsold Atari videogames dumped in a desert?
This post has been edited by Martyr2: 30 March 2008 - 08:14 AM Is This A Good Question/Topic? 0 Back to top MultiQuote Quote + Reply Replies To: Getting a "cannot Not the answer you're looking for? I just added an empty constructor to both of my classes. Parents disagree on type of music for toddler's listening Does a byte contain 8 bits, or 9?
That or when you go to initialize your class, give them the parameters the class requires. Can you confirm your system stats, and that you have not modified the downloaded source project in any way? –Perception Jan 3 '13 at 1:15 @Perception The CXF version Digits() doesn't exist. Parents disagree on type of music for toddler's listening Law case title convention: Why sometimes not "Plaintiff v.
Too many advisors What is the truth about 1.5V "lithium" cells Who is this six-armed blonde female character? Why did it take longer to go to Rivendell in The Hobbit than in The Fellowship of the Ring? So you can either define one default constructor in your class e.g. My code so far is: #include
and I have commented out your shipment class as incomplete. ... These lines are causing the problem: s1 = new Square(23,87,104,"red",true); // etc Your Square class would need to have a constructor like this: public class Square { public Square(int a, int Law case title convention: Why sometimes not "Plaintiff v. Could large but sparsely populated country control its borders?
Shortest auto-destructive loop Were defendants at the Nuremberg trial allowed to deny the holocaust? Why do Latin nouns (like cena, -ae) include two forms? Is every parallelogram a rectangle ?? any suggestions please?
Created a class Box with no default constructor. I am not using Maven, so I can't even list it as a dependency and hope that this will be magically resolved. You need to pass a third parameter (hourly rate) like this: super(firstName, lastName, 42); share|improve this answer answered Nov 30 '10 at 21:50 snakile 17.3k38112189 i see thanks, oscarryz Use {} after every if, else, do, while, for and catch keyword.
Get out of the transit airport at Schengen area and the counting of Schengen period How much effort (and why) should consumers put into protecting their credit card numbers? Last Post 20 Hours Ago Does anyone have any idea what language is this launcher application made in ? be killed in the war vs be killed by the war Has my macOS Sierra system been infected by unknown users? | http://gsbook.org/not-find/java-could-not-find-symbol-constructor.php | CC-MAIN-2018-39 | refinedweb | 1,038 | 72.05 |
27 Mar 02:50 2013
Re: unable to load xcms package, Rcpp error
Dan Tenenbaum <dtenenba@...>
2013-03-27 01:50:08 GMT
2013-03-27 01:50:08 GMT
On Tue, Mar 26, 2013 at 6:45 PM, lun wang [guest] <guest@...> wrote: > > I want to load the "xcms" package, but I encountered an error that package ‘Rcpp’ was built under R version 2.15.3 How can I fix this problem? I think that's just a warning and you can proceed. Does library(xcms) work? Dan > Thanks! > > > -- output of sessionInfo(): > > R version 2.15.2 (2012-10-26) >] stats graphics grDevices utils datasets methods base > > other attached packages: > [1] Rcpp_0.10.3 > > loaded via a namespace (and not attached): > [1] Biobase_2.18.0 BiocGenerics_0.4.0 > > > -- > Sent via the guest posting facility at bioconductor.org. > > _______________________________________________ > Bioconductor mailing list > Bioconductor@... > > Search the archives: _______________________________________________ Bioconductor mailing list Bioconductor@... Search the archives: | http://permalink.gmane.org/gmane.science.biology.informatics.conductor/47244 | CC-MAIN-2014-42 | refinedweb | 155 | 69.89 |
Pulse-width modulation: using PWM to build a breathing nightlight and alarm
Pulse-width modulation is a technique for varying the width of pulses to encode a signal. On the Raspberry Pi and other embedded computers, PWM is available as an output mode on the general-purpose I/O ports, controlled in either hardware or software.
In this article I’ll use PWM to control LED brightness, developing a nightlight with a continuously varying brightness by varying the duty cycle over time, when there is little ambient light as sensed by a photodiode over an SPI-based analog-to-digital converter. When the nightlight toggles, it will momentarily sound a magnetic transducer, also using PWM.
LED Brightness
RPIO and pigpio
First I found the RPIO.PWM library for Python, seemed promising. However it wasn’t installed by default in Raspbian. Installed with
sudo pip install RPIOwith. Attempting to use this module, greeted with an error:
import RPIO._GPIO as _GPIO
SystemError: This module can only be run on a Raspberry Pi!
Found at, there is a newer fork of RPIO for Raspberry Pi version 2 and later, but a developer suggested his alternative, pigpio (note: pi-gpio, not pig-pio). This module comes with Raspbian and can be readily used out of the box, first run the daemon:
sudo pigpiod
then access it via Python:
python
pi=pigpio.pi()
Hardware vs Software PWM
Raspberry Pi’s SoC supports hardware PWM, accessible via pi.hardware_PWM() using pigpio. Only a handful of specific GPIO pins are usable for PWM:
-
and they all share the same channel, so if you try to configure multiple pins for PWM, they’ll all be driven identically.
Testing hardware PWM based on the example from the documentation:
sudo pigpiod
python
import pigpio
pi=pigpio.pi()
pi.hardware_PWM(12, 800, 1e6*0.25) # 800Hz 25% dutycycle
This caused an audible tone to be emitted, with a pitch varying based on the given frequency. An annoying whine, but becomes inaudible (ultrasonic?) at higher frequencies, tested up to 30e6 as the documentation recommends as the upper limit (“Frequencies above 30 MHz are unlikely to work.”).
Software PWM is more flexible: it can be used on any GPIO pin, at the downside of increased CPU usage. I’ll accept this trade-off:
pi.set_PWM_dutycycle(12, 255*0.25)
Both of these calls set GPIO 12 (equivalent to board pin #32) to 25% duty cycle — outputting high 25% of the time (0% = all low, 100% = all high).
Control with PWM
What use is a signal that is high 25% of the cycle and low 75%? One use case is controlling motor speed. However I’ll be using it to control the apparent brightness of an LED.
Incandescent lamps can have their brightness adjusted by lowering voltage, using for example a dimmer:
Although variable-voltage devices are used for various purposes, the term dimmer is generally reserved for those intended to control light output from resistive incandescent, halogen…
e.g. a variable resistor: a potentiometer, aka rheostat.
However, light-emitting diodes as a semiconductor device have a minimum current and forward voltage drop, not lending themselves well to voltage-controlled brightness control. Instead, the LED can be switched off and on faster than human persistence of vision, with a duty cycle varied to control the perceived brightness. This strategy also has the advantage it is easily integrated with the Raspberry Pi, no extra digital-to-analog (DAC) hardware is needed; PWM already works with the Pi.
To drive the LED at 50% brightness:
pi.set_PWM_dutycycle(12, 255*0.50)
I happened to wire my LED active-low, so 255*0.10 will set the LED to 90% brightness, and 255*0.90 to 10% brightness, for example.
Updating the nightlight
RPi.GPIO to pigpio
Remember the “nightlight” built from photodiode, ADC, and LED in SPI interfacing experiments: EEPROMs, Bus Pirate, ADC/OPT101 with Raspberry Pi? The light was set on/off as follows:
def set_light(on):
GPIO.output(LED_Y, not on) # active-low
This can be enhanced to allow variable brightness using PWM:
import pigpio
pi = pigpio.pi()
def set_light(brightness):
pi.set_PWM_dutycycle(LED_Y_BCM, 255 * (1 - brightness))
Then change set_light(False) → set_light(0.0), and set_light(True) → set_light(1.0). The multiplication by 255 is needed since the software PWM library accepts a duty cycle from 0–255 corresponding to 0–100%, and the 1- to account for the active-low wiring of the LED.
Frequency
pi.get_PWM_frequency(12) shows the frequency defaults to 800 Hz. If the duty cycle controls the brightness, then what does the frequency control?
Visible flicker, the “refresh rate”, see: persistence of vision. CRT monitors refersh at ~85 Hz to reduce visible flickering. For LED dimming, Digikey How to Dim a LED recommends 200 Hz or greater.
At 100 Hz, flickering started to become noticeable to me. Note that with software PWM, some jitter may occur as the CPU is busy doing other tasks. 800 Hz should suffice. Much higher has no discernible effect for LED dimming, but will become important later for other purposes.
Edge triggering
The naive initial implementation of nightlight.py simply read the ADC in a loop, and turned the LED on or off if the voltage reached a threshold:
while True:
v = readadc(7)
if v > V_LIGHT: set_light(0.0)
elif v < V_DARK: set_light(1.0)
time.sleep(0.1)
this continuously drives the LED, given the input received by polling. Often a better approach is to trigger on rising/falling edges. This technique was covered using interrupts in Interrupt-driven I/O on Raspberry Pi 3 with LEDs and pushbuttons: rising/falling edge-detection using RPi.GPIO, but the ADC I am using does not provide interrupts.
Nonetheless, edge-triggering is still possible. The ADC is polled and an internal state is kept:
if v > V_LIGHT: is_light = True
elif v < V_DARK: is_light = False
then set_light() is only called on a state transition:
if is_light != was_light:
if is_light: set_light(0.0)
else: set_light(1.0)
This technique has some advantages we’ll see soon.
Breathing, linearly
Some old electronic devices provided a pleasant “breathing” effect, as if the light was snoozing. This is straightforward to implement linearly as follows:
if not is_light:
counter += direction
if counter < min_counter or counter > max_counter: direction = -direction
set_light(counter)
where min_counter = 0.1, max_counter = 0.9, counter is initialized to max_counter, and direction to -0.05 (all these can be adjusted to taste).
But this breathing does not feel very natural, turns out it is a triangle wave:
Notice the sharp edges? It would be nice to smooth them out.
Easing curves
To do so, we could learn about easing curves. Useful references:
Easings.net’s quick reference shows a wealth of available curves:
although it is focused on JavaScript/CSS web development. For hardware you’re on our own, but can use these easings for reference. I particularly like the easeInOutQuad, which can be edited on cubic-bezier.com. Cubic Bézier curves, with control points 0.25, 0.1, 0.25, 1. Simplifying The Math Behind the Bézier Curve, one-dimensional bezier curves, cubic:
y = A*(1-x)³+3*B*(1-x)²*x+3*C*(1-x)*x²+D*x³
But this complexity is not needed for our application. A sine function is sufficient, with appropriate scaling and translation for ±1.0:
Translating to Python:
import math
brightness = (math.sin(counter * math.pi * 1.5) / 2) + 0.5
Video demo showing the brightness varying by a sine wave over time:
Alarm
The breathing sine-wave PWM-controlled nightlight LED is cool and all, but that’s not all you can do with PWM. Previously I salvaged a magnetic transducer from Building an H-Bridge from a salvaged Uninterruptible Power Supply, but lacked the tools and knowledge to use it — until now.
Washable?
The datasheet for the WT-1201 P transducer (powered at voltage 1.5V(1–2V), impedance 16±4.5 Ω, frequency 2.4± 0.2kHz) says “W” in the part number indicates it is washable:
The datasheet also suggests an application of “Washing Machine”, among others. But that’s unrelated. Dave Tweed on StackExchange “Remove After Washing” on Piezo Buzzer explains, washable actually refers to industrial PCB assembly:
The industrial PCB assembly process usually leaves residues — mostly soldering flux — on the circuit board. One step in the process is to wash the board (by dipping or spraying) with a solvent to remove those residues for long-term reliability and for the sake of appearance.
Some devices (such as sound or pressure transducers) have openings for their functioning, and their performance would be adversely affected if the solvent or the residues got washed into the opening and lodged there. Therefore, such devices often have a sticker that covers the opening(s) that should not be removed until after the washing.
The more you know…
Magnetic Transducer vs Piezoelectric Buzzer
You may be familiar with piezoelectric buzzers. The PAC-WT-1202 on the other hand is a magnetic transducer. The same company produces both types. This is the construction per the datasheet:
The vibrating disk is pulled by the magnetic coil, oscillating to generate an audible sound. In contrast, piezoelectric buzzers employ the piezoelectric effect. What’s better? Depends on your use case, but TrippLite decided to use a magnetic transducer in their UPS alarm for some reason.
Transistor driver
As an inductive load (= has a coil), the transducer shouldn’t be powered directly from the Raspberry Pi’s GPIO ports. But we want to control it through the GPIO port. A transistor can be used for this purpose.
I used a D1786R NPN transistor, 10 kΩ resistor on the base wired to GPIO 19 (board pin #35), with a 1N5404 diode on the collector (as a flyback diode) and the transducer in parallel, as shown:
Note: the datasheet said 1–2V, I gave it 5V, it hasn’t blown up yet. YMMV.
PWMing the Transducer
Applying constant voltage to the transducer has no audible effect. Some transducers have built-in drivers, but not this one. To have it emit a tone you need to drive it with alternating current.
Pulse-width modulation is a convenient means to do this. Here the frequency becomes important, as it corresponds to the sound wave frequency. Give the transducer a square wave at 2400 Hz:
import pigpio
pi = pigpio.pi()
pi.set_PWM_frequency(19, 2400)
pi.set_PWM_dutycycle(19, 255/2)
The buzzer emits a loud tone, as expected.
Although the transducer’s frequency is rated at 2400 ± 200 Hz = 2200–2600 Hz, it can be driven at other frequencies, albeit at lower volume. pigpio rounds 2400 down to 2000, and 2600 also down to 2000, no difference. The next value up is 4000 Hz. Highest pigpio allows for software PWM is 8000, hardware PWM can go higher but human hearing frequency ranges from 20–20,000 Hz. Try 2000, 1000, 800, 500, 200, 100, 80, 50, 10 (clicks). Frequency response from datasheet:
Application
An alarm clock is a possible application of this buzzer, correlating with daylight or sunrise time. A full implementation of an alarm clock, with sleep/wake, snooze, etc., will have to wait for another time. For this article I’m going to add a momentary tone on the nightlight’s edge transitions, like so:
if is_light != was_light:
…
buzzer_started = time.time()
set_buzzer(True)
if buzzer_started and time.time() - buzzer_started > BUZZER_DURATION:
set_buzzer(False)
buzzer_stated = None
where set_buzzer() calls pi.set_PWM_frequency() and pi.set_PWM_dutycycle(), with 0 and 0 to turn off, or a reasonable frequency and 50% duty cycle (255*0.50) to turn on. Experimented with 2000 Hz, near the rated frequency of this transducer, but found it annoying; turned down to 10 Hz, a more muted clicking sound, still audible but less obnoxious.
Conclusions
Pulse-width modulation is essential for interfacing software to hardware. In this article we saw how it can be used to vary the brightness of an LED and to emit sound using a magnetic transducer, culminating in a simple example of a soothing nightlight with auditory output on edge transitions.
The complete source code for this nightlight is available on GitHub: | https://medium.com/@rxseger/pulse-width-modulation-using-pwm-to-build-a-breathing-nightlight-and-alarm-6f3ff5682afc?source=post_page-----6f3ff5682afc---------------------- | CC-MAIN-2019-39 | refinedweb | 2,028 | 56.86 |
The information in this article is no longer valid. Please go to the new version found at. The current downloads are removed.
A while ago I was looking for a usable editor in one of the web applications I am working on. There are quite a few alternatives, but I ended up picking TinyMCE. TinyMCE is a very easy to use editor, both for users and for developers. It's great to write one small line of code and have textareas turned automatically into usable editors! Using TinyMCE looks like this:
<script type="text/javascript" src="/tinymce/jscripts/tiny_mce/tiny_mce.js"></script>
<script type="text/javascript">tinyMCE.init({mode : 'textareas', theme : 'simple'});</script>
Wrapped in an ASP.NET AJAX extender, the markup becomes:

<asp:TextBox ID="TextBox1" runat="server" Width="100%" Height="200" TextMode="MultiLine" />
<cc:TinyMCETextBoxExtender TargetControlID="TextBox1" ID="TinyMCETextBoxExtender1" runat="server" />
The tricky part is keeping the editor alive across UpdatePanel partial postbacks. In the client-side behavior, the TinyMCE control is attached when the behavior initializes, and detached again just before a partial update replaces the panel it lives in:

this.registerPartialUpdateEvents();
this._pageLoadingHandler = Function.createDelegate(this, this._pageLoading);
this._pageRequestManager.add_pageLoading(this._pageLoadingHandler);

var element = this.get_element();
if (tinyMCE.getInstanceById(element.id) == null)
    tinyMCE.execCommand('mceAddControl', false, element.id);

...

_pageLoading : function(sender, args) {
    if (this._postBackPending) {
        this._postBackPending = false;

        var element = this.get_element();
        var panels = args.get_panelsUpdating();
        for (var i = 0; i < panels.length; i++) {
            // walk up from the textbox to see whether it sits inside this updating panel
            var el = element;
            while (el != null) {
                if (el == panels[i]) break;
                el = el.parentNode;
            }
            if (el != null) {
                // the editor is about to be wiped by the partial update,
                // so detach the TinyMCE instance first
                if (tinyMCE.getInstanceById(element.id) != null) {
                    tinyMCE.execCommand('mceFocus', false, element.id);
                    tinyMCE.execCommand('mceRemoveControl', false, element.id);
                }
                break;
            }
        }
    }
}
On the server side, the extender registers the TinyMCE script and its configuration only once per page (tracked via Page.Items), and registers an onsubmit statement so the editor content is saved back to the textarea before every postback:

[Designer(typeof(TinyMCETextBoxDesigner))]
[ClientScriptResource("TinyMCETextBox.TinyMCETextBoxBehavior", "TinyMCETextBox.TinyMCETextBoxBehavior.js")]
[TargetControlType(typeof(TextBox))]
public class TinyMCETextBoxExtender : ExtenderControlBase
{
    protected override void OnLoad(EventArgs e)
    {
        // ** load scripts (only once per page)
        if (Page.Items["tinymce"] == null)
        {
            HtmlGenericControl Include = new HtmlGenericControl("script");
            Include.Attributes.Add("type", "text/javascript");
            Include.Attributes.Add("src", Page.ResolveUrl("~/tinymce/jscripts/tiny_mce/tiny_mce.js"));
            this.Page.Header.Controls.Add(Include);

            // Config MCE
            HtmlGenericControl Include2 = new HtmlGenericControl("script");
            Include2.Attributes.Add("type", "text/javascript");
            Include2.InnerHtml = "tinyMCE.init({mode : 'specific_textareas', theme : 'simple'});";
            this.Page.Header.Controls.Add(Include2);

            Page.Items["tinymce"] = true;

            if (!Page.ClientScript.IsOnSubmitStatementRegistered(this.GetType(), "tinymcetriggersave"))
            {
                Page.ClientScript.RegisterOnSubmitStatement(this.GetType(), "tinymcetriggersave", "tinyMCE.triggerSave(false,true);");
            }
        }
        base.OnLoad(e);
    }
}
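The RegisterOnSubmitStatement call is the piece that keeps the server-side Text property in sync: TinyMCE replaces the textarea with an editable iframe, and triggerSave copies the edited HTML back into the hidden textarea just before the form posts. Conceptually it does something like this (a hypothetical simplification; the real tinyMCE.triggerSave iterates its own instance list):

```javascript
// Sketch of what triggerSave does for each editor instance:
// push the editor's current HTML back into its backing textarea,
// so the value travels with the regular form post.
function triggerSaveSketch(instances) {
    for (var id in instances) {
        var inst = instances[id];
        inst.textarea.value = inst.getContent();
    }
}
```

Without this step, the TextBox would post back whatever the textarea contained before TinyMCE took over.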
Does it work with the compression module (GZIP) for TinyMCE?
Not yet - but that is a great idea to add to this extender.
This is such a needed extender. I'm having a couple issues using it though. I'm using VS 2008 which may or may not be a factor here. I'm trying to use the extender inside an updatepanel which is inside a modalpopup. The tinyMCE text area shows fine initially, however once I do any sort of postback I get a server 500 error on any subsequent postback attempts.
Any thoughts as to what may be causing this and what I could do to fix it? I'm also using masterpages which I've heard can complicate matters as well.
Thank you for your comment epicNorth.
Unfortunately I am unable to reproduce the errors you receive, but that might be due to the fact that I use VS2005 atm (VS2008 has not yet arrived here). Do you get the error in any specific browser?
I don't think the errors is related to using masterpages - I use them as well and am not getting those errors. I'll try and see if I can get the error to pop up but unfortunately I am a busy man at the moment. Will get back on this though.
Currently on the list is:
1. Use Compression Module
2. Allow configurable editors
As it turns out my issues were totally programmer related. I had some pathing and permissions errors that were unrelated to the use of the extender itself.
I'm definitely looking forward to further customization options in the extender.
Hello epicNorth,
To avoid de 500 error, you must set in the @Page directive the attributte ValidateRequest="false".
Hello,
I have the same scenario than epicNorth:
Use MasterPage with a ModalPopup acosiated to an Panel. This Panel have a FormView with an TinyMCETextBoxExtender inside.
My problem is that when I close the modalpopup, all the textboxes in the page are as 'readonly' apparently.... but if press several times the Tab control I can edit the textboxes (the page loss the focus!!?).
Can anybody help me? Thanks!
Thank you for this extender!!! I have been tearing my hair out trying to get TinyMCE working inside an UpdatePanel.
I am using this in an UpdatePanel on a ModalPopup inside a TabPanel on a page that uses Masterpages, so if any of that complexity causes problems I'm probably going to see it ;-)
The extender control works great for me with one slightly odd exception. When the control first loads the textbox is seemingly "readonly". But when I click a button on the tinymce panel (for example the "B" (bold) button) The cursor shows up and I can use the control.
Any thoughts on what that is about?
Also, a slight tweak to the code:
I replaced the line:
Include2.InnerHtml = "tinyMCE.init({mode : 'specific_textareas', theme : 'simple'});";
with:
Include2.InnerHtml = "tinyMCE.init({mode:'exact', elements:'" + TargetControl.ClientID + "',theme:'advanced'});";
to make it ONLY apply to the TextBox I was intending to work with rather than all Multiline textboxes on the page.
When I use your extender, it brings the control up, but it's in a read-only mode. I can't do any typing.
Ideas?
Small update - I am currently adding styling options as well as looking into the other comments. Might be a while before the update is put online but at least you know I am working on it. ;-)
First off,
You = da man.
I had some custom configuration stuff that varied with where you were in the admin area so I added this to the code to control configuration options.
private string _Configuration = "mode : 'specific_textareas', theme : 'simple'";
public string Configuration
{
get
{
try
{
_Configuration = ViewState["Configuration"].ToString();
}
catch {}
return _Configuration;
}
set
_Configuration = value;
ViewState["Configuration"] = _Configuration;
}
John,
Thank you for your comment.
The version I have worked on the past few days in some spare time will have some great adjustments, like multiple configurations, support for Skins&Theming and autocompletion and am currently looking into hooking into the events.
More on this later.
Cheers,
CJ.
Thanks soo much Siets this extender has helped me alot.
I'm really really looking forward to your update.
Thanks !
For a sneak preview take a look at
Code will be available for download once I get to write a short article.
Hello!
I use this great extender in an UpdatePanel where I placed two text areas:
<asp:UpdatePanel
<ContentTemplate>
<asp:TextBox</asp:TextBox><br />
<asp:TextBox
<cc:TinyMCETextBoxExtender</cc:TinyMCETextBoxExtender>
<asp:Button
<asp:Button
</ContentTemplate>
</asp:UpdatePanel>
The OnClick method switches from a simple text area (first textbox control) to the tinymce editor (second textbox control) and persists the values:
public void Switch_Click(object sender, EventArgs e)
{
if (btSwitchRTE.Visible)
{
tb.Visible = false;
rte.Visible = true;
rte.Text = tb.Text;
else
tb.Visible = true;
rte.Visible = false;
tb.Text = rte.Text;
btSwitchRTE.Visible = tb.Visible;
btSwitchSimple.Visible = rte.Visible;
}
That works fine using IE or Firefox 2.x, but using Firefox 3 it doesn't work correctly. If I click the button to activate tinymce it's shown but I cannot click to the text area to add some text. If I click any icon on the editor (e.g. bold) it's working. Even if I define a default text for the tinymce textbox it's not shown after activating.
If the tiny mce is initially shown (visible) on startup it's working normally, even in Firefox 3. The problem is the switch which activates the tiny mce textarea on click. Why?? Any ideas? Can I get that working?
Best Regards,
Andreas
Please look at the following continuation of the AJAX extender. The version in this article is no longer 'supported' :)
weblogs.asp.net/.../ajax-extender-for-tinymce-continued.aspx
Andreas,
You might want to check the new version. I think it will work due to the adjustment of the initialization. If it still does not work, please let me know! | http://weblogs.asp.net/cjdevos/archive/2008/03/03/ajax-extender-for-tinymce-including-fix-for-updatepanels.aspx | crawl-002 | refinedweb | 1,413 | 52.05 |
IEEE_FUNCTIONS(3M) IEEE_FUNCTIONS(3M)
NAME
ieee_functions, fp_class, finite, ilogb, isinf, isnan, isnormal, issub-
normal, iszero, signbit, copysign, fabs, fmod, nextafter, remainder,
scalbn - appendix and related miscellaneous functions for IEEE arith-
metic
SYNOPSIS
#include <<math.h>>
#include <<stdio.h>>
enum fp_class_type fp_class(x)
double x;
int finite(x)
double x;
int ilogb(x)
double x;
int isinf(x)
double x;
int isnan(x)
double x;
int isnormal(x)
double x;
int issubnormal(x)
double x;
int iszero(x)
double x;
int signbit(x)
double x;
void ieee_retrospective(f)
FILE *f;
void nonstandard_arithmetic()
void standard_arithmetic()
double copysign(x,y)
double x, y;
double fabs(x)
double x;
double fmod(x,y)
double x, y;
double nextafter(x,y)
double x, y;
double remainder(x,y)
double x, y;
double scalbn(x,n)
double x; int n;
DESCRIPTION
Most of these functions provide capabilities required by ANSI/IEEE Std
754-1985 or suggested in its appendix.
fp_class(x) corresponds to the IEEE's class() and classifies x as zero,
subnormal, normal, , or quiet or signaling NaN. <<floatingpoint.h>>
defines enum fp_class_type. The following functions return 0 if the
indicated condition is not satisfied:
finite(x) returns 1 if x is zero, subnormal or normal
isinf(x) returns 1 if x is
isnan(x) returns 1 if x is NaN
isnormal(x) returns 1 if x is normal
issubnormal(x) returns 1 if x is subnormal
iszero(x) returns 1 if x is zero
signbit(x) returns 1 if x's sign bit is set
ilogb(x) returns the unbiased exponent of x in integer format.
ilogb(+-) = +MAXINT and ilogb(0) = -MAXINT; <<values.h>> defines MAXINT
as the largest int. ilogb(x) never generates an exception. When x is
subnormal, ilogb(x) returns an exponent computed as if x were first
normalized.
ieee_retrospective(f) prints a message to the FILE f listing all IEEE
accrued exception-occurred bits currently on, unless no such bits are
on or the only one on is "inexact". It's intended to be used at the
end of a program to indicate whether some IEEE floating-point excep-
tions occurred that might have affected the result.
standard_arithmetic() and nonstandard_arithmetic() are meaningful on
systems that provide an alternative faster mode of floating-point
arithmetic that does not conform to the default IEEE Standard. Non-
standard modes vary among implementations; nonstandard mode may, for
instance, result in setting subnormal results to zero or in treating
subnormal operands as zero, or both, or something else. stan-
dard_arithmetic() reverts to the default standard mode. On systems
that provide only one mode, these functions have no effect.
copysign(x,y) returns x with y's sign bit.
fabs(x) returns the absolute value of x.
nextafter(x,y) returns the next machine representable number from x in
the direction y.
remainder(x, y) and fmod(x, y) return a remainder of x with respect to
y; that is, the result r is one of the numbers that differ from x by an
integral multiple of y. Thus (x - r)/y is an integral value, even
though it might exceed MAXINT if it were explicitly computed as an int.
Both functions return one of the two such r smallest in magnitude.
remainder(x, y) is the operation specified in ANSI/IEEE Std 754-1985;
the result of fmod(x, y) may differ from remainder()'s result by +-y.
The magnitude of remainder's result can not exceed half that of y; its
sign might not agree with either x or y. The magnitude of fmod()'s
result is less than that of y; its sign agrees with that of x. Neither
function can generate an exception as long as both arguments are normal
or subnormal. remainder(x, 0), fmod(x, 0), remainder(, y), and fmod(,
y) are invalid operations that produce a NaN.
scalbn(x, n) returns x* 2**n computed by exponent manipulation rather
than by actually performing an exponentiation or a multiplication.
Thus
1 <<= scalbn(fabs(x),-ilogb(x)) << 2
for every x except 0, infinity, and NaN.
SEE ALSO
floatingpoint(3), ieee_flags(3M), matherr(3M)
18 August 1988 IEEE_FUNCTIONS(3M) | http://modman.unixdev.net/?sektion=3&page=copysign&manpath=SunOS-4.1.3 | CC-MAIN-2017-17 | refinedweb | 688 | 50.77 |
Please note that this talk covers the previous version of neosemantics, the current version has a lot of new features and capabilities, please check out the documentation and videos on the neosemantics Neo4j Labs page.
Presentation Summary
Jesús Barrasa, Neo4j Sales Engineering Director, presents Neosemantics, NSMNTX. NSMNTX is an extension plugin for Neo4j built to help users work with RDF and linked data.
Starting off, Barrasa discusses different types of data. RDF is a standard defined by W3C and is a standard model for data exchange on the web. As such, RDF offers a number of serialization formats.
Even though RDF is a different model it still has the same underlying abstraction of the world, and sees the world as a graph. With NSMNTX, users will be able to import RDF data into Neo4j. Because they are both graphs the transition is straightforward.
The import and export process for RDF data into and out of Neo4j is exactly the same. We run a series of commands, produce a serialization and export the graph as RDF. These are nonstop procedures. Additionally, users can also run Cypher on the database.
NSMNTX is not just about importing and exporting data. Other features include ontology management and publishing ontologies within your graph. There is .importOntology which will take the ontology from wherever it lives and it will import it. It also includes inferencing which is being able to define as data explicit descriptions or explicit behaviors that you want a general purpose engine to run. Finally, users can clone subgraphs from one Neo4j instance to another.
Full Presentation
Hi, everyone. My name is Jesús Barrasa. I’m going to be presenting Neosemantics, NSMNTX.
I’ll be talking about this extension plugin for Neo4j that will help you work with RDF data, linked data and do some quite interesting things.
I will start with a proper introduction.
I’m Jesús Barrasa. I’m based in London. I’m part of the field team with Neo4j and have been for the last four years. I’ve spent quite some time working with graphs and property graphs with Neo4j.
I also have a background in RDF. I did my PhD in Semantic Technologies. I spent years working on a lot of RDF, ontologies and linked data for some interesting projects. In particular, I focused on mapping relational schemas to ontologies. That was a long time ago.
The reason why I’m talking about NSMNTX , RDF, linked data and Neo4j today is because of my background. Additionally, I’ve been working for the last few years on this extension that I’m going to present.
I’m super happy to announce that the project Neosemantix joins the Neo4j Labs incubator.
This is great news!
Here is the URL with all of the information. I hope that we’ll get a lot more exposure and users. There’s already an involved community around it, but I’d love to amplify the use and get more contributions.
When you go to that URL, what you will get is the page of the project. Here’s what it looks like:
You’ll have a summary of what I’m going to be talking about today. The summary will include what Neoemantics is, what you are able to do with it and share some interesting links.
There is a button at the bottom of the page. You will see some documents and blog posts about the use of Neosemantix and Neo4j. There are also some video presentations and webinars that are useful when getting started.
More importantly, you have a link to the source code, as all the projects in the Neo4j Labs are open source. You are able to have a look at it and contribute. You also have a link to a fantastic documentation that Mark Needham has helped me prepare for you.
Let’s get started.
What is NSMNTX?
Most of you know Neo4j and RDF.
RDF is a standard defined by W3C. RDF is a standard model for data exchange on the web. RDF offers a number of serialization formats that we’ll be looking at.
What’s interesting is that RDF represents models of the world in terms of connected entities. RDF uses the notion of triples and that’s how it represents the world.
A triple is a subject connected to an object through a predicate. In Neo4j, that’s an edge in a graph. Effectively, when you combine triples, you’re forming a graph.
What’s interesting is that RDF, being a different model, has the same underlying abstraction of the world. It sees the world as a graph. One of the things that we’ll be able to do with RDF through NSMNTX is be able to import RDF data into Neo4j. They’re graphs, so they should be straightforward.
You are able to import RDF data, this RDF data could come from services that expose the results as RDF, it could be datasets that exist that are published as RDF or it could be your own files. Any kind of RDF data is able to be imported with NSMNTX into Neo4j. It’s important that we do so in a lossless manner. By doing that, we store the RDF data.
We store the RDF data as a property graph. In a little bit, we will see how that happens because that’s what Neo4j is. It’s a property graph. We store that graph in a way that it is able to be regenerated back again as RDF without loss of any single triple. That’s what I mean by a lossless import/export of data.
These are the two main capabilities of the NSMNTX extension and it’s where it all started.
I’d like to emphasize the idea that we see Neo4j as the mechanism to publish and import data, as defined by the W3C. It’s a model for data exchange and that’s how we use it. Essentially, Neo4j says that you’re free to use the storage that you like and RDF is this layer on top that simplifies and enables interoperability between applications through the standard format. These are the two main points – import and export.
There’s model mapping capabilities, essentially it allows you to map your Neo4j graph to an existing public vocabulary and expose your data according to that vocabulary. Additionally, there’s some basic inferencing capabilities. This is just the beginning of it.
Inferencing is not built into Neo4j, but NSMNTX brings a little bit of these. This is another powerful feature, some of the languages built on top of RDF are OWL or RDFS.
Importing data
Let’s have a look at importing data into Neo4j.
This scheme is a diagram represented in the three basic steps. There is some RDF data that comes from a previous RD, like a SPARQL endpoint producing RDF. Or it could be a static dump of RDF that you got.
You have that source of RDF, and through NSMNTX and by using the import RDF procedure, you are able to import all these triples into Neo4j and have a nice graph, one that you’re familiar with and that you love. Then, you’ll be able to query your data using Cypher. Plus, there are all the extra benefits that you get from having your data in Neo4j. For example, having access to the fantastic library of algorithms and getting all the fast traversals.
How does that happen, effectively?
There’s one main procedure. Because these extensions are mostly implemented as procedures.
The procedure for import is .importRDF and we’re going to see in a minute the parameters that it takes. However, there are two additional ones, which are called .streamRDF and .previewRDF which are utility procedures to analyze the RDF data before you actually persist it in the graph. The import does save the RDF in your Neo4j graph in a persisted way. Whereas the other two just parse the RDF and produce serialization for visualization purposes but do not persist it.
As I was saying, the procedure takes two main parameters. There’s two that have to be always there. One is the URL which is the point to where the data lives and that could be an HTTP endpoint.
It could be wherever the RDF lives and then the format. There are a number of serialization formats, standard by the W3C and you have some of them there. You have to specify which one your source data uses so that the parser treats it accordingly. There are common ones like Turtle, which is pretty compact. JSON-LD is pretty popular. There are also traditional ones, RDF/XML serializes XML, or N-Triples which is essentially a list of statements in plain text.
The URL, the location of your RDF source and the format are required, and then there’s a number of parameters that are optional. First example, I’ll run without parameters and we’ll see what happens.
Let’s try it!
Let’s have a look at this import capability in the first place. I’m going to bring up the Neo4j browser.
I have an empty database here. There’s no elements here and I’ve created a few queries that I’m going to use for this demonstration.
The first import statement looks something like this.
As you see it’s just a simple
CALL. The
CALLis the keyword Cypher to invoke installed procedure. I was calling the .importRDF that I was mentioning before, and all I’m passing as a parameter is the location of these RDFs. This is the URL of an RDF fragment that I’m going to show, and I’m specifying that it’s serialized as Turtle.
It would be a good idea for me to actually show that RDF. Here’s this fragment of Cypher.
For those who are familiar with Turtle, you see that we’re using serialization here. Also, there’s four descriptions and resources being described.
The first one is NSMNTX. The second one is APOC, which is the library of procedures and the third one is Neo4j-GraphQL. Finally, the fourth one is Neo4j.
I’m describing three plugins and I say in my triples that all three of them are Neo4j plugins. I have Neo4j at the bottom and it’s described as a graph platform. It’s an awesome platform. They all have a property that describes the version. You see that every resource has a version and some of them, in the case of plugins, have the release date and the version of Neo4j it runs on. This effectively implements the links in this RDF graph.
What I’m going to do is take this URL, publish in GitHub, and use it in my invocation of semantics import, and that’s all I have to do in Turtle.
I get a summary of the execution and what has happened there. Of course, this is a small set of triples – there’s just 19 triples there.
Now the triples have been imported, some namespaces have been created, and I’m going to show you what that looks like.
I’m going to click here to get all the elements, we have the three plugins.
There’s GraphQL, there’s Neo4j, there’s APOC and NSMNTX. All these statements and triples are not lost. You see that they have been represented. If I select one of them, as properties of the nodes, that’s the transformation that takes place when you input RDF into Neo4j.
Essentially what’s called literal properties, attributes of the statements, of the triples or of the resources are transformed into properties. We have GraphQL with its name, release date, version and a unique identifier, which is the URI. We have the same for APOC and NSMNTX. They’re linked through the
runsOnrelationship. You will notice that everything is prefixed and it’s not
runsOnbut
ns0_runsOn. This is the way we encode namespaces.
You realize in the RDF that I showed before everything represented was identified by URIs. That’s one of the building blocks of RDF. I want to be able to regenerate that, we don’t need that in Neo4j but if we want to regenerate that, I need to be able to do it. This
ns0actually is the reference in this list of namespaces that have been created.
We have our RDF data in Neo4j now. Now you are able to start querying it with Cypher and do all sorts of things.
Some people will say, “Hang on, I really don’t like having to work with these names because then that will affect my Cypher.” They think if they want to query a graph they’ll have to type
ns_0every time. The RDF data that I’m importing is just a data source. I want that RDF data stored in a natural way in Neo4j. That’s not a problem, so that introduces this idea of the configuration parameters that you are able to add to the input process.
Let’s go back to our import statement. What I’m going to do now is exactly the same as I did before. I’m going to add the handle vocabularies property. By doing that, I am able to tell where to keep the URIs, because I might want to reproduce them or the RDF afterwards. Or I want to keep them because I’m importing RDFs from different sources and I want to avoid clashes, so it’s important to me. The different options are important to different users. You could keep, you can shorten, or you can just ignore.
I choose to ignore this URI. I don’t care about the URIs, because all I want to do is import the data and make it look nice in Neo4j.
If I rerun this, I get exactly the same. However, this time I’m not generating namespace so I’m not needing them now. If I show the graph again, it’s exactly the same in terms of content.
You see that APOC still has all the attributes, but it all looks the same the way we’re used to in Neo4j. We name things without having to use prefixes. That’s one of the features that you get, configuration parameters that you could add to the import process. We are able to add more.
I’m going to show the filtering of predicates.
Let’s say that I’m going back to my RDF that I want to import. If you think you’re loading a dataset with hundreds of millions of triples, you might want to say, “I don’t care about everything. I’m not interested in inversions and release dates. All I want is the plugins and the connections with the platforms they run on.”
You are able to specify and be granular to the statement and the predicate level. You are able to specify which predicates you want to exclude from the import.
I do it by adding another configuration parameter, which is
predicateExclusionList. I provide basically a list of the predicates that I want to exclude. In this case, I just ignore the version and release date.
I’ll clear my graph to do the import one more time, this time I’m filtering the two predicates. You see that there’s 19 triples but only 12 have been loaded into. All have been parsed but only 12 have persisted in Neo4j. That’s precisely because of the filter we’ve just applied.
If I get the graph, you’ll see that it looks structurally still the same.
If you look at it now, we only have the name. We don’t have the version or the release date.
Export RDF data in Neo4j
Next, we look at how to export RDF data into Neo4j, the process is exactly the same.
You have your graph database and Neo4j. You don’t need to use a graph database, you could use any database or graph that you have in Neo4j. We get your graph and we run a series of commands.
This is implemented as an extension and produces a serialization, an export of our graph as RDF. That serialization is exactly what this second part shows, and these are nonstop procedures.
I’m in the process of including all the stop procedures that do these, but at the moment it’s implemented as an HTTP endpoint. I thought it would be a useful way to simplify integration with an RDF consuming application.
You are able to do things like take an element in your graph by ID and serialize it. You take an element by property and value, which is probably the better way to do it. Or you run any random Cypher on your graph and serialize the results as RDF.
Let’s try it!
I’m going to use a movie database. You are able to access it directly from the initial guide.
I load my graph with actors, directors, etc and you see that we have persons, directing movies and all the people that act in movies.
Above is the database that you get with Neo4j. We have a simple property graph.
Now, I want to be able to export this graph, as is, through an HTTP endpoint as RDF. The easiest way to do it is to describe operation.
I’m using the browser to describe the operation but what’s happening here is the column GET is simulating an HTTP GET request on that URL. The RDF is mounted on your server, in my case it’ll be running on localhost 7474, but it’s a proper HTTP request.
I’m going to ask NSMNTX to serialize this RDF, selecting it by ID. I understand that it’s probably not the safest way to do it but that’s what I want.
Let’s take Tom Hanks.
We see that Tom Hanks has 190 as his ID. I was saying it’s not safe because the ID is the internal unique identifier in Neo4j for specific nodes. You probably want to use some domain-specific nodes.
In this case, maybe the name of it’s a unique identifier, or any kind of primary key to identify a node. This is just for the example.
You select a node and when you click GET, you see that NSMNTX is producing an RDF serialization of this individual.
You see that this 190 is Tom Hanks – labeled as a person. I’m translating the labels in Neo4j into type statements in RDF and then the same with the properties.
Unique identifier 190 was born in 1956, has a name Tom Hanks, and has a number of relationships
acted inand
directedthat link it to other individuals. I am generating these unique identifiers on the fly.
We have the generation of RDF on the fly from any random graph. Another thing you could do is run some random Cypher, and get the output of this Cypher serialized as RDF.
Before I do that, I mentioned before that different serializations are probably going to show if I make any mistake. You could select the type of serialization that you want and the same RDF can be generated as Turtle, by default, or you could generate RDF/XML.
You could get any variant any possible realization JSON-based with JSON or the XML-based with RDF/XML, or Turtle, but this is completely computed on the fly.
Running Cypher on the database
Let’s run some Cypher on our database and export the output as RDF.
We do this with a POST request because I’m requesting the query with some parameters. I’m sending a Cypher element which contains the Cypher that I want serialized and the serialization format that I want to use.
Instead of returning any possible triple, let’s get triples of type
directed.
These will return all the paths formed by nodes connected through the
directedrelationship. I run this, and if I’ve made no mistake, that would produce a larger set of course. because it’s returning all the links and nodes represented by directors connected to movies.
You are able to produce RDF on the fly from your Neo4j graph.
If the Neo4j graph is the result of importing RDF previously, you have the capability to generate exactly the same RDF that you imported. That’s what this lossless nature of the import comes into play.
What else could you do with NSMNTX?
Model mapping
First, there is model mapping.
You have your Neo4j database and whatever graph. You could define a mapping between the entities in your graph and the schema of the elements in your graph.
We could use some public schema or any other public schema or ontology out there. What happens is you define this mapping and then when you run it, where mappings will be applied on the fly. You would be exporting your graph data coming from Neo4j, but describe in terms of the public vocabulary that you’re mapping to. That’s a pretty powerful capability.
There’s a number of other methods like public vocabularies, to public schemas, and then define mappings. You have all the documentations in the NSMNTX page. There’s some utility functions for listings, for dropping, for adding, etc.
Ontology management
You could publish the ontology in your graph.
I know the type of model we have because it’s the movie database. I could do something like RDF onto that and it will generate an ontology, an OWL description of the entities in my graph.
There’s a category called
movie, and there’s a category called
person. There’s a number of relationships between these two things such as
ACTED_INwhich connects persons to movies.
Because OWL is a language with vocabulary on top of RDF, you could publish your ontology. Keep in mind that this is different from manually, explicitly creating an ontology. This is an ontology that’s created on the fly. However, you could import an existing ontology and there are methods that will help you do that.
.importOntology
We have an .importOntology, which will take the ontology from wherever it lives and it will import it. Additionally, .importOntology will filter some of the elements and bring into Neo4j a simplified representation in terms of categories, subcategories, domains and range, but not much more.
This is intended to exclude explicitly some of these complex things. However, you could import ontologies using RDF, because it’s RDF in the end. That will bring a complete import of every single statement in the ontology.
Inferencing
Inferencing is being able to define as data explicit descriptions or explicit behaviors that you want a general purpose engine to run. In other words, as you run you generate data derived from the data that you have in your graph. That data is computed on the fly based on certain rules which could be ontologies or rules.
We are able to do inferencing based on the type hierarchies of both nodes and relationships. A subclass of hierarchies could be used to infer labels for nodes and also for relationships as you see here.
On Twitter, we met a user of NSMNTX that asked about how to use it from Python. There you have a great example of how to run a Cypher query on a Neo4j database. Serialize the output as RDF and import it directly with rdflib, which is a Python library that manages RDF.
Cloning subgraphs
I recently wrote about the possibility of using RDF as a way of cloning graphs from one Neo4j instance to another.
You export RDF as we’ve seen out of your graph and then import it on another instance. You use RDF as the mechanism to exchange data, which is it’s intended purpose.
Conclusion
I just wanted to conclude by inviting you to join the community. Ask all your questions. Share your experiences, help grow this fantastic project and contribute to building it.
Above is the source code and anyone who wants to add to it is more than welcome.
Want to engage with more technical talks like this one? NODES 2020: Neo4j Online Developer Expo and Summit is happening on October 20, so be sure to save your spot today!
Save My Spot
Save My Spot | https://neo4j.com/blog/neosemantics-linked-data-toolkit-neo4j/ | CC-MAIN-2020-50 | refinedweb | 4,088 | 74.39 |
# GDB Tutorial for Reverse Engineers: Breakpoints, Modifying Memory and Printing its Contents
GDB is THE debugger for Linux programs. It’s super powerful. But its user-friendliness or lack thereof can actually make you throw your PC out of the window. But what’s important to understand about GDB is that GDB is not simply a tool, it’s a debugging framework for you to build upon. In this video, I’m gonna walk you through GDB setup for reverse engineering and show you all of the necessary commands and shortcuts for your debugging workflow.
GDB Setup With Plugins
----------------------
The most common mistake I see is that people perceive GDB as a standalone debugger tool. I suggest you think of GDB as a debugger framework that allows you to build your own tools. Or you can use premade tools. My GDB-based tool of choice is Peda. It’s pretty easy to install, just follow installation instructions from Peda repository: <https://github.com/longld/peda>
```
git clone https://github.com/longld/peda.git ~/peda
echo "source ~/peda/peda.py" >> ~/.gdbinit
```
Program example
---------------
I’ll be using this simple program as our debugging target:
```
#include int main() {int x = 5;printf("x = %d\n", x);x = x + 5;printf("x = %d", x);return 0;}
```
This code creates local variable X on the stack, prints its value to the console, then adds 5 to X, and prints its value again. Pretty simple, just what we need for our learning example.
First Steps with Debugging a Program
------------------------------------
To open a file in GDB, type `gdb [filename]` in the console. GDB will open and load your file. Notice, that code execution hasn’t started. That’s because there’s a separate command to start the execution, it’s called `run`, or `r`for short.
And if we start it, our program successfully completes execution. That’s because we haven’t set any breakpoints yet. We can set a breakpoint using symbol name, for example, `break main`, or using address: `break \*0x555555555149`. You can print a list of all your breakpoints with ‘info breakpoints’ and delete them with `delete` .
Now if I issue `run`command, the execution conveniently stops at the beginning of the “main” function. And just to save us some time we can use command `start`which comes from Peda instead of doing all of this. This command will do all this work for us.
Of course, as with any other debugger, we can use single-stepping with command `step`or `stepi`.
And if you like to do single-stepping a lot, note, that pressing ‘Return’ on an empty line will make GDB execute the previous command you entered once more. Also, you can use command `next`to single step without entering function calls.
To step through several instructions at once you can use command `next` .
If you want to continue execution to a certain point in a program (for example to exit a loop), you can use command `xuntil` .
Modifying Registers
-------------------
To modify registers, use following construction `set$ = 1234`. For example, if I want to skip incrementing X, I can change the value of RIP register to 0x555555555176 with command `set $RIP = 0x555555555176`.
As you might have noticed, you can treat registers like variables in GDB. So I can assign RIP value of EAX with `set $rip = $eax` command. Now I can issue `context` command to make Peda reprint its beautiful context “window” to make sure that RIP points to some nonsense.
And I want to start execution from the start of ‘main’ function I can just write `set $rip = main`.
By the way, with Peda you have a faster way to skip instructions without executing them with `skipi` command.
Modifying Memory
----------------
Modifying memory is similar to modifying registers in the sense that any memory location can be a variable in GDB. So in my example, I want to change the contents of the format string “x = %d”. This string is located at 0x555555556004 in my binary. I can use this address as a variable and type in the same command as with registers: `set 0x555555556004 = “AAAA%d”`. But in this case, we’ll see an error message. That’s because you should always provide a variable type when modifying memory in GDB. So let me correct my command to `set {char[7]}0x555555556004 = “AAAA%d”`.
Print xprint hexdump hexprint
-----------------------------
You can use the command `x` to examine memory. For example, if I want to print 20 8-byte words from the stack pointer, I’ll use command `x/20xg $rsp`: 20 — the number of words, x — hexadecimal format, g — giant (8-byte words).
By changing the second `x` to `i` you can print memory as instructions.
The full list of supported modifiers you can find in the documentation:
> *From*[*https://sourceware.org/gdb/current/onlinedocs/gdb/Memory.html*](https://sourceware.org/gdb/current/onlinedocs/gdb/Memory.html)
>
> *You can use the command*`x`*(for “examine”) to examine memory in any of several formats, independently of your program’s data types.*
>
> `x/nfu addrx addrx`*Use the*`x`*command to examine memory.*
>
> *n, f, and u are all optional parameters that specify how much memory to display and how to format it; addr is an expression giving the address where you want to start displaying memory. If you use defaults for nfu, you need not type the slash ‘/’. Several commands set convenient defaults for addr.*
>
> *n, the repeat countThe repeat count is a decimal integer; the default is 1. It specifies how much memory (counting by units u) to display. If a negative number is specified, memory is examined backward from addr.f, the display formatThe display format is one of the formats used by*`print`*(‘x’, ‘d’, ‘u’, ‘o’, ‘t’, ‘a’, ‘c’, ‘f’, ‘s’), and in addition ‘i’ (for machine instructions). The default is ‘x’ (hexadecimal) initially. The default changes each time you use either*`x`*or*`print`*.u, the unit sizeThe unit size is any of*`b`*Bytes.*`h`*Halfwords (two bytes).*`w`*Words (four bytes). This is the initial default.*`g`*Giant words (eight bytes).*
>
>
Also, Peda provides you with a convenient `hexdump address /count` (dump “count” lines, 16-bytes each) command which you can use… well, to display hex dump.
And if you want to print again all the stuff Peda shows to you (instructions, registers, stack, etc.), you command `context`.
Final Thoughts
--------------
So today we’ve seen a glimpse of GDB functionality. To sum up, I want you to take home 3 things:
* use Peda or a different GDB plugin that suits you
* use ‘break’ and ‘delete’ commands to control breakpoints
* use ‘x’ command to print memory contents
If you have any further questions on GDB, please leave a comment below. Like the article, if you want more content like this. And happy hacking, you guys.
As always, instead of reading this article, you can watch the full video (bonus outtakes and cats included): | https://habr.com/ru/post/551500/ | null | null | 1,274 | 56.66 |
Hi Wix Forum,I am trying to create a breadcrumb trail for my site so that the website user can visual see the pathway they took through my site. Has anyone successfully done this with wix code?If so, what was the code you used? Thank you
this seems like something you could easily do with wix-storage and wix-location
i believe this code would work
the homepage will probably save as an empty string
to get this list at any time to be used you would call
Hi Ethan,
I attempted to insert the above codes as a custom code applied to all pages.
However, the entire codes were reflected on the header section of the published site
Great thank you Ethan!
In order to get the breadcrumb trail to show up on a specific page, should I embed an HTML on said page?
or just change the variable on that page?
if you add it to the Site Code it will run on every page. Getting the info you would add it to Page Code to receive the array with the path in it
hey,
you can do video how it's work?
There's really nothing to show. You simply add the code above to the Site Code tab and it makes an array with every page the user visits in order and stores it in the session storage.
however it does not account for Router pages
@Ethan Snow
Ok and how I pull it from session storage For to see like that
Thanks for the help, I appreciate it
@topink webmaster To display it like that the code would be:
then to display it on the site
$w("myElement").text = path
@topink webmaster
put this in the Site code
put this in Page Code
the code must be inside the onReady or called from a function they can not be top level (Except the Import's those must be top level)
import {sessions} from 'wix-storage' and #Breadcrumb give errors, what did i do wrong?
my bad i forgot to let path =
also set #Breadcrumbs to the id of the element on your page
How do i set the id of the element on a page?
in the properties tab you can set it
replace #breadcrumbs with the value of the item you want to set
My breadcrumb works but shows alot of unnecessary "paths"
Do you know what causes this?
This is not a script to show your path, it is a trail of the pages you have been to during the session. hitting Refresh does not start a new session. close and open a new tab to do that
I meant trail but didn't think of the word, in what time does the session reset?
close the tab and open a new one to restart it
@Ethan Snow Hi Ethan! Would it be possible to just show the path in minimal way? So when a visitor goes from home to products it would show "Home > Products" instead of "Home > Products > Cart > Home > Products etc etc etc" . This is also really helpful with Google's SEO so it would be a mayor pleasure if it worked! Thanks!
I see, Thank you very much!
Hi everyone,
We are trying to integrate the breadcrum system and managed to get the session storage session look good. But there are two thing we couldn't figure uit:
1: How do we set the breadcrumbs at a specific place of the website? So the texts we are referring to don't show up in the middle of the page. Does anyone has an example of how they implemented this?
2: The text are not clickable, therefor we don't see the value of use the breadcrumbs that much. What are your opinions about this topic? Does it still help for better SEO results for example? And can we make the breadcrumbs clickable aswell? Looking forward to your replies.
~ Rutney
Hi Rutney Sluis and friends
I am also facing the same problem. I have created the breadcrumb system thru schema in JSON ID but that is not clickable.
I also like to know.
Kindly suggest
Sam
I am also interested for SEO value...just following this thread to find a solution. Thanks all! :) | https://www.wix.com/corvid/forum/community-discussion/create-a-breadcrumb-trail-with-wix-code | CC-MAIN-2019-47 | refinedweb | 714 | 77.67 |
Created on 2019-03-06 21:54 by Lyn Levenick, last changed 2019-03-25 07:49 by rhettinger. This issue is now closed.
Running Python 3.7.2, it is possible to segfault the process when sorting some arrays.
Executed commands are
$ python3
Python 3.7.2 (default, Feb 12 2019, 08:15:36)
[Clang 10.0.0 (clang-1000.11.45.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> [(1.0, 1.0), (False, "A"), 6].sort()
Segmentation fault: 11
I did not have the opportunity to test on systems other than macOS, but anecdotally this is reproducible on both Windows and Linux.
Can confirm on 3.7.1 on Linux. The same is happening for sorted. I wonder if the optimisation done in 1e34da49ef2 is responsible
Can confirm for 3.7.2 on my macOS 10.14 system. Although this is the case in 3.7 on my current build of the master branch I get the following AssertionError instead:
Assertion failed: (v->ob_type == w->ob_type), function unsafe_tuple_compare, file Objects/listobject.c, line 2164
It seems like the AssertionError gets thrown in unsafe_tuple_compare().
Link:
Adding ned and Łukasz since this seems to happen on release builds.
Confirmed on Linux:
$ python3.6
Python 3.6.8 (default, Mar 5 2019, 22:01:36)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> [(1.0, 1.0), (False, "A"), 6].sort()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: '<' not supported between instances of 'int' and 'tuple'
>>>
$ python3.7
Python 3.7.2 (default, Mar 5 2019, 22:05:50)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> [(1.0, 1.0), (False, "A"), 6].sort()
Segmentation fault
On mac I confirm that 1e34da49ef2 segfaults and 6c6ddf97c4 gives the expected result:
➜ cpython git:(6c6ddf97c4) ✗ ./python.exe tests.py
Traceback (most recent call last):
File "tests.py", line 1, in <module>
[(1.0, 1.0), (False, "A"), 6].sort()
TypeError: '<' not supported between instances of 'int' and 'tuple'
The code segfaults at coming from.
@remi.lapeyre please make sure you are not unsubscribing others by mistake while adding comment.
Hi @xtreak, sorry for that.
I think the issue may come from where ms.key_compare is set, the conditions on the first ifs looks weird to me and I suspect ms.key_compare is set to unsafe_tuple_compare when not all elements are tuples.
The following patch fixed the issue and made the whole test suite pass:
diff --git Objects/listobject.c Objects/listobject.c
index b6524e8bd7..5237542092 100644
--- Objects/listobject.c
+++ Objects/listobject.c
@@ -1,4 +1,4 @@
-/* List object implementation */
+ /* List object implementation */
#include "Python.h"
#include "pycore_object.h"
@@ -2338,21 +2338,21 @@ list_sort_impl(PyListObject *self, PyObject *keyfunc, int reverse);
- }
}
/* End of pre-sort check: ms is now set properly! */
I will have another look at it tomorrow and try to open a pull request.
> The following patch fixed the issue and made the whole test suite pass:
Rémi, are you saying there are failing tests currently in the master related to this bug? It seems there are actually very few tests that test for TypeError and all those introduced in 1e34da49ef2 fail to test this code path.
> Rémi, are you saying there are failing tests currently in the master related to this bug?
No, you are right there is no tests for this code path and there is no tests on master related to this bug as far as I can tell.
I think the issue comes from, there is an early exit from the loop when keys don't have all the same type, but it is wrong to still think they are all tuples since:
- they don't all have the same type
- we did not check all elements in the list anyway
The code at should be guarded by the `if (keys_are_all_same_type)`. I opened a PR to add the tests from Lyn Levenick and the proposed fix.
Thanks for the analysis and the suggested PR which is now awaiting review. While segfaults are nasty, I don't see how this problem would be likely exploitable as a DoS without direct access to the interpreter. So I'm downgrading it to "deferred blocker" for now to unblock 3.7.3rc1.
New changeset dd5417afcf8924bcdd7077351941ad21727ef644 by Raymond Hettinger (Rémi Lapeyre) in branch 'master':
bpo-36218: Fix handling of heterogeneous values in list.sort (GH-12209)
New changeset 9dbb09fc27b99d2c08b8f56db71018eb828cc7cd by Raymond Hettinger (Miss Islington (bot)) in branch '3.7':
bpo-36218: Fix handling of heterogeneous values in list.sort (GH-12209) GH-12532) | https://bugs.python.org/issue36218 | CC-MAIN-2021-49 | refinedweb | 773 | 68.06 |
You can check your TensorFlow version for GPU support by following these simple steps.
Introduction
This guide walks through the process of checking your TensorFlow version to ensure that your system has GPU support.
Checking your TensorFlow version
With the release of TensorFlow 2.0, there are now two main versions of TensorFlow that you should be aware of:
TensorFlow 2.0 for CPU-only – this is the regular TensorFlow that you are used to and is still supported on CPUs.
TensorFlow 2.0 with GPU support – this version of TensorFlow will only run on NVIDIA GPUs with drivers >= 418.39 and a compatible CUDA toolkit installed.
To check which version of TensorFlow you have installed, simply import the tensorflow module and check the version:
import tensorflow as tf
tf.__version__ # should print ‘2.0.0’ for CPU-only, or ‘2.0.0-gpu’ for GPU support
3.GPU support
GPU support requires TensorFlow 2.1 or above. To check your TensorFlow version, open a terminal or command prompt and enter:
python -c “import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)”
You should see something like “v2.1.0rc2 2.1.0” printed to the console if you have GPU support.
Other ways to check your TensorFlow version
In addition to using the tf.__version__ attribute, you can also call tf.version to get a Version object with information about your TensorFlow installation:
>>> import tensorflow as tf
>>> tf.version
>>> print(tf.version)
1.8.0-dev20180314
Conclusion
If you’re using TensorFlow with a GPU, you’ll need to ensure that your version of TensorFlow is configured to use GPUs. You can do this by checking the version of TensorFlow that you have installed and making sure that it’s equal to or higher than the version required by your graphics card. | https://reason.town/check-tensorflow-version-gpu/ | CC-MAIN-2022-40 | refinedweb | 305 | 66.84 |
Fabulous Adventures In Coding
Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric.
(This is part one of a three-part series; part two is here.)
If complaintss = animal.MakeNoise(); // no noiseanimal = new Cat();s = animal.Complain(); // I hate yous = animal.MakeNoise(); // meow!Dog dog = new Dog();animal = dog;s = animal.Complain(); // squirrel!s = animal.MakeNoise(); // no noises =:
(1) transform all the virtual and override methods into static methods that take an Animal as "this", as before, and(2) make delegate-typed fields of those methods, and then(3) transform the calls to invoke the delegates
? If we do that then we can choose which delegates to put in the fields of each instance, and thereby control which method is invoked.
Next time we'll give that a try.
Since you're a cat person, I forgive you for not "WOOF"ing in your example.
I was kind of hoping you'd call C#-without-instance-methods "C-flat".
An aside from this, I just realized that extension methods are not as "broken" as I once thought, assuming they are guaranteed to make an unchecked call to a member of "_this" somewhere in their execution.
Framework Design Guidelines-inspired nitpick: SoreThroat should be called HasSoreThroat :)
Here's hoping for a Part X: Interfaces somewhere in the future. :)
For anyone interested in an under-the-hood perspective, I think Chapter 5 of the Rotor Book (freely available on Joel Pobar's blog: callvirt.net/.../Shared%20Source%20CLI%202.0%20Internals.pdf) does a nice job of explaining (at least the SSCLI's implementation of) method dispatch all the way down to the level of the execution engine (stubs, generic resolvers, and slot maps, oh my!). It includes shiny pictures and everything...(ok, there's nothing shiny about the pictures, I was simply trying to appeal to the attention deficit crowd--plus I'm dumb as toast).
The explicit-this implementation should be familiar to Pythonistas.
This implementation does not handle the Basenji.
@Gabe: That's enharmonically equivalent to B, which is more than 40 years old (details are available at en.wikipedia.org/.../B_(programming_language) for those who are unfamiliar with it). More of a step backwards than I think Eric intended! (See also en.wikipedia.org/.../Enharmonic)
Every time I put "virtual" before a method declaration I cringe at all the indirection. Thanks for reminding us that nothing is really magic.
This "explicit-this" stuff looks like "explicit-self" in Python :)
For the sake of completeness, with:
class Dog:
def make_noise(self):
return "yip"
Calling "s = dog.make_noise()" is exactly equivalent to "s = Dog.make_noise(dog)".
@Peter, @Gabe: Why "C-flat" aka "B"? Why not "C"? It's a surprisingly good fit, all things considered. | http://blogs.msdn.com/b/ericlippert/archive/2011/03/17/implementing-the-virtual-method-pattern-in-c-part-one.aspx | CC-MAIN-2015-18 | refinedweb | 461 | 58.28 |
Word processor and display
Info

- Publication number: US4495490A
- Application number: US06268609 (US26860981A)
- Authority: US
- Grant status: Grant
- Patent type: -
- Prior art keywords: function, text, cursor, display
The present invention relates to text manipulating devices or word processors arranged to respond to operator selected text and format for ultimately producing documents, but at an intermediate state for producing a visible image to assist the operator in selection and formatting decisions, especially during editing. The present invention is more particularly directed to those word processors which include displays (such as, for example, cathode ray tubes) having the capability of producing a visible image of multiple lines of text, although it should be apparent that the invention can be employed in general purpose digital processors to effect word processing functions.
Program listings corresponding to the flow charts are attached as an appendix including pages A1-A257.
There are, at the present time, a wide variety of commercially available devices within the field of the invention and the capability of these devices is truly remarkable. They can be characterized as including a keyboard with a plurality of keys thereon (similar to a typewriter). Depression of one or a combination of keys generates a unique coded signal (or keycode) which has the effect of providing a particular graphic or executing a function (such as carrier return, tab, etc.) with corresponding code storage; the keyboard forms one of the two major input capabilities. The other input capability is provided by a device for reading/recording on replaceable media; typically today, that media is magnetic, in the form of a card, tape or disc. Devices within the field of the invention also include some type of printing arrangement for printing with an impact, ink jet or other printer, a document including multiple lines of text in operator selected format; typically the document is the final desired product. Another output capability is provided by the display (usually an electronic display such as a cathode ray tube) which is capable of displaying a visible image made up of multiple lines each with multiple character locations. Responding to the input devices and controlling the output devices, is typically a microcomputer (although mini-computers and even main frame computers can also be used).
Notwithstanding the truly remarkable capability of devices currently available, there is a desire for further improvement. One very important area in which improvement can be obtained is that relating to the electronic display, where a number of trade-offs have been made in the past. Some devices have what is termed "full page" display capability in that the electronic display is capable of displaying up to 66 lines of text (which can be considered to constitute an entire page). Since it is the purpose of the electronic display to enable the operator to preview what the printer would produce if it were initiated into action, it was believed essential that the operator be able to distinguish, on the electronic display, all the various characters which the printer is capable of producing. This implies a certain minimal level of visible resolution. This resolution requirement, coupled with the capability of displaying up to 66 lines of text, each line including generally 80 to 100 characters or more, dictates the size of the electronic display, and this size carries with it cost and space requirements that it is one objective of the invention to improve. Other devices within the field of the invention include electronic displays which are incapable of displaying a so-called full page of text, and typically these electronic displays produce a visible image consisting of from 6 to about 30 lines of text. Obviously, with this reduced requirement, the size of the electronic display can be relaxed without sacrificing resolution. Since, however, this electronic display does not produce an image corresponding to an entire "page" it is impossible for the operator to accurately preview the appearance of an entire page.
The prior art does illustrate prior approaches to the solution of this problem. Fackler et al., in "Light Emitting Diode Editing Display", IBM TDB, Vol. 22, No. 7, December 1979, pp. 2614-16, illustrates how two rows of vibrating LED's may be used to display a fully readable single line of text and an associated image of a full page of text displayed at a resolution inadequate for resolution of each character but adequate for format presentation. Other suggestions directed at CRT displays are Bringol, "Abbreviated Character Font Display", IBM TDB, Vol. 19, No. 9, February 1977, pp. 3248-49; Webb, "Combination of Alphanumeric and Formatting Data on CRT Display", IBM TDB, Vol. 15, No. 7, p. 2136, December 1972; and Lindsay, "Segmented Display", IBM TDB, Vol. 14, No. 5, October 1971, pp. 1528-29. Bringol illustrates a new font which can give a viewer quick access to a particular portion of a page without actually displaying text. Webb illustrates a CRT display which, in addition to displaying a few (3) lines or line portions of text, simultaneously displays format information to locate the displayed line portions. Finally, Lindsay shows how a display of 40-70 characters in length can be used to access various line segments as selected by an operator for editing.
In addition, U.S. Pat. No. 4,168,489 discloses a Full Page Mode System for Certain Word Processing Devices which allows a full page of text to be displayed on a 10×5 inch screen, which is generally considered too small for displaying a full page of text. This is accomplished by displaying only five of the fourteen dot rows per character vertically and reducing the current to the CRT horizontal deflection circuit by a factor of three by switching an inductor into the deflection circuit. See also Haak, U.S. Pat. No. 4,230,974. The assignee of U.S. Pat. No. 4,168,489 announced a video type 1000 which has one mode of display with a 23 line capacity, but has another mode for displaying 66 lines on the same 10×5 inch CRT.
It is thus one object of the present invention to provide a display which is of the size normally associated with less than full page displays, but which is capable of giving the operator an accurate preview of what the printer would produce.
Commercially available electronic displays and components can be arrayed in a spectrum of relatively simple devices having relatively limited resolution, up to more complex, bigger and more expensive devices with increasing resolution. At the lower end of the spectrum is a standard TV set and a standard TV monitor, which is identical to a standard TV set with the radio frequency circuitry eliminated. As is well known to those skilled in the art, characters can be displayed on an ordinary cathode ray tube screen by employing coded signals, each character being associated with a different code. A character generator is provided which responds to different codes and generates video signals capable of causing a dot-like image of associated characters to be displayed. Commercially available componentry includes video control integrated circuits for controlling the interaction of such a character generator, and horizontal and vertical synchronizing signals to convert a string of coded signals representing characters into a video signal which, when displayed on a cathode ray tube will portray a visible image comprising a sequence of the characters corresponding to the sequence of coded signals. The commercially available integrated circuits are, however, configured to be compatible with the resolution of standard TV sets, so that the number of characters per line and number of lines per screen are limited to less than the number of character spaces per typical line of text, and less than the number of typical lines per page. It is another object of the present invention to enable the foregoing capability, i.e., that of giving the operator the benefit of a full page preview from an electronic display whose area is inadequate to display a full page of text with adequate resolution, and to achieve this object using commercially available cathode ray tube integrated circuits (chips).
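The code-to-dots conversion such a character generator performs can be sketched as a table lookup. This is an illustrative model only — the two-entry 5×7 font fragment and the function name are hypothetical, not taken from the patent or from any real character-generator ROM:

```python
# Hypothetical character-generator ROM: maps a character code to
# seven 5-bit dot rows (1 = beam on). A real ROM holds a full font.
FONT = {
    ord("A"): [0b01110, 0b10001, 0b10001, 0b11111, 0b10001, 0b10001, 0b10001],
    ord("T"): [0b11111, 0b00100, 0b00100, 0b00100, 0b00100, 0b00100, 0b00100],
}

def scanline_bits(codes, dot_row):
    """Video bits for one dot row of a line of coded characters."""
    bits = []
    for code in codes:
        rows = FONT.get(code, [0] * 7)  # blank cell for unknown codes
        bits.append(rows[dot_row])
    return bits
```

Scanning the same character row seven times, once per dot row, yields the full raster for one line of text.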
Another area of improvement relates to the insertion and deletion of the underscore symbol. Present technology complicates the insertion or deletion of the underscore symbol, especially when the insertion or deletion is to be effected without disturbing pre-existing character symbols. In addition, it is desirable to allow insertion or deletion of multiple underscore symbols relative to pre-existing character symbols with a minimum of operator action.
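One way to make underscores independent of the character symbols is to keep them in a separate attribute plane, so that setting or clearing a run of underscores never disturbs the characters themselves. The attribute-plane representation and the names below are assumptions made for illustration, not the patent's implementation:

```python
def toggle_underscore(underscored, start, end, on=True):
    """Set (or clear) the underscore attribute over [start, end)
    without touching the character symbols, which live elsewhere."""
    for i in range(start, end):
        underscored[i] = on
    return underscored
```

A single call then underscores (or un-underscores) an arbitrary run of pre-existing characters with one operator action.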
Present day technology allows the operator to change format of existing text by altering margins relative to page boundaries. However, it is also desirable to allow the operator to alter the relationship of the text to existing page boundaries without altering the format of the text relative to itself.
To assist the operator in editing functions, it is common to use a cursor symbol on a display to indicate exactly where in existing text any editing will be accomplished. To vary the editing location to any selected location of pre-existing text, the operator has cursor positioning controls available on the keyboard to move the cursor position to the desired editing location. However, it is sometimes necessary to position the cursor with respect to a rectangular grid as opposed to pre-existing text. It is therefore desirable to allow the operator to readily identify cursor position with respect to such a rectangular grid.
In order to assist the operator in editing functions the display, in one mode, shows both tab and indent tab function locations via unique graphics. Other functions, e.g., carrier returns (CR), temporary left or right margins (TL, TR), are not displayed unless the cursor is positioned to overlie such functions. At that time, the function is displayed via further unique graphics.
In the same mode of display since less than a full page of text is displayed the machine supports scrolling. To maintain contextual reference, however, scrolling (vertically only) breaks screen displays at convenient paragraph or sentence boundaries, if possible.
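A boundary-preferring scroll break can be sketched as follows. The blank-line paragraph marker and the punctuation test are assumptions made for the example; the patent states only that breaks fall at paragraph or sentence boundaries "if possible":

```python
def scroll_break(lines, capacity):
    """Choose where to end a screenful: prefer the last paragraph
    boundary (blank line) within capacity, then the last sentence
    end, else break at display capacity."""
    window = lines[:capacity]
    for i in range(len(window) - 1, -1, -1):
        if window[i] == "":                               # paragraph boundary
            return i + 1
    for i in range(len(window) - 1, -1, -1):
        if window[i].rstrip().endswith((".", "!", "?")):  # sentence end
            return i + 1
    return capacity                                       # no boundary found
```

The next screenful then starts at the returned line, so the reader keeps contextual reference across scrolls.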
In some word processors the insertion operation may serve to generate an unusually confusing display. More specifically, the insertion functions must relocate each display element beyond the insert location, one display element space to the right for each display element added to the document. Since the quantity of text to be added is developed time sequentially, the serial relocation of many lines of text can be distracting. To avoid this effect, once an insert function is identified, the entire line to the right is opened up (a hole is created) for insertion, by relocating all characters to the right of the insert location, in a single step. Thereafter, as inserted characters are added, the "hole" is filled. When completely filled, an additional hole is created until the insert is completed.
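The open-a-hole-then-fill strategy can be sketched like this; the hole size of a few cells and the list-of-cells model are arbitrary assumptions made for the sketch, since the patent does not specify either:

```python
HOLE = 8  # hypothetical hole size, in character cells

def open_hole(cells, at, hole=HOLE):
    """Shift everything right of `at` over by `hole` cells in one step."""
    return cells[:at] + [" "] * hole + cells[at:], hole

def insert_chars(line, at, text, hole=HOLE):
    """Fill the hole as inserted characters arrive; open a fresh hole
    whenever the current one is exhausted, then close any remainder."""
    cells = list(line)
    free = 0
    for ch in text:
        if free == 0:
            cells, free = open_hole(cells, at, hole)
        cells[at] = ch
        at += 1
        free -= 1
    del cells[at:at + free]  # close the unused part of the last hole
    return "".join(cells)
```

Only one bulk relocation happens per hole, so the text to the right of the insert point moves rarely instead of once per keystroke.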
In one mode (unformatted) of display, the unnecessary word spaces on the display are created for display convenience and have no real existence relative to the document being created or edited. Accordingly, the cursor controls prevent the cursor from being located in any area not occupied by text and/or functions.
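Constraining the cursor to text- and function-filled areas can be sketched as a clamp. Treating each line as a string and snapping the column onto its last occupied cell is a simplification of the patent's display-element model, assumed here for illustration:

```python
def clamp_cursor(lines, row, col):
    """Keep the cursor on text/functions only: snap the row into the
    document and the column onto that line's occupied cells."""
    row = max(0, min(row, len(lines) - 1))
    col = max(0, min(col, max(len(lines[row]) - 1, 0)))
    return row, col
```

Any attempt to steer the cursor into display-only padding is silently redirected to the nearest occupied position.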
In another mode (formatted) of display a novel hyphenation function is supported which shows the operator information respecting not only the hyphenation candidate in its relation to the right margin but also its relation to the average preceding line ending. This allows greater uniformity in line endings without the necessity for justification.
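The extra information shown to the operator — the candidate's distance to the right margin and its offset from the average preceding line ending — amounts to two subtractions. The column-number interface below is an assumption made for the sketch:

```python
def hyphenation_aid(prev_line_ends, right_margin, candidate_col):
    """Report how a hyphenation candidate relates to the right margin
    and to the average ending column of the preceding lines."""
    avg_end = sum(prev_line_ends) / len(prev_line_ends)
    return {
        "to_margin": right_margin - candidate_col,   # cells left before margin
        "to_average": candidate_col - avg_end,       # cells past the average ending
    }
```

Keeping "to_average" small from line to line yields the uniform ragged-right edge the patent describes, without resorting to justification.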
In order to further assist operator formatting decisions, on request the display will indicate the right and left margins and tab grid. Since these parameters may well vary within a document, the actual locations identified for these parameters are dependent on cursor location.
Contrary to the unformatted display, in the formatted display non-text filled locations have a real existence and therefore the cursor controls allow the cursor to be positioned within or without the text filled area. Performing an insert function at a document location outside of existing text produces a text matrix having special attributes. For one thing, such a text matrix (termed a block) can be treated as an entity separate and apart from the document text. It can be moved in a simple fashion relative to other document text or deleted, for example.
To produce the formatted and unformatted displays the processor uses the same basic document but applies different rules in creating the display. The former display is created by observing constraints imposed on the document by margins, indents, tabs, etc. In addition, the specific effect of any of these constraints can be varied under operator control. To achieve the effect of a full page formatted display without the resolution necessary to distinguish each display character, the display provides resolution sufficient to resolve only printable character/graphic filled locations from unfilled locations, i.e., each printable character or graphic is treated and displayed identically. On the other hand, the unformatted display simply extracts a string of text characters or punctuation graphics and strings them along, breaking lines substantially only when display capacity is reached, i.e., margin effects are not displayed as such.
Finally, using high rate digital signals in connection with CRTs whose phosphors were not designed for such signals can produce an effect termed "blooming," i.e., the image actually displayed is increased in size as a consequence of the interaction between CRT characteristics and the particular driving signal. By use of appropriate logic to monitor the video signal, along with signal shaping, the effect of "blooming" can be controlled.
The invention meets these and other objects of the invention by providing an electronic display for a word processor operating in two different modes; in a first (or text) mode the electronic display is capable of producing a visible image including a number of lines, less than the normal number of lines included on a full page, each line including a number of character locations which is less than the typical number of character locations on a line of a page. This mode is employed when the operator is initially keying in and editing text characters, and the display is essentially unformatted in that the operator's decisions as to the number of lines per page, the location of the margins on a page, and most other formatting decisions do not influence the format of the display. For example, aside from paragraph endings the operator need not key in line endings.
The same electronic display is, however, capable of operating in a second mode, and in this second mode it is capable of displaying a preview of a full page; however, since the area of the electronic display is incapable of displaying a full page with the resolution required to distinctively identify each character, the display merely indicates whether or not a character is located at each character position. In this second or page mode the display is fully formatted in that character filled positions are located between the operator selected margins and on a number of lines per page which is operator selected.
Architecturally the inventive word processor includes a display (for example, the aforementioned cathode ray tube) and apparatus to generate a composite video signal including the horizontal and vertical synchronizing signals necessary to properly drive the display. The composite video signal is generated from one of two refresh random access memories (RAMs), one for each mode of the display. The output of each of the refresh RAMs is coupled to a tri-state bus which provides an input to one of two character generators: a conventional character generator for the first (or text) mode, which is capable of displaying a number of different graphic characters (including text, punctuation, etc.) used by an operator in making up a document, and the other character generator, which may only distinguish filled from unfilled character locations. The foregoing apparatus is capable of operating the display in the two aforementioned modes with the use of a standard CRT chip by appropriately selecting the font for a character filled location in the second mode of display to be less than the space normally occupied by a character, so that the limited capability of the standard CRT control chip is nevertheless capable of designating what appears to the operator to be a number of character locations greater than the control chip was designed for. However, in order to achieve the foregoing functions the processor which responds to key operations provides a document memory into which are stored the unique keycodes generated by the keyboard in response to the actuation of each key or group of keys. When the operator is inputting text for the creation of a document, the computer loads one of the associated refresh RAMs as text characters are keyed in so that the operator continually sees a display of the document being created.
This first (text mode) display, however, carries essentially no format information, therefore comprising essentially only a text stream which normally fills each text line of the display. Two exceptions are operator keyed carrier return and tab functions (or required carrier return and tab). However, in order to maximize the amount of text included on each display line in this first or text mode, a tab function is displayed as a single unique graphic, regardless of the number of spaces actually associated with the operator keyed tab function and stored in document memory. In addition, when reviewing already keyed text, other functions are not displayed unless the cursor, under operator control, is located to coincide with the operator keyed function, at which time a unique graphic which identifies the function at that location is superimposed over the cursor. At any time during the keying of text, or after text keying is completed, the operator can switch to the second (or page image) mode. On such command, the processor loads the other refresh RAM so as to control the display in the page image mode. While the time taken by the processor to write either of the refresh RAMs is short, it is perceivable; however, once the RAMs are written, the operator can switch from one display mode to the other almost instantaneously. In the second or page image mode, the operator has available a number of formatting commands such as readjusting global or temporary margins, moving the initial writing point, and adjusting the number of lines per page, as well as others which will be described.
The keyboard also has associated with it a number of cursor positioning control keys which can be used by the operator to move the cursor. The cursor can be used, in the second (or page image) mode to indicate the new location of a margin, writing point, last line per page, etc. The cursor can also be employed in the first display mode (text) to indicate the memory position at which some editing function is to be performed. In keeping with the philosophy for the use of the two different display modes, the cursor, in the text mode, is constrained to move only within a pre-existing text field; whereas in the page image mode the cursor can be moved out of a pre-existing text field, although it is constrained to remain within an operator selected page size. When the cursor is moved out of the text field, in the page image mode, the cursor is blinked. Since the page image mode is intended to represent a preview of the printer produced document, the display within the operator selected page area consists only of printable graphics, i.e., functions are not displayed. However, in the page image mode, a split screen display is generated and various prompt commands or mode indicators are displayed external to the outline of the page.
While the keyboard includes function keys for switching from text mode to page image mode or back, the processor will automatically induce a transition in the display mode, if the operator keys a printable character in the page image mode. The text displayed following such a selection of a printable character will be the text in the vicinity of the cursor position, since it is that position at which the operator has selected the addition or insertion of the printable character. In addition to editing (inserting, deleting, etc.) text from previously typed or stored text, the processor is also capable of manipulating text segments (or blocks) which the operator has designated to be inserted outside the pre-existing text. These blocks are inserted, in the page image mode (and hence in the ultimately produced printed document) at the position designated by the operator by positioning the cursor. For text mode display the blocks are shown at the paragraph boundary following the block insertion. Each block has associated with it its own format including margins, tabs, etc. Once inserted, blocks can be manipulated separate and apart from the text, that is, they can be moved relative to the text, or their margins adjusted by simple commands keyed in at the keyboard. At the same time, operator invoked first writing point changes will also affect the associated text blocks in the same fashion as other text is affected.
In accordance with a further feature of the invention, the underscore operation is simplified in a number of respects. Firstly, one bit of the byte which identifies a character in document memory is dedicated to the underscore graphic. Accordingly, an existing text character can be underscored, or an underscore of such character can be deleted, by simply setting or resetting the dedicated bit. This operation, of course, does not affect adjacent characters or their pre-existing coded representations. Furthermore, the underscore function is enhanced to perform a full word underscore or word underscore deletion. The same operation (setting or resetting the dedicated underscore bit) is merely repeated once for each character in a word, selected by cursor location, when the command is asserted. The word boundaries are determined by spaces or functions surrounding the text graphics.
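The dedicated-bit underscore scheme may be illustrated as follows. This is a sketch only: the choice of the high bit as the underscore bit, the function names, and the use of a plain space as the sole word boundary are assumptions made for illustration, not details taken from the embodiment.

```python
US_BIT = 0x80  # assumed: the dedicated underscore bit of each document byte

def set_word_underscore(doc, i, on=True):
    """Set or clear the underscore bit over the whole word containing
    index i. Only the dedicated bit is touched, so the pre-existing
    character codes of adjacent bytes are unaffected. Word bounds
    here are plain spaces."""
    space = ord(" ")
    lo = i
    while lo > 0 and (doc[lo - 1] & 0x7F) != space:
        lo -= 1
    hi = i
    while hi < len(doc) and (doc[hi] & 0x7F) != space:
        hi += 1
    for j in range(lo, hi):
        doc[j] = doc[j] | US_BIT if on else doc[j] & ~US_BIT
    return doc

doc = bytearray(b"full word underscore")
set_word_underscore(doc, 6)  # cursor positioned inside "word"
print("".join(chr(b & 0x7F) for b in doc if b & US_BIT))
```

Deleting a word underscore is the same traversal with the bit cleared instead of set, which is exactly the symmetry the dedicated bit buys.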
In accordance with a further feature of the invention, the hyphenation function is simplified for the operator. Some prior art word processors are arranged to request a hyphenation decision from the operator when no word ending is located in a "hot zone" in the vicinity of the end of the line. The operator, however, in connection with these prior art devices, is required to make the hyphenation decision based only on the relation of the word ending to the right margin. In accordance with the present invention, in addition to this information, the operator is presented with information as to the average location of line endings for an immediately preceding number of lines, and in this fashion the operator's line ending decision can result in a line ending which is more uniformly aligned with the immediately preceding line endings.
In accordance with another aspect of the invention, the electronic display in the page mode can display an operator selectable page outline from among one or more predetermined page sizes. Margins are specified with respect to the selected page outline, and cursor positioning is also constrained by the selected page outline.
In accordance with another aspect of the invention, the page image mode, in addition to displaying an operator positionable cursor, provides, as an operator option, display of a pair of ghost cursors, each tracking the position of the cursor in one of horizontal and vertical dimensions so that cursor position at any location in an imaginary rectangular grid can be obtained.
In accordance with still a further aspect of the invention, the particular lines included in a text mode display (or screen) are selected in accordance with not only cursor location, but also sentence boundaries so that, if possible, when scrolling forward the beginning of a sentence or paragraph boundary which appeared on the last line of the previous screen appears on the first line of the display.
Finally, while the page image displayed is not normally used for entry or deletion, under certain limited conditions it can be so used. For example certain functions (CR, RCR, TAB, etc.) can be inserted in page image. Those same functions can be deleted in an error correcting backspace operation in page image.
The invention will now be explained so as to enable those skilled in the art to make and use the same in the following portions of the specification when taken in conjunction with the attached drawings, in which like reference characters identify identical apparatus and in which:
FIG. 1 is a block diagram illustrating significant components of the present invention;
FIG. 2 is an illustration of the typical keyboard which forms one of the input devices for use with and a component of the present invention;
FIGS. 3A-3C illustrate a correlation between text and a portion of a page image mode display for an alternate embodiment of the invention;
FIG. 4 is a more detailed block diagram of portions of the inventive apparatus, particularly associated with the display;
FIGS. 5-10 are schematics of the circuits used in different ones of the functional blocks of FIG. 4;
FIGS. 11A-11D illustrate the page image mode display for the same document with a cursor positioned at four different locations;
FIGS. 11E-11G are three text mode displays required to illustrate, in text mode, the entire document displayed in the page image mode displays of any of FIGS. 11A-11D;
FIG. 12 is a representation of document memory which will produce the display as shown in FIGS. 11A-11G, in connection with the present invention;
FIG. 13 is a high level flow diagram which is useful in understanding the overall processing performed in accordance with the present invention;
FIG. 14 is a map showing several significant functional components of system RAM in accordance with the present invention;
FIGS. 15A-15C provide a detailed flow diagram of the interrupt handler processing;
FIGS. 16.1-16.18 provide a detailed flow diagram for the main system routine in accordance with the present invention;
FIGS. 17.1-17.12 illustrate a detailed flow diagram for text mode processing in accordance with the present invention;
FIGS. 18.1-18.11 illustrate a detailed flow diagram for the cursor related processing in accordance with the present invention;
FIGS. 19.1-19.31 illustrate a detailed flow diagram for the page image mode processing in accordance with the present invention;
FIGS. 20.1-20.12 provide a detailed flow diagram of the graphics processing in accordance with the present invention; and
FIGS. 21.1-21.4 illustrate typical hyphenate function displays in accordance with the present invention.
The basic objective of the invention is to provide the operator with an instrument which is natural and easy to use without, at the same time, sacrificing a great deal in terms of equipment complexity or cost. To this end, the system of the invention includes at least a keyboard entry unit, a processor or processor-like device for responding to the input information and manipulating that information so that it can be employed by the remaining system components, a printer which is arranged to print the completed document when it has achieved the form desired by the operator, replaceable media memory to allow the document to be stored, and later recalled and edited or printed without requiring re-keyboarding of the information which had been previously keyboarded, and a display unit to assist the operator in entry, editing and formatting functions.
Preferably, the display unit is a cathode ray tube, and features of the invention impose minimal burden on the CRT so that a relatively inexpensive cathode ray tube even down to and including a conventional TV set or monitor can be used as the display. Associated with the display, to enable it to be used effectively, is additional hardware providing at least for refresh memory and associated logic and control hardware for use of the display in either of its two modes.
In a first or text mode, typically used when the operator is keyboarding text, the display is unformatted in that the operator's previous selections as to margins (either permanent or temporary), tabs, page length and the like play no part in the form of the display viewed by the operator. The display consists of a series of lines of text starting with the first line keyboarded by the operator, line length being dependent almost solely on the amount of text that can be accommodated in a horizontal text line on the display. The operator need not key in carrier returns and thus initial keyboarding is essentially unformatted. The only format decisions which are reflected on the display are operator entered carrier returns (for example, such as those used at the end of a paragraph) and required (indent) tabs (for example, at the beginning of a paragraph). The former format decision is exhibited on the display typically by prematurely terminating a line; the latter format decision is illustrated on the display by the use of a single unique symbol (regardless of the actual length of the desired indent tab which the printer will ultimately produce).
On succeeding lines an indent will occur on the display aligning the text to the right of the rightmost previous required tab. This indenting is terminated by a required carrier return. Some functions are displayed by unique graphics when the cursor is at their location.
At any time during the initial keyboarding the operator can preview what the keyboarded text would look like if printed, by switching the display to the page image mode with the use of controls readily accessible on the keyboard.
In the page image mode the same display is capable of allowing a preview of an entire page of text (for example, up to 78 lines of text). Resolution limits imposed by the size of the display screen force a change in character font, so that any character or punctuation graphic regardless of its identity is displayed in identical fashion. Accordingly, it will be impossible for the operator to read the page previewed on the display in the page image mode, but on the other hand, the format of that page, i.e., text filled versus empty locations will be readily apparent.
Readily accessible controls on the keyboard allow the operator to change the format of this page or the entire document by altering left or right margins (permanent) or imposing or altering temporary left or right margins. Likewise, the length of the page (the number of lines of text) can be increased or decreased (up to the limit of 78 lines displayable on the screen). In addition, the first writing point on a page can be altered; selected portions of the text can be moved with respect to other portions of the text or the tab grid can be altered. And in response to any or all of these format changes, the display will, substantially immediately, provide the operator with a preview of a page that would be printed under the newly imposed format.
To assist in formatting commands in the page image mode, a cursor (the intensity of which is greater than text) is provided whose position is controlled by control keys on the keyboard. Various format commands are interpreted in connection with cursor position, so that, for example, if the operator desires to move the left margin, the cursor is first positioned horizontally at the appropriate location for the desired left margin, and then the left margin command is asserted. To further assist in format changes the operator may superimpose a grid on the page showing the right margin, left margin, tab points, first writing line and last writing line. Each of these functions is controlled from the keyboard either on a document, temporary or page end basis. When the cursor is positioned at a location which is occupied by text, it is on continuously; however, when it is moved to a location outside of the text filled area, it will blink, and if the operator performs an insert concurrent with a blinking cursor, a block insert is provided which automatically assumes independent formatting for the block (temporary margins, tab points, etc.) based on the starting point of the insertion. The text forming the block is inserted in a special location in document memory and is movable as a unit in the page image mode with respect to text not in the block.
In the page image mode, the previewed page does not occupy the entire area of the screen; rather, a split screen effect can be implemented with the right hand portion of the display utilized for prompting messages and the like.
In the page image mode the page being previewed is illustrated along with an outline of a page size, and while a default page size is the typical 8.5×11", other and special page sizes can be employed and will be appropriately shown on the display.
While the text mode is the normal mode used for input operations, it is also the normal mode used for editing functions. To assist the operator, the text mode also provides a cursor, which is movable under operator control. However, in contrast with the page image mode, in the text mode the cursor cannot be positioned outside of a text filled area. When the cursor position corresponds to that of a function (which is normally not shown) a unique graphic will be shown to identify the function existing at that location so that the function may be deleted, for example. If the operator desires to insert text, whether it be a character, a word or longer, an insert command is asserted; responsive to the insert command, the line of text to the right of the cursor is moved to the next line, leaving the remaining portion of the line on which the cursor exists available for insertions. If the operator's insertions exceed that capacity, a new blank line is displayed for additional insertions, and so on until the operator again asserts the insert command, which has the effect of removing the system from the insert mode and deleting any unused insert blank spaces.
In the text mode, in addition to displaying text keyed in by the operator, the first line of the display is employed for mode and prompting indicators.
As the operator keys in text, either during normal input operation, or in the insert mode, automatic pagination is effected.
Another powerful revision feature implemented in accordance with the invention is the information presented to the operator at the time a hyphenation decision is required. Typically, a hyphenate decision will be encountered in the page mode, after the operator has specified left and right margins. As is conventional, the last five or six character spaces on a line are considered a "hot zone" so that if a word ending does not fall in the "hot zone", a hyphenate decision is required. Under these circumstances the display will automatically switch out of the page display mode to a hyphenate mode in which the entire word requiring a hyphenation decision will be displayed, and in addition the operator's selected right margin will be displayed via a graphic character such as an arrow, a further graphic character or second arrow will indicate the actual average line ending for a previous predetermined number of lines on the page. The operator then merely positions the cursor at the location at which the hyphen is desired (and the cursor is constrained to move only within the word) whereafter asserting a special command will both return the display to the page mode and record the operator's hyphenate decision.
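The two markers presented during a hyphenation decision can be modeled with a short sketch. The five-line averaging window and all names below are illustrative assumptions; the patent specifies only "a previous predetermined number of lines."

```python
def avg_line_ending(line_end_cols, window=5):
    """Average ending column of the last `window` lines, rounded to
    a whole column. The window size is an assumed parameter."""
    recent = line_end_cols[-window:]
    return round(sum(recent) / len(recent))

def hyphenate_markers(right_margin, line_end_cols):
    """Return the two column positions shown to the operator during
    a hyphenation decision: the selected right margin (first arrow)
    and the average ending of the preceding lines (second arrow)."""
    return right_margin, avg_line_ending(line_end_cols)

print(hyphenate_markers(65, [60, 62, 58, 61, 59, 63]))
```

The operator then places the cursor within the displayed word relative to both markers, trading proximity to the margin against uniformity with the preceding line endings.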
Reference is now made to FIG. 1 which is a block diagram illustrating important characteristics of the invention particularly related to the display features. As is shown in FIG. 1 the inventive system includes an input unit, the keyboard 10, for a processor 20. In an embodiment of the invention which has been constructed, the processor 20 is implemented via the IBM OPD minicomputer which is more completely described in U.S. Pat. No. 4,170,414; those skilled in the art will understand, of course, that the use of this particular processor is not at all a feature of the invention. As is shown in FIG. 1, the processor also is arranged to control a printer and a replaceable media recorder (such as magnetic tape, card or disc); these devices are not further shown or described in this application since, while they provide important capabilities to the overall word processing system they are, in and of themselves conventional, as is their association with the processor 20.
FIG. 1 also illustrates data and address buses 21 and 22, respectively, of the processor 20. Data bus 21 is buffered and coupled to internal bus 31, which in turn is coupled both to a text refresh random access memory (hereinafter text RAM 25) and a page display refresh random access memory (hereinafter page RAM 26) as well as the CRT controller 27. Outputs of the text RAM 25 or page RAM 26, along with the CRT controller 27, are input to the character generator 28 or pseudo character generator 29, which drive logic 30 to effect drive of CRT 32. As the operator keys in text, or as text is read from replaceable media, it is used to fill the "document memory", which is actually random access memory internally available to the processor 20. In an embodiment of the invention which has actually been constructed, the document memory consisted of a memory capable of storing up to two documents, so that, for example, while one was being printed, the operator could work with the other; hereinafter when we refer to the "document memory", we will be referring to the active document memory. The document memory contains information indicating the text characters keyed in by the operator and the order in which they were keyed in, along with the typewriter functions normally associated with such text and with functions to be defined hereinafter. The document being created is also partially defined by one or more tables which are also created by the processor 20 in the course of text entry. Thus, the document itself is defined partially by the contents of the referred to tables. It should be noted that this form of document memory distinguishes from the many prior art word processors in which a document memory maps to the final printed document. In any event, in addition to responding to and storing the codes generated by the keyboard in the course of operation, the processor 20 also writes the text RAM 25, since it is from the text RAM 25 that the CRT 32 is driven.
In an embodiment of the invention which has been constructed, text RAM 25 was arranged to allow the display of 14 lines of 52 characters per line; and without significantly altering parameters it is believed that this capacity can be increased, for example, to 16 lines each of 64 characters, since text RAM had a 1024 byte capacity, 7 bits of each byte defining the font and the eighth bit used for underscore. In order to generate the text mode display on the CRT 32, the CRT controller 27 repeatedly addresses the text RAM 25 so its contents are placed on the internal display bus 31 which, as is illustrated in FIG. 1, is coupled to a character generator 28 and a pseudo character generator 29, the reason for which will become clear hereinafter. However, the combination of text RAM 25, bus 31 and character generator 28 operates in an entirely conventional manner. For example, a series of 8 bit character defining codes from the text RAM 25, defining a line of text, are provided repeatedly to character generator 28 to result in a series of binary signals which are used to intensity modulate the CRT 32 and provide thereon a display consisting of the line of text. Thus, for example, the character generator 28 may include a typical read only storage character generator. As is shown in FIG. 1, the serial output string from the character generator 28 passes through logic circuit 30 before presentation to the CRT 32; the function of this logic circuit 30 will become apparent as this description proceeds. At this point suffice it to note that signals are added in the logic circuit 30 to denote the cursor position, for example, and in some embodiments of the invention to control the Z axis modulation for image enhancement purposes, especially when the invention is used with a CRT 32 which is not particularly designed for text display.
Those skilled in the art will recognize, in the CRT controller 27 a conventional device, which, in response to clocking signals, provides addressing signals to the text RAM 25 so as to read it out to provide the appropriate input to the character generator 28 properly coordinated with CRT deflection. The CRT controller 27 is programmable via the processor 20, within limits to provide a selected number of characters per row and rows per screen. Among the various parameters of the CRT chip which are selectable, scans/data row, characters/data row and data rows/frame are the most significant. The software, in one embodiment, sets the parameters as shown below:
______________________________________
Parameter             Text    Page Image (FAX)
______________________________________
Scans/data row         15            3
Characters/data row    64          132
Data rows/frame        16           64
______________________________________
The term scan refers to a single sweep, and a data row corresponds to a line, such as a line of text. Therefore, 15 scans/data row allows each character to include 15 pixels, vertically. Likewise, 16 data rows/frame allows 16 lines of text to be displayed. In practice, several character positions (horizontally) and data rows or lines (vertically) are not used. In addition, two lines of those available in text mode are used for the message line, leaving 12 lines of text. In page image, of the 132 positions available only 85 character positions are used for display of a page and its contents; the remainder (less positions not used) are dedicated to various prompts and messages.
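The geometry implied by the table above can be checked with simple arithmetic; the sketch below is only an accounting of the published parameters, not controller code.

```python
# Derive display geometry from the controller parameters in the
# table above (an arithmetic check, names invented here).
modes = {
    "text":       {"scans_per_row": 15, "chars_per_row": 64,  "rows": 16},
    "page_image": {"scans_per_row": 3,  "chars_per_row": 132, "rows": 64},
}

for name, m in modes.items():
    total_scans = m["scans_per_row"] * m["rows"]
    print(name, "->", m["rows"], "data rows,",
          m["chars_per_row"], "chars/row,",
          total_scans, "scan lines per frame")

# Of the 16 text-mode rows, two carry the message line, leaving 12
# lines of text; of the 132 page-image positions per row, 85 show
# the page itself and the rest serve the split-screen prompts.
```

Note how the page image trades vertical resolution per row (3 scans instead of 15) for four times as many rows, which is what makes a whole-page preview fit the same raster.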
In the text mode the CRT chip 27 is capable of addressing the required number of characters. As will be described, the page image RAM 26 cooperates with the CRT controller 27 in exactly the same fashion. That is, the CRT controller reads the RAM 26 to supply, on the bus 31, a byte stream representing the desired image, which is serialized and used to control the CRT. While FIG. 1 shows the RAMs 25 and 26 as separate from each other, that separation is only functional, as there is no reason why they cannot be located on the same partitioned chip. In addition, since the RAMs are used alternately they can in fact be implemented by a single RAM of capacity sufficient, at any one time, to perform the function of either text RAM 25 or page RAM 26. Furthermore, there is no fundamental reason why this RAM capacity cannot be integrated with the processor's internal RAM so long as either a DMA operation is used for refresh or some time multiplexing is used so that both the processor 20 and CRT 32 can use the same memory chip. In the embodiment we built, the page RAM 26 also had a 1024 byte capacity; however, each bit represented a character, so it was capable of storing 128 characters per row for 64 rows per screen. Of these we used 85 characters per row for our page image, the rest, 43 characters per row, being available in our split screen for prompts, etc.
While this, and other available CRT control chips, are programmable in that they are capable of varying the number of characters per row and the number of rows per screen, they do typically have limits which make it impossible to display a full page. A full page may require, for example, up to 156 characters per row and 78 rows per page or screen. Of course, RAM 26 would have to be extended (beyond 1024 bytes) to store a 156×78 character screen or page. In order to provide the desired display we must somehow provide a translation between the text characters (for example, the alphanumerics that the operator desires to appear on the printed page or the display) and the display characters, since the CRT control chip is incapable of providing a one-to-one correspondence between display characters and text characters. Accordingly, in one embodiment of the invention we group text characters into a two-by-two array, and define that two-by-two array of text characters as a single display character identified by a single byte. Now our limit of 156 text characters per row is reduced to 78 display characters per row, well within the capability of the control chip, and the requirement of 78 rows per screen is reduced to 39 rows per screen, also well within the capability of these conventional chips. While we have suggested a 2×2 array it is of course possible to employ other, larger arrays, for example 3×2 or 2×3, with existing 8 bit bytes. While there is no theoretical reason why even larger arrays could not be used, 3×3 or larger arrays would require 9 bits or more to distinguish the various combinations of filled and unfilled locations.
FIG. 3A illustrates an exemplary two-by-two array of text characters containing, respectively, upper case e, space, lower case g and period; FIGS. 3B and 3C illustrate, respectively, 10 pitch and 12 pitch displays corresponding thereto. It will be noticed that any character location, regardless of character identity, is displayed as a pel array which is 3 high by 3 or 4 wide, within which is a two-by-two pel array which is turned on if any character or punctuation is present and is off if the location is empty. To implement this, the page RAM 26 stores bytes which are significantly different from those stored by the text RAM 25. The text RAM 25 stores, as is well known to those skilled in the art, an 8 bit byte for each character (or space), which is translated by the character generator 28 into a sequence of binary signals for use by the display. On the other hand, the page RAM 26 stores an 8 bit byte for each two-by-two array of text characters, corresponding to a single display character. Since there are 16 different combinations in which the two-by-two array can be filled or unfilled (including the all-empty case), 4 bits will suffice to define the particular array configuration. The remaining 4 bits can be used for other functions, such as defining cursor placement with regard to any one of the four different text characters represented by this display character. Of course, it is necessary for the processor 20 to write the appropriate signals to the page RAM 26, and the manner in which that is effected will be described. From the preceding, it should be apparent that the pseudo character generator 29 then has the function of interpreting the 4 bits defining the particular stored display character and serializing the resulting binary signals so that the appropriate display is produced, for example, that shown in FIG. 3B or 3C.
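The byte packing just described can be sketched in a few lines of Python. This is a hypothetical model only: the specification says that four bits encode the fill pattern of the two-by-two array and that the remaining bits may carry cursor placement, but the particular bit positions below (fill pattern in bits 0-3, cursor quadrant in bits 4-5) and the function name are illustrative assumptions.

```python
def pack_display_byte(cells, cursor_quadrant=None):
    """Pack a 2-by-2 array of text cells into one display byte.

    `cells` is ((top_left, top_right), (bottom_left, bottom_right));
    a cell is "filled" when it holds any character other than a
    space.  Bits 0-3 record which quadrants are filled; bits 4-5
    optionally record which quadrant (0-3) holds the cursor.
    (The bit assignment is an assumption for illustration.)
    """
    byte = 0
    for i, row in enumerate(cells):
        for j, ch in enumerate(row):
            if ch is not None and ch != " ":
                byte |= 1 << (i * 2 + j)        # fill bits 0-3
    if cursor_quadrant is not None:
        byte |= cursor_quadrant << 4            # cursor bits 4-5
    return byte
```

For the array of FIG. 3A (upper case e, space, lower case g, period), three quadrants are filled and the top-right quadrant is empty, so the packed fill pattern under this assumed layout is 0b1101.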
The preceding description of a display scheme is not that actually employed in an embodiment of the invention actually constructed. In that embodiment the CRT controller 27 was a CRT video timer-controller (VTAC) designated CRT 5027 and available from SMC Micro Systems Corp., Hauppauge, N.Y. This particular chip is capable of displaying up to 132 characters per row, but if 132 characters per row are chosen, then it will only provide addressing for 32 rows. Otherwise (for 96 characters per row, or fewer), it will address 64 rows per screen. In the embodiment we constructed we selected 132 characters per row, took the most significant row address bit DR4 and divided it by 2 in a flip-flop, to generate an equivalent to the unavailable DR5, which we called DR6. This gave us the ability to address 64 rows per screen along with addressing for 132 characters per row.
It is also possible to go beyond 132 characters/row and 64 rows/screen with this particular chip, as follows. Either parameter is established by the chip's addressing outputs, used to read RAM 26. That is, the chip will output 132 different addresses between horizontal flybacks, and likewise address 64×132 locations between vertical retraces. To increase these parameters we take the least significant bit of an address and double its rate. Thus, take H0-H7, where H0 is the least significant bit, and halve the pulse width with a faster clock, giving an externally generated bit H(-1), the notation indicating that the externally generated bit is less significant than the internally generated H0. So, instead of eight address bits per row (limited to a maximum count of 132) we now have nine bits, which are limited to 264. With this new address set we can address up to 264 characters per row, or any lesser number with any one of a variety of decoding schemes. Further, we can, as we do, split the page image screen and put 85 characters per row in about 50% of the screen width. By using the same technique we can increase the row count per screen above 64.
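The nine-bit address set can be modeled as below. The function name is illustrative; the packing shown (internal count shifted left by one, external bit in the new low position) simply restates that H(-1) is less significant than H0.

```python
def extended_address(internal_count, h_minus_1):
    """Combine the chip's internal character count (H0-H7, at most
    132 distinct values) with the externally generated bit H(-1),
    which toggles twice per internal count.  Because the external
    bit is LESS significant than H0, the internal count occupies
    the high-order positions."""
    return (internal_count << 1) | h_minus_1
```

With 132 internal counts and two values of H(-1), the scheme yields 264 distinct addresses, matching the 264-characters-per-row limit stated above.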
Before describing the apparatus in detail, reference is made to FIG. 2, which illustrates the keyboard 10 as it appears to the operator. The keyboard 10 is, in many respects, similar to conventional word processors in that it includes a first field of keys 150 which includes typical typewriter alphanumeric symbols and functions. In order to further widen the number of unique key codes available from the key field 150, a code key 137 is provided which is used in connection with the simultaneous depression of another key within the field 150. The combination provides a key code which is different from that which is obtained when only the other key within the field 150 is depressed. Thus, for example, a first code is produced by the keyboard 10 when key 136 is depressed and a second, different code is produced when the keys 136 and 137 are depressed simultaneously.
Two additional control key fields, 151 and 152, are also provided. Key field 151 includes keys 101-103 to effect, respectively, a media read, a printing operation, and a media recording operation. In addition, keys 104-106 provide for placing the device in an insert mode, a review mode or a delete mode, respectively. The insert key 104 can be asserted in text or page image mode. When asserted in text mode it will cause the processor 20 to remove any text which is to the right of the cursor on the line at which the cursor is positioned; that text is moved down to the next adjacent line, and the following text is adjusted accordingly, so that the operator sees, on the display 32, at least a portion of a line which is available for text insertion. If the operator's insertion exceeds this capacity while in the insert mode, an additional blank line is provided, and all following text is moved down accordingly. In page image mode, insert will perform a text insert or a block insert (the latter if the cursor is blinking), depending on the cursor position. In either event, if a (printable) character is entered the device will switch thereafter to the text mode.
Review can be asserted in text or page image mode. In the latter mode, each subsequent assertion will produce an image of the next sequential page on the display, unless key 111 (up cursor) is simultaneously asserted, in which case the next preceding page image is produced. A similar effect is produced in text mode, except that rather than pages, each assertion will call up a different screen (maximum of 12 lines), either the next screen or the preceding screen if the up cursor (key 111) is also asserted. Rather than displaying an entirely different set of 12 lines per screen, the processor will attempt to maintain sentence or paragraph continuity; by default the last (or first) line of one screen is the first (or last) line of the next.
The delete key is a modifying command and therefore must be used with another command. It has the expected effect, namely, it can delete a block of text (in the page mode) or, in text mode, it can delete a character for each depression of cursor right (key 114), a word (key 110), as well as a word (key 122) or character underscore. It can also be used to delete previously entered functions, as is explained later.
The remaining key field (152) includes control keys 107 and 108 which can be used by the operator to control the display mode, either the text or the page image mode. In the page mode, asserting key 108 has no effect. However, in the text mode, asserting key 107 will align the beginning of the sentence/paragraph containing the cursor to the top line of the screen. The manner in which this is effected will be described hereinafter.
Keys 109-114 are provided to control cursor movement. Keys 109 and 110, for example, move the cursor one word to the left or right, respectively, in text mode. In page mode these keys are ineffective. In a similar fashion, keys 111-114 are provided to move the cursor up or down, right or left, either a single line space or character space per depression. As will be explained, in the text mode the cursor is constrained to move within the field of existing text. That is, cursor right (key 114) will, at the end of text in one line, go to the leftmost character in the next line, even if the line ending is not at the right edge of the screen. In text mode, with the cursor at the top of the screen, asserting cursor up (key 111) will scroll the screen up. Similar action occurs with the cursor at the bottom of the screen and assertion of cursor down (key 112). Like action can be effected with cursor left and right. In page image the cursor can be moved anywhere within the page boundaries; the cursor blinks when it is not over text. The manner in which the key codes generated by assertion of any of these keys are used to implement the functions will be described hereinafter.
Within the key field 150 itself, a majority of the keys and the results produced are conventional and will not be further described. However, the effect of actuating certain of the keys is worthy of brief comment. Several keys produce typewriter functions, i.e., index, backspace, carrier return, tab and space. The backspace is an error-correcting backspace only when initially keying the document, continuing to key the document, revising in insert mode, or under certain special conditions in FAX mode. Otherwise, backspace is similar to a left cursor command. The Req backspace is always merely a cursor left.
Each of the keys to be discussed hereinafter will produce the function described when the key itself is depressed in combination with actuation of the code key 137 (i.e., a coded function).
Depression of key 116 enables the operator to change the position of the first writing point. For example, with the page image display, the operator can move the cursor to the point at which the text is desired to begin. Actuation of key 116 will then effect this function. The reformatting effected by changing the first writing point does not alter the integral relationship of the document; this is effected by appropriately changing the global and temporary right and left margins, along with the first and last writing lines, in a manner to be explained hereinafter.
Depressing the key 117 provides a page ending code which is used to demark the end of a page. This is used by the processor 20 to format text, as will be described.
Depression of key 118 puts the inventive apparatus in a format mode; this is used to superimpose on the page image a tab grid and horizontal and vertical ghost cursors, each parallel to an associated grid, which track the document cursor in a single dimension, respectively. A second depression of key 118 restores the original page image.
In the page image mode the display includes a representation of the document boundaries so the operator can view the relationship of margins, etc. with respect to the document boundaries. For example, a common document boundary size might be 8-1/2 × 11 inches. Keys 119 and 120 can be used to display document boundaries of different but predetermined sizes. For example, key 119 can cause an envelope display, and key 120 might cause a display simulating the shape of an IBM card. Similar document boundaries could be called up by other conventional techniques.
Keys 125 and 126, respectively, establish temporary right margins and temporary left margins at the present cursor position. Therefore, an operator desiring to change or insert a temporary right margin, for example, would first move the cursor, using any of the keys 109-114, to an appropriate location, and then actuate key 125. The temporary margins will be applied, by the processor 20, to the text in the paragraph at which the cursor is located. The effect of temporary margins is cancelled at required carrier returns, as is the effect of other temporary parameters. Margins affect text display only in page image mode.
Permanent margins for the document can be selected by the operator in a similar fashion using keys 124 and 132, respectively; however, this can be done only in page image mode.
Key 123 provides a key code requiring return to the beginning of a document, and is used in either mode.
Key 122 provides for word underscore in text mode only. In use, the operator positions the cursor to lie somewhere within a word which it is desired to underscore. At that time, key 122 is asserted, and the processor 20 sets an underscore indicating bit in the document memory for each byte associated with each character in the identified word.
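The word underscore operation reduces to a small routine over the document memory. The sketch below models the memory as a list of byte values with bit 7 as the underscore flag, matching the underscore-indicating bit mentioned above; the word-boundary test used here (anything other than whitespace is part of a word) is a simplifying assumption.

```python
UNDERSCORE_BIT = 0x80  # bit 7 of each stored character byte

def underscore_word(document, cursor_index):
    """Set the underscore bit on every character byte of the word
    containing `cursor_index`.  `document` is a mutable list of byte
    values in which bit 7 is the underscore flag and bits 0-6 the
    character code (a simplified model of the document memory)."""
    def is_word_char(b):
        return chr(b & 0x7F) not in " \t\n"
    start = cursor_index
    while start > 0 and is_word_char(document[start - 1]):
        start -= 1
    end = cursor_index
    while end < len(document) and is_word_char(document[end]):
        end += 1
    for i in range(start, end):
        document[i] |= UNDERSCORE_BIT
    return document
```

With the cursor anywhere inside a word, every byte of that word, and no byte outside it, receives the underscore flag.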
Keys 129 and 131, respectively, provide for tab sets and clears, used in page image only. Key 130 is similar except it provides for temporary tab registers. Temporary tab registers are, like temporary margins, effective until a required carrier return is encountered. Keys 129 and 131, respectively, set or clear the temporary or global tab register depending on cursor location when the command is asserted.
Keys 128 and 135 allow the processor 20 to switch between the two documents in the document memory, key 128 calling up document A and key 135 calling up document B. In addition, when recalling document B, for example, after working on document A, the portion of document B which is displayed and the position of the cursor within document B are the same as the conditions existing when document A was called up.
Key 134 is used, in page image, to designate the number of lines on a page. In use, the operator positions the cursor at what is desired to be the last line on a page and then asserts key 134.
Key 133 is used to move blocks of text and is used in page image only. In accordance with the present invention the document is broken down into main text and one or more text adjuncts which are termed blocks. Text blocks, or simply blocks, while constituting a portion of the document, have an identity separate and apart from the main text, and key 133 allows blocks to be moved, individually, relative to the main text. A block insert is implemented after keying in the main text. This is effected by simply positioning the cursor, in the page image mode, outside of the main text. Since the cursor is outside the main text field, the display 32 will show it blinking; asserting the insert key 104 places the machine in a block insert mode. The operator can thereafter key in text, and as text is keyed in, display 32 shifts to the text mode to display the keyed-in text. The text is stored in the document memory, along with the main text, but its beginning and end carry block delimiters to identify the block. Key 133 then can be used, by first positioning the cursor at a desired location for the block and then asserting key 133, to move the block thereto. If there is more than one block in the document, the block which moves in response to assertion of key 133 is the block last operated on.
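Block storage might be modeled as below. The delimiter codes BLOCK_START and BLOCK_END are hypothetical values chosen for illustration; the specification says only that a block's beginning and end carry block delimiters and that a block move operates on the block last operated on.

```python
BLOCK_START = 0xFE   # hypothetical delimiter codes; the specification
BLOCK_END   = 0xFF   # states only that delimiters bracket each block

def insert_block(document, block_text):
    """Append a text block, bracketed by delimiters, after the main
    text in the document memory (a simplified list-of-bytes model)."""
    document.append(BLOCK_START)
    document.extend(ord(c) for c in block_text)
    document.append(BLOCK_END)
    return document

def last_block(document):
    """Return the text of the most recently stored block, i.e. the
    block that a block move (key 133) would operate on."""
    end = len(document) - 1 - document[::-1].index(BLOCK_END)
    start = end - document[end::-1].index(BLOCK_START)
    return "".join(chr(b) for b in document[start + 1:end])
```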
Similar techniques could be used for simplifying operator action on initially keying in main text by suitable modification to the system disclosed herein. More particularly, rather than requiring the operator to identify a left margin and first writing line before inserting text, the software could be modified to pick up these parameters from the operator's cursor positioning decision when initially keying in text.
Many of the functions briefly described in connection with the key field 150 can be deleted in a similar fashion by operating the delete key 106 along with another key. This places the machine in a delete condition such that, rather than inserting the function of the thereafter asserted key, that function, if present at the cursor position, is deleted. This applies, for example, to permanent or temporary margins, temporary tabs and word underscore. Likewise, a block can be deleted by asserting the block move key 133 when the inventive device is in the delete condition, in text mode. The delete condition is active when (and only when) the delete key is depressed, i.e., it is not a toggling function like insert, for example.
Finally, key 136 provides for clearing the document memory; in the embodiment of the invention in which the document memory is capable of holding two documents, the clear function is effective for the document being worked on.
Although some of the functions described above in relation to the keyboard are implemented via codes which are generated by simultaneous depression of more than one key, those skilled in the art will have no difficulty in understanding that this is not at all essential to the invention. With the addition of one or more keys, or a rearrangement of the functions performed by the available keys, any particular function implemented by simultaneous action of more than one key could instead be implemented by the action of only a single key, the only requirement being that different functions are identified by unique key codes, generated by the depression of a key or group of keys.
Reference is now made to FIG. 4, a detailed block diagram of the apparatus interacting with the processor 20 and the display 32. As is shown in FIG. 4, the processor 20 has a data out bus 21A, which may, for example, be 8 bits wide, which is provided as an input to a buffer 40; the output of the buffer 40 is coupled to the internal bus 31, which may also be, for example, 8 bits wide. A second buffer 41 drives a data input bus 21B to the processor 20, which may, for example, also be 8 bits wide, from the same internal bus 31.
The processor 20 also has an address bus 22, which may, for example, be 10 bits wide, which provides an input to selectors 42 and 43. These selectors have a multiplexing function in that they select one of their two inputs and provide it as an addressing output to the associated text RAM 25 or page RAM 26. The other inputs to the selectors 42 and 43 are provided by the output of the CRT controller 27 over a bus 60. Control signals input to the selectors 42 and 43 from the logic 58 make the selection; of course, when the processor is writing into the text RAM 25 or page RAM 26, the selector passes the address from the processor address bus 22, whereas, when it is actually desired to display, the selector 42, 43 passes addressing information from the CRT controller 27 to read out the associated text RAM 25 or page RAM 26. The internal bus 31 is also used to transfer status information from the processor 20 to a status latch 50 which provides various controls to the logic circuits 58. The status latch 50, in an embodiment of the invention which has been constructed, includes a bit for page image mode, a bit for highlight, a bit for black on white or reverse image, a bit for blanking, a bit for ghost cursor (split), a bit for blinking and a bit indicating when the vertical sync pulse, from the CRT controller 27, is active. The buffer 51 enables the processor 20 to read the contents of the status latch 50 at any time; since it is mainly useful for experimental operations, it is not at all necessary for providing the features described herein.
Display of cursor position is partly controlled by the CRT chip 27, which receives cursor position information from the processor 20 over the data bus 31. However, several features of the invention provide ghost cursors: a vertically moving ghost cursor identifies the horizontal row of the cursor, and a horizontally moving ghost cursor identifies the vertical column of the cursor. Display of these cursors employs a pair of comparators 48 and 49, indicating the locations of the horizontal and vertical ghost cursors, respectively. To provide these signals the comparators are provided with information as to the real time development of the raster, and this is implemented via the bus 60, which carries 10 bits of sweep identifying information, 7 bits providing character counts and 4 bits providing what is in effect line count information. Selector 47 provides the line count, and the lowest order bit of the character count, to the comparators 48 and 49 for comparison purposes. The other input to comparators 48, 49 is SP0-11 and EP0-11 from address latch 45, which provides 12 bits of address for each cursor.
The outputs of the comparators (the signals SE4, from start comparator 48, and EE4, from end comparator 49) are provided to the logic circuit 58 to form the ghost cursor illumination, which is merged with the basic video signal in a manner to be explained.
The outputs of the comparators (SL4 and EL4) are provided to logic 58 to form a highlight condition (from start position SL4 to end position EL4) to be merged with the basic video signal in a manner to be explained.
As will be explained hereinafter, in the course of text entry, the processor 20 forces the display apparatus to the text mode, and in that mode continually updates text RAM 25 so that it reflects the contents of the internal document memory of processor 20. When the processor 20 is not actually engaged in writing text RAM 25, the RAM is read out on a continuous basis via the control signals provided by the CRT controller 27. Each 8 bit byte in the text RAM 25 corresponds to a different character, space, etc., which is input to the character generator 52, in concert with 4 bits of row count information, to generate, for that byte, an N bit word, where N is the width of any character box. In the embodiment we constructed each character is 10 pels or columns wide by fifteen pels or rows high. This output is provided to latch 53. Actually, only seven bits of the 8 bit byte are used by the character generator 52; the eighth bit corresponds to the underscore bit. This is coupled directly to the latch 53. The latch 53 loads a shift register 54 with the N bits, and the contents of the shift register are read out by the appropriate timing signals to generate a serial signal stream CO which is input to the logic circuit 58. The underscore bit is read out of latch 53 and provided directly to the logic circuit 58. The underscore bit is simply a binary signal indicating, during the appropriate row time, whether an underscore graphic is to be displayed. The CO signal stream is the binary signal stream identifying the character; UNDSC is the underscore signal stream.
In an entirely similar manner, but with substantially different results, the page RAM 26 is read out by the CRT controller chip 27. However, in the page image mode, when the page RAM 26 is read, the signals on the internal bus 31 are coupled through a latch 55 to a shift register 56, which performs functions substantially similar to those of the shift register 54. As will be explained hereinafter, the page RAM 26 is written by the processor 20 on a one bit per character basis in a fashion that maps directly to the CRT screen; in this respect it is similar to the text RAM 25, where each byte corresponds to a character or space. Accordingly, each memory location in the page RAM 26 corresponds to a unique position on the CRT screen. The presence of a one bit at a memory location in the page RAM 26 corresponds to the presence of a character (or punctuation graphic) on the CRT screen at the corresponding location. The page RAM 26 is read out a byte at a time, and the 8 bits are serialized in the shift register 56. Each bit, provided to the pixel select circuit 57, is used to generate a binary signal depending on the presence or the absence of a bit from the page RAM 26. Thus, the timing signals Fc0, Fc1 (corresponding to one of four columns) and R0, R1 (corresponding to one of three rows) are used to generate the serial bit stream F*CLO which is input to the logic circuit 58. The pixel select network 57 could be any fixed switching network (such as a ROM) which is enabled by a bit from the shift register and which is addressed by the timing signals.
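The bit-to-pel expansion performed by the shift register 56 and pixel select circuit 57 can be sketched as follows. The cell geometry assumed here is 4 pels wide by 3 pels high (the narrower 12 pitch case of FIG. 3C), with the 2-by-2 lit block placed in the upper left of the cell; the exact placement of the lit block is an assumption, since the figures require only a 2-by-2 lit block somewhere inside each filled cell.

```python
def pel(bit, col, row):
    """Return 1 if the pel at (col, row) of a character cell should
    be lit.  A cell here is 4 pels wide by 3 high; a stored 1-bit
    lights a 2-by-2 block in the upper left of the cell and the rest
    of the cell stays dark (placement is an assumption)."""
    return 1 if (bit and col < 2 and row < 2) else 0

def serialize_byte(byte, row):
    """Serialize one page-RAM byte (8 character positions) into the
    pel stream for one scan row, most significant bit first, as the
    shift register and pixel select network would."""
    stream = []
    for i in range(7, -1, -1):
        bit = (byte >> i) & 1
        for col in range(4):
            stream.append(pel(bit, col, row))
    return stream
```

A byte with only its high bit set thus lights exactly the 2-by-2 block of the first character position and nothing else on rows 0 and 1, and nothing at all on the bottom row of the cell.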
Accordingly, the input to the logic circuit 58, either F*CLO, or the combination of CO and UNDSC (in the page image mode or text mode, respectively), serves to define, for the most part, the basic configuration of the display. The logic circuit 58 adds to this signal stream, in a manner to be explained, cursor position, any highlighting by the processor 20, ghost cursors, if selected, and character enhancement.
Before describing the logic circuit 58, reference is made to FIG. 5 which describes the clock circuit 59.
As is shown in FIG. 5, the clock circuit 59, which provides timing signals for the CRT chip 27 and the logic circuit 58, comprises a crystal oscillator operating at an appropriate frequency (in one embodiment of the invention the crystal operated at 12 MHz). The signal is provided to a pair of counters 70 and 71, both operated as dividers. The counter 70 provides a carry out (RCC) which divides the input rate by 10. This output is coupled to an AND gate 72 where it is ANDed with the signal FAX*. Thus, in the text mode (not FAX or page image) the output of AND gate 72 is a clocking signal at 1.2 MHz which is coupled through OR gate 73 and inverter 74 to provide the dot carry clock (hereinafter DCC). In addition, the counter 70 provides column count signals (C0 through C3) which divide the input rate by progressively higher powers of 2. The signal DCC provides one pulse per character.
In a similar fashion, the counter 71 is operated as a divider and its 2 outputs (Fc0, Fc1) are, respectively, the input rate divided by 2 and 4. These signals are coupled to AND gate 75 whose third input is the signal FAX. Thus, in the page image mode the output of AND gate 75 is at a 3 MHz rate (12 MHz divided by 4) which, when coupled through the OR gate 73 and inverter 74, provides DCC. The outputs of the counter 71 are also used in other portions of the logic circuit, as well as, for example, in the pixel select circuits 57. Thus, in page image the character clock is 2.5 times as fast as in text; this accounts for our ability to display at a rate of approximately 130 characters per row in page image versus 52 characters per row in text.
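The divider arithmetic can be checked directly; the sketch below simply restates the rates given above, with an illustrative function name.

```python
CRYSTAL_HZ = 12_000_000  # 12 MHz crystal oscillator

def dot_carry_clock_hz(page_image_mode):
    """Rate of the dot carry clock DCC.  In text mode counter 70
    divides the 12 MHz crystal by 10 (1.2 MHz per character); in
    page image mode counter 71 divides it by 4 (3 MHz)."""
    return CRYSTAL_HZ // (4 if page_image_mode else 10)
```

The 2.5:1 ratio of the two rates is what permits roughly 130 characters per row in page image against 52 in text.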
The logic circuit 58 is shown in schematic fashion in FIGS. 6 through 10.
Referring now to FIG. 6, circuitry is illustrated to develop the signal VDATA2. FIG. 8 illustrates how that signal, in conjunction with several others is used to generate the composite video VIDCOMP. FIGS. 7, 9 and 10 illustrate the development of certain input signals used in the circuits of FIGS. 6 and 8.
Referring again to FIG. 6, AND gate 82 is provided with input signals R1L and R3L, as well as the outputs of inverters 80 and 81, which are driven by R2L and R0L, respectively. Each of these input signals is basically one of the row defining signals R0 through R3 provided by the CRT chip 27 in response to DCC, synchronized with HSYNC (also provided by the CRT chip 27). Accordingly, AND gate 82 is satisfied for the 10th row of each character, which is the row which is illuminated for the underscore graphic. The output of AND gate 82 is provided as one input to AND gate 83 whose other two inputs are FAX* and UNDSC. Accordingly, AND gate 83 provides an output for each character (whose underscore bit has been set), during the tenth row of that character's display, and only in the text mode. Thus, the output of AND gate 83 is the underscore graphic. This is added to the basic binary signal stream CO in OR gate 84, and provides one input to AND gate 85.
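The decode performed by AND gates 82 and 83 amounts to the following predicate; this is a behavioral model, not the gate-level implementation, and the argument names are illustrative.

```python
def underscore_row(r0, r1, r2, r3, fax, undsc_bit):
    """Model of AND gates 82 and 83: the underscore graphic is
    emitted only on row 10 (R3 R2 R1 R0 = 1 0 1 0), only in text
    mode (FAX false, i.e. FAX* true), and only for characters whose
    underscore bit is set."""
    row_10 = bool(r1) and bool(r3) and not r2 and not r0
    return row_10 and (not fax) and bool(undsc_bit)
```

Row 10 in binary is R3 R2 R1 R0 = 1 0 1 0, which is exactly the condition that the inverters 80 and 81 establish on the inputs of AND gate 82.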
Since each character in the text mode is ten columns wide, inverter 86, driven by the timer output C3, is used in conjunction with AND gate 85 to blank columns 9 and 10, which are cleared on signal C0. Accordingly, the output of AND gate 85 is the display stream with the underscore graphic added for text mode display. For page image display the basic binary signal stream F*CLO is coupled through inverter 92 to NAND gate 93 whose other input is the signal CURS*. The output of NAND gate 93 is input to NAND gate 94. The other input to NAND gate 94 is developed in the NAND gate 91 by combining the signals EDGE and FAX. The first signal is, as will be explained hereinafter, developed to draw document boundaries in the page image mode, and the second is the status signal indicating page image mode. Thus, the output of NAND gate 94 is the page image binary signal for all locations (with FAX data cleared at the cursor location). This signal is ORed with the text image defining stream in OR gate 87. Of course, only one of the inputs to OR gate 87 is active at any time, since FAX and FAX* are mutually exclusive.
To see the development of the cursor image in the page image mode, reference is made to FIG. 7. As shown in FIG. 7, OR gate 162 is provided with the signal CRV*, a signal developed in the CRT chip to define the position of the cursor, and the signal BLNK, which is alternately high and low if the cursor is to be blinked, as it is in the page image mode when the cursor is outside a text field area. The output of OR gate 162 is provided to inverter 163, the output of which is provided as one input to OR gate 164. The output of OR gate 164 (CRV1) is provided as a signal input to a flip-flop 90 whose clock input is coupled to DCC. The output of the flip-flop 90 (CURS) provides an input to the exclusive OR gate 88 whose other input is the output of OR gate 87. Accordingly, the output of exclusive OR gate 88 is a signal stream which is either text mode video (including underscore characters) or page image video including the page boundary, together with cursor video, for both text and page image.
The use of a ghost cursor feature has already been explained, and the locations of the ghost cursors are determined by the comparators 48 and 49, producing the signals SE4 and EE4 (for the locations of the horizontal and vertical ghost cursors, respectively). These signals are provided to an OR gate 166, the output of which is coupled to AND gate 165. The other two inputs to the AND gate 165 are SPLIT (one of the status signals, which is on in the page image mode and only when the ghost cursors are to be displayed) and the signal CRV* (to clear the ghost cursors/FAX data at the main cursor location). Thus, the output of AND gate 165 defines the ghost cursors and is the other input to OR gate 164. Accordingly, the signal CRV1, in addition to defining cursor location for both text and page image mode, in the page image mode also defines the position of the ghost cursors. As a result, the signal CURS also carries cursor and ghost cursor information as an input to exclusive OR gate 88.
Exclusive OR gate 89, one of whose inputs is the output of exclusive OR gate 88, provides a highlighting feature. The presence of the gate allows areas to be highlighted (displayed as a reverse image). In other words, for the portion of the screen which is highlighted, an otherwise normally on pixel is off and an otherwise normally off pixel is on. This is simply effected by rendering the other input to exclusive OR gate 89 active for the entire period during which the highlighted portion of the screen is being displayed. The signals SL4 and EL4 provide, respectively, time definitions of the start and end of the highlighted field. Thus, the combination of inverter 95 and AND gate 96 provides a signal which is high during the portion of a field to be highlighted. This signal, delayed by one clock count in the flip-flop 97, is provided to pass AND gate 99 (in the text mode). The signal HILIT, a status signal, partially enables AND gate 102 to provide the active input to exclusive OR gate 89 to provide the highlighting function. In the page image mode the delayed highlighting output of flip-flop 97 is further delayed in flip-flop 98 where, in a similar fashion, it is coupled through AND gate 100 in the presence of FAX, through OR gate 160 and AND gate 161.
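The reverse-image effect of exclusive OR gate 89 reduces to a single XOR per pixel, as the behavioral sketch below shows; the function and argument names are illustrative.

```python
def video_with_highlight(pixel, in_highlight_field, hilit_enabled):
    """Model of exclusive OR gate 89: inside the highlighted field,
    with the HILIT status bit set, every pixel is inverted,
    producing a reverse image; elsewhere the pixel passes through
    unchanged."""
    invert = in_highlight_field and hilit_enabled
    return pixel ^ (1 if invert else 0)
```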
Accordingly, the signal VDATA2 includes all character graphics, cursor position, underscoring and highlighting.
Reference is now made to FIG. 8 to illustrate the further development of the video signal. A status signal BONW is input to exclusive OR gate 165, the other input of which is the signal VDATA2. Accordingly, in the presence of the signal BONW the entire screen is highlighted or reverse imaged. The output of exclusive OR gate 165 (VDATA3) is one input to AND gate 166, the other input to which is the output of NOR gate 168. The output of NOR gate 168 (BLEN) enables the video which would otherwise be output by AND gate 166 to be blanked. The inputs to NOR gate 168 define three conditions, any one of which is sufficient for blanking. Blanking can be accomplished by the status signal BLANK, which is effective for the entire screen; by the signal AD15P (which is indicative of the processor writing into an active refresh memory, and is therefore active for the processor access time); or by the signal BL, provided by the CRT chip 27, which defines those portions of the video which are outside the area being written, and which is therefore effective only for those portions of the screen for which the signal BL is active. Thus, the output of the AND gate 109 (VIDEX) is the blanked video.
The remaining portion of FIG. 8 is used for character enhancement, and may not be necessary at all depending upon the particular monitor or display being used. For those high resolution displays with phosphors arranged for digital display, the enhancement provided by the remaining logic of FIG. 8 may not be essential. FIG. 8 is useful, however, when the CRT being used is that of a commercially available TV receiver or monitor whose phosphor is subject to "blooming" when driven by a digital signal of repetition rate higher than a predetermined rate. Character enhancement, or prevention of "blooming", is effected by reducing the duty cycle of the drive to the grid of the CRT depending on the frequency of its input. Since this can only be determined by noting a successive number of pulses to the video, the output of AND gate 166 is provided as an input to a four stage shift register 167. This enables the remaining logic to look one bit into the future (at stage A) and two bits into the past (stage D). The "present" video output is one input to AND gate 177. This partially enables AND gate 177; the other input determines the pulse width. Under certain circumstances (particular bit combinations in the register 167) the other enabling input to AND gate 177 will be a half pulse width. The NAND gates 169, 170, 178 pick out particular bit combination patterns and clock flip-flop 171 for the patterns (in stages A-D) of 0101, 0111, 1100, 1101, 1110, and 1111. Whenever such a bit pattern is detected flip-flop 171 is set, gate 172 is partially enabled, and is fully enabled at the rate and duty cycle of TWMHD (a 12 MHz signal). On the other hand, gate 173 passes the signal TWMHD for all text mode (FAX*) display; as will be seen, the upper pair of gates 172 and 174 handles all non-cursor representing pulses and the lower pair of gates 173 and 175 handles the cursor representing pulses.
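The pattern selection performed by shift register 167 and gates 169, 170, 178 can be sketched as a sliding 4-bit window. This is only a model of the selection step (which bits get narrowed), not of the analog pulse-width slicing itself; the stage ordering (A = one bit ahead, B = present, C and D = one and two bits behind) is our reading of the text:

```python
# Patterns in stage order A,B,C,D for which the present pulse is
# narrowed to half width, per the NAND gate decoding described above.
NARROW_PATTERNS = {"0101", "0111", "1100", "1101", "1110", "1111"}

def narrowed_bits(video_bits):
    """Slide a 4-bit window over the video stream, as shift register 167
    does, and flag each 'present' bit whose surrounding pattern is one
    the gates 169, 170, 178 select for narrowing."""
    padded = [0, 0] + list(video_bits) + [0]   # give every bit a full A-D context
    flags = []
    for i, bit in enumerate(video_bits):
        a, b, c, d = padded[i + 3], padded[i + 2], padded[i + 1], padded[i]
        flags.append(bit == 1 and f"{a}{b}{c}{d}" in NARROW_PATTERNS)
    return flags
```

Note that an isolated single pulse (0,1,0) matches none of the patterns and so keeps its full width, while every pulse inside a run of three is flagged for narrowing, which is consistent with limiting "blooming" only at higher repetition rates.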
Since one input to gate 173 is FAX*, the FAX cursor is not subject to width limitation as a result of the gates 169-170 and 178.
For non-cursor signals in text mode which are selected by the gates 169-170 and 178 (and hence set flip-flop 171) the signal TWMHD passes gates 172, 174 and 176 to, in effect, slice the pulse width output of gate 177. Accordingly, for selected pulse combinations, a single pulse has its width reduced by gate 177 to limit "blooming". Similar action is produced in gates 173, 175 for text cursor representing pulses, i.e., one or more of the pulses will be reduced in width.
The gates 174 and 175 are personalized for non-cursor and cursor representing pulses by their inputs from inverter 183 and shift register 182, respectively. In text mode, cursor representing pulse times are identified by an output of gate 179, which is suitably delayed in register 182 to match the delay of register 167 and flip-flop 171. This partially enables gate 175, which is fully enabled at the 12 MHz rate and duty cycle by the output of gate 173 (in text mode only). On the other hand, in page image mode, gate 180 is enabled. Because one input is DCC, the normally four pulse wide CRV0 is reduced to three pulse widths at the outputs of gates 180, 181. This reduced width page image cursor is coupled through the inverter 183 to pass gates 174 and 177. Although the actual pulse count out of gate 177 for the page image cursor is only three, the visual effect is that of a four pel wide cursor because of "blooming" on the monitor.
Transistors Q1 and Q2 provide, in effect, an OR gate to sum the horizontal and vertical sync, through exclusive OR gate 127, with the output of AND gate 177, which is the complete video exclusive of the synchronizing signals. The result, available at the tap, is VIDCOMP, which can be directly provided to a CRT display. Accordingly, the logic 58 develops an unformatted VIDCOMP in text mode and a formatted VIDCOMP in FAX mode.
FIG. 9 illustrates how the signal BLNK* (used as an input to OR gate 162 in FIG. 7) is developed. That is, the signal VSYN (available from the CRT chip 27) is used as a clocking input to a rate multiplier 184. The output of the rate multiplier is used to clock a flip-flop 129 which is cleared by the status signal BLINK. Accordingly, the signal BLNK alternates in polarity once for each eight times VSYN is produced. This provides for the blinking cursor in that the cursor is illuminated for eight screens and blanked in the next eight (of course, only when the signal BLINK is active).
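The divide-by-eight blink behavior can be modeled directly (a sketch; the function name and the boolean representation of the BLNK level are ours):

```python
def blink_states(frames, divide_by=8):
    """Model the rate multiplier 184 / flip-flop 129 chain: the
    cursor-enable level toggles once every `divide_by` vertical syncs,
    so the cursor is shown for eight screens and blanked for the next
    eight (while BLINK is active)."""
    state, out = True, []
    for frame in range(frames):
        out.append(state)
        if (frame + 1) % divide_by == 0:
            state = not state
    return out

states = blink_states(24)
# Eight visible frames, eight blanked, eight visible again.
```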
FIG. 10 illustrates the production of the signal EDGE. The input signals to the gates of FIG. 10, that is, R0 and R1, define the third row of any character box in the page image mode. The signals DR0 through DR6 define character lines, the signals Hn or Hnx (where n is an integer) are column signals defining character columns, and the signals FC0 and FC1 define dot positions within a column. The logic in FIG. 10 thus provides an output through the inverter 186 concurrent with the sweep writing the first line, last line, first column and last column. The concatenation of these signals is the signal EDGE, defining the boundaries of the document. In a manner to be explained, documents of other sizes can be provided on the display screen.
We can now explain how our display in page image mode can illustrate 85 characters per data row with 64 data rows per screen using a CRT chip capable of writing either 132 characters per data row with 32 data rows per screen or 96 characters per data row with 64 data rows per screen. We set up the CRT chip to write 132 characters per data row and provide a simulated least significant row address bit, expanding row addressing from 32 to 64 rows. Our refresh RAM had a 1024 byte capacity which, since we used 1 bit per character, allowed for 8192 characters, more than enough for our 64 rows of 85 characters each.
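The capacity claim checks out arithmetically, as the following worked calculation shows (the variable names are ours):

```python
# Page image refresh budget: the CRT chip is set up for 132 characters
# per data row; a simulated low-order row address bit doubles the row
# addressing from 32 to 64 rows.
ram_bytes = 1024
bits_per_char = 1                      # page image stores 1 bit per character cell
capacity = ram_bytes * 8 // bits_per_char
needed = 64 * 85                       # rows x characters actually displayed
assert capacity == 8192
assert needed == 5440 and capacity > needed
```

At one bit per character cell, 1024 bytes hold 8192 cells, against the 5440 cells a 64-by-85 screen requires.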
FIGS. 11A through 11G illustrate respectively, several different page image and text mode displays of a single document and FIG. 12 represents a map of document memory which will produce the displays shown in FIGS. 11A through 11G.
FIGS. 11A-11D illustrate four different page image mode displays of the same document; the only difference among the four displays is that the cursor is located in a different area in each. Because the number of lines of text in the page being displayed in FIGS. 11A-11D exceeds the text mode display capacity, the document shown in FIGS. 11A-11D cannot be illustrated on a single text mode screen. Rather, three different screens are required. FIG. 11E shows the first screen, constituting roughly the first one-third of the document; FIG. 11F shows the second screen, constituting essentially the middle third of the document; and FIG. 11G shows the last or third screen, constituting essentially the last third of the document.
A comparison of FIGS. 11A-11D with FIGS. 11E-11G illustrates effectively that the text making up the document is not readable in the page image mode displays, whereas it is readable in the text mode displays; on the other hand, the document's format, which is explicitly illustrated in FIGS. 11A-11D, is not directly revealed in FIGS. 11E-11G.
Referring now to FIG. 11E, and reviewing the screen vertically from the top, the uppermost feature of the screen is "the message line"; this is an area of the screen which is set aside for operator prompts and the like, and it is effectively used, for example, to indicate to the operator what mode the machine is in (for example, insert mode, etc.). The message line is demarcated from the document text by the dotted line. While the message line is generic to all page image displays, the remainder of FIGS. 11E-11G is specific to an exemplary document display. Immediately below the dotted line separating the message line from the text is text relating to the addressee; brief reference to FIG. 11A allows the reader to correlate the location of the addressee data in the page image mode display. Immediately below the addressee text, and contained within a pair of horizontally directed dotted lines and a pair of vertically directed lines, is addressor data, which is shown in text mode vertically below the addressee data. Reference to the page image display shows that the addressor data is actually a text block, and as formatted will appear on the same lines of the document as does the addressee data but displaced to the right. Thus, it should be apparent that the horizontal dashed lines set off the block from the main text; and as will be discussed, the vertically directed solid lines indicate temporary margins. The small rightward directed arrows adjacent each of the vertically directed solid lines indicate that the temporary margins are to the right of the global margins.
Immediately below the dashed horizontal line demarcating the block from the main text is a continuation of the main text of the document. Note that the tab preceding the "W" (merely shown as a rightward directed arrow) is actually reflected in the page image mode display as a plural space indent; as has been described above, the text mode display does not illustrate, except symbolically, the actual extent of a tab.
Referring now to FIGS. 11E and 11F, note that the last two lines of text in FIG. 11E form the first two lines of text in FIG. 11F. This is an example of the overlap between two adjacent text mode screens, which as will be demonstrated below, is a function of sentence/paragraph boundaries.
The fourth through the eighth lines of text in FIG. 11F (braced by a pair of vertically directed solid lines) begin with a pair of graphic symbols indicating temporary margins; the relation between the temporary and global margins is indicated by the leftward directed arrows. The absence of horizontally directed dashed lines indicates that while this paragraph is subject to temporary margins, it is a component of the main text and does not comprise a block.
The last two lines of text on the screen shown in FIG. 11F are preceded by another graphic indicating a tab. Reference again to any of FIGS. 11A-11D indicates the extent of the tab, a parameter which is not explicitly illustrated in the text mode display. Finally, reference to FIG. 11G clearly indicates the concluding lines of text in the document; note also that the first line of text in FIG. 11G is the last line of text in FIG. 11F.
In the text mode display of FIG. 11G, the plural rightward directed arrows indicate several tab functions, concluding with a required tab (shown as an underscored tab graphic); reference to any of FIGS. 11A through 11D indicates the actual effect of these tab functions.
Returning now to FIGS. 11A-11D, this page mode display is atypical in that the Figures illustrate the page mode display including format assisting graphics, which are only displayed as an operator actuated option. The format assisting graphics include the horizontally moving ghost cursor, the vertically moving ghost cursor, an indication of the first writing line, an indication of the left and right margins which are applicable to the text associated with the cursor location, as well as an indication of the effective tabs which are also associated with text in the vicinity of the cursor location. Note, for example, that the cursor location is different in FIGS. 11A and 11B, and so is the indication of the left and right margins. Note also the difference between the effective tab positions. Inasmuch as the cursor in the display of FIG. 11B is located within the block, the message "BL" appears outside the document area in the display; this message appears so long as the cursor is located within this or any other block. Reference to FIG. 11C illustrates that the cursor is located within text subject to margins different from the document margins, and this can be confirmed by comparing the margin indicators in FIGS. 11A and 11C. Finally, the cursor in FIG. 11D is located in still another portion of text, associated with a tab grid which is not the same as in either FIG. 11A or FIG. 11C, although the right and left margin indicators are identical in FIGS. 11A and 11D.
FIG. 12 is a representation of a portion of document memory, and the information contained therein, which is capable of generating the displays of FIGS. 11A-11G. In FIG. 12, text information (or codes) is indicated by the character X; other functions are illustrated by the use of various combinations of alphabetic symbols, and in some cases these alphabetic symbols are separated by semicolons. It should be emphasized that the semicolons in the drawing are solely for purposes of illustration and have no counterpart in document memory. The codes TL, TR and TT are multi-byte codes indicating horizontal location as well as the particular function. The tab code (TT) we used is 13 bytes long, including a bit for each character position, which bit is set or reset to indicate the presence or absence of a tab stop. Further, the horizontal lines are also used for purposes of explanation and do not have any counterpart in document memory. Document memory is, as has been mentioned before, one contiguous memory; FIG. 12 is broken into several parallel lines solely for illustrative convenience, and it would be more appropriate to illustrate the document memory as a single unbroken line or storage array.
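The 13-byte TT tab code described above amounts to a bitmap of 104 character positions. The following sketch shows one plausible encoding; the bit ordering within each byte is our assumption, not specified by the text:

```python
TT_BYTES = 13  # one bit per character position: 13 * 8 = 104 positions

def set_tab(grid: bytearray, column: int) -> None:
    """Set the bit for a tab stop at `column` in a 13-byte TT code.
    (Least-significant-bit-first ordering within each byte is assumed.)"""
    grid[column // 8] |= 1 << (column % 8)

def has_tab(grid: bytearray, column: int) -> bool:
    """Test whether a tab stop is set at `column`."""
    return bool(grid[column // 8] & (1 << (column % 8)))

grid = bytearray(TT_BYTES)
for col in (5, 10, 40):
    set_tab(grid, col)
```

Setting or clearing a temporary tab grid then costs only 13 bytes in document memory, one bit per possible stop.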
Referring now to the upper left-hand portion of FIG. 12, the first line of text display is indicated as comprising a plurality of character codes concluded by a carrier return CR. Likewise, the second and third display lines are similar and each concludes with a CR. As will be seen in discussing other portions of text, the line ending CR is only used when the operator intends to prematurely terminate the line (relative to display width); otherwise no line ending at all need be keyed by the operator. Immediately following the third CR are a number of block start bytes: a block insert begin code, two codes TL, TR providing for temporary left and right margins, and finally a temporary tab TT code. Immediately following the block start bytes are three lines of text for the block, each terminated by CR. Concluding the block is an end of block byte (BLIE). The actual TT codes are not shown in FIGS. 11E-11G since those codes are not displayed in this embodiment. If desired, of course, they could be displayed by changes which should be apparent. Following the block codes are a series of codes representing the salutation, terminated with CR. This is followed by a further CR byte to provide the line space between the salutation and the remaining text; see FIG. 11A, for example. Note that this line space is not evident on the text mode display. A code indicating an indent tab follows, which is represented in the text mode display by the rightwardly directed arrow graphic. Following that is a sequence of codes for a plurality of characters forming the next three lines of text; this sequence terminates with a CR, since the operator intends the line to terminate early, as is illustrated in any of FIGS. 11A-11D. Following this code is a byte for another line space. Thereafter, codes are inserted for the temporary left and right margins.
Following the temporary margin codes is another sequence of codes corresponding to the text in the intermediate paragraph of the document; this sequence concludes with a required carrier return (RCR). This is used rather than the normal carrier return so that the effect of the temporary left and right margins is terminated in the following portions of the text. This code is followed by a further byte for a line space. The last paragraph of text is initiated with several bytes for temporary tabs. After the tabs are established another text sequence follows, concluded by another required carrier return; this is used to cancel the temporary tab register set up immediately before this text sequence. After another CR code for a line space, the signature line is provided. This includes a tab sequence concluding with a required tab. Following that are several text characters concluded with a CR code. Because of the required tab, the line below the signature line begins at the same place as the signature line, and this location in text is concluded by a further required carrier return, to cancel the indent tab. Thereafter, the author/typist initials appear followed by a CR. The last byte in the document memory is EOD, indicating end of document.
The manner in which operator input is stored in document memory to achieve a result illustrated for example, such as that shown in FIG. 12, and the manner in which a document memory containing a sequence such as that as shown in FIG. 12 results in the display shown in FIGS. 11A-11G will be explained in the following portions of the specification including a series of examples and description of associated flow charts.
Before discussing the functions of the processor 20 in response to operator keyed inputs, in order to explain how the functions alluded to hereinbefore are actually achieved, it is worthwhile to outline the functions performed by the processor. Thus, FIG. 13 is a simplified flow diagram of the processing, and FIG. 14 is a memory map of the processor's internal RAM.
As shown in FIG. 13, the processor 20 goes through a sequence of steps in order to respond to operator key actuation to provide the operator with the appropriate display. The processor 20 is normally in a wait condition, awaiting detection of a key operation. When processor 20 recognizes a key actuation, it determines at function 200 if a delete key has been actuated (depressed/released). If it has been, the processor merely sets/resets a delete condition (a flag) at function 201 and returns to the wait loop to await further key actuations.
On the other hand, if the delete key was not the actuated key, then the processor determines at function 202 if a coded function has been actuated. If it has been, function 203 processes this function, and may call other routines if necessary. The particular processing depends, of course, on the function itself. Following the processing, the processor 20 returns to the wait loop for additional key actuations.
On the other hand, if function 202 did not detect actuation of a coded function key, then function 204 determines whether or not the machine is in the page image mode. If it is, function 205 determines whether or not it is also in the delete condition by checking the flag set back at function 201. If it is, then function 213 processes those delete functions which are allowed, and that concludes the processing in response to the particular key actuation.
Functions 206 and 215 relate to actuation of outboard keys, i.e., keys in the sets 151 or 152 (see FIG. 2), and of course, the particular processing depends upon the particular actuated key.
Function 208 operates on function key actuation, and in that event function 209 provides for insertion of the function at the appropriate location in document memory and perhaps in the display itself, and the processor returns to the wait loop. The location in which the function is inserted depends on cursor location, and perhaps on other factors which will be explained hereinafter. If, in the page image mode, the key actuation was neither an outboard key nor a function, then it must be a character. As has been explained, the page image mode does not display unique characters and therefore it is not desirable to use this mode for text entry. Accordingly, in that event function 210 switches the display to the text mode, either for addition of text, if the cursor is located so as to continue typing into the document, or in the insert mode. When the condition stated above is true, function 211 will allow the character to be left in the keyboard queue (if untrue, the character will not be placed in the queue), and thereafter the processor returns to the wait loop.
The processor 20 also supports strikeover typing where the operator keys in text over existing text (not in an insert mode). This action replaces the existing text, character for character, with the keyed in text.
If back at function 204 it was determined that the machine was not in the page image mode, then function 212 (similar to function 205) is performed. If not in the delete condition, function 214 determines if the key entered was a character key. If so, function 217 stores the code corresponding to the character in the document memory at the appropriate location (of the cursor) and the machine returns to the wait loop. If not a character key, function 215 determines if it is an outboard key; if it is, function 218 is performed (similar to function 207). If neither an outboard key nor a character, the key must be a function, which is then stored by function 216 at the appropriate location.
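The FIG. 13 decision flow can be summarized as a dispatch routine. This is only a sketch of the branching structure; the key-kind labels and action strings are ours, with the patent's flow-chart box numbers noted in each label:

```python
from dataclasses import dataclass

@dataclass
class State:
    page_image_mode: bool = False
    delete_flag: bool = False

def dispatch(kind, state):
    """Return a label naming the FIG. 13 action taken for a key of the
    given kind; the numbers in the labels refer to the flow-chart boxes."""
    if kind == "delete":
        state.delete_flag = not state.delete_flag
        return "toggle delete flag (201)"
    if kind == "coded_function":
        return "process coded function (203)"
    if state.page_image_mode:
        if state.delete_flag:
            return "process allowed deletes (213)"
        if kind == "outboard":
            return "process outboard key (206/207)"
        if kind == "function":
            return "insert function (209)"
        # a character: page image cannot display it, so switch modes first
        return "switch to text mode, maybe queue character (210/211)"
    if state.delete_flag:
        return "process deletes"
    if kind == "character":
        return "store character (217)"
    if kind == "outboard":
        return "process outboard key (218)"
    return "store function (216)"
```

Note how a character key is the only input that forces a mode switch: in page image mode it falls through every other test and lands on functions 210/211.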
Not shown in FIG. 13, of course, is the particular processing carried out in response to actuation of different keys, nor how the display is controlled in either text or page image mode.
Prior to discussing particulars of the programming of processor 20, reference is made to FIG. 14, which indicates in a general fashion the manner in which the internal RAM of the processor 20 is employed. Omitted from the memory map of FIG. 14 is the memory area allocated to program storage itself. Although in an embodiment of the invention which has been constructed that program was present in the internal RAM, those skilled in the art will understand that this is not essential to the invention, since the program could also be stored in ROM. Initially, as the operator actuates various keys, coded signals corresponding to actuated keys are progressively stored in the keyboard queue 224. The processor 20 extracts codes from the queue, usually in the order in which they are entered, and generally stores the related coded signals in the document memory 222. During text entry, the machine is in the text display mode, and to effect a text display the processor 20 generates a text table (or table A), which is located in portion 221 of the RAM. In conjunction with generating the text table 221 and document memory 222, the processor 20 writes into a text RAM buffer 25 to effect a display in connection with the apparatus shown in FIG. 4. In the course of this processing, the processor 20 also keeps track of various parameters in a number of pointers, counters, etc., which are generally included in the RAM area 223. At any time in the course of text entry, or thereafter, the operator may select the page image mode. In page image mode the processor must format the entire document up to the page selected for display in order to determine how to format that page. Once this is accomplished the processor 20 reviews the contents of the document memory 222 and creates a page table (or table B), which is stored in portion 220 of the RAM. From the data stored in document memory 222 the processor 20 then writes into a page image RAM buffer 26 to effect the appropriate display.
In a fashion entirely similar to the text display processing, the page image display processing also relies on a variety of pointers and counters from the RAM portion 223.
The writing of page boundaries for a single page size is provided for in the logic of FIG. 10. Other page boundaries may, in the alternative, be displayed with the use of the OR memory 227. This is a 1K byte portion of RAM into which page boundaries of a selected size are written by the processor 20 on appropriate command. When writing page image RAM 26, the processor ORs each byte, as it is developed, with the corresponding byte of the OR memory 227. If desired, the logic of FIG. 10 could be eliminated and all page boundaries could be implemented via the OR memory 227. The particular page size written into OR memory 227, and from there to the display, is determined by software (or firmware), examples of which will be described. In the alternative, the EDGE developing hardware could be modified to make it more flexible so it could generate a selected page outline from a repertoire of outlines.
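The byte-by-byte OR with the overlay memory can be shown in miniature (a sketch; the function name is ours):

```python
def write_with_boundary(page_ram, or_memory):
    """As each byte of the page image is developed it is OR'ed with the
    corresponding byte of the OR memory, superimposing the page outline
    on the displayed text."""
    return bytes(p | o for p, o in zip(page_ram, or_memory))

# Three bytes of text data merged with a three-byte border overlay:
text   = bytes([0b00000000, 0b00011000, 0b00000000])
border = bytes([0b11111111, 0b10000001, 0b11111111])
merged = write_with_boundary(text, border)
```

Because the merge is a pure OR, the outline never erases text pixels; it can only add boundary pixels on top of them.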
In general the processing effected to write either the text or page image display has a similar purpose. The processor examines document memory in the vicinity of the cursor and, based on a set of fixed rules (different for page image and text), displays sequentially the characters found and arranges the display (line endings, etc.) based on a combination of mode and codes encountered. Since the text mode display is essentially unformatted, display of any portion of document memory can begin at any point. To the contrary, the page image display, since it is formatted, depends heavily on preceding text format to determine the location of the first character on a page as well as the margins in effect for the beginning of the page. If the document had previously been formatted (and if the format is not altered) the processor saves significant signposts to allow rapid writing of the page RAM 26. This information forms the page queue. If the page queue is not available or no longer accurate, it must be recreated before display can begin in page image mode, for at least the portion of the document which precedes the displayed segment. The software which controls processor 20, to effect the functions briefly described above and to be described in more detail hereinafter, can be divided into six segments. An interrupt processor (shown in FIGS. 15A-15C), the purpose of which is to respond to changes in the keyboard state, that is, depression and/or release of the various keys, and from this to build a keyboard queue for use by other routines. The main system program (shown in FIGS. 16.1-16.18), the function of which is firstly to respond to the keyboard queue and from that information build the document memory, and to call other routines as necessary to respond to the information passed or derived from the keyboard queue. A sentence processor, or text mode associated set of routines (illustrated in FIGS.
17.1 through 17.12) which, when called and based on information in the document memory, builds the text table referred to hereinbefore, and in conjunction with the text table and the document memory writes into the text mode refresh RAM 25. A set of cursor routines (illustrated in FIGS. 18.1 through 18.11) which manage cursor position in both the text and page image display modes. A set of FAX or page image routines (illustrated in FIGS. 19.1 through 19.31) which, when called and in response to the document memory, both build the page table and page queue referred to hereinbefore and write to the page image refresh RAM 26. And finally, a series of graphic routines (illustrated in FIGS. 20.1 through 20.12) which are used by the page image routines for the purpose of drawing page outlines other than the standard 8-1/2 × 11 page outline which is, in effect, "drawn" by the hardware referred to previously. The graphic routines also put up the tab grid, temporary margins, ghost cursors and active margin indicators, as well as the first and last writing line indicators.
The main function provided by the interrupt handler is to translate codes derived from the keyboard 10 into internal system codes, and to store the codes, in the order of receipt, in a keyboard queue. The main system control program initializes various portions of the system hardware on power up, and thereafter is in a wait mode which is terminated when one or more signals are detected in the keyboard queue. The main system control program examines the contents of the keyboard queue and, in response to the particular contents of that queue, either processes it itself, calls other routines for processing, or both. These other routines can be divided into those routines directed to the page image display mode and those directed to the text display mode. Both of these groupings of routines respond to the various inputs to alter the associated display, if necessary, in accordance with the input as well as in accordance with previously input text, functions, etc. Both of these groupings of routines operate with the document memory and various page parameters such as left and right margins, number of lines on the page, etc. Both the text routines and page image routines, in the course of writing the associated text RAM or page image RAM, also construct an associated table to speed up the processing, especially cursor control. Each of these tables is line organized, that is, there is an entry for each line on the display. A first table (table A), which is related to text mode operations, includes 5 bytes per (display) line for the entire document, the first byte comprising 6 bits indicating a PAD distance (that is, the distance from the left edge of the screen to the first displayable character in the line) and two control bits, one indicating the presence of a carrier return (or required carrier return) in the line and the other indicating the presence of a sentence ending in the line.
The second and third bytes for each line point to the location, in document memory, of the first character on the line. The fourth byte indicates a quantity for an indent tab, if such exists on the line, as well as bits indicating the presence of active temporary left or temporary right margins. Finally, the fifth byte is reserved for an indent amount on the line for any block in that line. The other table (table B), related to the page image display, includes six bytes per line of the portion of the document on the display. The first byte includes seven bits for the quantity PAD, and a single bit indicating a format change on that line. The second byte indicates the length of the line segment in character spaces. The third and fourth bytes point to the location, in document memory, of the first character for the line. The fifth and sixth bytes of an entry in table B contain the address of another entry in table B which describes an additional segment on the line, if any.
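The two table layouts described above can be restated as record definitions. These are illustrative unpacked forms of the packed 5- and 6-byte entries; the field names are ours:

```python
from dataclasses import dataclass

@dataclass
class TextTableEntry:            # table A: 5 bytes per display line
    pad: int                     # byte 1, 6 bits: distance to first displayable character
    has_carrier_return: bool     # byte 1, control bit: CR or RCR in the line
    has_sentence_ending: bool    # byte 1, control bit: a sentence ends in the line
    first_char_addr: int         # bytes 2-3: document-memory address of first character
    indent_tab: int              # byte 4: indent tab quantity, if any
    temp_left_margin: bool       # byte 4: active temporary left margin
    temp_right_margin: bool      # byte 4: active temporary right margin
    block_indent: int            # byte 5: indent amount for any block on the line

@dataclass
class PageTableEntry:            # table B: 6 bytes per displayed line
    pad: int                     # byte 1, 7 bits
    format_change: bool          # byte 1, 1 bit: format change on this line
    segment_length: int          # byte 2: segment length in character spaces
    first_char_addr: int         # bytes 3-4: document-memory address of first character
    next_segment_addr: int       # bytes 5-6: table B address of a further segment, or 0
```

The `next_segment_addr` field is what lets a single display line in page image mode carry several segments (main text plus a displaced block, for instance) by chaining table B entries.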
FIGS. 15A-15C constitute a flow diagram of the interrupt handler.
As shown in FIG. 15A, on detection of an interrupt, the processor 20 determines the source of the interrupt; printer interrupts or recorder interrupts are appropriately handled by routines not disclosed herein. If the processor determines that the interrupt is neither printer, recorder, nor keyboard originated, it is considered an extraneous interrupt and is merely reset. On the other hand, when a keyboard originated interrupt is detected, processing continues at decision blocks 244-246. The first of these determines if a keyboard inhibit bit (flag) is on, which is set in the course of servicing a printer or recorder interrupt. If that is not the case, and if this is considered a normal keyboard interrupt, then the processing skips to function 247 (FIG. 15B). A normal keyboard interrupt is one whose source lies in a key depression or release. In addition, certain repeat key capabilities are provided when a key remains depressed; this function is implemented by a timer on the keyboard interface, when enabled. Thus, decision block 246 determines whether or not the timer has been enabled due to depression of a particular one of a selected set of the keys on the keyboard 10.
Function 247 determines whether or not the key is a make/break entry, meaning that a different code is generated on making as opposed to breaking. In either event (functions 248 or 253), the keyboard interrupt code is translated to an internal code set. If the particular key made is not a make/break entry, decision block 249 determines whether or not it is a typamatic input. If it is such an input, decision block 256 determines whether or not the keyboard queue is empty. If it is not, function 257 sets a flag to terminate page image processing early. Note, however, that the code is not stored in the keyboard queue; this is to prevent storing a series of similar codes in the queue. If, on the other hand, the queue is empty, then decision block 258 determines if this particular character is an allowed typamatic (if it is not, no further processing is performed). If it is allowed, decision block 250 checks to see whether the code key is depressed, and if it has been, function 259 sets the code bit. Regardless of prior processing, function 251 determines if this is a control entry, that is, a space, carrier return, tab, required tab or required carrier return. For any of these functions it is necessary to terminate the page image processing early, and therefore function 260 sets a flag to this effect. In any event, function 252 stores the translated code in the keyboard queue for processing by the main system program.
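The typamatic suppression rule (functions 256/257) can be captured in a small queue model. This is a sketch; the class and method names are ours:

```python
from collections import deque

class KeyboardQueue:
    """Sketch of the typamatic rule above: a repeating (typamatic) code
    is enqueued only when the queue is empty, so a held key never piles
    up a run of identical codes; a normal code is always enqueued."""
    def __init__(self):
        self.q = deque()
        self.end_page_image_early = False   # flag set by function 257

    def push(self, code, typamatic=False):
        if typamatic and self.q:
            # queue not empty: discard the repeat, but flag early
            # termination of page image processing
            self.end_page_image_early = True
            return False
        self.q.append(code)
        return True

kq = KeyboardQueue()
kq.push("A")
kq.push("CURSOR_RIGHT", typamatic=True)   # dropped: queue was not empty
```

Dropping the repeat rather than queuing it keeps a held cursor key from flooding the queue faster than the main program can drain it.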
Also illustrated in FIG. 15B is the processing performed in the event that the key depression is a make/break entry, subsequent to translation of the code to an internal code set and storage in the keyboard queue. Decision block 254 determines if the key depressed is the cursor key; if it is, then function 255 enables the typamatic interrupt, that is, enables the referred-to timer, and that concludes this processing.
Returning now to FIG. 15A, if decision blocks 245 and 246 responded to a make/break typamatic interrupt, then the processing performed is shown in FIG. 15C. If decision block 261 indicates that the code key is depressed, then processing is terminated, i.e., the interrupt is ignored. If the review key has been depressed, as determined at decision block 262, then function 264 merely disallows or clears the referred-to typamatic or timer interrupt. If the review key has not been depressed, function 263 resets the typamatic interrupt bit and stores the previously entered cursor code on the keyboard queue if the queue was empty; otherwise the code is not stored. Reviewing FIGS. 15A to 15C, it will be seen that a keyboard queue is built in response to sequential key actuations of the keyboard 10. Under certain conditions flags will be set to terminate page image processing early; the effect of this will be seen as this description proceeds.
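The queueing policy of FIGS. 15A-15C — store each translated code, but suppress repeats of a held (typamatic) key when the queue is not empty, and flag control entries so page image processing can be cut short — can be sketched as follows. This is an illustrative Python model; the names (KeyboardState, CONTROL_CODES) are not from the patent.

```python
# Entries that require page image processing to terminate early
# (space, carrier return, tab, required tab, required carrier return).
CONTROL_CODES = {"SPACE", "CR", "TAB", "RTAB", "RCR"}

class KeyboardState:
    def __init__(self):
        self.queue = []
        self.terminate_page_image_early = False

    def typamatic_interrupt(self, code):
        # A repeating (typamatic) code is queued only when the queue is
        # empty, so a held key cannot flood the queue with duplicate codes;
        # a non-empty queue only sets the early-termination flag.
        if self.queue:
            self.terminate_page_image_early = True
            return
        self.queue.append(code)

    def normal_interrupt(self, code):
        # A normal key make/break: control entries additionally flag
        # early termination of page image processing.
        if code in CONTROL_CODES:
            self.terminate_page_image_early = True
        self.queue.append(code)
```

The main system program would drain `queue` in order, exactly as function 270 of FIG. 16.1 examines the first position of the keyboard queue.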
Referring now to FIGS. 16.1 through 16.18, and especially FIG. 16.1, functions 265 through 269 are performed only on power on or reset. As illustrated, these functions provide for initializing the apparatus, including appropriate internal status pointers, document memory, the text and page image refresh RAMs and the "OR" memory, as well as setting up the CRT controller parameters for the page image mode. Function 269, to complete the initialization operation, calls the routine KBFAXSET, which itself calls KBPAGEIM (the latter shown in FIG. 16.12). Set-up of the CRT controller merely refers to passing certain parameters to that chip so that it will appropriately coact with the other devices to refresh from the page image RAM 26.
Once the processor returns from the routines called at function 269, function 270 is performed to check the keyboard queue; the processor remains at function 270 until the keyboard queue has one or more codes therein for processing. While the queue is empty, FAX messages, if any, are dequeued and displayed.
In order to describe the manner in which the routines illustrated in FIGS. 15 through 21 operate in the text display mode, those functions will be described, including writing to the text refresh RAM and, in that regard, effecting cursor movement in response to actuation of the various cursor movement keys, and providing for word underscore, delete word underscore, character underscore and delete character underscore, as well as scrolling.
Before beginning the discussion, some terms will be defined to assist the reader in understanding the description. Firstly, the processor 20, in the text mode, supports a number of registers, the majority of which are internal to the processor RAM. The one external register is STATUS (corresponding to the status latch 50 (FIG. 4)). The internal registers shown in the table below maintain the quantities identified opposite their names in the table.
______________________________________
Register    Function
______________________________________
ITABPOS     Indent tab position
INDEXPAD    Quantity or amount to pad for index found
            on previous line
SENTPTR     Pointer to next unfilled location in text
            refresh RAM
WORDLEN     Counts length of word during a look ahead
            to see if the word will fit on the
            current line
WORDCTR     Counts the length of a word as the word
            is displayed
CHARCTR     Counts the number of characters that have
            been displayed on the current line
DOCPTR      Pointer to the address in document memory
            of the next character that has not yet
            been displayed
TABAPTR     Pointer to the text table (table A)
LPAD        Amount to pad with spaces from the left
            edge of the display to first displayable
            character
LINELEN     The maximum length of a line on the
            display
______________________________________
For this description we will assume that the machine is in text display mode and that the operator is keying in characters or functions. In initially keying in a document the operator actuates key 123 (T) in concert with key 137 (code), resulting in a "coded T". This is recognized by KBINTROT and stored (function 252, FIG. 15B) in the keyboard queue. The main system recognizes the entry (function 634, FIG. 16.2), sets DOCPTR (function 635) and, if in text mode, calls KBSENTEN (function 638, FIG. 16.3). The latter is illustrated beginning at FIG. 17.1. There, functions 638-641 initialize certain (software) registers. If there is no text in memory, this concludes the processing; in that case there is no need to use the coded "T".
Thereafter, significant processing awaits insertion of further data representing the desired document. Assuming the operator keys in information for a new document, function 270 (FIG. 16.1) will recognize that the keyboard queue is not empty, the processor will examine the contents of the first position of the keyboard queue, and, in our example, function 271 will recognize a character or required space. Function 272 will store that code in the document memory. Function 273 then calls KBSNMDRT (the routine that calls the sentence modifier routine (see FIG. 16.17)). As shown, function 274 calls KBSENMOD (see SENTMOD in FIG. 17.6). As shown, functions 277 and 278 obtain the text table (table A) pointer and the address of the cursor, i.e., the location in document memory relating the cursor to text on the display. Function 279 sets a bit in STATUS to indicate that sentence modify processing is taking place, and function 281 sets LPAD and LINELEN. The parameters which are used, namely a PAD amount of 5 character spaces and a line length of 52 characters, are parameters which have been employed in an embodiment of the invention which has actually been constructed. As pointed out previously, these may be varied well within available technology and components; in particular, the line length may be increased, for example, up to 64. Functions 285-287 determine whether or not the particular line has an active temporary left margin, right margin or block insert, and this is determined by reference to the text table. Thereafter, function 280 sets INDEXPAD if there is a preceding index function. Function 282 gets ITABPOS, function 283 obtains the value of DOCPTR and function 284 computes the address in the text table at which point processing will terminate. Since the text table is in a rigid format, this is easily determined from TABAPTR. The routine thereafter skips to point A (FIG. 17.1).
In keying the first character of a document the processing is relatively simple, requiring only writing the appropriate text table entries and inserting one byte to RAM 25 to display the character. As more and more characters are entered into the keyboard queue (and from there to document memory) additional entries are made to the text table and to the RAM 25. The RAM is rewritten beginning with the line the cursor is on. To explain this processing we will, in the following description, assume that a quantity of text exists in document memory and the refresh RAM 25 is being written.
As shown in FIG. 17.1, the processing determines if the cursor is pointing to a block insert, BLIE, BLIB (BLIE and BLIB refer to internal codes designating block insert end and block insert beginning, respectively), or a temporary tab. Assuming it is not, function 292 moves the cursor off the screen. This merely requires setting the cursor location to some non-displayed location. The processing continues through functions 293 and 294 with no significant effect. At function 295, if INDEXPAD is not zero, function 296 is performed to load CHARCTR with INDEXPAD. The sum of CHARCTR plus LPAD is, as a consequence of performing function 297, stored as the PAD in the text table. On the other hand, if INDEXPAD is zero, then ITABPOS is transferred to CHARCTR (function 639). Function 298 writes into the indent tab byte of the text table. Thereafter, functions 299 through 303 are performed with no significant effect at this time. Function 304 sets a flag to indicate that no characters have yet been displayed, and function 305 sets a flag to indicate that no characters have been looked past on the current line. As will be seen, before anything is displayed, the processing determines if a particular word containing the first character on the line will fit on the line; the flags set by functions 304 and 305 are used to keep track of this look-ahead process. Function 306 determines if the process has gone past the end of the display, and this is determined by comparing the present position (the contents of CHARCTR) with LINELEN. Function 308 thereafter displays five required blanks (the required blank is indicated by the character b). On the other hand, if a TLM is active, then function 640 is performed to load b in positions 1-4 and 6 and "|" (a vertical bar) in position 5. Since functions 308 and 640 write to the refresh RAM 25, the result of performing function 640 on each line subject to a TLM is the vertically directed solid line, as seen in FIG. 11F, for example.
Function 311 calls a routine PAD and passes to it the quantity in CHARCTR. This routine is shown in FIG. 17.8, and it is used, in effect, to write the text refresh RAM. The writing occurs at function 314, and the register SENTPTR keeps in step with the writing so that it always indicates the next unfilled position. The processing then returns to point A7 (FIG. 17.2). Via function 316 DOCPTR is obtained. Functions 317 and 318 provide for zeroing INDEXPAD and WORDLEN, and for saving the contents of DOCPTR in a register TEMP. Thereafter, function 321 stores DOCPTR in the text table, providing a ready cross-reference in the text table to the document memory address of the first character on the line. Since we have assumed that the code which initiated this processing referred to a regular character, function 324 will branch to point D (FIG. 17.3). There an EOS (end of sentence) indicator is cleared (function 325). Functions 327 and 328 then increment TEMP and WORDLEN, which provide access to the next character in document memory and keep track of the length of the word that is being processed. Function 329 determines if the current word will fit on the line so far; this merely requires a comparison between WORDLEN plus CHARCTR and LINELEN. Assuming it will, the processing branches to A2 (FIG. 17.2) where, on this pass, function 320 determines that the first character has indeed been passed. Therefore, function 321 need not be performed. Similar processing continues until the end of the word is recognized, as, for example, at function 330. Assuming that this is a normal end of word, the program jumps to point E (FIG. 17.3) where function 331 determines if the current word will fit on the line. Assuming it will, the program jumps to point F (FIG. 17.4) where function 332 resets the register WORDCTR to zero.
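The PAD routine just described can be modeled very simply; in this hypothetical sketch a Python list stands in for the text refresh RAM 25 and the returned value plays the role of SENTPTR, which always indicates the next unfilled position.

```python
def pad(refresh_ram, sentptr, count, fill=" "):
    # Sketch of the PAD routine of FIG. 17.8: write `count` fill
    # characters into the text refresh RAM starting at SENTPTR, keeping
    # SENTPTR in step with the writing.
    for _ in range(count):
        refresh_ram[sentptr] = fill
        sentptr += 1
    return sentptr  # the next unfilled position
```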
Thereafter, function 333 compares the contents of WORDCTR with WORDLEN; since the latter indicates the length of this word whereas the former is zero, functions 334 et seq. will be performed. In effect, functions 333 through 339 form a loop which is traversed once for each character in the word, and depending on the character, the routines CDSB or DSB are called, the former to display (i.e., write to the RAM) a space or a character depending on cursor position, the latter to display (i.e., write to the RAM) a character. Each time the loop is traversed WORDCTR is incremented. The routines called are shown in FIG. 17.7. Depending on the entry points to these routines, function 348 (FIG. 17.7) may or may not be performed; it is only performed to indicate that the character in this location will be displayed only if the cursor is on it. Reference to FIG. 17.4 indicates that CDSB is called to display a required space or an underscored required space. As will be seen, carrier returns (CR) as well as required carrier returns (RCR) also call CDSB rather than DSB. In any event function 349 (FIG. 17.7) sets a flag to indicate at least one character has been displayed on the current line, and function 350 determines whether or not to set the cursor on this character, by comparing the character position, found in SENTPTR, with the cursor position. In the event the cursor is not to be set on the character, function 351 is performed to determine if the processor has gone past the end of the display. Assuming it has not, function 352 is performed and, if the code to be displayed is not a character (but rather a CR, RCR, etc.), then some or all of functions 353-355 are performed to determine what must be displayed. In any event, functions 358 through 360 are performed to display the character (that is, to write to the text refresh RAM), and to increment SENTPTR, CHARCTR and DOCPTR. The process then returns to the calling routine (FIG.
17.4) wherein the loop is again repeated until such time that function 333 determines that the entire word has been displayed. When that determination is made, function 344 determines if the previous character was not a space, for if it was not, then a special function must be processed. If the previous character was a space, then function 345 determines if the end of sentence was found. Assuming it was not, the program loops to point A4 (FIG. 17.2) wherein the next word is examined in a manner similar to that previously described.
At some point in this processing, function 331 (FIG. 17.3) will indicate that the current word will not fit on the line. In that event function 347 is performed to determine if processing has gone past the end of the display. Assuming it has not, functions 348 and 349 are performed; the first calls PAD (FIG. 17.8) to fill the rest of the line with spaces (in other words, since the word will not fit on the line, no portion of the word will be displayed there and any former contents are blanked out), and function 349 increments SENTPTR to point to the address of the location in the text refresh RAM corresponding to the first position on the next line of the display. Thereafter, function 350 is performed to determine if SENTPTR has gone past the end of the display. Assuming it has not, the program loops back to point A3 (FIG. 17.1) where the parameters LPAD and LINELEN are again reset (function 294) and additional characters are examined and displayed to fill out the new line. At some point in the processing, an end of sentence may be recognized. If function 361 (FIG. 17.2) determines that a space was preceded by a period, then some or all of functions 362-366 are performed. End of sentence is recognized as a period-space-space sequence. Function 366 sets in table A an indicator (to be used in the cursor routines) for end of sentence (EOS). Function 366 also increments WORDLEN by unity to account for the second space, and the next word is examined.
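The look-ahead word-fit test of functions 329 and 331 — a word is displayed only if WORDLEN plus CHARCTR does not exceed LINELEN; otherwise the rest of the line is padded with spaces and the word begins the next line — amounts to a greedy word wrap, which may be sketched as follows. The function name and the list-of-lines representation are illustrative only.

```python
def wrap_words(words, linelen):
    # Greedy word wrap in the manner of FIGS. 17.2-17.4: CHARCTR counts
    # characters already on the line, and a pending word (plus its
    # separating space) is placed there only if the total stays within
    # LINELEN.  An over-long single word simply overflows its own line.
    lines, current, charctr = [], [], 0
    for w in words:
        needed = len(w) + (1 if current else 0)  # account for the space
        if current and charctr + needed > linelen:
            lines.append(" ".join(current))      # pad/close the old line
            current, charctr = [w], len(w)       # word starts the new line
        else:
            current.append(w)
            charctr += needed
    if current:
        lines.append(" ".join(current))
    return lines
```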
During the display operation, if at any time function 344 (FIG. 17.4) determines that a word ending was not via a space, then a special function has been recognized, and the program loops to point G (FIG. 17.5) for that processing. The character's identity is determined, in a specified order, in one of functions 367 through 374. As shown in FIG. 17.5, when a carrier return is recognized (function 375), a bit is set to indicate a CR for appropriate writing in the text table (function 376), and CDSB is called to display a space unless the cursor is on it. Required carrier returns, in addition, terminate indent tabs and any temporary margins. After a return from CDSB entered via a carrier return, processing loops back to point E1 (at function 347, FIG. 17.3), indicating end of line.
On the other hand, tabs and required tabs both call DSB (functions 380-381, 383-384, FIG. 17.5) for display of the right directed arrow (see FIGS. 11E, F and G for examples). The corresponding functions 382 and 385 reset the status bit to indicate no characters have been displayed on the current line, and the return is to point A4 (function 318, FIG. 17.2). Recognition of an insert code results in a display (functions 378-379, FIG. 17.5) similar to that at the end of a current line. This leaves the remainder of the current line blank for the inserted characters. As will be seen, when insert is again encountered, any remaining space is deleted and the display is closed up. Indexes are recognized (function 372); to ensure that an index is effected while maintaining the current horizontal position, INDEXPAD is loaded with CHARCTR (function 386); via functions 387 and 388 CDSB is performed, and the return is as at the end of a current line.
End of document code is recognized (function 373), and function 389 appropriately writes the text table. Function 390 re-initializes LPAD and LINELEN so as to be ready for additional keying and display, and function 391 calls CDSB for display. Function 392, in the event the end of the display is not reached, calls BLEND (function 393), which blanks to the end of the display.
The routine BLEND is shown in FIG. 17.9. As shown there, functions 394 through 396 determine how far to blank on the current line and then call PAD (FIG. 17.8) to effect that purpose. On return, functions 397 through 399 form a loop, blanking an additional line on each iteration of the loop until function 398 determines the end of the display has been reached.
Accordingly, via the foregoing functions, the initial text entered by the operator results in building a keyboard queue, which is then used both to fill the document memory and to write the text table and the text refresh RAM.
The operator can, after keying in initial or "normal" text, key in text "blocks", a discussion of that however will be postponed until after a description of the page image mode. Prior to that discussion, however, reference will be made to the cursor routines to indicate how cursor motion is constrained during text mode display.
Assuming the operator has actuated one of the cursor keys, in the course of processing (FIG. 16.1) function 400 will identify a code, in the keyboard queue, corresponding to a cursor movement key, and the processing loops to point J (FIG. 16.8). At that point, the display mode is determined, and assuming that function 401 determines that the machine is in the text mode, function 402 examines whether or not the function input was cursor right. Depending on the particular cursor motion required, one of the routines KBSCRT (for cursor right), KBSCLT (for cursor left), KBSCUP (for cursor up) or KBSCDN (for cursor down) is called (assuming the processor is not in the insert mode, which is tested for at functions 403-405). These routines are shown in FIGS. 18.5, 18.8, 18.6 and 18.9, respectively. On the other hand, if in insert mode, different processing, shown at entry AE (FIG. 16.8) or at entry point AF (FIG. 16.10), is effected. Discussion of cursor processing in insert mode will be postponed.
Turning to FIG. 18.5, a text cursor right is executed as shown. Function 450 determines if the next displayed character from the document memory is on the next line of the display. Since the text table indicates the address in document memory of the first character on the next line of the display, function 450 need merely compare the address of the next displayable character with the corresponding entry in the text table for the next line of display. Assuming that the next displayed character is not on the next line, function 451 determines if the cursor is on an EOD code; if it is, the cursor cannot be moved forward, and in fact it is not moved at all. If that is not the case, the cursor is moved to the next displayed character by writing an appropriate address in the CRT chip at function 452. On the other hand, if the next displayed character is on the next line, function 453 determines if the cursor is presently on the last line of the display; if it is, the routine DOCROLL is called to scroll the display. Either at the conclusion of scrolling, or if no scrolling is required, function 454 moves the cursor to the first displayed character on the next display line by appropriately writing to the CRT chip. Thereafter, function 455 determines if the cursor was last on a character which is displayed only with the cursor thereon. If that is the case, then the function which has previously been displayed is replaced by a space, at function 456. Function 457 looks at the new character the cursor is on, and if it is one of those which must be displayed only if the cursor is on it, then function 458 replaces the space with the appropriate function by altering the contents of the text RAM at the cursor location. From the foregoing it should be apparent that the operator is not capable of moving the cursor, in text mode, to any location which does not presently contain a character (or space) which is actually being displayed.
More particularly, the operator cannot position the cursor on blanked areas of the display. Furthermore, specially defined functions, i.e., CR, RCR, INDEX or REQUIRED SPACE are normally displayed as a space, and will only carry an appropriate graphic if the cursor is located thereon. FIG. 18.8 illustrates the processing carried out for cursor left, which is to a similar effect.
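The constraint just described — the cursor may land only on positions that currently display a character or space, never on blanked areas — can be illustrated with a minimal sketch. The representation of the display as a list of displayed-character positions is an assumption for illustration, not the patent's data structure.

```python
def text_cursor_right(cursor_idx, displayed_positions):
    # Sketch of the constraint of FIG. 18.5: the cursor may only advance
    # to the next position that currently holds a displayed character or
    # space; if none follows (the cursor is on the EOD code), it stays put.
    later = [p for p in displayed_positions if p > cursor_idx]
    return min(later) if later else cursor_idx
```

Cursor left is the mirror image, selecting the greatest displayed position below the cursor.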
The processing carried out for cursor up is shown in FIG. 18.6. As shown there, function 413 determines if the cursor is on the first display line. If it is, function 414 determines if the cursor is at the start of document, for if it is, the cursor cannot be moved up or back at all, and this ends the process. On the other hand, function 416 determines if the cursor is on the first line of the display, and if it is, then the routine RVDOCROLL is called to scroll the document back so as to allow the cursor to move up. After the processing, or if that routine is not called, function 417 moves the cursor to the start of the prior line, i.e., the leftmost displayed position. Thus, not only does the cursor up move the cursor up one line if possible, it also moves the cursor to the start of the previous line. Thereafter, functions 418 to 421 determine whether or not the previously displayed character must be changed since the cursor has left it, or if the character newly associated with the cursor must be changed because the cursor is on it. Again, the processing illustrates how cursor movement is constrained. The processing for cursor down motion is shown in FIG. 18.9; in view of the preceding discussion, discussion of this processing is not believed necessary.
As seen, because of cursor movement, scrolling may be effected, either forward or reverse. The routine RVDOCROLL is shown in FIG. 18.7. As shown, functions 422 through 425 set a display line counter to zero, and then examine the first line of the display to determine the presence of CR, RCR, or EOS. If a line thus examined has one or more of these codes, then the loop is exited to function 427. In this fashion, the first line in the display having one of those codes is selected, and the display line counter, which has been incremented each time the loop was traversed (function 424), identifies the first line so found with that code. If, on the other hand, the screen did not contain any of these codes, then at the last line of the display, function 425 jumps to function 426 to reset the display line counter to zero. Accordingly, either following function 423 on identification of one of these codes, or following the completion of function 426, function 427 calculates the top line of a new display by reducing the present value of the display top line by 11 less the count in the display line counter. This provides for a "carry-over" in a reverse scroll to maintain sentence or paragraph boundaries. With this calculated top line value, function 428 calls SENTROLL. This routine is shown in FIG. 17.6. The routine SENTROLL starts with the new top line as calculated, to create a new display from the text table and document memory.
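One plausible reading of the top-line calculation of function 427, for a 12-line display (so that the "11" is the screen height less one), is sketched below. The function name and the screen-height parameter are assumptions made only for illustration.

```python
def reverse_scroll_top(old_top, boundary_line, screen_lines=12):
    # Assumed reading of function 427: the new top line is the old top
    # line reduced by (screen height - 1) minus the display-line counter,
    # so a sentence/paragraph boundary found `boundary_line` lines into
    # the old screen carries over into the new screen.
    return max(0, old_top - ((screen_lines - 1) - boundary_line))
```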
The forward document scroll, FDOCROLL, is similar to the processing shown in FIG. 18.7, with three exceptions: instead of examining the first line of the display, as does function 422, the last line of the display is examined; rather than looking at the next line of the display, at function 426 an earlier line is examined; and instead of reducing the old top line value to achieve the new display top line value, in a forward scroll that value is calculated by increasing the present displayed top line number by the appropriate amount.
In addition to simple cursor motion right, left, up or down, the cursor can be advanced one word forward or back (in text mode only) via keys 109, 110 (FIG. 2). When one of these keys is depressed an indicative code is inserted in the keyboard queue. It is examined by the main system program, via K (FIG. 16.1 to FIG. 16.5) where it is recognized at function 701. Processing skips, via O (to FIG. 16.11). Depending on whether key 109 or 110 (for forward or reverse motion, respectively) was depressed either function 702 or 703 will recognize the code. Accordingly, functions 704 or 705 will determine if the machine is in insert mode, if not, either KBSWCFW or KBSWCBK is called via functions 708 or 709. Discussion of cursor motion in insert mode will be temporarily postponed. The sub-routines are illustrated in FIGS. 18.10 and 18.11, respectively.
FIG. 18.10 shows the processing for moving the cursor forward one word. Functions 712, 715 and 718 form a loop which is traversed once per character until a word ending is located. Each time the loop is traversed, TEXT CUR FWD (TEXT and SEN refer to the text mode processing) is called (FIG. 18.5) to move the cursor forward one character. When a space is located, function 714 (FIG. 18.10) sets the function flag. On reaching the character or required space beyond the word ending, the flag is reset (function 717) and processing returns. Similar processing is carried out for word reverse (FIG. 18.11) except that TXT CUR RV is called (function 725) to back up the cursor. In either case, if EOD or start of document is encountered, processing terminates.
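The word-forward loop of FIG. 18.10 — advance one character at a time until the current word ends, then move past the delimiting space(s) to the first character of the next word — may be sketched as follows, with a plain string standing in for the displayed text.

```python
def word_forward(text, pos):
    # Sketch of KBSWCFW (FIG. 18.10): step forward character by character
    # until a space ends the current word, then continue past the space(s)
    # to the first character beyond the word ending.
    n = len(text)
    while pos < n and text[pos] != " ":
        pos += 1                      # still inside the current word
    while pos < n and text[pos] == " ":
        pos += 1                      # skip the delimiting space(s)
    return pos
```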
In this section we will discuss the functions underscore, delete underscore, word underscore and delete word underscore.
If, in the course of processing the keyboard queue, the processor determines at function 437 that an underscore code is detected, then function 438 is performed. At function 438 the document memory location corresponding to the present cursor position is examined; this location is found by referring to DOCPTR. If a character is located there, the underscore bit for that character (bit 7) is set; a similar storage will be effected in the text refresh RAM, as will be explained. On the other hand, if the document memory does not contain a character at this location, the operation is ignored. Thereafter, function 273 calls KBSNMDRT, which will call KBSENMOD to modify the display. On the other hand, if the operator had asserted keys 137 and 122 (see FIG. 2) simultaneously, indicating a desire to perform a word underscore, then in the course of processing, function 439 determines that a coded function has been read from the queue. Function 441 (FIG. 16.3) recognizes a coded I (the word underscore key 122), and therefore functions 442 and 443 call, firstly, KBWUSCOM (FIG. 16.13), passing a "set" flag.
To see this processing in detail we now refer to FIG. 16.13.
KBWUSCOM begins at function 444 to scan the document memory backward to find the beginning of the current word. This is determined by locating a space (or a function or a buried code) immediately preceding a character. Thereafter, function 445 scans the document memory forward until the end of the current word is found; this again requires locating a character followed by a space (or function or buried code). In the course of forward scanning through the document memory, from the beginning of the word, function 445 either sets or clears the underscore bit of each character. Since the program was entered, at function 443, passing a "set" flag, each underscore bit in the document memory is set via function 445. After function 445 the subroutine returns to FIG. 16.3. Function 443 is a call to KBSWCBK, the processing for which is shown in FIG. 18.11. The function of this subroutine is to move DOCPTR to the beginning of the word, so that when KBSENMOD is later run to re-write the display, the entire word becomes underscored. More particularly, with DOCPTR now pointing to the beginning of the word, following the call to KBSWCBK, function 273 (FIG. 16.1) calls KBSNMDRT (FIG. 16.17), which calls KBSENMOD (FIG. 17.6) to rewrite the text display from the cursor location forward. To ensure that the coded I (key 122) produces underscoring of an entire word, the processing first writes document memory (KBWUSCOM--FIG. 16.13), moves the cursor backward to the word initiation (KBSWCBK) and then rewrites the text display (KBSENMOD--FIG. 17.6).
Before describing operation of delete underscore and delete word underscore, we will examine how the processor enters and leaves the delete condition. When the operator asserts the delete key 106 (FIG. 2), the code in the queue is recognized at function 446. Thereafter, function 448 sets the delete condition on depression and resets the delete condition on lift; the delete condition is merely a flag maintained by the processor, so that the flag is set so long as the key is depressed, but as soon as the key is released, the flag is reset. Accordingly, delete is a mode or function modifier; by itself it has no function. Processing then skips to function 449 to look at the next entry in the queue.
Accordingly, when the next code is recognized, the processor at function 459 determines that the processor is in the delete condition (assuming the delete mode flag is set). Accordingly, the processing skips via point I to function 460 et seq. (see FIG. 16.7). In the course of this processing, function 462 will recognize that the entry is the delete hyphen or delete underscore and that the cursor is on a character. Accordingly, function 463 deletes the underscore bit of the character in the display refresh RAM (located at the cursor location) and in the document memory (located via DOCPTR). Following that, function 464 calls KBSCRT for a cursor right function. Thereafter, the processing returns to function 449 to read the next code in the queue.
If, rather than a simple delete underscore, the I (key 122) was actuated in a delete condition, then function 642 recognizes the code. Accordingly, processing similar to that described in connection with word underscore is effected, except that rather than "setting" the underscore bit in document memory, that bit is cleared by function 643. Processing continues at function 644, to call KBSWCBK and then KBSENMOD to rewrite the display.
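The combined effect of KBWUSCOM and KBSWCBK — scan back to the start of the current word, then sweep forward setting (for word underscore) or clearing (for delete word underscore) bit 7 of each character code — can be sketched as follows. Representing document memory as a list of character codes delimited by spaces is an assumption for illustration.

```python
UNDERSCORE_BIT = 0x80  # bit 7, as stated in the text

def word_underscore(doc, pos, set_flag=True):
    # Sketch of KBWUSCOM (FIG. 16.13): back up to the beginning of the
    # word containing `pos`, then scan forward to its end, setting (or
    # clearing, per the "set" flag) the underscore bit of each character.
    start = pos
    while start > 0 and doc[start - 1] != ord(" "):
        start -= 1                    # back up to the word beginning
    i = start
    while i < len(doc) and doc[i] != ord(" "):
        if set_flag:
            doc[i] |= UNDERSCORE_BIT
        else:
            doc[i] &= ~UNDERSCORE_BIT
        i += 1
    return start  # where KBSWCBK would leave DOCPTR for the display rewrite
```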
Most of the rest of the processing in FIG. 16 relates to FAX or page image modes (which will be discussed below) except for the insert mode operations.
If, in processing the keyboard queue the processor recognizes assertion of the insert key 104 (FIG. 2) function 465 directs the processing via point Q to FIG. 16.9A.
Functions 650 and 934 direct processing to R (see FIG. 16.9B). At this point function 643 recognizes an entry to the insert mode. Function 644 checks that the machine is in text mode, and if so, function 645 opens the document memory. By this function, all codes in document memory to the right of DOCPTR are moved down to the maximum extent possible, so that all available memory is directly adjacent DOCPTR. The processing then skips (AC) to function 273 (FIG. 16.1) to call KBSNMDRT (FIG. 16.17). There function 274 calls KBSENMOD (FIG. 17.6) to rewrite the display. This rewriting begins at the beginning of the display line in which the cursor is located. KBSENMOD rewrites the lines on the display; however, when the document memory was opened (function 645), an insert character was recorded in document memory. In the course of rewriting the display, when that character is encountered (function 323, FIG. 17.2), the processing skips to point C (FIG. 17.3). There function 646 recognizes the insert character; this results, at function 348, in blanking the remainder of the line. Thereafter the remainder of the text display is written. Accordingly, going to insert mode opens up both document memory and the portion of the display to the right of the cursor. However, the "opening" or "hole" is of different extent. In document memory the "hole" is as large as possible. In the refresh RAM the "hole" is the remainder of the line the cursor is on. As new text is now added, KBSENMOD adds the new text into the previously blanked area on the display by continually moving the insert character ahead of inserted text. When the inserted text completely fills the previously blanked space, the insert character is pushed down to the next line with similar results, i.e., the next display line is blanked for the provision of additional inserted text.
On the other hand, when the main program recognizes the insert character in the keyboard queue and the machine is in insert mode, function 647 (FIG. 16.9B) recognizes an exit from insert mode. Accordingly, function 648 closes the document memory and resets the insert bit (exiting insert mode), and function 649 directs the processing to call KBSNMDRT, which in turn calls KBSENMOD to rewrite the text display to, in effect, "remove" any existing blank space and close up the display. If, in insert mode, the operator operates either key 109 or 110 (move forward or reverse), then function 701 (FIG. 16.5) directs processing (via O) to FIG. 16.11. Since the machine is in insert mode, function 706 or 707 is performed to look for a next or preceding word initiator; when found, function 710 or 711 moves the "hole" in document memory to that location. Processing then skips, via AC, to function 273 to rewrite the display to illustrate the new insert location.
To explain the processing performed to write the page image display (refresh RAM 26), we will assume that the operator has keyed in some text and now wishes to examine the image the already keyed-in text will produce if printed. To effect this, the operator asserts key 108 (FIG. 2), the page image key.
When the code corresponding to that key is detected at function 465 (FIG. 16.1) the processing skips to Q (FIG. 16.9A) where function 650 recognizes the code and causes function 652 to be executed.
Before describing the page image processing in detail, some terms will need to be defined to assist the reader in understanding the description. Firstly, the processor 20, in the page image mode, supports a number of registers, the majority of which are internal to the processor. The internal registers shown in the following table maintain the quantities identified opposite their names in the table.
______________________________________
Register    Function
______________________________________
WORDCTR     Counts the length of a word during a look-ahead to see if the word will fit on the current line.
CHARCTR     A character counter giving the present position on the current line; this position or count does not include anything in PAD (left margin, temporary left margin or index amounts) but does include indent TAB amounts on the line in which the indent TAB occurs.
DOCPTR      Points to document memory.
FAXPTR      Points to the page image refresh RAM display memory.
INDEXPAD    Pad amount for index found on previous line.
TEMPLMAR    Current temporary left margin amount.
TEMPRMAR    Current temporary right margin amount.
DEFLMAR     Default left margin.
TABBPTR     Table B pointer.
______________________________________
Table B, as mentioned above, is the page image or FAX table created while running the page image program for cross-reference between cursor movement and document memory. The table provides 8 bytes for each line in the display of display memory: one byte for the PAD amount plus an indicator indicating whether or not temporary parameters are modified on the line; one byte for the length of the line in character counts; two bytes for the address in document memory of the first character on the line; two bytes for a link to the next segment (this may be to the next line or to a location in document memory for a block insert if such appears in that line); and finally two bytes per line that are presently unused.
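The 8-byte line entry just described can be pictured as a small packed record. The sketch below is a hypothetical encoding of one Table B entry following the stated layout; the choice of bit for the temporary-parameter indicator and the byte order are assumptions for illustration, not taken from the patent.

```python
import struct

# Hypothetical packing of one 8-byte Table B entry: 1 byte PAD (with
# the temp-parameter-changed indicator assumed in the high bit),
# 1 byte line length, 2-byte document-memory address of the first
# character, 2-byte link to the next segment, 2 unused bytes.
def pack_entry(pad, temp_changed, length, doc_addr, link):
    pad_byte = (0x80 if temp_changed else 0) | (pad & 0x7F)
    return struct.pack("<BBHHH", pad_byte, length, doc_addr, link, 0)

def unpack_entry(entry):
    pad_byte, length, doc_addr, link, _unused = struct.unpack("<BBHHH", entry)
    return {"pad": pad_byte & 0x7F,
            "temp_changed": bool(pad_byte & 0x80),
            "length": length,
            "doc_addr": doc_addr,
            "link": link}
```

A link value of zero can then serve as the end-of-chain marker when following segments on a display line.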
On POWER UP or RESET function 269 sets the CRT controller 27 for page display, via KBFAXSET and KBPAGEIM shown in FIG. 16.1. As shown in FIG. 16.12, when KBPAGEIM is called, function 466 calls KBPAGEPRG, passing to it the location in document memory of the start of the current page. On first processing text for a page display, a page review queue or simply page queue is created which, among other things, identifies for each page the start of the page, i.e., the address in document memory of the first character on the page, as well as an indication of the tabs and margins in effect at the beginning of a page. In the absence of any changes to text on preceding pages, function 466 merely refers to the page review queue for the necessary information. KBPAGEPRG is shown in FIG. 19.1. Initially, function 468 stores the start of a document, i.e., a document memory address. Thereafter, function 469 calls a subroutine GDGRID (FIG. 19.28) which can put up or delete the format line and column. As shown therein, this subroutine calls GDGRAFOR (see FIG. 20.1) which merely ensures that the grid area of the display memory is blanked (function 528). Following that, function 529 determines whether the GRID is to be displayed. If it is, functions 530 and 531 are performed, and when the subroutine GDGRAFOR is discussed, the manner in which this is effected will become clear. At this point, however, assume the GRID is not to be displayed. Function 532 calls GDGRAFOR to clear the "OR" memory of any margins or tab display. Functions 533 and 535 determine whether a different page outline is to be displayed. Assuming no such special page outline is to be displayed, function 537 calls SETGHOST. FIG. 19.26 illustrates that subroutine. Function 538 determines if the GRID indicator is on. If it is, functions 539 and 540 may be performed. However, assuming it is not, SETGHOST returns and thereafter GDGRID returns.
The preceding description is based on the assumption that the special page images "CARD" and "ENVELOPE" are not required. Normally the hardware (via EDGE) writes the image of an 8.5×11 inch page on the screen, both to inform the operator of the correct paper size for printing and to allow the operator to see the relationship between the document margins and the actual page boundaries. In accordance with the invention the operator can select other page sizes: "CARD" refers to an 80-column punch card outline (an IBM card) and "ENVELOPE" refers to an envelope outline. Assertion of keys 119 or 120 (see FIG. 2) selects one or the other of these images. Thus, functions 533 or 535 check to see whether one of the special images has been selected; if so, the OR memory is appropriately written. The manner in which this is effected is described in connection with a discussion of FIGS. 20.1-20.12. However, it should be emphasized that other images could also be written in this fashion.
Thereafter, function 470 (FIG. 19.1) sets up the document parameters, for example, margins, location of first writing line and number of lines per page. If these have not previously been selected by an operator, default parameters are selected and stored. Thereafter, function 471 determines if the hyphen flag is set; we will later explore the processing in the event that flag is set, but at this point we will assume it is not. Thereafter function 473 gets the address of the cursor location and function 474 determines if the cursor is on a buried code. Functions 474-476 are arranged to jump the cursor past a block insert. In the course of writing the page image or FAX refresh RAM, first the main text is written, that is, "blocks" are skipped via functions 474-476. Thus, if either the cursor is not on a buried code or once the code has been passed, function 477 sets the display pointer FAXPTR to START, i.e., the beginning of the refresh RAM 26. Thereafter, function 478 sets FMTPTR. This is a pointer to the format or "OR" memory. Function 479 dequeues page image messages. These messages result in mode indicators or prompts on the display; "BL" in FIG. 11B is an example. Function 480 checks to see if the return immediate flag is set. That flag may be set at functions 257 or 260 (see FIG. 15B). Assuming it is not, function 482 determines if the first writing line has been reached, and assuming that the first writing line has not been reached, function 483 will be performed to determine if a block insert is being processed. Assuming it is not, function 484 calls EBLR, indicating an entire line is to be blanked (see FIG. 19.21). As shown in FIG. 19.21, the subroutine includes a loop of functions 542-545 which calls DISPLAY (FIG. 19.22) once per pass to display a space.
Each time the loop is traversed the number of positions to be blanked is decremented, and when either the number of positions to be blanked is equal to zero or a number of character spaces are blanked which is greater than the number of spaces on a line, the loop is terminated. In the latter event, function 546 advances FAXPTR and FMTPTR to the next line and a return is effected. Reference to FIG. 19.22 shows how DISPLAY operates. A register, FAXBYTE, eight bits long, is loaded serially, and when it is full it is output to the FAX RAM at a location addressed by FAXPTR. The bit which is loaded on each pass of DISPLAY is a 1 if a character or graphic is to be displayed by a dot at that location, and is a 0 otherwise. If a 1 is loaded from FAXBYTE to the refresh RAM, then a 4×4 pixel is illuminated; otherwise it is not. Referring now in more detail to FIG. 19.22, function 547 shifts FAXBYTE left 1 bit. Function 548 sets an indicator to indicate that something has been displayed on this line. (This indicator will be checked by other routines.) Function 549 determines if a character is to be displayed or not. If it is, function 550 sets the low bit. Otherwise, function 551 does not set the bit. Function 552 determines if the bits in FAXBYTE will go past the right edge of the display, and if they would, function 553 sets an indicator to note an attempt to pass the edge of the page. If that is not the case, function 554 increments FAXCTR and function 555 determines whether or not it is equal to eight, i.e., is FAXBYTE full. If it is, function 556 logically OR's FAXBYTE with the byte at the address pointed to by FMTPTR (the format memory or OR memory). Thereafter, function 557 increments FMTPTR so it points to the next byte in the format or OR memory, and function 558 stores the result of the logical OR operation mentioned above in the refresh RAM at FAXPTR and then increments FAXPTR.
Function 559 clears FAXBYTE and FAXCTR. Thereafter, function 560 is executed to increment CHARCTR to increment the number of characters written on the current line. This function is also performed in the event that FAXBYTE is not full.
The display routine in summary is used to display a single character as either a dot or space. To do this FAXBYTE is shifted to the left one position, the rightmost bit is either set or not set depending on whether a dot is to appear at that position or not, and when FAXBYTE is full it is put into the refresh RAM and cleared out to accept eight more bits.
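The serial loading of FAXBYTE, the OR against the format memory, and the flush to the refresh RAM can be sketched as follows. This is a simplified model of the behavior described for DISPLAY (FIG. 19.22), not the actual firmware; it processes a whole line at once and assumes the dot count is a multiple of eight.

```python
def render_line(dots, format_line):
    """Pack a sequence of booleans (dot / no dot) into bytes, MSB
    first, OR-ing each completed byte with the corresponding byte of
    the format ("OR") memory -- the essence of DISPLAY (FIG. 19.22).
    Assumes len(dots) is a multiple of 8."""
    out = []
    faxbyte = faxctr = 0
    fmtptr = 0
    for dot in dots:
        # Shift left and set (or not) the low bit: functions 547-551.
        faxbyte = (faxbyte << 1) | (1 if dot else 0)
        faxctr += 1
        if faxctr == 8:                                # FAXBYTE full (555)
            out.append(faxbyte | format_line[fmtptr])  # OR with format (556)
            fmtptr += 1                                # advance FMTPTR (557)
            faxbyte = faxctr = 0                       # clear (559)
    return bytes(out)
```

For example, a line whose first position is a dot, OR-ed with a format byte carrying a right-edge marker in its low bit, yields a byte with both the high and low bits set.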
When EBLR returns (FIG. 19.1) function 485 stores eight zeros in the page table for this line inasmuch as there is nothing on the line. The loop of functions 482-485 is continued until the first writing line is reached. At that point, processing skips via point D to function 486 (see FIG. 19.3).
As shown in FIG. 19.3 in the initial processing, function 497 increments the number of lines displayed on the page so far, in the course of our example the result of function 497 would be 1. Function 498 checks INDEXPAD, and if it is non-zero, function 499 sets the PAD amount to INDEXPAD.
On the other hand, if INDEXPAD is zero, then the TEMPLMAR (temporary left margin) is used to set PAD (function 500). Thereafter, function 505 zeros INDEXPAD and the PAD amount is stored in the page table. Function 503 thereafter calls EBLR for the PAD amount. As before, EBLR blanks out the PAD amount and function 504 zeros CHARCTR; thereafter the processing skips to point F (FIG. 19.4). In pertinent part this processing includes zeroing WORDCTR (function 508) and assigning a temporary document pointer TEMP used in the look-ahead to determine if the next word will fit on the line. Thereafter, function 509 gets the next character from the document memory; this is pointed to by TEMP. Functions 510 through 517 examine this character to determine if it is an insert code, block insert begin, a space or some other special function. Assuming it is not, function 518 increments WORDCTR and function 519 determines if the word will fit on the line so far. Assuming it will, function 520 checks to see if the hyphen flag is set and, assuming it is not, the processing loops back to function 509 to get another character from the memory. In this fashion, a series of characters are retrieved to either find a space (function 514), a special function (function 517) or determine at function 519 that the line length has been exceeded. Assuming a space is first found, the processing skips through point H2 (see FIG. 19.7) to display the word, assuming the hyphen flag is not set. Functions 563 through 567 check for an insert code, a block insert and for the presence of the cursor; assuming none of these are true, then function 569 determines if this is the first character to be displayed on this line. In our example, it is, and thus processing moves to function 570 to check whether DOCPTR bytes have been set in the page table; assuming they have not, functions 571 and 572 are performed to determine whether this is a special function and (assuming it is not) whether the character is a space.
If it is, function 573 sets an indicator that this is a space; on the other hand, function 574 sets an indicator that this is a character. Thereafter, function 575 calls DISPLAY to display a space or character depending on the indicator that had been set at functions 573-574. Since we have already discussed the manner in which DISPLAY operates, we will continue with the processing, where function 576 determines whether or not the character just displayed was a space. Assuming it was not, the processing skips through point H1 to continue displaying the present word. Function 577 increments DOCPTR to point to the next location in the document memory and similar processing continues. When the end of the word is reached, function 576 skips to point F (FIG. 19.4). The processing at this point first looks to see if the next word in the document memory will fit on the line, and if it will, similar processing is effected.
In this fashion, each line of the page RAM is written, taking into account first the page width and page margins. In this first processing, as we have noted at function 512, the processing skips past blocks in document memory.
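The look-ahead just described (count a word's length in WORDCTR before committing it, and break the line when CHARCTR plus the word would overrun) is a greedy word-wrap. A simplified sketch, with illustrative names and without the hyphenation and special-function cases:

```python
def wrap(words, line_width):
    """Greedy word wrap mirroring the look-ahead of FIG. 19.4: measure
    a word (WORDCTR) before displaying it; if the position so far
    (CHARCTR) plus the word would exceed the line, start a new line."""
    lines, charctr, current = [], 0, []
    for word in words:
        wordctr = len(word)
        needed = wordctr if not current else wordctr + 1  # +1 for the space
        if current and charctr + needed > line_width:     # word won't fit
            lines.append(" ".join(current))
            current, charctr = [], 0
            needed = wordctr
        current.append(word)
        charctr += needed
    if current:
        lines.append(" ".join(current))
    return lines
```

A word longer than the line width is still placed alone on its own line here, which stands in for the hyphen-flag handling the text takes up later.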
Aside from characters and spaces, the character retrieved from document memory could be a special function, which is recognized at function 517 (FIG. 19.4). In that case, the processing skips to point H (FIG. 19.7). After processing through functions 561, 563, 565, 567 and 569, if function 571 recognizes a special function, the processing skips via point K to what is, in effect, a directory, shown in FIG. 19.8. There, depending upon the specific special function found, the processing skips to one of a number of different points. For example, if the special function found is a carrier return, processing continues to point M1 (FIG. 19.12). There, functions 577 and 578 increment DOCPTR and CHARCTR and set a flag to indicate the text has been processed in this block or line; the processing then skips to point G1 (FIG. 19.6). There function 579 checks to see if CHARCTR is less than 85; if it is, function 580 sets the byte in table B for the length of the line to the quantity in CHARCTR. The remaining functions 581-586 provide for writing other parameters in the page table. In the event that CHARCTR is greater than 85, then function 587 sets the length in the page table equal to 85. In the event that a block insert is being processed, function 588 calls SKIP to skip to the end of the line. In the event that the special function was a required carrier return rather than a carrier return, then, preceding function 577 (FIG. 19.12), functions 589 and 590 would be performed to indicate in the page table that a temporary parameter is set (since required carrier returns terminate temporary margins and other temporary parameters) and accordingly, function 590 resets the temporary parameters.
Processing for a required space merely requires a call to DISPLAY since it is not considered a word ending (see FIG. 19.9).
Processing for an index is shown in FIG. 19.10. Function 777 stores in INDEXPAD the sum CHARCTR plus PAD. This is the character position on the display at which the index was encountered. Function 778 increments CHARCTR and function 779 sets a flag to indicate that text within the current block has been processed so that any temporary parameters encountered will not modify the default parameters. Thereafter, processing skips via G1 to FIG. 19.6 where line ending housekeeping is performed.
When the next line is processed (see FIG. 19.3) function 498 will determine that INDEXPAD is non-zero (since it was written at function 777). Accordingly, that value is transferred to PAD at function 499 and INDEXPAD zeroed at function 505. Accordingly, an index function ends one line, and begins the next vertically below the index.
Processing for a temporary left margin is shown at FIG. 19.17. Function 780 writes to the page table to indicate a temporary parameter has been changed on the current line. Function 781 checks if CHARCTR is non-zero. If it is, processing skips via G1 to advance to the next line (before processing the temporary margin). Since DOCPTR is not incremented, the next line will begin with the TLM. There function 782 will advance FAXPTR to the beginning of the line, and FAXCTR and FAXBYTE are zeroed (by function 783). Functions 784-787 ensure that the TLM is legal (greater than zero and less than 85). Functions 788 and 789 store the new TLM and save the global left margin, respectively. To begin writing the new line, assuming a block insert is not being processed, function 790 (FIG. 19.18) calls EBLR to blank the TLM amount. Function 792 zeros CHARCTR and function 793 advances DOCPTR past the TLM codes in document memory. Function 794 retains the PAD amount, and function 795 sets a flag to indicate DOCPTR has already been set in the page table for the current line. Function 796 checks if this is a block insert; if so, function 797 changes the default parameters so that the TLM becomes a global left margin for this block.
Processing for a temporary right margin is shown in FIG. 19.13. The primed functions 780', 784', et seq. all have counterparts in the left margin processing; accordingly, no further discussion is deemed necessary.
Processing for an indent tab and tab is shown in FIG. 19.16. The tab processing comprises functions 901-903. Function 901 determines the current display position from the left margin. Function 902, with a pointer to the active tab grid, compares the present distance to the next higher tab position. The quantity is returned at function 903. Function 906 calls EBLR to blank out the required amount to the next tab point. Assuming the function is not an indent tab (checked at function 904), the next word is checked.
On the other hand, for an indent tab, function 900 is first performed to set a flag to distinguish tab and indent tab. The flag is checked at function 904. For indent tabs, function 905 is performed which sets TLM to the current position.
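The tab advance of functions 901-903 and 906 amounts to finding the next stop in the active tab grid beyond the current position, then blanking the difference. A minimal sketch, with the fallback behavior when no stop remains being an assumption of this example rather than something stated in the text:

```python
def next_tab_stop(position, tab_grid):
    """Return the next tab stop strictly beyond the current position
    (functions 901-903); fall back to advancing one column if the
    grid has no higher stop (an assumed, not documented, behavior)."""
    for stop in sorted(tab_grid):
        if stop > position:
            return stop
    return position + 1

def blank_to_tab(position, tab_grid):
    # The amount EBLR is asked to blank to reach the stop (function 906).
    return next_tab_stop(position, tab_grid) - position
```

For an indent tab, the stop returned here would additionally become the new TLM (function 905).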
Processing for a temporary tab is shown in FIG. 19.11. In order to ease tab limitations, the processor manipulates each different tab grid as a unit. It is located in document memory and a pointer is used to access the grid when required. In the processing, function 907 provides for writing a bit to the page table to indicate a parameter change. Function 908 rewrites the tab grid pointer to point to the new tab grid. Function 909 increments DOCPTR past the tab bytes. That concludes the processing unless a block insert begin is being processed. If that is the case function 911 changes the default parameters to point to the active tab grid.
Comparing the foregoing page image mode processing with the corresponding sentence (or text) mode processing reveals that in text mode, carrier returns, required carrier returns and index functions are displayed as unique graphics only when coincident with the cursor (see FIG. 17.5). On the other hand, tab and required tab are always displayed; however, they do not display the actual tab function (see FIG. 17.5). Temporary left and temporary right margins are also displayed as a vertical line corresponding to the margin, with a left or right arrow indicating the relation between the temporary margin and the previously effective margin (see FIG. 17.11).
In contrast, as just explained, in page image mode the functions are not displayed via graphics; however, the effects of the functions are displayed by displaying text formatted in accordance with the functions.
At the conclusion of writing the page refresh RAM for the main text, the processor returns to process any blocks. This processing occurs either at the conclusion of a page or at the end of a document where, if processing was not terminated early, function 467 (FIG. 16.12) calls KBFAXBI at function 589. That processing is shown at FIG. 19.2.
As shown there, function 590 sets a block insert indicator to indicate that a block insert is being processed. Function 591 moves the page table pointer (TABBPTR) to the beginning of the block insert section, i.e., an unused portion of table B, and function 592 sets the document pointer to the beginning of document memory. Thereafter, function 593 calls FINDCODE; that subroutine is shown in FIG. 19.30. In general, the processing here looks for a specified character code in document memory, incrementing DOCPTR until the required code is found. Function 594 stores an invalid code for the comparison so that the first test, at function 597, will never be satisfied. The processing loops through functions 595 through 618 looking for a match. In the subject under discussion, function 612 will find a block insert beginning code and correspondingly, function 613 sets DOCPTR to 4 plus the address of the block insert begin code, so that on the return DOCPTR points to the first character in the block insert, and function 614 adds 6 to the return register to indicate to the calling program what type of code was found. As will be explained later, a block insert includes an initial 3-byte sequence comprising BLIB (a block insert begin byte), TL and TR (temporary left and right margins). Returning now to FIG. 19.2, therefore, functions 620 and 621 can determine whether the code found is temporary left, temporary right, temporary tab or EOD (by using the return register as a multiple branch point). Assuming that the code found is not any of these, then it can be assumed that the block insert begin code had been found. Function 622 gets the document line number for this block; this is accessible via the table B pointer. Functions 623 and 624 can therefore determine if the document line number, determined at function 622, is on this page; if not, the processing loops back to function 593, again looking for a block insert on this particular page.
Assuming a block insert for this page is found, functions 627 through 630 are performed. As shown in FIG. 19.2 these functions make the appropriate entries to table B: namely, function 627 sets up a link in a table B entry for the block insert line number; function 628 stores the document parameters in the default parameters; function 629 stores the active parameters; and function 630 indicates that no text has yet been displayed in the current block, so any temporary parameters found will modify the default parameters. Processing thereafter skips to point A0 (shown in FIG. 19.1). In this pass, contrary to the previous description, when function 482 is performed the program skips to point D (FIG. 19.3), where function 486 will indicate that a block insert is being processed; thereafter functions 632 and 633 are performed to follow the links to the last link and to store the address of the block insert for table B at the last link. This provides a link from the main text document line to the block insert which is to appear at the same line on the display. Following that processing, function 492 again determines that a block insert is being processed, and processing skips to point E1. At this point functions 498 through 500 are performed again to set a new PAD amount. Thereafter, when decision point 501 determines a block insert is being processed, function 502 is called to skip past the PAD amount (subroutine SKIP is shown in FIG. 19.31). The overall purpose of the subroutine is to skip in the OR memory and the refresh RAM past the previously written text, instead of blanking as is done with the PAD amount for main text. The processing shown in FIG. 19.31 should be apparent to those skilled in the art in that FAXCTR and FAXBYTE are used as previously described, except that zeros are not loaded. Instead the values already in refresh RAM remain unchanged. Thereafter, CHARCTR is zeroed (function 504) and the processing skips to FIG. 19.4.
Each line of the block insert is processed identically to main text except that, through the use of SKIP instead of EBLR, the previously written text is maintained instead of being blanked over. Furthermore, by maintaining links in table B between the main text on a line and the block for that line, coordination with cursor movement is maintained, as will be explained.
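The link chain that functions 632-633 walk and extend can be modeled as a singly linked list threaded through Table B entries. In this sketch entries are dictionaries and a link of 0 marks the end of the chain, as in Table B; the function name and representation are illustrative only.

```python
def append_link(table, line_entry, block_entry):
    """Follow the link chain from a main-text line entry to its last
    segment, then attach a block-insert entry there (the effect of
    functions 632-633). Entries are dicts with a 'link' key holding
    the index of the next segment; 0 marks the end of the chain."""
    seg = line_entry
    while seg["link"] != 0:
        seg = table[seg["link"]]
    new_index = len(table)
    table.append(block_entry)
    seg["link"] = new_index     # store the block's address at the last link
    return new_index
```

Repeated calls chain additional blocks onto the same display line, which is how one line can carry main text plus one or more block-insert segments.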
Now that writing the FAX refresh RAM has been explained, we can illustrate how that RAM is written for the alternate display embodiment of FIGS. 3A-3C. In the embodiment illustrated in FIGS. 19.1 to 19.31, each bit of the refresh RAM is mapped to a unique location on the display. Since our CRT chip 27 did not provide us with a sufficient address space, we expanded the addressing capacity. In the alternate embodiment we can live with the limited addressing capacity since we are not uniquely mapping each bit of the refresh RAM. Rather, we define a display character as comprising a 2×2 array of text characters, reducing our addressing requirements by half, both vertically and horizontally. Now the CRT chip, which gave us 96 characters/row and 64 rows/screen or 132 characters/row with 32 rows per screen, is entirely adequate, since to display 132 text characters per row we need only half that number of unique addresses horizontally (thus the 96 is sufficient). Alternatively, the 32 vertical addresses per screen are adequate since we need only that many to show 64 rows. Note that both FIG. 3B and FIG. 3C show a single display character (stored in one byte of refresh RAM) which actually represents four text characters in a 2×2 array. However, to write the RAM we must first format two lines, i.e., following all active formatting rules, identify which character positions on each of two lines are occupied by an alphanumeric or punctuation graphic. The processing described in FIGS. 19.1-19.31 provides the function for only one line at a time. Thus, it must be modified to first format two lines. Once the two lines are formatted, each 2×2 array is examined. Depending on its filled vs. unfilled condition, a 4-bit code is derived and a buffer such as FAXBYTE stores the code representing that display character. This code is then written to RAM, since each such code maps to the display.
As the RAM is read by the CRT chip, the code is used with a ROS character generator to provide the display character of FIGS. 3B or 3C.
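Deriving the 4-bit code from each 2×2 cell of two formatted lines can be sketched as follows. The bit assignment within the code (top-left in the high bit, bottom-right in the low bit) is an assumption of this example; the patent does not specify the ordering.

```python
def compress_lines(top, bottom):
    """Reduce two formatted text lines to one row of 4-bit display
    codes, one per 2x2 cell: bit 3 = top-left occupied, bit 2 =
    top-right, bit 1 = bottom-left, bit 0 = bottom-right (assumed
    ordering). Assumes both lines have the same even length."""
    occupied = lambda ch: ch != " "
    codes = []
    for i in range(0, len(top), 2):
        code = (occupied(top[i])        << 3 |
                occupied(top[i + 1])    << 2 |
                occupied(bottom[i])     << 1 |
                occupied(bottom[i + 1]))
        codes.append(code)
    return codes
```

Each of the 16 possible codes would then select one of 16 ROS character-generator patterns, such as those suggested by FIGS. 3B and 3C.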
In order to explain cursor motion in page image or FAX mode, we will assume that the operator has keyed in some quantity of text, that the text is being displayed in the page image mode, and that thereafter the operator keys one of the cursor control keys, e.g., 109-114 (see FIG. 2). As explained, the interrupt routine places a code corresponding to the particular depressed key in a keyboard queue; subsequent to the interrupt, the main system program, at function 270 (FIG. 16.1), determines that the keyboard queue is not empty. Processing skips via point J (FIG. 16.8) where, since we have assumed we are neither in text mode nor in insert mode, one of functions 433-436 would recognize a cursor right, cursor left, cursor up or cursor down, respectively. Note that word cursor left or word cursor right is not recognized at all in page image mode (it is recognized in text mode at function 701 (FIG. 16.5)), and if such a key is depressed in this mode, the processing, after passing function 436, loops back via point F to function 449 (FIG. 16.1) to bump the keyboard queue, in effect disregarding either of these key depressions.
Depending on which cursor key was actuated, either KBFCRT, KBFCLT, KBFCUP or KBFCBN is called via functions 428-431. The processing in response to a cursor right motion (function 428) is illustrated in FIG. 18.4, the processing in response to a cursor up motion is shown in FIG. 18.1 and the processing in response to a cursor down motion is shown in FIG. 18.2. The processing in response to a cursor left is not specifically illustrated, but will become clear as FIG. 18.4 is discussed.
Turning now to FIG. 18.4, it will be recognized that in page image mode cursor right will only move the cursor within a line, and therefore, at function 651, if it is recognized that the cursor is at the end of a line, the processing terminates. Assuming that the cursor was not at the end of the line prior to the cursor right command, function 776 increments the register (horizontal) defining cursor position. Then function 652 checks to see if the format grid is being displayed; if it is, function 653 increases the value of the character counter which is tracking the horizontal ghost cursor, so it will be moved one character to the right. Thereafter, function 654 sets a value "anticipated segment" to be equal to the former segment in which the cursor was located, in Table B. With this parameter, function 655 calls FINDSEG, the processing for which is shown in FIG. 18.3.
The routine FINDSEG is used to locate a segment, a particular main text line, or block in which the cursor will be located. The subroutine is entered with the parameter ANTICIPATED SEG already set.
Referring now to FIG. 18.3, function 660 sets another value, TEST SEGMENT, to be equal to the segment which begins at this particular display line; this is extracted from Table B. Thereafter, functions 661 and 664-666 examine each segment in turn, by determining first whether the segment contains data, and thereafter whether the cursor is located before the beginning of the segment or after the end of the segment. Assuming for purposes of discussion that the first segment examined contains data, but the cursor is located beyond the segment, i.e., the cursor is not to the left of the end of the segment, then function 662 checks to see if this line in the display has an additional segment. If it does, function 676 increments the test segment to identify the next segment on the line and the preceding functions are repeated. Movement of the cursor forward can result in the cursor being (1) located beyond any segment on the line containing data, (2) located in an area of the line between segments, (3) maintained in the existing segment, or (4) entered into a new segment. Taking these particular cases in order, if the cursor has moved beyond any segment on the line, then function 662 will identify the fact that the line has no additional segments, and thereafter function 663 sets the blink flag. This will result in a blinking cursor, which indicates that the cursor is not pointing to any text. Thereafter, function 675 sets the parameter segment found equal to the last segment on which the cursor was located, and function 674 checks if the reformat flag is set (on cursor right it will never be set); if it is, function 672 calls the subroutine REFORMAT. Thereafter, function 673 sets the parameter associated segment (TABBPTR) equal to the segment found, and the subroutine returns.
If, in the alternative (2), the cursor is located at a position on the display line between text segments, at some point function 664 will identify the cursor as not being to the right of the segment start (that is, as soon as it is past the segment that the cursor is immediately to the right of). Function 665 will then set a low flag and the loop will continue examining additional segments to the right of the cursor; each such examination will result in additional passes through functions 664 and 665 until function 662 identifies that all segments on the line have been examined, and then function 663 et seq., previously discussed, is repeated with the same results.
On the other hand (3), the cursor can remain in the anticipated segment. In this event functions 664 and 666 will indicate that the cursor is right of the segment start and left of the segment end when the particular segment in which the cursor is located is identified. Under our assumptions function 667 will determine that the segment is the anticipated segment, and therefore function 675 et seq. are again performed.
The final possibility (4) is that the segment in which the cursor is found differs from the anticipated segment. In this event, function 667 is performed but the exit is to function 668. If the segment that the cursor is in is found to be a block, then function 669 writes the warning BL (see, for example, FIG. 11C). On the other hand, if the segment is not a block, then function 670 ensures that any BL warning is deleted. Function 671 then sets the segment found equal to the test segment, and functions 672 and 673 are performed.
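The four outcomes just described can be sketched as a simple scan over the segments of a display line. This is an illustrative model only; the names (Segment, find_segment) and the column-interval representation of the Table B entries are assumptions, not taken from the specification.

```python
# Hypothetical sketch of the segment-scan loop of FIG. 18.3 (functions 660-676).
# Names and the column-interval model of a segment are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Segment:
    start: int      # first display column of the segment
    end: int        # last display column of the segment
    has_data: bool  # whether the segment contains text

def find_segment(segments, cursor_col, last_segment_index):
    """Return (segment_index, blink) for a cursor column on one display line.

    Mirrors the four described outcomes: inside a segment (old or new) yields
    that segment with no blink; between segments or past all data yields the
    last associated segment with the blink flag set.
    """
    for i, seg in enumerate(segments):
        if not seg.has_data:
            continue            # function 661: skip segments without data
        if cursor_col < seg.start:
            continue            # functions 664-665: cursor lies before this segment
        if cursor_col <= seg.end:
            return i, False     # functions 664 and 666: cursor is inside this segment
    # Functions 662-663 and 675: no segment holds the cursor; blink it and
    # keep the association with the last segment the cursor occupied.
    return last_segment_index, True
```

A usage example: with two text segments on a line, a cursor in the gap between them (or past the last one) is flagged to blink while retaining its prior segment association.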
Accordingly, function 655 finds the segment in which the cursor is located. Function 656 determines if a format change has occurred (the manner in which this is determined, and the results which follow, will be explained). Depending on the outcome at decision block 656, function 657 may or may not be performed. Functions 657-659 execute housekeeping functions. Function 657 controls the contents of registers identifying the horizontal and vertical ghost cursors; function 657 may increment or decrement either the horizontal or vertical register (in the CRT chip 27) by one count to move either the horizontal or vertical ghost cursor forward or back. Function 658 implements the decision made at function 663 (see FIG. 18.3), changing the bit in the status register indicating whether or not the cursor is to be blinked. Finally, function 659 controls the horizontal and vertical registers which identify the cursor position, either incrementing or decrementing one of those registers by one count.
Although not illustrated, the routine page image cursor reverse KBFCLT is identical to the functions shown in FIG. 18.4 with the exception that function 651 determines if the cursor was at the beginning of a line rather than the end and function 653 decreases rather than increases the ghost cursor position.
In addition to moving the cursor forward or back, page image mode also supports moving the cursor up or down. KBFCUP, or cursor up, is shown in FIG. 18.1, and KBFCDN is shown in FIGS. 18.2i and 18.2ii.
As shown in FIG. 18.1, function 677 determines if the cursor is on the first line of the display. If it is, it cannot be moved up and therefore the processing ends. Assuming it is not on the first line of the display, function 678 subtracts one from the cursor register indicating vertical position; this, in effect, will move the cursor up one line on the display, because the cursor position is determined by this register. Function 679 determines if the format grid is being displayed; if it is, function 680 must be performed so that the vertical ghost cursor tracks the vertical position of the cursor. Function 681 calls a routine to determine if a printer warning is appropriate (to prevent the operator from moving the cursor too close to the top of the page, so close that the page cannot be adequately supported by the paper bail). Functions 682 and 683 determine if the cursor was above or below text (cursor blinking); in either event function 697 sets the anticipated segment register to be equal to the last segment in which the cursor was located, and function 698, with this parameter, calls FINDSEG. The cursor need not be located on text, but, as will become clear, it must be associated with some text in order to appear in the text mode display if a switch is made without moving the cursor; this processing ensures that the cursor, even though it is not on any text, is associated with some text in the document.
On the other hand, assuming that the cursor was not off text, function 684 determines if the prior line (the line the cursor is being moved into) is empty. If it is, then an above text flag (see function 685) is set to indicate that the cursor is above a text segment. Thereafter, function 699 stores the segment the cursor was previously associated with as the anticipated segment and then calls FINDSEG (function 691).
On the other hand, if the prior line is not empty, then function 686 determines whether the line the cursor was moved from had a format change. If so, function 687 must be performed to set the reformat flag, so that the subroutine REFORMAT is called during the FINDSEG routine and the cursor is maintained relative to the appropriate format. In the event the reformat flag need not be set, or after it is set, function 688 sets the anticipated segment to the prior segment, the segment the cursor was associated with prior to its movement. Function 689 determines if this segment had a required carrier return, which also may change the format; if it did, the reformat flag is set. After the flag is set, or if setting is not required, FINDSEG is again called (function 691).
A review of this processing indicates that FINDSEG is called after identification is made of the anticipated segment, and after certain flags have been set indicating whether reformatting is required and whether or not the cursor has moved off text. FINDSEG, previously discussed, locates the actual segment associated with the cursor (if one exists). Thereafter, function 692 calls FINDDOCX. The information developed so far has identified the cursor location in terms of the display (that is, a horizontal and vertical position for the cursor) and has also identified a particular segment of text within which the cursor is located, or with which the cursor is most nearly associated. Using this information, as well as the other information in Table B, FINDDOCX returns with a document memory address DOCPTR (this is either the address of the character with which the cursor is associated; the leftmost character in a block, if the cursor is off text and to the left of the block; the rightmost character in a block, if the cursor is off text and to the right of that block; a particular character in the first line of a block, if the cursor is off text and above that block; or a particular character in the last line of the block, if the cursor is off text and below that block). Function 693 determines if the format has been changed, i.e., whether REFORMAT was run during the running of FINDSEG. If it has been, function 694 is called to adjust the ghost cursor position. Thereafter functions 695 and 696 are performed for the purposes previously described.
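The off-text association performed by FINDDOCX (nearest character of a block, per the parenthetical above) amounts to clamping the cursor to the block's extent. A minimal sketch, under the assumption that each block line is described by a row number and its first and last columns; the function and parameter names are hypothetical:

```python
# Illustrative clamp corresponding to FINDDOCX's off-text cases; the
# (row, start_col, end_col) representation of a block is an assumption.

def associate_cursor(block_rows, cursor_row, cursor_col):
    """Return the (row, col) of the block character associated with the cursor.

    block_rows: list of (row, start_col, end_col) tuples, sorted by row.
    A cursor above the block maps into its first line, below into its last
    line, and left/right of a line clamps to that line's end characters.
    """
    rows = [r for r, _, _ in block_rows]
    if cursor_row < rows[0]:
        row, start, end = block_rows[0]       # off text above: first line
    elif cursor_row > rows[-1]:
        row, start, end = block_rows[-1]      # off text below: last line
    else:
        row, start, end = next(b for b in block_rows if b[0] == cursor_row)
    col = min(max(cursor_col, start), end)    # off text left/right: clamp
    return row, col
```

The clamp returns the leftmost character when the cursor is left of the block and the rightmost when it is right of the block, matching the enumerated cases.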
Referring now to FIGS. 18.2i and 18.2ii, the processing for a cursor down motion (KBFCDN) is described; in many respects this is similar to the processing shown in FIG. 18.1, but it will be briefly described. Function 700, rather than checking for the cursor on the first line, checks for the cursor on the last line. Functions 701 and 705 relate to an alarm function for the printer, if cursor movement will result in inadequate gripping of the paper by the paper bail. Function 702 adjusts the cursor register in the CRT chip and function 704 performs the same function for the ghost cursor location. Function 708 (FIG. 18.2ii) determines if there was a format change on the last segment (the segment with which the cursor had been associated), since the cursor down motion is moving forward in the document. If there had been no format change, function 713 uses the location of the next segment as the anticipated segment in calling FINDSEG. On the other hand, if a format change had occurred, function 709 determines if that was because of an RCR. If it was, function 722 determines if the prior segment was in text, and if it was, then function 723 sets the active right and left margins and tab grid to those at the beginning of text, since that is what is required by the RCR. If the prior segment was not in text, then function 724 sets the REFORMAT flag so that the appropriate formatting can occur during the running of FINDSEG. If the format change was not due to an RCR, or after the preceding processing, functions 710 and 711 determine if the last character was the end of a block or main text. Depending upon the outcome, and whether or not the last character was an EOD or EOB, either the next segment is used as the anticipated segment or, at function 712, the below text flag is set and the previous segment is used as the anticipated segment in calling FINDSEG.
Following the running of that subroutine, the processing continues identically to that previously discussed for page image cursor up.
In the course of processing document memory in order to write the page image or FAX RAM, processing for each new word begins at point F (FIG. 19.4). At this point, depending upon what code is encountered in document memory, the processing adapts accordingly: for example, an insert code results in skipping to the next non-null character in document memory, i.e., processing skips over the "hole"; a block insert begin calls a subroutine to skip past the block insert (block inserts are processed by FAXBI, called after the main text processing is concluded); and a space character is indicative of the end of a word (assuming the hyphen flag is not set), whereupon processing skips to point H2 (FIG. 19.7). Aside from a regular character, the only other codes recognized fall into the category of special functions; this is recognized at function 517, and the processing skips via point H to FIG. 19.7. A word ending can be either a space or a special function, so long as it is not a required space. Thus, at FIG. 19.7 function 727 determines if the code is a required space; if it is, it is treated as merely another character in a word to be displayed, and the processing skips via point F2 back to FIG. 19.4. On the other hand, if this is a word ending that is a space, or a special function other than a required space, then processing continues (FIG. 19.7) at function 561 (either from function 727 or by skipping to H2 from FIG. 19.4). At this point, assuming the hyphen flag is not set and assuming that the code is not for an insert or a block insert begin, function 567 determines if the cursor should be set on the current character. Cursor location in the page image mode may or may not correspond to a location in document memory. It does if the cursor is within main text or a block; but if the cursor is in neither location, which can be achieved by cursor motion, then there is no address in document memory which corresponds to the cursor location.
Function 567, to determine if the cursor is to be set, merely requires comparing the document pointer to the location saved when the FAX routine was entered. If they are equal, then SETCURS is called via function 568. Assuming for the moment that they are not the same, the processing continues to display that character, including calling DISPLAY at function 575, and, assuming the character displayed is not a space, processing loops back via H1 to increment DOCPTR and then reenter the loop just discussed. If in the course of displaying the current word the cursor location is encountered (i.e., DOCPTR is equal to the cursor location), then SETCURS is called as described above. That processing is shown in FIG. 19.25. As shown, function 728 sets an indicator to indicate that the cursor has been set. Function 729 determines if the horizontal position of the cursor should be left unchanged; this flag (detected by function 729) is set by the main program on temporary margin changes from the keyboard, so that the cursor and DOCPTR track with the paragraph in which the cursor was located. Function 730 determines if the current cursor position would move the cursor past the right edge of the page; since the right edge of the page is a relatively fixed parameter, depending on the particular page outline being displayed, it can easily be determined whether the cursor position would place the cursor past the right edge of the page. If so, function 731 indicates that the cursor is to be set at the right edge of the page. Function 732 effects setting the horizontal position of the cursor, i.e., writing to the horizontal cursor position register in the CRT chip 27. Thereafter, function 733 sets the vertical position of the cursor, similarly to function 732; this requires writing to a particular (vertical cursor) register in the CRT chip 27. Function 734 determines whether there is any text in the document; this is easily effected by referring to Table B.
If there is no text in the document, then function 735 blinks the cursor; this is effected by setting the appropriate bit in the status register (see FIG. 4). Function 736 is executed after function 735 or function 734, depending on whether there is any text in the document, to set parameters in the document memory that apply to the cursor position. If the cursor is located in a block, these parameters are determined at the beginning of the block; if the cursor is contained in main text, then, beginning with the global parameters and scanning through the page table looking for format parameter changes, the current parameters can be determined and written to appropriate registers. For example, a register is maintained to point to the location in document memory of the active (for the present cursor location) tab grid. Function 737 determines if a block insert is being processed; this merely requires checking a flag to that effect. If so, function 738 displays the BL warning (see FIG. 11B). On the other hand, if a block insert is not being processed, then function 739 is performed to display blanks in the warning location to ensure that no warning is displayed. Function 740 calls SETGHOST (which is discussed immediately below). Thereafter, the subroutine concludes after executing function 741 to copy the default parameters into a cursor routine default parameter location.
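The cursor-positioning decisions of functions 729-733 can be modeled in a few lines. This is a sketch under the assumption that cursor positions are simple column/row integers; the function and flag names are illustrative, not from the specification.

```python
# Illustrative sketch of SETCURS functions 729-733 (FIG. 19.25); the names
# set_cursor, hold_horizontal, etc. are assumptions made for this example.

def set_cursor(col, row, right_edge, hold_horizontal, old_col):
    """Return the (column, row) to be written into the CRT chip's cursor registers.

    hold_horizontal models the flag tested at function 729 (set on temporary
    margin changes so the cursor tracks its paragraph); right_edge models the
    page-edge clamp of functions 730-731.
    """
    if hold_horizontal:
        col = old_col        # function 729: leave horizontal position unchanged
    if col > right_edge:
        col = right_edge     # functions 730-731: pin the cursor at the page edge
    return col, row
```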
Referring now to FIG. 19.26, the subroutine SETGHOST is illustrated. As shown, function 538 first determines if the grid indicator is on, which merely requires reference to a specific flag to that effect. If it is not, there is no need to set the ghost cursors and the subroutine returns. On the other hand, if the grid indicator is on, then function 539 determines if the display is in page mode; if not, the ghost cursors are not displayed and the subroutine can return. If both outcomes are positive, then function 540 sets the horizontal and vertical ghost cursors. The discussion in connection with the move cursor routines illustrated that those routines adjusted counters tracking horizontal and vertical ghost cursor position; as those skilled in the art will understand, those counters are software counters. Function 540 requires writing the parameters in the counters into the hardware registers 48 and 49 (see FIG. 4), and the manner in which these hardware registers operate to cause the horizontal and vertical ghost cursors to be illuminated has already been explained. After function 540 the subroutine returns.
If the operator selects insert (key 104-FIG. 2) with the machine in FAX mode, the keycode is recognized at function 447A and processing skips through H to FIG. 16.6. Function 447B recognizes the insert keycode and processing skips through Q to FIG. 16.9A. Function 643 (FIG. 16.9B) recognizes the insert entry, but since the machine is in page image, function 926 determines whether the cursor is blinking; a blinking cursor indicates that it is off main text. Assuming the cursor is not blinking, then functions 932-933 precede the insert code with the text mode code and processing returns to function 270 to process the switch to text mode. Alternatively, if the cursor was blinking, then functions 932-933 are not performed. Function 927 checks to see if there is any text. If not, processing identical to that effected for a non-blinking cursor is effected. However, if there is text in the document memory, function 928 determines if a block insert is possible. If not, the processing terminates, i.e., the function is ignored. If a block insert is available, functions 929-931 prepare for a block insert, and functions 932 and 933 are also performed.
An implicit insert operation (insert without use of the explicit insert mode) is effected if the operator selects a CR, RCR, TAB, RTAB, R Page End, Index, Space or R Space in FAX mode. Processing for any of these codes proceeds in FIG. 16.1 through function 447A to FIG. 16.6. There, functions 100A-104A recognize different ones of these codes. Function 105A determines if DOCPTR is a valid location for insertion. If not, processing terminates and the entry is ignored. If DOCPTR is a valid address, function 106A effects an implicit insert by storing the code and moving following codes one code unit to make room for the new code. Function 107A then increments the keyboard queue. If the queue is empty, or if the next character is a graphic, then function 110A calls KBPAGEIM to rewrite the display. Processing returns to FIG. 16.1 when DOCPTR is located on the displayed page. Alternatively, the queue is processed until the condition of either function 108A or 109A becomes true; this condition results in a call to KBPAGEIM.
If the operator had entered a character with the machine in page image, processing is initiated in the manner described, but function 113A (FIG. 16.6) recognizes a character keycode and processing skips to R (FIG. 16.16). If the cursor is not blinking and DOCPTR is on an insert code, EOD or BLIE (a valid location for an insertion), then function 114A stores a KBCHCD, a code to switch to text mode. Processing then returns to function 270 to process the change to text mode unless this is the first entered character. In that event function 115A initiates format setting as if a coded 1 had been entered, and then processing skips to function 270. If the cursor is blinking, this is not the first entered character, and DOCPTR is not on EOD, then a check is made at function 928 et seq. (processing skips via AH to FIG. 16.9B) to see if a block insert is available. If so, a block insert is effected; otherwise processing terminates, i.e., the operator's entry is ignored.
On the other hand, if the operator depressed the BKSP key (without depressing the code key 137) and the cursor was positioned (in page image) so that the preceding code was a CR, RCR, TAB, RTAB, Index or the cursor was positioned over EOD or BLIE and the preceding code was a space, then that preceding code will be deleted. The deletion will occur without leaving page image. No other codes are deleted in this fashion.
The processing follows a path, beginning on FIG. 16.1, where function 447A recognizes the FAX mode; processing skips to FIG. 16.6. Function 120A recognizes the error correcting (non-required) backspace code, and processing skips to FIG. 16.18. Function 121A checks to see if DOCPTR is at the start of a page. If it is not, function 122A determines whether the prior code is a buried code (buried code deletion via the FAX error correct backspace is not allowed). If it is not a buried code, function 123A checks for the remaining conditions. If any of the tests imposed at functions 121A-123A fail, function 127A determines if FAX was terminated early. Depending on whether or not FAX was terminated early, processing continues at function 449 (FIG. 16.1) or function 107A (FIG. 16.6). In either event the backspace function is not performed.
However, if the tests imposed at functions 121A-123A are passed, then function 124A checks for insert mode; if in insert mode, DOCPTR points to an insert code. If not in insert mode, then function 126A closes up document memory by deleting the byte prior to DOCPTR; because of the test at function 123A, this byte is one of the specific codes enumerated in that test. If DOCPTR does point to an insert code, then function 125A is performed: decrementing DOCPTR causes it to point to the prior code, writing an insert code deletes the code previously stored there, and writing the NULL code overwrites the insert code previously stored beyond the deleted code. Processing after either function 125A or 126A picks up at function 107A (FIG. 16.6).
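The two deletion paths (functions 125A and 126A) can be modeled as follows, representing document memory as a Python list of code values. The particular code values for NULL and INSERT, and the list representation itself, are assumptions made for illustration.

```python
# Hypothetical model of functions 124A-126A (FIG. 16.18); code values and
# the list representation of document memory are illustrative assumptions.

NULL, INSERT = 0x00, 0x01

def fax_backspace(doc, docptr, in_insert):
    """Delete the code preceding DOCPTR, returning the new memory and pointer."""
    if in_insert:
        # Function 125A: DOCPTR points at the insert code.  Back up one
        # position, write the insert code over the deleted code, and null
        # out the position the insert code formerly occupied.
        docptr -= 1
        doc[docptr] = INSERT
        doc[docptr + 1] = NULL
    else:
        # Function 126A: close up document memory by deleting the byte
        # immediately prior to DOCPTR.
        del doc[docptr - 1]
        docptr -= 1
    return doc, docptr
```

Note the design distinction the text describes: in insert mode the "hole" simply grows leftward (no bytes move), while outside insert mode the memory is physically closed up.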
Cursor motion in text and FAX modes are explained above. The effects of cursor motion vary when the machine is in insert mode. That processing and the resultant effects are now described.
Actuation of a cursor key (text mode-insert mode) is recognized at function 400 (FIG. 16.1), and processing skips via J, where function 401 recognizes the text mode. Depending on whether cursor left or cursor right was selected, function 404 or 403 checks for insert mode. If insert mode is detected, processing skips to either AE-FIG. 16.8 (cursor right) or AF-FIG. 16.10 (cursor left). For cursor right, and assuming DOCPTR is not at the end of the document, function 130A determines the length of the next code and function 131A moves the "hole" to the right of the next code. Functions 132A and 133A check a further code and the one just moved past to see if they are non-displayed. If either is, functions 130A and 131A are performed again; the loop of functions 130A-133A is performed until neither of the tests imposed at functions 132A and 133A is met. At that time function 134A stores an insert code and a NULL code at DOCPTR. The net effect of cursor right is to move the "hole" in document memory past the next displayed code. Thereafter, function 273 (FIG. 16.1) is performed to rewrite the display, moving the "hole" in the display to correspond to the beginning of the "hole" in document memory. The processing shown in FIG. 16.10 also effects a movement of the "hole" in document memory, but this movement is to the left; otherwise the effect is similar. Function 135A is included to get the cursor past the non-displayed code after KBSCLT moves the cursor to the left.
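The hole-moving loop of functions 130A-133A can be sketched as follows. Representing document memory as a list with a None "hole", and passing the set of displayed codes as a parameter, are illustrative assumptions (the actual machine distinguishes displayed from non-displayed codes by their code values).

```python
# Sketch of the cursor-right loop of functions 130A-133A (FIG. 16.8);
# the list-with-None model and the `displayed` parameter are assumptions.

def move_hole_right(codes, hole, displayed):
    """Move the insert 'hole' rightward past the next displayed code,
    carrying it over any adjacent non-displayed codes.

    Assumes the hole is not at the end of the buffer, per the text.
    """
    while True:
        moved_past = codes[hole + 1]                      # function 130A
        codes[hole], codes[hole + 1] = moved_past, None   # function 131A
        hole += 1
        following = codes[hole + 1] if hole + 1 < len(codes) else None
        # Functions 132A-133A: repeat while either the code just moved
        # past or the following code is non-displayed.
        if moved_past in displayed and (following is None or following in displayed):
            break
    return codes, hole
```

As the text notes, the net effect is that one cursor-right keystroke moves the insertion point past exactly one displayed code, silently skipping buried non-displayed codes.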
Alternatively, if the operator had selected a cursor up or down, then, before functions 411 and 412 are called to process this cursor motion, function 405 removes the machine from insert mode. Thus cursor left or right is effective, in insert mode, to move the insertion location; other cursor motion cancels the insert mode.
On the other hand, if a cursor key is operated in FAX mode, the processing proceeds via function 447A (FIG. 16.1), and function 136A (FIG. 16.6) recognizes the cursor key code. Processing picks up at J (FIG. 16.8), where function 137A is performed, as is function 138A (since we have assumed insert mode). Function 138A has the effect of removing insert mode by closing up document memory and rewriting the display. Following function 138A, processing calls the appropriate cursor motion subroutine. Thus, in page image, any cursor motion cancels insert mode.
As is well known to those skilled in the art, hyphenation is conventionally an interactive function whereby the operator is informed, by the processor, in some fashion or other, that a particular word extends beyond a margin, and the operator indicates whether or not hyphenation is to be effected, and if so, the point at which the hyphen should be placed. As pointed out above, it is an advantage of the invention that, in the course of hyphenation, in addition to providing an operator with an indication of the extent to which the word extends beyond the margin, the operator is also informed as to the average length of the preceding lines on the page. In this fashion a more uniform right margin may be achieved. The hyphenation function is a special modifier of the page image mode. Thus, one technique for asserting the hyphenation function, or requesting hyphenation information, is achieved by simultaneously depressing the page image key 108 (FIG. 2) and the code key 137. Other techniques for entering the hyphenation mode will be apparent to those skilled in the art. The operator's depression of the foregoing key combination results in the storage in the keyboard queue of a code identifying that particular combination. When the main system program is run and that particular code is encountered (refer to FIG. 16.5), function 465 recognizes the code corresponding to the page image or FAX key and the processing skips via point Q (FIG. 16.9A), wherein function 650 identifies the page image mode key 108 and function 651 recognizes that it is a coded function. Accordingly, function 742 sets the hyphen flag and function 743 changes the CRT chip to the page image mode (in the manner discussed above) and calls KBPAGEIM, passing to it the identification of the current page. The latter subroutine is shown in FIG. 16.12, wherein function 466 calls KBPAGEPRG, again passing to it the start of the current page. This subroutine is shown in FIG. 19.1.
Before describing the processing in detail, we will first describe how the operator and machine interact to achieve the overall hyphenation function. For this description we will assume that the operator has keyed in a coded page image command and that in the course of writing the FAX display the processor encounters a word which crosses the right margin. For purposes of example, we will assume the word crossing the right margin is the word "understanding". At that point the writing of the FAX RAM ceases, the processor switches to text mode and produces a display on the CRT of the form shown in FIG. 21.1. As shown in FIG. 21.1, what is displayed is the word which crosses the right margin, an arrow, and an identification of the meaning of that arrow (right margin), meaning that the letters "tanding" of the word are to the right of the right margin. Without any further operator action, the display then changes to that shown in FIG. 21.2. As shown, in addition to the preceding, a second arrow is illustrated along with the legend "AVERAGE RIGHT MARGIN", indicating the location of the average right line ending for a given number of preceding lines above the line ending in the word "understanding". The given number of lines could be all the lines on a page preceding this line, or it could be a smaller fixed number of lines. The manner in which this quantity is determined will become clear as the processing is described in detail. Also shown in FIG. 21.2 is the cursor, which is illustrated to the right of the last character in the word. At this point the processor inserts a hyphen and awaits operator action.
The operator may then actuate the cursor left key; each time the cursor left key is actuated, the cursor moves left one position. The first actuation, in effect, opens up a "hole" in the word by moving the last character to the right of the cursor location one character space (to the next line); thereafter the "hole" is moved one space left each time cursor left is operated. Thus, for example, FIG. 21.3 illustrates the appearance of the display after the operator has actuated the cursor left key once. The operator then continues operating the cursor left key until such time as the cursor is in the position at which a hyphen is desired. When the cursor is in the correct position for hyphenation, the operator actuates the page image key (108-FIG. 2) to indicate that the processing should terminate; the portion of the word to the left of the cursor is maintained on the current line, a hyphen is added thereafter, and the remaining characters of the word are moved to the next line in the document. The manner in which the foregoing interaction is achieved is now explained in detail.
The processing beginning at FIG. 19.1 proceeds much as did the processing for the uncoded page image, the only difference, of course, being that now the hyphen flag is set. In the course of the processing, function 471 will now recognize that the hyphen flag is set, and therefore function 472 zeros two software counters, TOTAL LENGTH and HYPHEN COUNT. The processing again proceeds in the manner discussed above, and when the first writing line is reached (function 482), the processing begins to verify that each word found will fit on the line; when a word is found which fits on the line being processed, the word is displayed, and when a word will not fit, the next display line is begun and the appropriate entries are made in Table B. Before identification of a first word, two additional software registers, OLD DOCPTR and WORD LENGTH, are also set to zero by function 507.
The look ahead processing to determine if a word will fit on a line is shown in FIG. 19.4. Each time through this processing, if the hyphen flag is set, function 507 zeros the registers OLD DOCPTR and WORD LENGTH. When a space is recognized on a line by function 514, function 515 checks to see if the hyphen flag is set; if it is, function 516 sets the end of word flag and the processing skips via point H2 to display the word. At the conclusion of displaying the word the processing loops back to that shown in FIG. 19.4, where the next word is examined. On each pass WORDCTR is incremented, and function 519 checks to see if the word will fit on the line so far. Assuming the hyphen flag is set (checked at function 520), each character, other than the character immediately following a space, results in a negative determination at function 521, since the end of word flag is not then set. For the first character of a word following a space, which is within the page margins, function 521 results in resetting the end of word flag at function 522. If, on the other hand, the space which was the word ending for the preceding word is within the right margin, but the next character is outside the right margin, function 519 will determine that the word will not fit on the line so far, and function 523 checks to see if the word flag is set. If it is, then function 525 resets the end of word flag and functions 526-527 are performed. Function 526 adds one to the register HYPHEN COUNT, which had been set to zero on entry to the hyphenate mode, and function 527 adds CHARCTR (the total length of the line just ended) to the register TOTAL LENGTH (which had also been zeroed on entry to hyphenate). In this fashion, each line which occupies at least the entire left margin to right margin space increments HYPHEN COUNT by one, and TOTAL LENGTH by the number of positions on the line.
In the more typical case, when the character which crosses the right margin is preceded by another character rather than a space, the scenario is a little different. Let us look at the processing of the first character of the word which includes a character crossing the right margin. In that processing, function 519 will determine that the word will fit on the line so far (since we have postulated that it is not the first character of the word that crosses the right margin). Accordingly, and assuming the hyphen flag is set, when function 521 recognizes that the word flag is set, function 522 will reset it. Thereafter, since the same word crosses the right margin, the word flag will remain reset, so that when a character is reached which crosses the right margin, function 519 determines that the word will not fit, function 523 sees the word flag reset, and therefore function 525 is not performed. In this case, since the hyphen flag is set, the processing skips to point F3 (FIG. 19.5).
At that point, function 744 loads a hyphenate memory (a portion of RAM) with three carrier returns; function 745 adds 24 space codes, and function 746 adds codes or bytes selected to display the phrase "right margin". Function 747 then adds a carrier return and 24 additional space codes. This portion of RAM, or hyphenate memory, is being created as a pseudo document memory which will be read by the text mode routine to effect a display. The code thus far entered into the pseudo document memory provides for line spaces from the top of the screen, spaces over to the approximate center of the screen, provides for displaying the phrase "right margin", drops down another line, and spaces over to a location immediately below the "r" in the word "right". Thereafter, function 748 provides a byte selected to display a left-downwardly pointing arrow, followed by a carrier return. This brings the display to the line on which the word "understanding" is displayed in FIG. 21.1. Thereafter, function 749 adds a number of space codes equal to 24-WORDCTR (WORDCTR being the number of positions from the beginning of the word to the right margin); this provides for spacing over to the beginning of the word. Thereafter, function 750 transfers the bytes designating this word from document memory, starting at DOCPTR (which is pointing to the beginning of the word which crosses the right margin), to the hyphenate memory. The foregoing functions thus provide for creating a document memory which can produce a display such as that shown in FIG. 21.1.
Function 751 saves the cursor location (since it will be moved) and function 752 sets the cursor location to the next position in the hyphenate memory, i.e., succeeding the last character in the word; this provides for displaying the cursor as shown in FIG. 21.2. Thereafter, function 753 sets the software register WORD LENGTH to the length of this word, that is, the length of the word crossing the right margin. Function 754 adds an insert code and a carrier return; the insert code is that which is recognized by the processor as effecting a change into or out of insert mode. The following carrier return will in effect move the display writing to the next line. Function 755 determines if this is the first line of a document or a block of text; if it is, there is no point in determining what the average right margin is, since there are no preceding lines from which to determine an average. Assuming, however, that the word overhanging the right margin is not on the first line of text in the document page or in a block, then functions 756 through 758 provide for the display shown beneath the word "understanding" in FIG. 21.2. More particularly, function 756 adds a number of spaces equal to the algorithm shown in FIG. 19.5. The expression in the inner parenthesis is the average right margin, since it is the ratio of total length (which had been accumulating line lengths ending at or beyond the right margin) and the quantity hyphen count, which is the number of such lines. This is subtracted from the difference between the left and right margins, and the result is subtracted from 24, the position from which the display of FIGS. 21.1-21.4 is written. Thus, at this point function 757 writes the left-upwardly directed arrow followed by a carrier return to move the display down another line.
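The spacing computation performed by function 756 can be sketched as follows. The function and parameter names are illustrative, not taken from the listings, but the arithmetic follows the algorithm of FIG. 19.5 as described above:

```python
# Hypothetical sketch of function 756's spacing computation (names assumed).
def average_margin_column(total_length, hyphen_count, left_margin, right_margin):
    """Column (from screen left) at which the 'average margin' marker is drawn.

    The inner ratio is the average right margin over the lines that ended at
    or beyond the right margin; the display of FIGS. 21.1-21.4 is written
    starting at column 24.
    """
    average = total_length // hyphen_count                 # average right margin
    return 24 - ((right_margin - left_margin) - average)   # per FIG. 19.5
```

For example, with five prior qualifying lines totalling 300 character positions and margins of 10 and 75, the marker falls at column 19.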
Function 758 adds a number of spaces in this following line which is equivalent to the quantity determined by function 756, so that the first character written will be immediately below the left-upwardly directed arrow. And at this point, function 758 adds the phrase "average margin". Function 759 then adds an EOD byte to indicate that this is the end of the (pseudo) document memory.
Thereafter, function 760 calls the sentence subroutines to write the text RAM, passing the address at the beginning of the hyphenate (pseudo document) memory. The sentence routine operates in a manner described above to write the refresh RAM. When the subroutine returns, function 761 replaces the insert graphic, in the text RAM with the hyphen graphic. Function 762 then alters the parameters in the CRT chip to the text mode, thereby effecting the display as shown for example in FIG. 21.2. At this point, function 763 is in a wait mode, awaiting operator action. The only meaningful action that can be taken by the operator is operation of the cursor left key or operation of the page image key. Each time cursor left is depressed functions 765-770 are performed. More particularly, the insert code (in the pseudo document memory) is interchanged with the prior code or codes. Thus, for example, if the operator actuates the cursor left key once, the cursor position, originally following the g, now precedes the g, as shown in FIG. 21.3. Function 767 adjusts the quantities in WORD LENGTH and cursor location address, function 769 calls the sentence subroutine to rewrite the display and function 770 then replaces the insert graphic with a hyphenate graphic, similar to function 761. This loop is performed each time the cursor left key is depressed, such that the first time it is depressed the display of FIG. 21.3 results, and if the operator depresses cursor left two additional times, the display of FIG. 21.4 results.
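The cursor-left interchange of functions 765-770 amounts to a swap of the insert code with the code immediately preceding it in the pseudo document memory. A minimal sketch, with an assumed list representation of that memory:

```python
# Illustrative sketch of the cursor-left loop (functions 765-770): the insert
# code is interchanged with the prior code, moving the hyphenation point one
# character to the left each time the cursor left key is depressed.
def cursor_left(memory, insert_pos):
    """Swap the insert code with its predecessor; return the new position."""
    if insert_pos > 0:
        memory[insert_pos - 1], memory[insert_pos] = (
            memory[insert_pos], memory[insert_pos - 1])
        insert_pos -= 1
    return insert_pos
```

One depression moves the insert code from after the "g" of "understanding" to before it, as in FIG. 21.3; repeated depressions continue leftward, as in FIG. 21.4.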
At this time we assume that the operator actuates the page image key indicating the conclusion of hyphenate interaction. Function 766 determines if the cursor has been moved. If it has, processing skips via point H* to function 771, where the discretionary hyphen graphic is located in document memory as the sum of OLDDOCTPTR plus WORD LENGTH; OLDDOCTPTR pointed to the beginning of the word which was the hyphenate candidate, and the sum of that quantity plus WORD LENGTH indicates the number of characters in the word which will remain on the line which is now ended, so that the hyphen graphic is placed in document memory in a manner such as shown in FIG. 21.4. Thereafter, functions 772-775 are performed, and these same functions are performed if at function 766 it was determined that the cursor was not moved. More particularly, to determine the actual length of the line the quantity WORD LENGTH is added to TOTAL LENGTH, HYPHEN COUNT is incremented, the sum in CHARCTR is added to TOTAL LENGTH, and function 775 restores the cursor location address to that saved at function 751. Moving now to FIG. 19.7, as the word "understanding" is being placed in the FAXBYTE and the hyphen flag is set (decision 561), when the "i" is encountered decision 562 will be true and force the display line to be terminated after the discretionary hyphen by turning the program to the routine which simulates a CR (G1--see FIG. 19.6).
The attached program listings do not include the hyphenation function.
The graphic subroutines, illustrated in FIGS. 20.1-20.12, are called in the page image or FAX mode using data passed to them in order to assist in formatting functions. To provide this assistance the graphics routine provides for the writing, on the display, of horizontal and vertical grids with half inch and one inch markings, respectively; the graphics routine also identifies, on a horizontal line at the upper portion of the screen, active tab locations, as well as the margins in effect at the location of the cursor in the document. At the left side of the screen dots represent the vertical position of the first and last writing line. In addition, the graphics routine also draws the page outline, if that outline is to be something other than the standard 8.5 inch×11 inch page; and it should be understood by those skilled in the art that, while the present embodiment employed hardware to generate the 8.5×11 inch page outline, the graphic subroutine could have been used instead. The graphics routine is driven by compacted data located in a font table defining the image to be drawn; this compacted data consists of a plurality of bytes of one of two classes, either a "FIRSTBYTE" or a "COMMANDBYTE". The processing performed by the graphic subroutines manipulates the data and is capable of setting and/or clearing bits in RAM. The graphics routine capabilities can be used either to directly write the page image refresh RAM or to write the OR memory, and this capability is used to write both memories; the page outline and horizontal and vertical grids are written in the OR memory, while active tabs and active left and right margins, as well as the first writing point and last writing line, are written directly into the refresh RAM.
In operating on the compressed data, the processing routine first looks at a "FIRSTBYTE" to determine the effect of the succeeding one or more "COMMANDBYTES". The first byte determines whether each selected bit location is to be set or cleared, and whether the pointer should be reinitialized to that in existence at the time the first byte is read. The COMMANDBYTES in effect provide for manipulation of a pointer, pointing into the memory to be written, and the pointer is manipulated so as to selectively locate certain bits for manipulation. In the course of manipulating the pointer the bits passed over may be either set, cleared or skipped (in the latter event, of course, the bits remain in their previous condition).
Turning now to FIG. 20.1, the entry points to the graphic subroutine are illustrated, and as shown there are three different entry points: a first entry point, headed GRAPHIC 1, a second entry point, headed GRAPHIC 2, and a third entry point, headed GRAPHIC 3. The first entry point will result in modification of the page image refresh RAM, the second entry point provides for modification of the OR memory and the third entry provides for modification of some other RAM location which is identified when the subroutine, at the third entry point, is called. In the embodiment of the invention actually constructed, this third entry point is never used.
Accordingly, as shown in FIG. 20.1, function 851, at the first entry, saves the FONT storage pointer which was passed by the calling routine, and retrieves an offset address into the refresh RAM, so memory locations in the refresh RAM can be modified. Similarly, function 852, at the second entry point, provides for storage of the FONT pointer, and uses an offset address to the OR memory so locations in that memory can be modified. Thereafter, regardless of the entry point, function 854 sets a bit to require setting of each memory location; if the "FIRSTBYTE" provides for clearing or skipping, the bit set at function 854 will be modified. Function 855 checks to see if the entries in the FONT TABLE are non-zero at the address specified by the pointer. Assuming the locations are non-zero (identifying an active table entry), function 856 retrieves the FIRSTBYTE and stores it at a location identified as FSTBYT. Function 857 branches on bit 7 of FSTBYT, function 860 being performed if that bit is set. Function 860 provides for resetting the memory pointer (DISPTR) to the initial entry value and re-effecting function 854. After completing function 860, or if it is not performed, function 858 is performed to check bit zero of FSTBYT. If set, function 861 clears the bit set at function 854 or 860 so that thereafter memory bits are cleared rather than being set. If function 861 is not performed, function 859 is performed to check bit 1 of FSTBYT; if reset, the routine returns via an error condition, but if set, or if function 859 is not performed, function 862 is performed to advance FNTPTR to the next position in the FONT TABLE. Thereafter, the processing skips to point BA (see FIG. 20.2).
Since FNTPTR is now incremented, it points to what should be a command byte rather than a first byte. Accordingly, function 863 checks to see if this entry is non-zero. If the entry is zero, then function 864 is performed to check bit zero of FSTBYT. If set, function 872 is performed to return to the default or initial setting of the display pointer and function 873 advances the FONT TABLE pointer. However, if bit zero was reset, then function 865 is performed to check bit 1 of FSTBYT. If set, function 873 is performed to advance the FONT TABLE pointer; if reset, an error return is performed.
In the normal course of events, the FONT TABLE entry is non-zero and therefore function 866 is performed to copy this byte into COMBYT. Thereafter, function 867 copies bits zero to four of COMBYT to DISBYT. Bits 5 and 6 of COMBYT are direction bits, and depending on the bit combinations they can direct the hardware pointer left, up, right or down and this is detected by one of functions 868-871. Depending on the bit combination processing skips to one of points DA-GA. For purposes of explanation we shall refer to point DA, assuming the bits 5 and 6 combination in COMBYT is pointing left.
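Gathering the byte layouts described above, a hedged sketch of the decoder follows. The bit assignments (FIRSTBYTE bit 7 = reinitialize pointer, bit 0 = clear mode; COMMANDBYTE bits 0-4 = count, bits 5-6 = direction, bit 7 = modify rather than skip) follow the text, but the numeric encoding of the four directions and the two-dimensional bitmap representation are assumptions for illustration:

```python
# Hedged sketch of the compacted-graphics decoder (FIGS. 20.1-20.3).
LEFT, UP, RIGHT, DOWN = 0, 1, 2, 3   # assumed encoding of COMMANDBYTE bits 5-6
STEP = {LEFT: (0, -1), UP: (-1, 0), RIGHT: (0, 1), DOWN: (1, 0)}

def draw(font_table, bitmap, start_row, start_col):
    row, col = start_row, start_col
    i = 0
    first = font_table[i]; i += 1          # FIRSTBYTE
    clear = bool(first & 0x01)             # bit 0: clear rather than set
    if first & 0x80:                       # bit 7: reinitialize the pointer
        row, col = start_row, start_col
    while i < len(font_table) and font_table[i] != 0:
        cmd = font_table[i]; i += 1        # COMMANDBYTE
        count = cmd & 0x1F                 # bits 0-4: repeat count
        direction = (cmd >> 5) & 0x03      # bits 5-6: direction of motion
        modify = bool(cmd & 0x80)          # bit 7: set/clear vs. skip
        dr, dc = STEP[direction]
        for _ in range(count):
            if modify:                     # GRMODBYT: touch the current bit
                bitmap[row][col] = 0 if clear else 1
            row += dr; col += dc           # GRBKDSPT: step the pointer
    return bitmap
```

A FIRSTBYTE of 0x02 (set mode) followed by a COMMANDBYTE of 0xC5 (modify, rightward, count 5) draws a five-bit horizontal segment, which is exactly how the grid and tick marks of FIGS. 11A-11B can be produced.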
Refer to FIG. 20.3. As shown, the processing provides a loop which is performed once for each count contained in DISBYT. In the course of this loop, function 872 first calls GRMODBYT; the processing for which is shown in FIG. 20.4.
Substantively, function 877 checks bit 7 of COMBYT. If reset, no modification is required. If set, then one of functions 880 or 881 will be performed to either clear or set the bit at which the display pointer is pointing. Which function is performed is determined by checking bits 0 and 1 of FSTBYT; depending upon the condition of those bits, detected by one of functions 878 or 879, the current bit is either set or reset and a return effected. Function 873 determines if an error was detected (the error return of FIG. 20.4) and, assuming no error was detected, function 874 is performed to call GRBKDSPT, the processing for which is shown in FIGS. 20.5 and 20.6. The processing in FIG. 20.5 is effected if the graphics routine is in the bit setting mode, and the processing of FIG. 20.6 is performed if the graphics routine is in the bit clearing mode. Referring first to FIG. 20.5, and assuming that function 882 has determined that the processing is in the set mode, function 883 determines if the display pointer is at the left edge. The display pointer points to a byte, and to select bits within the byte a mask is used, the mask normally pointing to the first bit of a byte; after that bit has been operated on the mask may be shifted up to 7 times, whereafter it returns to its original position and the display pointer is decremented. Accordingly, if the display pointer is not at the left edge, then function 884 shifts the mask to select a different bit and function 885 determines whether or not the mask has been shifted eight times; that concludes the typical processing. Once the mask has been shifted 8 times it is back at its original position and therefore function 886 is performed to decrement the display pointer to point to a different byte. If the display pointer is at the left edge, determined in function 883, then function 887 is performed to determine whether or not the mask has been shifted 8 times.
If it has not, then function 888 can be performed to shift the mask one bit position. If it has been shifted 8 times then it cannot be shifted any further and that concludes the processing.
Similar processing is effected for the processing of FIG. 20.6.
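The mask-and-pointer bookkeeping of FIGS. 20.5-20.6 can be reduced to a small sketch. This is a simplification that tracks only the byte pointer and the shift count, under the assumption that "left edge" means the pointer has reached the first byte of the line:

```python
# Minimal sketch of GRBKDSPT's leftward step (FIG. 20.5): a one-bit mask
# selects a bit within the byte addressed by the display pointer; after
# eight shifts the mask is back at its home position and the pointer is
# decremented to address the previous byte.
def back_up_left(disp_ptr, mask_shifts):
    """Return (new_ptr, new_shift_count) after one leftward step."""
    if disp_ptr > 0:                  # not at the left edge (functions 884-886)
        mask_shifts += 1
        if mask_shifts == 8:          # mask back at its original position
            mask_shifts = 0
            disp_ptr -= 1             # select the previous byte
    elif mask_shifts < 8:             # at the edge: shift only while possible
        mask_shifts += 1              # (functions 887-888)
    return disp_ptr, mask_shifts
```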
At the conclusion of GRBKDSPT, function 875 is performed to decrement DISBYT. This loop is performed a number of times indicated in DISBYT, and function 876 checks to see if the loop has been performed the required number of times. When it has the processing returns to function 862 (FIG. 20.1) to select the next byte from the FONT TABLE.
Accordingly, it should be apparent that the processing will allow the setting, clearing or skipping of selected bits in RAM. While the leftward motion has been discussed in detail, essentially similar processing is performed for the other possible directions of motion, i.e., up, to the right and down.
With the ability to advance through memory, setting, clearing or skipping bits as determined by FIRSTBYTE, coupled with the ability to return to the initial setting and operating in a different direction, it should be apparent that this routine is capable of writing a vertical or horizontal line, or both, and writing the relatively short and slightly longer horizontal or vertical line indicating 1/2 inch and one inch markers, as shown, for example, in FIGS. 11A-11B.
In addition, in order to identify first writing point, last writing line, and the active left and right margins as well as the tab grids, it should also be apparent how the FONT TABLE can be loaded during the course of processing with the active parameters defining the locations of these points, so that when graphics routine at entry point 1, calls the FONT TABLE, those active parameters will be derived and used in a similar manner to locate the first writing point, last writing line, and active left and right margins as well as the active tab grid.
As has been mentioned above, the operator can, after initially keying in, or when editing, a document including some text, add one or more text matrices which have special characteristics, allowing the operator to move such a text matrix as a unit with respect to other text in the document, or to delete the text matrix in a very simple fashion. Such a text matrix is termed a block, and in order to explain block operations we will assume that the operator has a document with some text in memory and the document is being displayed in the page image mode.
In a manner described above the operator positions the cursor to an area which is outside of the main text, and accordingly, in a manner also described, the cursor is blinked.
When the operator begins keying in text with the machine in this condition, the insert mode key is depressed followed by the desired text. As the keyboard queue is processed, function 465 (FIG. 16.1) recognizes the insert mode code and processing skips via point Q (See FIG. 16.9A). At this point, function 643 recognizes an entry to the insert mode, and function 644 determines that the machine is not in the text mode. Thereafter, function 926 determines that the cursor is blinking (as it is here, since the cursor lies outside the main text), and thus function 927 determines if DOCPTR is on EOD and the document memory is empty. Since, in our example, this is not the case, function 928 is performed to determine if the cursor is between the first and last writing lines, there is text on the line and there is space in document memory. Assuming all these conditions are passed, function 929 is performed to scan the document memory to locate a paragraph boundary (following the position of the cursor); at this point the document memory is opened, much as in the case of typical insert mode operation, and a BLIB code is entered at the beginning of the open area. This code indicates the beginning of a block. Function 930, using the cursor position, determines the document line number for the block and the horizontal position of the block; the latter parameter is stored as a temporary left margin for this block, the former is stored in the BLIB code. Thereafter, function 931 stores a temporary right margin (set to the document right margin), a temp tab register (with five unit settings), an insert code and a null code. The tab register referred to provides tabs at each fifth character space. This can be modified by the operator. The BLIE code is stored at the end of the "hole" in document memory. Inasmuch as this insert mode code on the keyboard queue has now been processed, function 932 decrements the keyboard queue pointer and function 933 stores a text mode code in the queue.
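The sequence of codes that functions 929-931 leave in the opened "hole" of document memory might be sketched as follows. The mnemonics are taken from the text, while the tuple representation and byte ordering are assumptions for illustration:

```python
# Illustrative layout of the codes framing a new block insert
# (functions 929-931); values are assumed, mnemonics are from the text.
def open_block(document, at, line_number, cursor_column, right_margin):
    """Open document memory at a paragraph boundary and frame a new block."""
    block = [
        ("BLIB", line_number),   # block-insert-begin, carries the line number
        ("TL", cursor_column),   # temporary left margin = cursor column
        ("TR", right_margin),    # temporary right margin = document RM
        ("TTAB", 5),             # temp tab register: a tab every 5 units
        ("INSERT",),             # insert code - keyed text lands here
        ("NULL",),               # null code
        ("BLIE",),               # block-insert-end, at the end of the "hole"
    ]
    return document[:at] + block + document[at:]
```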
In this condition, the processing returns via point U and determines that the keyboard queue is not empty, since at least the text mode code (inserted at function 933) is stored there. Since this code is that of the text mode key, function 465 again recognizes the code and the processing skips via point Q again to FIG. 16.9A. At this point function 934 recognizes the text mode code. Function 935 determines that DOCPTR is not on a BLIE code (it was left on the insert code at function 931). Function 937 changes the CRT chip to display in text mode. Function 938 calls KBSENTEN (FIG. 17.1) to display the document in sentence or text mode. The processing then begins at the start of the document and rewrites table A. Function 939 then determines whether or not the cursor is on the first screen of the document; if it is not, function 940 is performed to call KBSNDOCR (SENTROLL) (see FIG. 17.6). At this point function 941 gets the TABAPTR and function 942 sets SENTPTR to the beginning of the display memory or refresh RAM. Function 943 obtains the cursor location address. Thereafter, the functions previously performed in relation to FIG. 17.6 are executed; this includes, at function 283, getting DOCPTR corresponding to the screen which will be displayed. The processing then skips via point A (see FIG. 17.1). The display proceeds much in the manner previously described until a buried code, for example BLIB, is detected. This provides, via functions 944 and 945 (FIG. 17.1), appropriate entries in the text table. Since the BLIB is considered a buried code, function 322 (FIG. 17.2) skips the processing to the buried code processor via point J (see FIG. 17.11). Function 801 checks WORDLEN. Since the buried code is considered a word ending if it is not separated from a word by a space, processing skips to point E to determine if the word thus far developed can be displayed on the line which is then being displayed.
If it can, the processing will return to the buried code processor to process the buried code. Accordingly, we will assume that at this point WORDLEN is equal to zero. Accordingly, functions 802, 813 and 816 determine if the buried code is a temporary right margin, temporary tab or temporary left margin. Since we have assumed that the buried code is a block insert begin, the processing would skip via point L (See FIG. 17.12). At this point function 831 checks to see if the buried code is a block insert begin. Assuming it is, function 832 sets the block insert indicator and loads LPAD with the quantity 6. Function 833 checks to see if we have gone past the end of the display. Assuming we have not, function 834 effects the special display (for block insert) starting line. This is accomplished by calling BLKINS10. That processing is shown in FIG. 17.10. As shown, function 845 moves SENTPTR to position 4 on the line. Function 846 displays the special character (the downwardly directed arrow) and function 847 continues the line by displaying a sequence of hyphens; function 848 terminates the line with another downwardly directed arrow (for an example of this display see FIG. 11E). That concludes BLKINS10. Function 835 indicates a special starting line in the text table (to prevent the text cursor from entering this line) and function 836 skips past the block insert code; the processing then skips to point E1 (see FIG. 17.3). At this point functions 347-349 set up to display the next line. Since back at function 930 (FIG. 16.9B) a temporary left margin code was inserted, in the course of the processing in text mode function 293 (FIG. 17.1) would identify that buried codes were active. Since the temporary margin is another buried code, function 322 (FIG. 17.2) directs the processing again to the buried code processor (FIG. 17.11). At this point, function 816 identifies a buried code which is a temporary left margin. Function 817 sets the TL indicator in the text table.
Function 819 provides for the display of the vertical line (again see FIG. 11E) and, assuming that the temporary left margin is greater than the global left margin, function 821 shows the rightward directed arrow. On the other hand, if the temporary left margin was less than the global left margin, then functions 822 and 823 would result in the display of a leftward directed arrow. After displaying these special graphics, function 830 bumps the DOCPTR past the buried codes and the processing skips via point A4 (See FIG. 17.2) to display the text in the block. After displaying the text, DOCPTR reaches the buried codes corresponding to BLIE. This again directs the processing to the buried code processor, wherein function 837 (FIG. 17.12) identifies a block terminator, BLIE. Accordingly, function 838 resets the block insert indicator and the temporary left and temporary right indicators, and resets LPAD and LINELEN to the global parameters. Function 839 resets the indent tab amount. If the end of the display has not yet been passed (detected at function 840), function 841 displays the block insert ending line with up arrows by again calling BLKINS10. Function 842 indicates a special line ending in the text table and function 843 skips past the block terminating codes to display the remainder of text on the line.
Accordingly, the preceding description should suffice to indicate the manner in which the block insert and associated text results in a display shown for example in FIG. 11E.
Furthermore, the description has indicated that when a block insert is detected an automatic switch to the text mode is effected so that the operator's entry of text in the block mode is made in connection with a text mode display. At the conclusion of keyboarding the inserted text, the operator is free to switch back to the page image mode.
In that event, and assuming that the cursor has not been moved off the page including the block insert, when the page image mode keycode is recognized in the keyboard queue, function 465 directs the processing via point Q to function 650 (FIG. 16.9A). Since this is the page image mode key, function 651 determines if it is a coded function. Assuming it is not, function 652 changes the hardware display chip to display in page image by calling KBPAGEIM, passing to it the identification of the current page. That processing has already been discussed and is shown at FIG. 16.12. Essentially, the only processing is to call KBPAGEPRG (FIG. 19.1). As discussed, the initial processing to write the refresh RAM skips block insert codes; at the conclusion of writing the refresh RAM main text, function 467 (FIG. 16.12) determines if a flag was set to terminate page image processing early. If such a flag was not set, then function 589 calls KBFAXBI to process block inserts. That processing is shown in FIG. 19.2, and is discussed above.
To effect a block move, the operator first positions the cursor to coincide with any character in a block desired to be moved. When the cursor is so located, DOCPTR is altered so as to point to the address in document memory of the character at which the cursor is located. Thereafter, if the cursor is moved wholly outside of the text filled area (that is it coincides neither with another block nor with any main text) then DOCPTR remains pointing at the character in the block from which the cursor was moved. At this point, when the operator has located the cursor in the desired new position for the block, key 133 is actuated concurrently with the code key 137 (see FIG. 2). This is a coded M or a block move instruction. When that code is reached in the keyboard queue function 447 will recognize the coded function and processing skips to point G (FIG. 16.2) wherein a number of coded functions are searched for. Processing continues through FIG. 16.3 until, at FIG. 16.4, function 912 recognizes the coded M concurrent with the fact that the machine is in page image mode and that DOCPTR is located within a block. Accordingly, processing skips via point AB (FIG. 16.15). At this point, function 913 determines that the cursor is located between the first and last writing lines. If it is not, processing terminates since the block cannot be moved beyond the last writing line or before the first writing line. Assuming however, that the cursor is properly located, function 914 determines if there is text on this line. This is accomplished by reference to Table B. Assuming there is, function 915 obtains the line number from the block (in document memory) and modifies it to match the vertical location of the cursor.
This code, stored in document memory, along with the links in Table B, identifies the location for the block, and thus function 915 has taken the first step to moving this block. When the block was stored, its left margin was determined by the location at which the cursor had been located when the block insert was recognized. Since the block is now being moved, this left margin value, which is also stored in document memory, may have to be modified. Therefore, function 916 retrieves the left margin value and calculates the difference between the left margin as stored in document memory and the present horizontal location of the cursor. With this difference, function 917 begins searching through the block for a temporary left or a temporary right margin; when found, each is modified with the quantity calculated in function 916. When no further temporary left or temporary right margins are found, function 918 is performed to call KBPAGEIM to display the now modified page. The main function of that processing (see FIG. 16.12) is to call KBPAGEPRG. When first called (see FIG. 19.1), main text is written. Assuming the running was not terminated early, then at the completion of main text function 589 calls KBFAXBI to display the blocks in the document. That processing is shown in FIG. 19.2. In the course of this processing function 592 starts DOCPTR at the start of memory and function 593 locates block inserts. For each block insert found, function 622 gets the document line number and, assuming it is located on the page being displayed, functions 627 et seq. make appropriate entries in Table B. In view of the preceding discussion, however, note that since the block's document line number has been changed as a consequence of the block move function (see function 915--FIG. 16.15), the links written at function 627 et seq. will now be different than they had originally been because the document line number for the block is now different.
Significantly, the location in document memory at which the block codes and associated text are stored has not been altered. However, because the document line number for this block has been altered, it will be displayed in a new location.
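A hedged sketch of the block-move bookkeeping of functions 915-917 follows. The (mnemonic, value) representation is assumed; note that, as the text emphasizes, only the stored values change while the block's bytes stay where they are in memory:

```python
# Sketch of the block-move adjustment (functions 915-917): the block's stored
# line number is rewritten to the cursor's vertical position, and each
# temporary margin is shifted by the cursor's horizontal displacement.
def move_block(codes, new_line, new_column):
    """codes: the (mnemonic, value) entries between BLIB and BLIE inclusive."""
    old_column = next(c[1] for c in codes if c[0] == "TL")
    delta = new_column - old_column                 # function 916
    moved = []
    for mnemonic, *value in codes:
        if mnemonic == "BLIB":
            moved.append(("BLIB", new_line))        # function 915
        elif mnemonic in ("TL", "TR"):
            moved.append((mnemonic, value[0] + delta))  # function 917
        else:
            moved.append((mnemonic, *value))        # everything else untouched
    return moved
```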
Another function to which blocks are especially suited is deletion. For this function, the operator positions the cursor to lie somewhere within a block and then actuates the delete key 106 with key 133 depressed (see FIG. 2). This places a code in the keyboard queue corresponding to the delete condition. In the course of processing the keyboard queue, when this code is recognized at function 459 (FIG. 16.1), processing skips via point I, wherein several tests are encountered for the particular code and associated machine conditions. Of particular interest is function 919, which identifies a delete mode, page image mode, cursor not blinking and DOCPTR located in a block. In that event, function 920 is conditionally performed to close the document memory if the machine is in insert mode; in our example, however, the machine is not. Function 921 is then performed to scan the document memory to locate a block insert begin code and a block insert end code straddling the DOCPTR. When those codes are found, function 922 then closes the document memory for the number of bytes found between the codes, in effect wiping out the block insert codes and all codes lying therebetween. Function 923 is then performed to call KBPAGEIM to in effect rewrite the screen. Since the block has been deleted from document memory, on rewriting the screen that block will, of course, not be found and therefore will not be displayed.
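The delete scan of functions 921-922 can be sketched as a backward search for BLIB and a forward search for BLIE around DOCPTR, followed by closing the memory over that span (the list representation is assumed):

```python
# Sketch of the block-delete scan (functions 921-922): find the BLIB/BLIE
# pair straddling DOCPTR and close document memory over everything between
# them, the straddling codes included.
def delete_block(document, docptr):
    begin = end = None
    for i in range(docptr, -1, -1):            # scan backward for BLIB
        if document[i][0] == "BLIB":
            begin = i
            break
    for i in range(docptr, len(document)):     # scan forward for BLIE
        if document[i][0] == "BLIE":
            end = i
            break
    if begin is None or end is None:
        return document                        # cursor was not in a block
    return document[:begin] + document[end + 1:]
```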
In order to explain the manner in which reformatting can be simply effected, we will assume that the operator has a document in document memory, the machine is in the page image mode and the operator simultaneously depresses keys 118 and 137 (See FIG. 2), resulting in a coded 5. When this code is recognized in the keyboard queue, function 447 will first recognize the coded entry and the processing will skip to point G (See FIG. 16.2). At this point function 925 will recognize the code and the fact that the machine is in page image, and function 924 will set a bit calling for the format line and column display. The resulting key actuations will call up a display for rewriting of the refresh RAM and, with the bit set, the format line and column will be displayed. This concludes the processing; however, the operator may now change document parameters. For example, assume the operator now positions the cursor to the position to which the left margin should be changed. With the cursor thus set, the operator simultaneously actuates keys 132 and 137 to produce a coded L. This code is recognized at function 946 (FIG. 16.2). Thereafter, function 947 checks to see if the machine is in page image and the cursor is to the left of the right margin; any other position of the cursor is not legal for a left margin. Assuming the test imposed by function 947 is passed, function 948 changes the left margin to the current horizontal position of the cursor. Function 949 (FIG. 16.3) calls KBPAGEIM, passing to it the direction to start on page 1 or the beginning of the document. KBPAGEIM has been previously discussed and its function is to reformat the entire document with the new left margin. Function 950 thereafter locates the original position of the cursor on the screen, so that at the conclusion of the operation DOCPTR can be determined by reference to Table B. A similar result is effected if a coded R is asserted, to change the right margin (see functions 951-953, FIG. 16.2).
Instead of changing the left or right margins the operator could have changed a temporary left or right margin; this is effected by either a coded Q or a coded W (see FIG. 2). In that event functions 954 or 955 would recognize the entry, ensure the machine is in page image and the cursor is in a legal position for the new temporary margin. If any of the tests are not passed then the code is not recognized and no action results. However, assuming that the function is recognized one of functions 956 or 957 saves an indication as to whether or not it is the temporary left or temporary right margin which is being modified. Thereafter, function 958 calls KBSTRTMP, to store a new temporary code. That processing is shown in FIG. 16.14.
Firstly, function 959 is conditionally performed to close document memory if the machine is in insert mode. Assuming it is not, function 960 scans the document memory backward for a block insert end, block insert begin, carrier return or required carrier return. Since temporary margins are effective from such a point forward, that point must be found in order to locate the temporary margin which is to be changed. When such a code is found function 961 determines if the point located is in the correct order for the temporary code; the format of some of these codes differ and therefore, it may be necessary to relocate the scan point to point to the desired code to be changed. If the scan point must be changed function 962 is performed, each time it is performed the scan point is changed one code unit (such as BLIB, TL, etc.) until the proper byte is located. Function 963 determines if the code to be modified is already there. If it is not, function 964 stores a code in document memory, moving all the codes up to make room for the code. Thereafter, function 965 writes the code in the document memory, either temporary left or temporary right is stored as the cursor horizontal position. Temporary tabs are treated somewhat differently, and will be explained below. Function 966 determines if the temporary code is in a block. If it is not, function 967 must be performed to scan back in the page queue to locate the page where the temporary code is, since in reformatting, we must start from that location, note that function 960 scans document memory, whereas function 967 is scanning the page queue to locate a page number. If the code is in a block, then reference to the page queue is unnecessary since blocks do not cross page boundaries. Accordingly, function 968 sets a bit so that the page image program will only move the cursor vertically when DOCPTR is found to maintain it within the paragraph. Function 969 then calls KBPAGEIM to write the display. 
After the display is rewritten function 970 determines whether the cursor is on this page, and if not, function 971 is performed looking for the page located at function 967, so that the finally written display will include the cursor.
The coded D is associated with a temporary tab register and is associated with functions 972 and 973 (see FIG. 16.4) in much the same fashion. A temporary tab is larger than the code for a temporary left or temporary right margin and thus, when function 965 is performed firstly, the five unit tab grid is stored in the temporary tab register. Setting or clearing tab points is effected with keys 129 and 131, respectively (see FIG. 2). The coded S (tab set) or coded K (tab clear) is recognized by the main program (see FIG. 16.4) to call KBFXTBST (TABSET) or KBFXTBCL (TABCLR) (see FIG. 19.27). The particular bit of the tab grid operated on is determined by cursor location; the tab register is located in document memory via a pointer to the active tab register. As shown in FIG. 19.27, setting or clearing a tab merely requires setting or resetting (clearing) the appropriate bit.
Another reformatting process is effected by changes in the first writing position or first writing line. This is effected by the operator simultaneously depressing keys 116 and 137, a coded 1. This is recognized at function 974 (see FIG. 16.2). If the machine is not in the page image mode, the command is not recognized and no action results. If the machine is in the page image mode then function 976 alters the first writing point to equal the cursor location by changing the first writing line's left margin, right margin and all TLM and TRM parameters. Since all other temporary margins, etc., are relative to the global parameters, no other changes need be effected. Thereafter, function 977 calls KBPAGEIM to display the first page in the new format. Note that the number of lines per page remains unaltered, although the last writing line changes in the same direction and amount as the change in the first writing line, i.e. the text relationship to itself is unaltered; the only change is in the relationship between the text and the page outline.
in reply to Re^3: scripting frameworks - a cursory glancein thread scripting frameworks - a cursory glance
Finally, what happens when each command needs a cache in addition to the application-wide cache? It can no longer have a local cache via $self->cache because the Application decided for the Command what names it would use within its own object.
You're right it wouldnt. That was a mis-wording. But: the command object should access application object resources via $self->app->$resource not the current way $self->$resource where $resource might be a slot in the application object or a slot in the command object.
It sounds like you are misunderstanding the conceptual relationship between Commands and Applications. Commands, in general, are not supposed to access Application object resources because Commands are usually to remain decoupled from Applications entirely. Commands could conceivably be used outside of an Application. There are exceptions, and for those relatively uncommon cases, the Metacommand is provided (see an example of a Metacommand in use).
The obviously-evil $self->$resource (where $self is a Command and $resource is a slot in the application object) is certainly not allowed. For Metacommands, access to Application object resources is supported exactly the way you endorse (though with slightly different syntax $self->get_app->$resource).
I agree; that's what the cache provides -- one well-defined interface (albeit using two methods instead of one): Cache interface. The commands should (and must) always use that interface. The Application will only interact with the command using this mechanism...
This code (in CLI::Framework::Application):
    # Share session data with command...
    # (init() method may have populated
    # global session data in cache
    # for use by all commands)
    $command->set_cache( $app->cache );
No. Here, I'm sharing ALL session data with the command. The cache is NOT one example of potentially many future resources. It is a container for all resources that will ever be shared between the application and its commands.
Any given Application will set shared data in the cache (during init(), for example) and the Application will automatically pass on a reference to the cache so the Command has access to it (without requiring that the Command be strongly-coupled to the Application).
n-fold fanout Just think of what you're doing... let's just say there are 3, as you put it, "instance variables kept in both Application and Command. ", cache1, db_handle, interprocess_communication. Now, you have 4 commands. With your approach, you install all 3 of these accessors in all 4 commands, for a total of 12 accessors for sharing data between application and command. With my approach, only the app accessor is shared between application and command, for a total of 4 accessors. As the number of shared resources grow, your "accessor bloat" grows linearly. My "bloat" is constant at 1.
Wrong. You've mis-quoted me here. I said "...an instance variable (a reference to the Cache object) [is] kept in both Application and Command." Once again, that Cache object is a single container for all shared data. Keeping a single reference in the Application and Command objects and providing access to it via one accessor in each superclass is quite a bit different than keeping numerous accessors in every single command subclass.
Because all shared data is managed via the Cache and all commands implement a common interface to the Cache (by virtue of inheritance from CLI::Framework::Command), there is no "accessor bloat" -- the only accessor related to shared data is the one documented here: cache accessor in Command class.
Finally, what happens when each command needs a cache in addition to the application-wide cache? It can no longer have a local cache via $self->cache because the Application decided for the Command what names it would use within its own object.
No, CLI::Framework::Command defines an interface that includes the set_cache() method, which is used by the Application. If some future command needs a private cache for some reason, there's no reason it could not implement one. Sure, it couldn't use the method names in its parent class, but that's an elementary namespace issue, an intrinsic property of object inheritance.
I believe all of these things are explained in the docs. If not, I'd gladly accept suggestions on how to clarify. I think that the reason for the disconnect is on both ends -- it's tough to explain all these details and it takes patience to read the explanations. C'est la vie.
Why do Python scripts fail on a machine with both ArcGIS for Server and Desktop installed?
Starting at version 10.1, ArcGIS for Server is a 64-bit application. ArcGIS for Desktop is still a 32-bit application, meaning that if both products are installed on the same machine, there are both 32-bit and 64-bit installations of Python. If a Python script is executed against the 64-bit install of Python and imports arcpy, it uses the ArcGIS Server install of arcpy. Some scripts written to work with ArcGIS Desktop fail to execute when run against the ArcGIS Server install of arcpy due to unsupported tools or data sources.
By default, Windows associates the .py file extension with the last installed version of Python, so if 32-bit Python (Desktop) is installed first, and then 64-bit Python (Server), the 64-bit version is associated with the file extension. File associations are used to determine which executable should be used to execute the file in many scenarios. For example, when a .py file is opened in Windows Explorer, the file association determines which version of Python is used to execute the script. In the case described above, this means the 64-bit version is used. In addition, if a Python script is called from the command line by just passing in the path to the .py file, file associations are used to determine which version of Python is used.
When calling a script from the command line, this can be resolved by explicitly calling the correct version of Python and then passing in the path to the script followed by any arguments to the script.
To change which version of Python is associated with the .py file, right-click any .py file in Windows Explorer and click Open with > Choose Default Program. From the 'Open with' dialog, browse to and select the 32-bit install of Python. (When using the default install location with ArcGIS, this path is C:\Python27\ArcGIS10.1\python.exe). Select OK to commit the change. Now when file associations are relied upon to select the right version of Python, the 32-bit install is used.
IDEs also target a specific version of Python. IDEs like PythonWin or PyScripter generally have specific setups for 32-bit or 64-bit. To work against the 32-bit install of Python, the 32-bit version of the IDE must be installed.
Since IDLE is installed by default with Python, both 32-bit and 64-bit versions of IDLE are installed. To run a script in IDLE against the 32-bit version of Python, it is necessary to run the 32-bit version of IDLE. IDLE can be launched by double-clicking idle.bat from <PythonInstallLocation>\Lib\idlelib.
If unsure which version is being run by the script, the following code can be used to determine this:
Code:
import sys
print sys.version | http://support.esri.com/en/technical-article/000011711 | CC-MAIN-2017-43 | refinedweb | 501 | 73.47 |
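The version string reports the bitness, but it can also be checked programmatically. One portable way (a small helper, not from the original article) is to look at the pointer size:

```python
import struct

def python_bitness():
    # 64-bit Python has 8-byte pointers; 32-bit Python has 4-byte pointers.
    bits = struct.calcsize("P") * 8
    return "%d-bit" % bits

print(python_bitness())  # "64-bit" under an ArcGIS Server install, "32-bit" under Desktop
```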
18 October 2012 15:58 [Source: ICIS news]
LONDON (ICIS)--
In looking at why Ukraine's September industrial production sank 7% year on year — the largest single-month decline recorded in the past three years — Raiffeisen noted that strong production growth in the country's chemical industry had come to an abrupt halt.
The Ukrainian chemical industry grew 10.3% year on year in August, but September saw a mere 0.8% year on year increase, the bank said in its monthly Economic & Risks Monitoring Review for Ukraine.
Machinery building, meanwhile, decreased by 20.1% year on year in September, it added.
Chemical | http://www.icis.com/Articles/2012/10/18/9605315/chemical-industry-slowdown-behind-output-slump-in-ukraine-bank.html | CC-MAIN-2014-52 | refinedweb | 105 | 66.03 |
hi,
just wondering if someone could assist me modify this script to display a random amount of quotes?
i.e. sometimes it might return and display 2 quotes, then 1 quote, then maybe 3 quotes?
rather than as it is now, 1 quote at a time.
thanks!!!

Code:
<script type="text/javascript">
<!--
//I. Array of banner that will randomly appear
var randomquote = new Array ( );
randomquote[0] = '<em>"Nothing is more fairly distributed than common sense: no one thinks he needs more of it than he already has."</em><br/><br/><span class="alignright">-- Rene Descartes</span>';
randomquote[1] = '<em>"Our virtues and our failings are inseparable, like force and matter. When they separate, man is no more."</em><br/><br/><span class="alignright">-- Nikola Tesla</span>';
randomquote[2] = '<em>"Better to illuminate than merely to shine, to deliver to others contemplated truths than merely to contemplate."</em><br/><br/><span class="alignright">-- Thomas Aquinas</span>';
//II. function to generate number from 0 to n
function randomzero (n)
{
return ( Math.floor ( Math.random ( )*0.9999999999999999* (n + 1)) );
}
//III. assign any random number from 0 to 2 to x.
x = randomzero(2);
//IV. display the text
document.write(randomquote[x]);
//-->
</script> | http://www.webdeveloper.com/forum/printthread.php?t=271151&pp=15&page=1 | CC-MAIN-2014-15 | refinedweb | 198 | 59.7 |
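One way to do this (assuming the goal is to show between 1 and 3 quotes with no repeats) is to shuffle a copy of the array and then take a random number of entries. The function below is a sketch; wire its result into `document.write` in place of the single-quote line:

```javascript
function pickRandomQuotes(quotes) {
  // copy, then Fisher-Yates shuffle so the same quote never appears twice
  var shuffled = quotes.slice();
  for (var i = shuffled.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = shuffled[i]; shuffled[i] = shuffled[j]; shuffled[j] = tmp;
  }
  var howMany = 1 + Math.floor(Math.random() * 3);  // 1, 2 or 3
  return shuffled.slice(0, howMany);
}

// e.g. document.write(pickRandomQuotes(randomquote).join('<br/><br/>'));
```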
Opened 8 years ago
Last modified 5 years ago
#1022 new defect
Wrong mime type for SVG
Description
GraphvizPlugin sends the wrong mime type for SVG files. That's why I couldn't see an SVG object with Firefox's built-in SVG plugin.
I've fixed it.
Just replace this line
return req.send_file(img_path)
by this one
req.send_file(img_path, mimeview.get_mimetype(img_path))
And add the import
from trac import mimeview
Attachments (0)
Change History (4)
comment:1 Changed 8 years ago by pkropf
comment:2 Changed 8 years ago by pkropf
- Resolution set to fixed
- Status changed from new to closed
comment:3 Changed 6 years ago by jholg
- Resolution fixed deleted
- Status changed from closed to reopened
- Trac Release changed from 0.10 to 0.11
Using GraphvizPlugin 0.7.2 I ran into this problem so I suppose this fix has been lost somewhere on the way (it fixes things for me). So
replacing
return req.send_file(img_path)
by
return req.send_file(img_path, get_mimetype(img_path))
and changing the import
from trac.mimeview.api import IHTMLPreviewRenderer, MIME_MAP
to
from trac.mimeview.api import IHTMLPreviewRenderer, MIME_MAP, get_mimetype
is still needed.
Regards, Holger
comment:4 Changed 5 years ago by cboos
- Owner changed from pkropf to cboos
- Priority changed from normal to high
- Status changed from reopened to new
Fixed w/ changeset:1731. | http://trac-hacks.org/ticket/1022 | CC-MAIN-2014-49 | refinedweb | 230 | 67.15 |
Tobiasz Lorenc3,645 Points
Hello again, i have problem with end my code.
Bummer: Didn't get the expected output
def combiner(*args):
    sums = 0
    words = ""
    for arg in args:
        if isinstance(arg, str) == True:
            words += arg
        elif isinstance(arg, float) == True:
            sums += arg
        elif isinstance(arg, int) == True:
            sums += arg
    answer = words + str(sums)
    return(answer)
1 Answer
Simon .8,947 Points
The input provided by the challenge is a list, not multiple arguments. For this challenge, there is no need to (and you shouldn't) unpack with *.
With your code, you expect: combiner("apple", 5.2, "dog", 8).
But what you actually get is: combiner(["apple", 5.2, "dog", 8]).
James Joseph2,438 Points
What Simon said, also you could make this less lines of code by only using one elif statement: | https://teamtreehouse.com/community/hello-again-i-have-problem-with-end-my-code | CC-MAIN-2020-10 | refinedweb | 138 | 75.2 |
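Putting both suggestions together — accept a single list instead of unpacking with `*`, and collapse the two numeric branches into one `elif` with an `isinstance` tuple — gives something like this sketch:

```python
def combiner(values):
    sums = 0
    words = ""
    for value in values:
        if isinstance(value, str):
            words += value
        elif isinstance(value, (int, float)):  # one branch for both numeric types
            sums += value
    return words + str(sums)
```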
Pyjaco in a real app: Todos with local storage
I didn’t get the memo, but there appears to be a movement to demonstrate emerging web technologies with a simple todo list application, much as hello world is used to introduce programming languages.
In my last post, I introduced using jQuery with Pyjaco, the PYthon JAvascript COmpiler. Since then, I’ve made several contributions to the project and have been involved in diverse discussions with Pyjaco developers regarding the current and future status of the project. This post goes further by acting as a tutorial for writing a basic todos app using Pyjaco.
Pyjaco is alpha software. It is hard to write valid code, and harder to debug. I’ve managed to both lock up Firefox and hard crash it while using the Pyjaco library.
On the positive side, Pyjaco is under active, rapid development. The head developer, Christian Iversen is extremely responsive to both questions about Pyjaco, and to code contributions. This is a project with a lot of potential, and I see it as the current best bet for Python programmers hoping to avoid javascript one day in the future.
In spite of the hiccups, it is possible to generate a working javascript app using just Pyjaco. Here’s how.
Let’s start:
First we create a directory to work in and install a virtualenv. Pyjaco does not currently work with python 3, so in Arch Linux, I use the virtualenv2 command. We then activate the virtualenv and install the pyjaco package. Here I am installing from my personal fork, as it contains some changes for generating the built-in standard library that have not yet been merged upstream. You should normally install directly from chrivers's git repository using pip install git+git://github.com/chrivers/pyjaco.git.
Now let’s create a basic HTML 5 page with jQuery loaded:
<!DOCTYPE html>
<html>
  <head>
    <title>PyJaco Todo List Example</title>
    <script type="text/javascript" src=""></script>
  </head>
  <body>
    <h1>PyJaco Todo List Example</h1>
  </body>
</html>
We can load this in our web browser using a file:// URL. This is the only HTML page in our app, and it can be refreshed to load our changes as we work.
Pyjaco doesn’t simply translate Python into Javascript. Rather, it creates a basic standard library of Python-like objects that are utilized in the compiled javascript code. I wasn’t too keen on this idea when I first heard it, as it introduces a dependency that currently weighs in at 65K before minification and compression. While this is not a terribly heavy library, there are efforts under way to shrink the builtins or to dynamically generate it to contain only those builtins that your code actually touches. At any rate, we need to ensure this library is available to our code. First we generate the library:
pyjs.py is the name of the pyjaco command. It is expected to be renamed to pyjaco in the future. The --builtins=generate option tells pyjaco to generate the standard library, while the --output flag provides the filename for the new library file.

We then need to load this library in the head of our html file. Let's also load the future pyjados.js script at this time:
<head>
  <title>PyJaco Todo List Example</title>
  <script type="text/javascript" src=""></script>
  <script type="text/javascript" src="py-builtins.js"></script>
  <script type="text/javascript" src="pyjados.js"></script>
</head>
Now, before we start coding the Python file that will be compiled to Javascript, I want to discuss what I consider to be the most confusing aspect of Pyjaco development. There are basically two types of variables in Pyjaco, Javascript variables, and Python variables. Javascript variables refer to "normal" variables that you would call in Javascript. These include alert, window, document and the like, as well as variables in third-party Javascript libraries, such as the ubiquitous jQuery. Further, any attributes on those objects are also Javascript variables, and the return value of any methods will also be Javascript variables.
Python variables, on the other hand, refer to any variables that you define in your Python source code. If you create a dict or a list, for example, it will be compiled to a list or dict object from the standard library we just generated. In the compiled script, of course these Python variables are represented by Javascript objects, but from the point of view of a Pyjaco coder, it is important to keep the two types of files separate. Almost all the bugs I have encountered in my Pyjaco code have been caused by confusing the two types of variables.
The distinction between python and javascript variables introduces a couple of complications to writing Pyjaco compatible python code. First we need to flag all of our Javascript variables using a decorator on methods that access them. Second, we need to explicitly convert our variables between Javascript and Python any time we access one from the other. I'm told that this conversion can — and one day will — be done automatically by the pyjaco multiplexer, but in the meantime, we need to make it explicit. We do this by using two javascript functions supplied with the standard library we just generated, appropriately named js() and py(). You will see examples of these shortly.
When I finally figured out the distinction, my first thought was, "ok, let's prefer to always work with python variables." Therefore, in my initialization code, I tried jQ=py(jQuery). Unfortunately, jQuery is a rather large object, and the py function apparently recursively converts all attributes from javascript to python. I ended up with a stack overflow.
Now, let’s create our first python code and watch it compile to Javascript. Name the file
pyjados.py:
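Based on the description that follows, a minimal first version looks roughly like this (the original listing is not reproduced here; this is a sketch in Pyjaco's Python dialect, with spellings assumed):

```python
# sketch: js() is the Pyjaco builtin described below; jQuery is a javascript variable
def setup():
    console.log("Pyjados Hello World")

jQuery(js(setup))
```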
First we write a python function named setup. This function is a python object. jQuery is a javascript object that expects a javascript object as input. Therefore, we wrap setup in a js() call and pass the result into the jQuery function. jQuery will now run setup when document.ready is fired.
Now we compile the code using the following command inside our activated virtualenv:
pyjs.py --watch pyjados.py
You’ll notice the command doesn’t exit. That is the
--watch option at work. If you now make a change to
pyjados.py and save it, it will automatically recompile it. The output file
pyjados.js is regenerated each time. This is the file we included in our html file. So now, open that html file in a web browser using a
file:// url. Make sure the Javascript console is displayed and reload the page. You should see the words “Pyjados Hello World” printed on the console. Pyjaco automatically compiles
console.log output.
Before we start implementing our Todo list, let's look at an example of accessing a javascript variable inside a function. Change pyjados.py to utilize alert, as follows:
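Sketched from the description, the changed pyjados.py reads roughly as follows (the @JSVar decorator spelling is assumed; note the deliberately missing close bracket on the alert line, which is the point of the next paragraph):

```python
@JSVar("alert")
def setup():
    alert(js("Pyjados Hello Alert")

jQuery(js(setup))
```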
Did you look closely at that code? There is a missing close bracket on the alert line. You’ll see the syntax error in your console where pyjs.py is watching the compiled code. Add the bracket and let it automatically recompile itself:
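With the bracket added, a sketch of the corrected version (same assumed spellings):

```python
@JSVar("alert")
def setup():
    alert(js("Pyjados Hello Alert"))

jQuery(js(setup))
```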
Let’s analyze this snippet. First, notice how we told the compiler that
alert is a Javascript variable when used inside
setup(). This is a bit odd, since the JSVar decorator is never actually imported into the namespace. This is a bit of magic in the Pyjaco compiler, just pretend it has been imported.
Second, notice that since
alert has been flagged as a JSVar, it must accept a Javascript variable. However, the string “Pyjados Hello Alert” is a Python variable. Therefore, we convert it using
js() as we pass it into the
alert call.
Now let’s prepare to create some working todo-list code. Start by adding a form for submitting todos and a list to render the todos to the html body:
<body>
  <h1>PyJaco Todo List Example</h1>
  <form id="add_todo_form">
    <input type="text" id="add_box" placeholder="Add Todo">
    <button id="add_button">Add Todo</button>
  </form>
  <ul id="todo_items"></ul>
</body>
Nothing too exciting here. Note the ids on the elements, since we’ll be utilizing these from Pyjaco using jQuery selectors.
Now back into the python file. Let’s create a class to manage the various todo elements:
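The class listing is sketched below from the description that follows; the selector strings and the decorator argument style are assumptions:

```python
class TodoList(object):
    @JSVar("jQuery", "js_add_form")
    def __init__(self):
        js_add_form = jQuery(js("#add_todo_form"))
        js_add_form.submit(js(self.add_todo))

    @JSVar("event")
    def add_todo(self, event):
        console.log("form submitted")
        return js(False)

todo_list = TodoList()
```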
The __init__ function hooks up the form's submit button to a method on the object. Notice that we need to flag not just jQuery, but also js_add_form as a javascript variable. Pyjaco does not (currently) know that a javascript variable is returned when calling a method on an existing javascript variable. I like to add the js_ prefix to variable names to help remind myself that this is a javascript variable.
In an ideal world, we could convert this variable to a Python variable using py(), but as noted earlier, calling py on a jQuery object results in a stack overflow or browser crash.
Also pay attention to the way we wrap the self.add_todo method name in a js() call when we pass it into the submit handler. The submit method is a javascript function expecting a javascript object.
The def add_todo method has its single parameter flagged as a @JSVar, since the method is being called internally by jQuery when the event occurs. We also wrap the False return value (to prevent event propagation on the submit handler) in a js() call so that jQuery recognizes it as a javascript false rather than a (true) object named False.
Try the code. Ensure the compiler recompiled it, and reload the html file. Enter some characters into the text box and use the Enter key or the Add Todo button to submit the form. The words form submitted should be displayed in the javascript console.
Now let’s actually store and render a newly added todo. The todos are stored in memory in a python dict object. Initialize this object by adding the following two lines of code to the end of
__init__:
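The two lines are presumably along these lines (the starting value of next_id is a guess):

```python
        self.todos = {}
        self.next_id = 1
```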
And rewrite add_todo as follows, as well as a new method named render:
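A sketch of the two methods, reconstructed from the discussion below (selectors and markup are assumptions):

```python
    @JSVar("jQuery", "js_add_box", "event")
    def add_todo(self, event):
        js_add_box = jQuery(js("#add_box"))
        self.todos[self.next_id] = py(js_add_box.val())  # javascript -> python
        self.next_id += 1
        self.render()
        return js(False)

    @JSVar("jQuery")
    def render(self):
        items = ""
        for id, todo in sorted(self.todos.items()):
            items += "<li>%s</li>" % todo
        jQuery(js("#todo_items")).html(js(items))
```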
Note that the todos dict is a Python object, so when we insert the value of the js_add_box into it, we must convert it from a javascript object using py(). Also note how, because we are writing in a python function, manipulating the python value self.next_id requires no conversion, and calling the python function self.render is also clean.
In the render function itself, I think it's pretty cool that string formatting using % is supported by pyjaco (as an aside, the str.format method introduced in python 2.6 is not yet available) and that the python sorted() function is available. Note also how we can loop over items() on the self.todos dictionary just as if we were using a normal python dictionary.
Now let’s add the ability to complete todos. Let’s start by adding a template string as a class variable, and use that string inside the render function. This illustrates that pyjaco supports class variables:
and we change the for loop in render to:
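The two elided snippets likely resembled the following sketch (the id format "todo_%d" is taken from the later discussion; the rest is assumed):

```python
class TodoList(object):
    # class variable used as a template for each rendered todo
    list_item_template = '<li id="todo_%d"><input type="checkbox"/> %s</li>'

    # ... and the changed loop inside render():
        for id, todo in sorted(self.todos.items()):
            items += self.list_item_template % (id, todo)
```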
Reload the page again and notice how checkboxes have been displayed beside each todo. The next step is to make clicking these boxes actually complete the todos. We add a couple lines to our __init__ method to connect a live click event to the checkbox items, which now looks like this:
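A sketch of the updated __init__ (the checkbox selector and jQuery's era-appropriate live() binding are assumptions):

```python
    @JSVar("jQuery", "js_add_form", "js_checkbox")
    def __init__(self):
        js_add_form = jQuery(js("#add_todo_form"))
        js_add_form.submit(js(self.add_todo))
        js_checkbox = jQuery(js("#todo_items input:checkbox"))
        js_checkbox.live(js("click"), js(self.complete_todo))
        self.todos = {}
        self.next_id = 1
```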
Don’t forget to add
js_checkbox to the
JSVar decorator.
The complete_todo method looks like this:
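A sketch consistent with the explanation below (exact calls are assumptions):

```python
    @JSVar("jQuery", "event", "js_item", "window")
    def complete_todo(self, event):
        # first line: exclusively javascript arguments; returns the <li> element
        js_item = jQuery(event.target).parents(js("li"))
        # "todo_5" -> 5, as defined in list_item_template
        id = int(py(js_item.attr(js("id"))).replace("todo_", ""))
        del self.todos[id]
        window.setTimeout(js_item.remove, js(1500))  # remove from DOM after 1.5s
```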
The first line is using exclusively javascript arguments, and returns the <li> element containing the checkbox that was clicked. The id = line converts the javascript string id attribute of this element (which looks like "todo_5", as defined in list_item_template) into the python integer id of the todo. The remaining lines simply remove that todo from the internal list and from the DOM, after a 1.5 second delay.
In fact, we now have a fully functional todo list that allows adding todos and checking them off. Now, as a bonus, let's try hooking this up to the HTML 5 localStorage object so that the list is maintained across page reloads. We start by adding a store() method to our class:
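A sketch of such a store() method, using the todolist key mentioned later (decorator spelling assumed):

```python
    @JSVar("localStorage", "JSON")
    def store(self):
        localStorage.setItem(js("todolist"), JSON.stringify(js(self.todos)))
```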
The main line of code is easiest to read from the inside out. First we convert the self.todos dict to a normal javascript object using the js() function. Then we call JSON.stringify on this object to create a string suitable for insertion into localStorage.
Now add this call to the end of the two methods that manipulate the todo list, add_todo and complete_todo:

    self.store()
Refresh the page, add a couple todos, and inspect the localStorage object in your console. You should see the stringified dict in the todolist value.
Now all we have to do is ensure the self.todos dict is loaded from localStorage when the app is initialized. Add the following to the end of the __init__ method (make sure to add js_stored_todos to the JSVar decorator):
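A sketch of the loading code, reconstructed from the explanation below (the guard and exact call shapes are assumptions):

```python
        js_stored_todos = localStorage.getItem(js("todolist"))
        if py(js_stored_todos):
            # py() on JSON.parse output yields a python object, so wrap in dict()
            stored = dict(py(JSON.parse(js_stored_todos)))
            # the parsed keys are strings; convert them back to integers
            self.todos = dict([(int(k), v) for k, v in stored.items()])
            self.next_id = max(self.todos.keys()) + 1
```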
Note that calling py() on the output of JSON.parse creates a python object, not a python dict. The code is therefore wrapped in a call to dict(), which converts the object to a dictionary.
Unfortunately, the resultant dict contains keys that are strings, whereas our original dict used integer keys. So a pure-python list comprehension is used to convert the dictionary to one with integer keys. This line is a bit hard to read, but I wanted to include it to demonstrate that Pyjaco can parse list comprehensions. Finally, we set self.next_id using the python max() call, which Pyjaco also automatically translates into javascript.
Try it out. Load the pyjados HTML file, add some todos, check a few of them off, then close and reload the web browser. Your todos will be stored!
I hope you’ve enjoyed this introduction to Pyjaco. It is a nice tool with a lot of potential. Currently, I find writing Pyjaco code to be approximately equally tedious to writing Javascript code. However, I feel that as I learn the ins and outs of Pyjaco, and as the developers continue to refine and improve the compiler, Pyjaco may one day be a perfectly viable alternative to writing pure Javascript or to the rather too Ruby-esque, but otherwise excellent Coffeescript. | http://archlinux.me/dusty/tag/pyjaco/ | CC-MAIN-2014-42 | refinedweb | 2,396 | 64 |
{-# LANGUAGE PatternSignatures #-}

module Data.Storable.Instances () where

import Data.Storable

instance (StorableM a, StorableM b) => StorableM (a, b) where
  sizeOfM (a, b) = do
    sizeOfM a
    sizeOfM b
  -- For single constructor datatypes, we can use irrefutable patterns to
  -- avoid the need for scoped type variables.
  alignmentM ~(a, b) = do
    alignmentM a
    alignmentM b
  peekM = do
    a <- peekM
    b <- peekM
    return (a, b)
  pokeM (a, b) = do
    pokeM a
    pokeM b

-- For some real life applications, it may be sufficient to use Int16 for the
-- list length. If that is the case, don't use Data.Storable.Instances.
instance (StorableM a) => StorableM [a] where
  sizeOfM l = do
    sizeOfM (0 :: Int)
    mapM_ sizeOfM l
  alignmentM ~(a:_) = do
    alignmentM (undefined :: Int)
    alignmentM a
  peekM = do
    n :: Int <- peekM
    mapM (const peekM) [1..n]
  pokeM l = do
    pokeM (length l :: Int)
    mapM_ pokeM l
App::BCVI::Plugins - Documentation about the bcvi plugin API
BCVI plugins are .pm files (Perl modules) in the user's BCVI config directory ($HOME/.config/bcvi).
Plugins can:
bcvi(including removing functionality)
Ideally you should be able to customise the behaviour of
bcvi in pretty much any way you want without needing to edit the
bcvi script itself.
Here's a silly plugin (that no sane person would ever want to use) which overrides the 'vi' command handler and instead of launching
gvim it launches
gedit (the GNOME text editor) - I did warn you it was a silly example:
package App::BCVI::Gedit; use strict; use warnings; sub execute_vi { my($self) = @_; my $alias = $self->calling_host(); my @files = map { "s{alias}$_" } $self->get_filenames(); system('gedit', '--', @files); } App::BCVI->hook_server_class(); 1;
This file should be saved as $HOME/.config/bcvi/Gedit.pm. Let's go through it line-by-line.
Each plugin must have a unique package name. The App::BCVI namespace is there for plugins to use. By convention, the filename should match the last part of the package name, with '.pm' appended.
The
use strict; and
use warnings; are good practice in any Perl module.
The
execute_vi subroutine was copy/pasted from the
bcvi script itself and then modified to work with
gedit rather than
gvim.
The
hook_server_class line is a method call that pushes this class onto the inheritance chain for the object class that implements the listener process. When the listener process calls
execute_vi in response to a request from a client, our method is called instead of the standard method. In some plugins, it might make sense to delegate to the standard method using the syntax
$self->SUPER::execute_vi(@args), but in our case we're replacing the standard method rather than augmenting it.
Plugin files are never loaded from anywhere except the user's BCVI config directory. In particular,
bcvi never loads any modules from the system lib/App/BCVI directory. If you get plugin modules from CPAN, you'll need to copy the .pm files into your plugin directory (or possibly symlink to the .pm file in the system lib directory).
Some plugins enhance the listener process and therefore only need to be installed on your workstation. Other plugins enhance the client so they need to be installed on the hosts where you use bcvi. Client-side plugins can register themselves to be included in the set of files that get deployed to a host when you run
bcvi --install HOSTNAME.
The BCVI application is built from four classes:
Implements the listener process as a forking server. Listens on a socket, when an incoming connection is received, a child process is forked off to handle it.
Implements the client process which establishes a TCP connection to the listener process, sends a request and waits for a response.
A base class implements common methods used by both the client and the server.
A helper class used by both the client and the server to render POD to text in response to the
--help option.
A plugin can push its package name onto the inheritance chain for the server by calling:
App::BCVI->hook_server_class();
or for the client by calling
App::BCVI->hook_client_class();
There are currently no hook methods for either the base class or the POD class because that didn't seem very useful (just ask if you really need this).
The example plugin above had a package name of
App::BCVI::Gedit and it called
hook_server_class(). This has two effects:
App::BCVI::Geditclass
@ISAarray in the
App::BCVI::Geditpackage will be adjusted to point to
App::BCVI::Serverso that all the existing methods of the server class will be inherited
If another package calls
hook_server_class() then its
@ISA array will be adjusted to point to the
App::BCVI::Gedit class and when the listener starts it will be an instance of the second plugin class. Usually the order of loading would not be significant, but the plugin filenames are sorted alphanumerically before loading so you can rename the
.pm files to have them load in a specific order.
If your plugin calls a hook method it should not explicitly set up any other inheritance relationship (either through
use base or by directly altering @ISA).
Sometimes it might not be immediately obvious whether you need to hook the client class or the server class. For example if your code modifies the behaviour of the
--install option then it would not be a part of the listener process but it also might not run on a remote host. The rule in these cases is: If your code does not run in the listener then it should hook the client class.
A single plugin should not call both
hook_server_class() and
hook_client_class() - no good can come of that.
In addition to being able to hook into the inheritance chains, a plugin can also choose to call one of the registration methods:
register_option(key => value, ...)
This method is used to register a new command-line option. The arguments are key => value pairs, for example:
App::BCVI->register_option( name => 'command', alias => 'c', arg_spec => '=s', arg_name => '<cmnd>', summary => 'command to send over back-channel', description => <<'END_POD' Use C<cmnd> as the command to send over the back-channel (default: vi). Recognised commands are described in L<COMMANDS> below. END_POD );
The recognised keys are (*=mandatory parameter):
*name the long form of the option name (without the initial '--') alias optional single character alias arg_spec if the option takes a value use '=s' for string '=i' for int etc arg_name how the option value should be rendered in the POD dispatch_to name of a method to be called if this option is present *summary one-line summary of the option for the synopsis *description longer POD snippet providing a full description of the option
The command line options are parsed using Getopt::Long so you can refer to that module's documentation for more details (of the
arg_spec in particular).
If your plugin registers a command-line option then your summary and description should be visible immediately when you run
bcvi --help.
Only specify a
dispatch_to method if
bcvi should exit immediately after your method is called.
After you have registered a command-line option, code in your plugin methods can check the value of the option (or any other option) with:
$self->opt($option_name)
If you are unsure about the usage of any of the parameters listed above, please refer to the numerous examples in
bcvi itself.
register_command(key => value, ...)
This method is used to register a handler for a new command in the listener. The arguments are key => value pairs, for example:
App::BCVI->register_command( name => 'scpd', description => <<'END_POD' Uses C<scp> to copy the specified files to the calling user's F<~/Desktop>. END_POD );
The recognised keys are (*=mandatory parameter):
*name the 'command' name which will be sent from the client dispatch_to name of the handler method *description POD snippet providing a full description of the command
If you don't provide a method name as an argument to the
dispatch_to parameter, then the default handler method name will be the command name with 'execute_' prepended.
See "COMMAND HANDLERS" below for details of how the handler method is called.
register_aliases(alias, ...)
This method is used to register shell alias definitions that should be added to the user's local shell startup script with
bcvi --add-aliases or to the shell startup script on a remote host with
bcvi --install.
One call can register a list of aliases, for example:
App::BCVI->register_aliases( 'test -n "${BCVI_CONF}" && alias vi="bcvi"', 'test -n "${BCVI_CONF}" && alias bcp="bcvi -c scpd"', );
register_installable()
A client-side plugin should call this method to indicate that the plugin file is required on the remote hosts and should be copied over by
bcvi --install.
This method call requires no arguments:
App::BCVI->register_installable();
When the listener receives a command it looks up the registered commands to locate a handler method and then calls that method (with no arguments).
If the handler method expects a list of filenames, it can get them by calling:
$self->get_filenames()
Alternatively, if the handler method expects string data rather than filenames, it can call:
$self->read_request_body()
for non-ASCII text data you may want to decode the bytes to characters using the Encode module:
decode('utf8', $self->read_request_body())
The handler can also access the request headers via the hashref returned by:
$self->request()
If for some reason the handler method needs direct read or write access to the client socket, it can get the socket filehandle with:
$self->sock();
You probably don't need to worry about this section - usually a handler does not need to worry about returning a status code at all.
On successful completion, a command handler method should simply return (the return value is not significant). The listener process will send a
200 Success status response.
On failure, a command handler may choose to die and the message will go to the user's X Session log. The client will see the socket close and will advise the user that the "Server hung up".
There are a small number of predefined status codes that can be returned to the client (but most command handlers will never need to use them):
200 Success 300 Response follows 900 Permission denied 910 Unrecognised command
You can send a response by calling:
$self->send_response($code) # eg: $code = 900
There is currently no way to register additional codes, but of course a handler routine could make up its own status code, write it directly to the socket (using
$self->sock->write) and then exit.
The '300' response is useful for the situation where the client sent a request and is expecting data in the body of the response. If you want to see an example of this functionality, look at the built-in 'commands_pod' message that the
bcvi client uses to retrieve the POD for all commands supported by the listener. A 300 response must be followed by one or more headers - terminated by a blank line. A 'Content-length' header must be included to indicate how many bytes of data follow the headers.
For examples of plugins, look for these modules on CPAN:
Implements the client-side of the Desktop notification plugin. Registers a shell alias and registers as an installable file.
Implements the server-side of the Desktop notification plugin. Registers a new command, hooks the server class and implements a command handler.
Hooks the client class to track which hosts
bcvi has been installed to (using
bcvi --install). Wraps the handler for the existing
--install option handler and also adds a new
--update-all option.
The source of
bcvi itself is also a good place to look for examples of how to register options and commands and how to implement a command handler.
<grantm at cpan.org>
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/~grantm/App-BCVI-3.09/lib/App/BCVI/Plugins.pod | CC-MAIN-2016-18 | refinedweb | 1,835 | 54.76 |
This document explains some of the key changes proposed for the JAXB 2.1 maintenance release, in the hope of getting feedback. Everything written here is a proposal; it is not final and is subject to change.
It is fairly common for a schema to be developed as a 'module' to be used by other schemas. The idea is for party X to develop a 'core' schema, and then for party Y to develop an additional schema on top of the core. Examples of this can be seen in many places, including W3C XML Schema itself, WSDL, WS-Addressing, SOAP, UBL, ...
When people develop corresponding Java libraries for these schemas, there's often a need to compile the core schema and the additional schema separately. That is, party X generates (or even hand-writes) Java classes for the core schema, and then party Y compiles the additional schema in such a way that the generated classes refer to the classes generated (or hand-written) earlier by X.
A similar problem applies to schema generation. Sometimes your Java classes refer to other classes which already have pre-generated (or hand-written) corresponding schemas somewhere. In this case, it's desirable to simply refer to that schema, as opposed to generating the definitions again.
On the schema compiler side, we'll expand the <jaxb:class> customization so that references to existing classes can be specified. When such a customization is seen, a schema compiler must not generate a class, and must instead simply refer to the referenced class.
So for example, given the following schema:
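The schema example itself did not survive extraction from the original post. A plausible reconstruction, using the proposed ref attribute of the <jaxb:class> customization (namespace prefixes and the element content are illustrative), would be:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
           jaxb:version="2.1">
  <xs:complexType name="foo">
    <xs:annotation>
      <xs:appinfo>
        <!-- refer to an existing class instead of generating one -->
        <jaxb:class ref="org.acme.foo.Foo"/>
      </xs:appinfo>
    </xs:annotation>
    <xs:sequence>
      <xs:element name="name" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>
```

The ref attribute names an existing class, rather than asking the compiler to generate a new one.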
a schema compiler will simply assume that there's already such a class called "org.acme.foo.Foo", and will not generate another class. All the generated classes that reference this type will refer to "org.acme.foo.Foo".
The same customization can be applied to simple types.
We'll also define the map attribute on the <schemaBindings> customization, to disallow code generation for the entire namespace (unless otherwise overridden by <jaxb:class>). While we could conceptually define such an attribute on smaller units (such as on the <class> customization), given that the common use case of separate compilation happens at the namespace level, it's unlikely to be useful.
This allows a library to "hide" certain definitions that are globally defined in XSD. (In schema, it's a common practice to define almost everything as global types, which restricts a Java library designer's ability to design classes.)
With this, one could write a schema like the following:
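This schema was likewise lost in extraction. A hedged reconstruction that matches the description below (map="false" at the namespace level, "foo" explicitly mapped to an existing class, and "bar" left unmapped) might look like:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
           jaxb:version="2.1">
  <xs:annotation>
    <xs:appinfo>
      <!-- no classes are generated from this namespace -->
      <jaxb:schemaBindings map="false"/>
    </xs:appinfo>
  </xs:annotation>

  <xs:complexType name="foo">
    <xs:annotation>
      <xs:appinfo>
        <!-- explicitly mapped to an existing class -->
        <jaxb:class ref="org.acme.foo.Foo"/>
      </xs:appinfo>
    </xs:annotation>
    <xs:sequence/>
  </xs:complexType>

  <!-- no mapping: any reference to this type becomes an error -->
  <xs:complexType name="bar">
    <xs:sequence/>
  </xs:complexType>
</xs:schema>
```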
The schemaBindings statement prevents any class generation for this namespace; a reference to the complex type "foo" will become a reference to the "org.acme.foo.Foo" type, and a reference to the complex type "bar" will be an error.
On the schema generator side, we'll add a new 'location' annotation element to the @XmlSchema annotation. When this attribute is present, it points to the URI of the existing schema document that defines the namespace. For example:
// package-info.java for package foo:
@XmlSchema(namespace="foo")
package foo;

@XmlType
class Foo {
    @XmlElement Bar zot;
}

// package-info.java for package bar:
@XmlSchema(namespace="bar", location="")
package bar;

@XmlType
class Bar { ... }
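Presumably, with location set, the generated schema for namespace "foo" would then refer to the existing document for "bar" via an import rather than containing generated definitions for it. A rough sketch (the schemaLocation value and type names are hypothetical; the location URI is elided in the example above):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:b="bar"
           targetNamespace="foo">
  <!-- the schemaLocation value is hypothetical -->
  <xs:import namespace="bar" schemaLocation="bar.xsd"/>
  <xs:complexType name="foo">
    <xs:sequence>
      <xs:element name="zot" type="b:bar"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>
```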
The most common customization that people specify is the globalBindings customization, probably followed by the schemaBindings customization. But the current syntax for doing this is somewhat verbose.
For example, to specify a global customization, you need to do this:
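The verbose example did not survive extraction, but the pre-2.1 form requires wrapping the customization in <jaxb:bindings> elements that pin down a schema document and a node within it. A representative sketch (the file name and the settings are illustrative):

```xml
<jaxb:bindings xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
               xmlns:xs="http://www.w3.org/2001/XMLSchema"
               version="2.0">
  <jaxb:bindings schemaLocation="foo.xsd" node="/xs:schema">
    <jaxb:globalBindings generateIsSetMethod="true"/>
  </jaxb:bindings>
</jaxb:bindings>
```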
The @schemaLocation and @node are pointless given that this is a globalBindings. A similar problem applies to schemaBindings, where @node is pointless.
This change rectifies the situation by no longer requiring those attributes for globalBindings and schemaBindings, so the user will be able to write the same customization much more concisely.
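Under the proposal, the same customization could presumably be written without pointing at any particular schema document (settings illustrative):

```xml
<jaxb:bindings xmlns:jaxb="http://java.sun.com/xml/ns/jaxb" version="2.1">
  <jaxb:globalBindings generateIsSetMethod="true"/>
</jaxb:bindings>
```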
This also allows those customizations to be applied to other schemas without any changes, as customization files no longer hard-code any path names.
Currently there is no way to enforce the presence of an element annotated with @XmlElementWrapper; the wrapper element is always generated with minOccurs=0.
So let's add @XmlElementWrapper.required(). It defaults to false to be backward compatible, and when true it will cause the wrapper element to be generated with minOccurs=1.
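To sketch the effect (the property and element names are hypothetical, not from the original post): for a collection property annotated with @XmlElementWrapper(name="items", required=true), the generated wrapper element would presumably change from today's always-optional form to a required one:

```xml
<!-- today (required defaults to false): the wrapper is always optional -->
<xs:element name="items" minOccurs="0"> ... </xs:element>

<!-- with required=true: -->
<xs:element name="items" minOccurs="1"> ... </xs:element>
```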
For JAXB to operate, it needs to know a list of classes that it's going to handle up front. This is the list of classes that are passed to JAXBContext.newInstance(). While the implementation of this method does a transitive type reference analysis, this analysis is unable to find subclasses due to the way Java works.
At runtime, the list of classes that JAX-WS knows to be bound by JAXB is primarily the types that appear in the SEI. Any types that are not transitively reachable from these classes will not be a part of the JAXBContext, and as such they'll fail to marshal/unmarshal.
What's needed here is a portable way for the WSDL compiler tool to pass a list of additional classes to the runtime. If such a mechanism existed, then wscompile could capture all the generated types (by communicating with xjc) to be used by the runtime.
Since the JAX-WS implementation used at development time and the JAX-WS implementation used at runtime might differ, any mechanism that captures the list of classes needs to be portable. That means this requires a spec change.
Define an annotation in JAXB that instructs the JAXB runtime to bind other classes. The feature is intended so that classes that are not otherwise statically reachable become reachable for the JAXB runtime.
@XmlSeeAlso({FooBeanEx.class, FooBean2.class, ...})
class FooBean { ... }
The semantics are that this would extend the transitive reference closure computation: when the closure includes FooBean, this annotation adds all the referenced classes into the closure as well.
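The closure rule can be modeled in plain Java. The sketch below defines a local stand-in for the real javax.xml.bind.annotation.XmlSeeAlso annotation (so it runs without the JAXB API on the classpath) and shows how a reachability walk picks up subclasses that ordinary field-type analysis would never discover:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.Set;

public class SeeAlsoClosure {

    // Local stand-in for javax.xml.bind.annotation.XmlSeeAlso,
    // declared here so the example runs without the JAXB API jar.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface XmlSeeAlso {
        Class<?>[] value();
    }

    // Subclasses of FooBean: no field anywhere refers to them, so a
    // transitive *reference* analysis starting from FooBean cannot find them.
    static class FooBeanEx extends FooBean { }
    static class FooBean2 extends FooBean { }

    @XmlSeeAlso({ FooBeanEx.class, FooBean2.class })
    static class FooBean { }

    // Reachability walk: whenever a class enters the closure, the classes
    // listed in its @XmlSeeAlso annotation are added as well.
    static Set<Class<?>> closure(Class<?>... roots) {
        Set<Class<?>> seen = new LinkedHashSet<>();
        Deque<Class<?>> work = new ArrayDeque<>(Arrays.asList(roots));
        while (!work.isEmpty()) {
            Class<?> c = work.pop();
            if (!seen.add(c)) {
                continue;
            }
            XmlSeeAlso sa = c.getAnnotation(XmlSeeAlso.class);
            if (sa != null) {
                work.addAll(Arrays.asList(sa.value()));
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        // FooBeanEx and FooBean2 show up even though nothing references them.
        System.out.println(closure(FooBean.class));
    }
}
```

A real implementation would of course also walk field and method types; this sketch isolates only the @XmlSeeAlso contribution to the closure.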
To make this work with JAX-WS, JAX-WS spec will allow this annotation to be placed on the web service class, and JAX-WS implementation needs to be involved in passing this information to JAXB implementation (as JAXB won't see the web service class itself as a bindable class.)
For example, the JAX-WS 2.1 spec proposes to allow @XmlSeeAlso on the SEI or endpoint implementation, like this:
@XmlSeeAlso(Foo.class)
interface SEI {
    Object echo(Object o);
}
The JAX-WS implementation will be responsible for making sure that the JAXBContext it creates includes all the classes listed in @XmlSeeAlso. This annotation is allowed but not required.
Java type hierarchies and XML type hierarchies may not always match one to one. One such example is where you define a common base class among various classes to share some utility code, yet the base class should be considered an implementation detail.
class AbstractModelObject {
    // use commons-lang to define a toString() for all model objects
    public String toString() {
        return ToStringBuilder.reflectionToString(this);
    }
}

class Person extends AbstractModelObject { ... }
class Computer extends AbstractModelObject { ... }
The user does not wish to create a complex type that corresponds to 'abstractModelObject', since it would be pointless. It is often even actively harmful, because such a base class blocks the use of the @XmlValue annotation on the Person and Computer classes.
Allow @XmlTransient to be placed on a class, to indicate that it doesn't have a corresponding XML representation. So in the above example, one can do as follows to achieve the desired effect of not defining an 'abstractModelObject' complex type:
@XmlTransient
class AbstractModelObject {
    // use commons-lang to define a toString() for all model objects
    public String toString() {
        return ToStringBuilder.reflectionToString(this);
    }
}

class Person extends AbstractModelObject { }
class Computer extends AbstractModelObject { }

The generated schema then contains complex types only for the non-transient classes:

<complexType name="person" />
<complexType name="computer" />
The following example illustrates a more general case:
class Foo {
    @XmlElement String a;
}

@XmlTransient
class Bar extends Foo {
    @XmlElement String b;
}

@XmlType(propOrder={"c","b"})
class Zot extends Bar {
    @XmlElement String c;
}
A few things to note: Bar produces no complex type of its own, but its property b does not disappear; because Bar is transient, b is treated as if it were declared directly on the derived class Zot, which is why Zot's propOrder can list "b" alongside its own "c". Foo, on the other hand, is not transient, so Zot's complex type still derives from Foo's.