If you came to JavaOne 2012 or watched the keynote online you would have seen a cool proof of concept we did along with Canoo and Navis. In case you missed it, it's on YouTube:
It was built on an early JavaFX prototype with added 3D mesh, camera and lighting support. The first public build of JavaFX 8 with official support for this is now out for you to download, yay!
Download Java 8 EA b77 (including 3D) …
At the moment there is only support for Windows, but an OpenGL version for other platforms is being worked on.
For a list of 3D features that are being worked on, check out the OpenJFX wiki:
wikis.oracle.com – OpenJDK – 3D Features
Here is a very simple example to help you get started:
import javafx.application.Application;
import javafx.scene.*;
import javafx.scene.paint.Color;
import javafx.scene.paint.PhongMaterial;
import javafx.scene.shape.*;
import javafx.stage.Stage;

public class Shapes3DViewer extends Application {

    @Override
    public void start(Stage stage) {
        PhongMaterial material = new PhongMaterial();
        material.setDiffuseColor(Color.LIGHTGRAY);
        material.setSpecularColor(Color.rgb(30, 30, 30));

        Shape3D[] meshView = new Shape3D[] {
            new Box(200, 200, 200),
            new Sphere(100),
            new Cylinder(100, 200),
        };

        for (int i = 0; i != 3; ++i) {
            meshView[i].setMaterial(material);
            meshView[i].setTranslateX((i + 1) * 220);
            meshView[i].setTranslateY(500);
            meshView[i].setTranslateZ(20);
            meshView[i].setDrawMode(DrawMode.FILL);
            meshView[i].setCullFace(CullFace.BACK);
        }

        PointLight pointLight = new PointLight(Color.ANTIQUEWHITE);
        pointLight.setTranslateX(800);
        pointLight.setTranslateY(-100);
        pointLight.setTranslateZ(-1000);

        Group root = new Group(meshView);
        root.getChildren().add(pointLight);

        Scene scene = new Scene(root, 800, 800, true);
        scene.setFill(Color.rgb(10, 10, 40));
        scene.setCamera(new PerspectiveCamera(false));
        stage.setScene(scene);
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
I can’t wait to see what sort of cool things you guys will be able to do with it 🙂
Really wonderful news! Can’t wait for the Mac version so that I can actually try it out. 😉
Jasper,
Congrats!
Thanks for the release! So many toys to play with and so little time. I just downloaded it.
I can’t wait to take it out for a spin.
Keep up the awesome work!
Carl
Hmm doesn’t seem to work on Mac OS X 10.8.2, all I get is background color with no shapes. Is it just me?
Ah never mind, was too lazy to read the whole article before trying. ;d
I was so excited that I, too, completely missed the “Windows only” note until after a bit of head scratching. I’m on Linux, so here’s hoping that if I hold my breath, JavaFX 3D arrives before I turn blue. 😉
It’s looking really exciting now, though. I like what I’m seeing!
Had the same issue. I guess I got so excited by the demo that I forgot to read the last part of the post.
I hope they release an OSX version soon…
Would be nice to get some core components working properly (HTMLEditor, …) before new hype functions get started.
Different teams work on the graphics and UI controls, so they are working to different schedules and have different features that they focus on. My team (UI Controls) is working hard on our area, and we always welcome bug reports in the JavaFX Jira tracker.
It’s calming me down to read that 🙂 I used your Jira already several times – keep up the good work!
Very good, great work! 🙂
Thanks
Wow!! This is what I have been waiting for.
Note that JDK 8 Early Access build b77 only includes an implementation on Windows. Mac and Linux will follow in upcoming builds.
I hope this will soon be available for Mac OS, so looking forward to play around with this…
Nice. I can rotate a camera now! Starting up a new project to rewrite my game now. 🙂
I've been working with JavaFX since version 1.2 and I was not wrong about the future of the technology. JavaFX is getting better every day. Simply GREAT!
I have sadly not. We only use CentOS or Fedora in our environment and OpenJFX isn't available for that platform yet. However, I suspect it will be as soon as they release enough of JavaFX to have something compile and run on top of OpenJDK.
The 3D support looks quite promising and, surprisingly, the amount of code needed is also quite small by Java standards :)
I really like the work that the Oracle team is doing here. GREAT JOB! The one thing I miss in the example code is a demo of a triangle mesh like the one at the conference:
Is there any example about it?
Looks nice, but too limiting already. It'd be better if we had access to OpenGL directly.
“Why not work with the JOGL team to expose OpenGL to JavaFX officially?”
Because there are serious security considerations with exposing OpenGL. For example, shaders do not context switch, so a shader can block your GPU, which is quite bad.
Therefore OpenGL has to be wrapped in an OOP model that does not allow for stuff like that. For example, a bytecode-to-shader compiler could verify that the shader is not harmful.
JOGL should be part of the Java API, but sandboxed applications should NOT be able to call it directly. However if you are writing a desktop application security may not be an issue.
Good catch. I hadn’t thought of security. However I’m only concerned with desktop applications, so as you said, security is less of an issue in that case.
Would still be great if non-sandboxed Java apps could access the “raw” OpenGL API.
I believe it is the proper long-term investment for JavaFX: rather than making yet another 3D wrapper that can only benefit a portion of all users, expose the bare metal and build on top later.
A 3D wrapper is a dead end, not a platform to build on top of.
And who wants to invest heavily in an API solely used by Java?
If OpenGL was exposed, it would immediately attract lots of attention, since all that OpenGL experience could be used right away.
I am guessing that JavaFX 3D is based on JOGL, which would have to be exposed…
A 3D wrapper is not a dead end. JavaFX 3D is easily ported to other languages and the source code will be available.
Just a short comment on the “API solely used by Java”. That’s not really true – “API solely used by JVM languages” would be an appropriate statement.
To me, that’s enough. I’m using JavaFX entirely from Scala.
I have started learning JavaFX and man, it's amazing. Eagerly waiting for JavaFX 3D... Great job!
What’s the best way to stay informed about the availability of these 3D features on Mac OS X?
Will you let us know on this blog when they arrive?
I just tried build 79, but no dice. I’m almost ready to install a Windows VM just to see this in action. 🙂
Can we look forward to JavaFX 3D in the near future (for Linux & Mac) or is it more likely going to be closer to the Java8 release date?
In other words, might it be safe to start holding my breath? 🙂
OS X support is now available (since Build 86).
I really hope that this is not the main approach to 3D support in JavaFX. OpenGL ES 2.0 is supported on basically all platforms now. The security issues are solvable; it's already enabled by default in Firefox and Chrome! Having a basic API like Apple's SceneKit for beginners is fine, but like SceneKit it covers 1% of use cases, not the other 99%.
For those waiting (as I was) for an OS X release: I just discovered that JDK 8 Build 87 has 3D support for OS X! 🙂
(Apparently, even Build 86 had it, but I didn’t notice.)
Works for me on Linux, too. It has some redraw issues (scene flickers and sometimes does not draw the objects when I resize the window), but I’m happy with what I see so far! 🙂
Yay, Build 89 fixes the redraw issue, too. Sooo nice! Going to get busy with it this weekend!
Hi guys,
where can I get the library javafx.scene.paint.PhongMaterial?
Is it possible to use JavaFX 3D with Java 7 as well?
Some of these needed libraries are not installed with my JavaFX!
Hello,
Where can I find the container demo? Is there “only” the video, or is a Java demo also available?
Source: http://fxexperience.com/2013/02/javafx-3d-early-access-available/
GETPEERNAME(2) BSD Programmer's Manual GETPEERNAME(2)
NAME
     getpeername - get name of connected peer
SYNOPSIS
     #include <sys/types.h>
     #include <sys/socket.h>

     int
     getpeername(int s, struct sockaddr *name, socklen_t *namelen);
DESCRIPTION
     getpeername() returns the address information of the peer connected to
     socket s. One common use occurs when a process inherits an open socket,
     such as TCP servers forked from inetd(8). In this scenario, getpeername()
     is used to determine the connecting client's IP address.

     getpeername() takes three parameters:

     s contains the file descriptor of the socket whose peer should be looked
     up.

     name points to a sockaddr structure that will hold the address
     information for the connected peer. Normal use requires one to use a
     structure specific to the protocol family in use, such as sockaddr_in
     (IPv4) or sockaddr_in6 (IPv6), cast to a (struct sockaddr *). For greater
     portability, especially with the newer protocol families, the struct
     sockaddr_storage exists; it is large enough to hold any of the
     protocol-specific variants.

     namelen indicates the amount of space pointed to by name, in bytes.
RETURN VALUES
     If the call succeeds, 0 is returned and namelen is set to the actual size
     of the socket address returned in name. Otherwise, errno is set and a
     value of -1 is returned.
ERRORS
     On failure, errno is set to one of the following:

     [EBADF]      The argument s is not a valid descriptor.
     [ENOTSOCK]   The argument s is a file, not a socket.
     [ENOTCONN]   The socket is not connected.
     [ENOBUFS]    Insufficient resources were available in the system to
                  perform the operation.
     [EFAULT]     The name parameter points to memory not in a valid part of
                  the process address space.
SEE ALSO
     accept(2), bind(2), getpeereid(2), getsockname(2), socket(2)
HISTORY
     The getpeername() function call appeared in 4.2BSD.

MirOS BSD #10-current                                                     July
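As an aside (not part of the manual page), Python's socket module exposes the same call as a method on socket objects; here is a minimal loopback sketch, with all function and variable names my own:

```python
import socket

def peer_of_loopback_connection():
    """Open a loopback TCP connection and report addresses from both ends.

    Returns (peer_as_seen_by_server, client_local_address). getpeername()
    on the accepted socket reports the address of the connected peer, i.e.
    the client's local (IP, port) pair.
    """
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # port 0: let the kernel pick a free port
    server.listen(1)

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(server.getsockname())
    conn, _ = server.accept()

    result = (conn.getpeername(), client.getsockname())
    conn.close()
    client.close()
    server.close()
    return result

peer, local = peer_of_loopback_connection()
assert peer == local  # the peer's name is the client's local (IP, port) pair
```

This mirrors the man page's inetd example: a server-side process that inherits an open socket can recover the connecting client's address purely from the file descriptor.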
Source: http://www.mirbsd.org/htman/sparc/man2/getpeername.htm
#include <programs/mdrun/tests/moduletest.h>
Helper object for running grompp and mdrun in integration tests of mdrun functionality.
Objects of this class must be owned by objects descended from MdrunTestFixtureBase, which sets up necessary infrastructure for it. Such an object may own more than one SimulationRunner.
The setup phase creates various temporary files for input and output that are common for mdrun tests, using the file manager object of the fixture that owns this object. Individual tests should create any extra filenames similarly, so that the test user's current working directory does not get littered with files left over from tests.
Any method in this class may throw std::bad_alloc if out of memory.
By default, the convenience methods callGrompp() and callMdrun() just prepare and run a default call to mdrun. If there is a need to customize the command-line for grompp or mdrun (e.g. to invoke -maxwarn n, or -reprod), then make a CommandLine object with the appropriate flags and pass that into the routines that accept such.
Source: https://manual.gromacs.org/current/doxygen/html-full/classgmx_1_1test_1_1SimulationRunner.xhtml
On Sun, Apr 21, 2002 at 01:23:22AM -0400, eichin-lists@boxedpenguin.com wrote:
> Are the sources that give these errors the current "apt-get source
> heimdall", or something else?

Yes, that's it. I have made some changes, but nothing to produce these errors. I wonder if autoconf is somehow interpreting an expression as a RegExp, even though it isn't... I really find it surprising that this returns no results:

[520] [scrooge:bam] /tmp/bam/030_autotools/heimdal-0.4e >find -type f | xargs grep AH_OUTPUT

The line number returned just gives a reference to the rk_ROKEN macro, which is defined in cf/roken-frag.m4, and I can't see anything like that error here. The code quoted in the error seems to come from ./cf/misc.m4. This looks OK to me...

dnl $Id: misc.m4,v 1.2 2000/07/19 15:04:00 joda Exp $
dnl
AC_DEFUN([upcase],[`echo $1 | tr abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ`])dnl
AC_DEFUN([rk_CONFIG_HEADER],[AH_TOP([#ifndef RCSID
#define RCSID(msg) \
    static /**/const char *const rcsid[] = { (const char *)rcsid, "\100(#)" msg }
#endif

#undef BINDIR
#undef LIBDIR
#undef LIBEXECDIR
#undef SBINDIR

#undef HAVE_INT8_T
#undef HAVE_INT16_T
#undef HAVE_INT32_T
#undef HAVE_INT64_T
#undef HAVE_U_INT8_T
#undef HAVE_U_INT16_T
#undef HAVE_U_INT32_T
#undef HAVE_U_INT64_T

/* Maximum values on all known systems */
#define MaxHostNameLen (64+4)
#define MaxPathLen (1024+4)
])])

(although why this is in an autoconf macro instead of a *.h file rather puzzles me)

--
Brian May <bam@debian.org>

--
To UNSUBSCRIBE, email to debian-devel-request@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
Source: https://lists.debian.org/debian-devel/2002/04/msg01603.html
Hi Chris!

On Tuesday 07 December 2004 00:56, Christopher R. Hertel wrote:
> There's a thread started on the Samba-Technical mailing list that has
> some discussion regarding cluster filesystems. I'm learning fast,
> but I'm not the right one to answer this:
>
> >tml
>
> An excerpt:
>
> The other network-like filesystems - Lustre, SANFS, GPFS, and
> RedHat's GFS do differ a little.. They differ in that they would
> attempt stricter posix semantics

A filesystem either complies with Posix semantics (understood to mean local filesystem semantics) or it doesn't; there is no "stricter". The filesystems listed all attempt to achieve local filesystem semantics for a filesystem shared by a cluster (and they all rely on shared disk access).

Whoops, we've gotten off track here. I think the author may have confused "cluster" with "distributed" filesystems (Coda, AFS), which do have a notion of "nearby" and support disconnected operation. Cluster filesystems do not support disconnected operation: a disconnected machine can't see the shared filesystem at all, and any data it may have cached is invalid. In return for giving up disconnected operation, cluster filesystems offer inexpensive scaling of bandwidth and compute power, and they offer local filesystem semantics.

> If they had a good standards
> story with the IETF and were in-kernel in 2.6, perhaps no one would
> care,

I'm not sure what he means. The only standard that matters to a cluster filesystem is Posix/SUS. Perhaps he is thinking about operating a shared filesystem over the internet, which is explicitly not a goal of any cluster filesystem that I know of.

> but it seems odd - when you can make AFS or CIFS or NFSv4 do
> the same with rather more trivial changes.

Ahem. The task of adding local filesystem semantics to any existing network filesystem is far from trivial. Also, these are all really stacking filesystems: each of them exports portions of a local filesystem.

A cluster filesystem is not stacked: it directly accesses a shared storage device without using some other filesystem as an intermediary. (Lustre actually uses a modified Ext3 on the storage device; however, this intermediate filesystem is used only to keep track of data blocks and does not provide namespace operations.) So, cluster and network filesystems are very different animals.

> Somehow I think that the above doesn't quite capture what GFS is all
> about.

Indeed. The importance of GFS to Samba is that you can stack a clustered Samba on top of it, though as we have discussed, GFS will need some hacking to support things like oplocks and weirdo Windows naming semantics. Have we gotten to the point of discussing exactly how yet, or are we still digesting concepts?

> I'm not trying to start a flamewar, but I'd certainly like to
> see someone provide a clearer explanation than I could do.
>
> Chris -)-----

Hope this helps, and that I haven't moosed anything up :-)

Regards,

Daniel
Source: https://www.redhat.com/archives/linux-cluster/2004-December/msg00038.html
class Solution(object):
    def maxEnvelopes(self, envs):
        def liss(envs):
            def lmip(envs, tails, k):
                # Leftmost insertion point via binary search.
                b, e = 0, len(tails) - 1
                while b <= e:
                    m = (b + e) >> 1
                    if envs[tails[m]][1] >= k[1]:
                        e = m - 1
                    else:
                        b = m + 1
                return b

            tails = []
            for i, env in enumerate(envs):
                idx = lmip(envs, tails, env)
                if idx >= len(tails):
                    tails.append(i)
                else:
                    tails[idx] = i
            return len(tails)

        def f(x, y):
            return -1 if (x[0] < y[0] or x[0] == y[0] and x[1] > y[1]) else 1

        envs.sort(cmp=f)  # Python 2: width ascending, height descending on ties
        return liss(envs)

# Runtime: 100ms
The idea is to order the envelopes and then calculate the longest increasing subsequence (LISS). We first sort the envelopes by width, and we also make sure that when the width is the same, the envelope with greater height comes first. Why? This makes sure that when we calculate the LISS, we don't have a case such as [3, 4] [3, 5] (we could increase the LISS but this would be wrong as the width is the same. It can't happen when [3, 5] comes first in the ordering).
We could calculate the LISS using the standard DP algorithm (quadratic runtime), but we can just use the tails array method with a twist: we store the index of the tail, and we do leftmost insertion point as usual to find the right index in O(n log n) time. Why not rightmost? Think about the case [1, 1], [1, 1], [1, 1].
@agave i guess you could also have written
envelopes.sort(key=lambda x: (x[0], x[1]))
It's the same, just using a lambda and in one line :)
BTW, just loved your right shift instead of dividing by 2 :D Really sweet
Shorter version using bisect
import bisect

class Solution(object):
    def maxEnvelopes(self, envelopes):
        """
        :type envelopes: List[List[int]]
        :rtype: int
        """
        if not envelopes:
            return 0
        envelopes.sort(key=lambda x: (x[0], -x[1]))
        h = []
        for i, e in enumerate(envelopes, 0):
            j = bisect.bisect_left(h, e[1])
            if j < len(h):
                h[j] = e[1]
            else:
                h.append(e[1])
        return len(h)
Hey can you please explain what you're doing or what is an algorithm captured in
h = []
for i, e in enumerate(envelopes, 0):
    j = bisect.bisect_left(h, e[1])
    if j < len(h):
        h[j] = e[1]   # <---- what does it do??
    else:
        h.append(e[1])
@pshaikh
e[1] is the height of the current envelope. h stores the heights of the envelopes before the current envelope. And because the envelopes are sorted by (width, -height), the current envelope's width is equal to or larger than that of the envelopes before it. h[j] = e[1] is trying to put the current envelope in the right place.
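For what it's worth, the same idea ports directly to Python 3 (where list.sort no longer accepts cmp): sort with a key of (width, -height), then run the tails/bisect LIS on the heights. This is my own sketch of the approach discussed above; the function and variable names are mine:

```python
import bisect

def max_envelopes(envelopes):
    """Longest chain of strictly nesting envelopes.

    Sort by width ascending and height descending on ties; the answer is
    then the length of the longest strictly increasing subsequence of
    the heights (the -height tie-break prevents equal-width envelopes
    from chaining, as explained above).
    """
    envelopes.sort(key=lambda e: (e[0], -e[1]))
    tails = []  # tails[i] = smallest tail height of an increasing run of length i+1
    for _, height in envelopes:
        i = bisect.bisect_left(tails, height)
        if i == len(tails):
            tails.append(height)
        else:
            tails[i] = height
    return len(tails)

print(max_envelopes([[5, 4], [6, 4], [6, 7], [2, 3]]))  # → 3
```

The [2,3] → [5,4] → [6,7] chain is the longest here; [6,4] cannot extend it because after the sort it sits before [6,7] with a smaller height.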
Source: https://discuss.leetcode.com/topic/48160/python-o-nlogn-o-n-solution-beats-97-with-explanation
I have a lot of lines with "# TODO" comments that I'd like git to ignore when committing. For example, I want git to ignore this whole line:

def destroy
  # TODO Alter route (Tell git to ignore this)
  @comment.destroy
end

and everything from "# TODO" onwards here:

def destroy
  @comment.destroy # TODO Alter this line. (Tell git to ignore from '# TODO' onwards.)
end
You could use a 'clean' filter. It is run before a file is staged.
First, define the file types that you want to use the filter for, and put them in your
.gitattributes file:
*.rb filter=removetodo
Then, adjust your config so that you tell git what the "removetodo" filter is supposed to do on clean:
git config filter.removetodo.clean "sed '/TODO/d'"
sed command taken from Delete lines in a text file that containing a specific string
Then, when you do a git add, git will silently remove every line containing the word TODO from your .rb files, without affecting your working area.
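A clean filter is just a stdin-to-stdout text transform that git runs over each staged file. As a hedged illustration of what the sed command above does (clean_filter and src are my own names, not part of git):

```python
def clean_filter(text):
    """Stand-in for `sed '/TODO/d'`: drop every line that contains TODO."""
    return "".join(line for line in text.splitlines(keepends=True)
                   if "TODO" not in line)

src = ("def destroy\n"
       "  # TODO Alter route\n"
       "  @comment.destroy\n"
       "end\n")

# Prints the method with the TODO line removed, which is what git would
# stage while your working copy keeps the TODO.
print(clean_filter(src))
```

Note this models only the whole-line case; stripping text from "# TODO" to end-of-line would need a substitution (e.g. sed's s command) instead of deletion.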
The downside is that you will lose all your TODOs if you accidentally merge in any changes that touch these files.
Source: https://codedump.io/share/QQpii10pKuAl/1/how-do-i-tell-gitignore-or-git-itself-to-ignore-lines-containing--todo
Hi,
With JCR-996 and the goal of eventually integrating SPI with
Jackrabbit core there comes a question of where and how we should
bundle the various "commons" classes we have. Currently
jackrabbit-jcr-commons is the throw-in default of all utility classes
that might be useful also outside jackrabbit-core.
To better define the scope of jackrabbit-jcr-commons and related
packages, I'd like to propose the following:
1) The jackrabbit-jcr-commons package should contain generic utility
classes that work *on top of* the JCR API. The classes should handle
names and paths in string format, and use the namespace methods on
javax.jcr.Session where name manipulation is needed. The package can
also contain JCR base classes and other utilities that implement JCR
interfaces and methods in terms of other JCR API calls. Optimally the
only dependencies should be the Java 1.4 and JCR 1.0 (later 2.0).
2) The jackrabbit-jcr2spi package should contain everything that is
needed to bridge between the JCR and SPI interfaces. Optimally the
only dependencies would be Java 1.4, JCR 1.0 (later 2.0), SPI, and
possibly jackrabbit-jcr-commons and other generic things like
commons-collections and slf4j. Note that much of the functionality in
jackrabbit-jcr2spi should be usable as a generic dependency also for
"native" transient space implementations like the current
jackrabbit-core.
3) The jackrabbit-spi-commons package should contain generic utility
classes that work *below* the SPI. Optimally the only dependencies
would be Java 1.4 and SPI. This package should be usable by both
jackrabbit-core and any SPI-based JCR connectors.
I would optimally place all the string-name conversion functionality
in jcr2spi, but since at least the query parser needs that
functionality on the SPI implementation side, it might be necessary to
push that functionality down to spi-commons.
WDYT?
BR,
Jukka Zitting
Source: http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200707.mbox/%3C510143ac0707260252o2a6743bcn5445c6611cfad789@mail.gmail.com%3E
This is the WccCoolButton; it is a multifunction composition based composite control. It has three operation modes: text, image and button. Each mode conceals an underlying LinkButton, Button or ImageButton control respectively. This composite control was created to be used inside future composite controls, and to give the end users of those future controls additional flexibility over the look and feel of the way the future control will operate. This will be the first article in a series of articles in which I will create a double pane picklist, which will use the WccCoolButton control instead of normal buttons. To start with, I had a lot of problems determining the order in which the controls are rendered in a composite control. The books that I was working off of had light examples, and didn't really cover the eventing of a composite control. As there are many examples of simple composite controls, I want to focus on the eventing and control "packing order" that the framework follows when rendering a Composite style control. For some examples of simple web custom controls, see This MSDN Article.
This control example uses Composition style, which means that this control doesn't need to handle the IPostBackEvent interface, or Post Back data. Rendering style, while giving greater flexibility over the control output also forces you to handle all the nuts and bolts of event handling and data post back. Better to let the framework handle that chore, and use Composition when you can. Composition type controls do have extra performance overhead, check out This MSDN Article to learn more.
One of the most painful lessons I had to learn was that Composition based controls like this one render the controls first, and then set the properties later. This caused many problems, as I attempted to set the properties first, and then call CreateChildControls() to render the control. This produced many random rendering errors, and ViewState troubles. So, the best tip I can pass on is to declare all your controls as private members in your Custom Control Class, and then allocate ALL your controls inside CreateChildControls(). After that, put your custom actions inside the property Get/Set sections. Your goal is to actually affect the created controls that are already in position in your Controls collection from CreateChildControls(). All your properties will then work as expected when you used them from the designer and at runtime.
Here is the order that your custom control is created (This is a conceptual order, as I understand it, the actual internal mechanics may be different):
When you create your child controls inside CreateChildControls(), you can wire them up to event handlers. In WccCoolButton, I wire three Button types to three different event handlers, and then have each event handler call the same exposed Click event if a delegate has been assigned to it. This way, I can change the WccCoolButton to any of the three button types, and have one exposed click event. To do this, you must have each of your controls that will be wired to events created inside CreateChildControls() and wired to their respective event handlers. If your controls are not present in the Controls collection, then the framework will not fire the events. The best way I have found to handle this is to create all the controls I will need, and then set the ones I will not use right away to .Visible = false. This way, the framework handles tracking the view state information, and ensures that the events stay hooked up to the controls. Then, I change the visibility of the controls with exposed properties in the set {} section.
Starting at the top, one thing I noticed that people have been asking about on usenet was "How do I get a pull down to appear in the designer for my custom property?" I think it's worth a quick note here; just define a public enum for all the values that will appear in the pulldown for your control. Then, make a property that will return that enum type, and the designer takes care of the rest.
namespace ChameleonLabs.CustomControls.WccCoolButton
{
public enum CoolButtonMode
{
Text = 0,
Button = 1,
Image = 2
}
[Bindable(false), Category("Appearance"),
 Description("The mode of the button, use Text for a decorated " +
             "text button, Button for a normal submit button style, " +
             "or Image for an Image button style.")]
public CoolButtonMode CoolButtonMode
{
get
{
...
Here is the top of the WccCoolButton class. Notice that I am declaring all the controls, but NOT allocating them. Allocation of the controls must take place in CreateChildControls(). Don't define a constructor and set the defaults in there, do that in CreateChildControls() as well. Also notice that I am exposing one event that can be delegated: Click. If you register an event handler with the designer, or at runtime, then it will be called when any of the three buttons is clicked.
Here is a simple property example that most of the data properties follow. Of note is the way that this property is "guarded" with EnsureChildControls(). EnsureChildControls makes sure that the CreateChildControls() method has been called before proceeding. This also ensures that all those private variables at the top of the object are created in CreateChildControls(). I try to use the built in base controls (like this Label control) as much as possible to capitalize on their existing view state control features.
[Bindable(true), Category("Appearance"), DefaultValue("["),
 Description("The text decoration that is displayed on " +
             "the left side of the button when the CoolButtonMode " +
             "is set to 'Text'.")]
public string LeftDecoration
{
get
{
this.EnsureChildControls();
return lblLeftDecoration.Text;
}
set
{
this.EnsureChildControls();
lblLeftDecoration.Text = value;
}
}
Here is an example of a more complex property where we affect the WccCoolButton control's look by setting the button mode. When the user changes the button mode in the designer or runtime, then the control is updated by hiding the child controls that we don't want shown, and setting the proper child control to visible to cause the framework to draw it. Note that just because a control is not drawn doesn't keep it from being included in the host page's ViewState collection. I am storing the CoolButtonMode in the ViewState in this example, so that it will persist across button clicks. Remember, each button click causes a page postback to occur, so we must keep the state of all the variables that will persist across postback in ViewState. Note that the base controls like Button, Label and the rest manage their own internal view state, so we only need worry about non-web control variables like this eCoolButtonMode state variable.
[Bindable(false), Category("Appearance"),
 Description("The mode of the button, use Text for a decorated text button, " +
             "Button for a normal submit button style, " +
             "or Image for an Image button style.")]
public CoolButtonMode CoolButtonMode
{
get
{
CoolButtonMode retVal;
if(ViewState["eCoolButtonMode"] == null)
retVal = CoolButtonMode.Text;
else
retVal = (CoolButtonMode) ViewState["eCoolButtonMode"];
return(retVal);
}
set
{
ViewState["eCoolButtonMode"] = value;
this.EnsureChildControls();
HtmlTable t = (HtmlTable) Controls[0];
switch(CoolButtonMode)
{
case CoolButtonMode.Button:
this.btnLink.Visible = false;
this.btnButton.Visible = true;
this.btnImage.Visible = false;
t.Rows[0].Cells[0].Visible = false;
t.Rows[0].Cells[2].Visible = false;
break;
case CoolButtonMode.Image:
this.btnLink.Visible = false;
this.btnButton.Visible = false;
this.btnImage.Visible = true;
t.Rows[0].Cells[0].Visible = false;
t.Rows[0].Cells[2].Visible = false;
break;
default: // The control is in CoolButtonMode.Text mode.
this.btnLink.Visible = true;
this.btnButton.Visible = false;
this.btnImage.Visible = false;
t.Rows[0].Cells[0].Visible = true;
t.Rows[0].Cells[2].Visible = true;
break;
}
}
}
These next few slides all deal with my overridden CreateChildControls(). First, I am allocating all the controls.
Next, we wire up the event handlers so that all the controls will be registered with the framework for PostBack event handling.
...
// Setup the events on the page.
btnLink.Click += new EventHandler(this.OnBtnLink_Click);
btnButton.Click += new EventHandler(this.OnBtnButton_Click);
btnImage.Click += new ImageClickEventHandler(this.OnBtnImage_Click);
...
Now, we create the control's table to layout the child controls, and put the child controls into the table. This is pretty straight forward table building. One thing to note is the use of the styles on the table. This is done to get the table to sit inline with surrounding html elements, otherwise the table will be kicked down a line.
...
HtmlTable table = new HtmlTable();
HtmlTableRow newRow;
HtmlTableCell newCell;
// Make sure that the composite control flows with
// the surrounding text properly.
table.Border = 0;
table.Style.Add("DISPLAY", "inline");
table.Style.Add("VERTICAL-ALIGN", "middle");
newRow = new HtmlTableRow();
newCell = new HtmlTableCell();
newCell.Controls.Add(lblLeftDecoration);
newRow.Cells.Add(newCell);
newCell = new HtmlTableCell();
newCell.Align = "center";
// Add all the buttons to the control, so that if they are switched
// programatically, the event handlers will stay linked. If the controls
// are not included in the Controls collection, then the event handling
// doesn't persist. We will use the visibility to determine which one is
// actually rendered for the user to see.
newCell.Controls.Add(btnLink);
newCell.Controls.Add(btnButton);
newCell.Controls.Add(btnImage);
newRow.Cells.Add(newCell);
newCell = new HtmlTableCell();
newCell.Controls.Add(lblRightDecoration);
newRow.Cells.Add(newCell);
if(newRow.Cells.Count > 0)
table.Rows.Add(newRow);
Controls.Add(table);
...
Now that the table is allocated, and all the controls are in place, it's time to set the defaults. You must set the defaults for your controls, simply specifying the DefaultValue attribute will not set them. The DefaultValue attribute will only cause the designer to put your property in bold if you change the property value from the value specified in the DefaultValue attribute. Setting the CoolButtonMode property also sets the visibility on the child controls so that only the child controls that make up the Text mode buttons will be shown.
...
// Setup the defaults for the controls.
this.LeftDecoration = "[";
this.RightDecoration = "]";
this.CoolButtonMode = CoolButtonMode.Text;
...
Once the table is built and the controls are allocated and wired to events, we need to define the event handlers. The only thing special about these is that they check whether there is a registered event handler to pass the event up to. If there is, they call the delegate's click method to send the event up the line. This is how the control exposes its Click event to the designer, so that you can wire it into an OnClick event on a hosting web page or another composite control. All three event handlers call the same Click delegate, which means that no matter which button control is clicked, the same event is sent to the registered delegate. I could have used the same delegate for both the OnBtnLink_Click and OnBtnButton_Click events, but for clarity's sake I wanted to use individual delegates for all three controls.
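To illustrate the forwarding pattern described above, here is a minimal sketch. The event field name (Click) and handler names follow the article's description, but the exact member names and bodies in the real control may differ:

```csharp
// Sketch of the event-forwarding pattern: the composite control exposes
// one Click event, and every child button handler forwards to it.
public event EventHandler Click;

private void OnBtnImage_Click(object sender, ImageClickEventArgs e)
{
    // Only forward if a handler has been registered on the composite control.
    if (Click != null)
        Click(this, EventArgs.Empty);
}

private void OnBtnLink_Click(object sender, EventArgs e)
{
    // Same delegate as the image handler, so callers see one Click event
    // regardless of which child button was actually rendered and clicked.
    if (Click != null)
        Click(this, EventArgs.Empty);
}
```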
Finally, there is the Designer class. The Designer class is covered in this MSDN article, so I won't cover it here. One noteworthy thing is the setting of the WccCoolButton.Text member to the control's UniqueID. This provides the same "default naming" functionality that you get when you first place a Label control on a WebForm inside the designer: the Label control receives a default name like Label1. In this case, if there are no controls in the Controls collection, then the Text property is set to the UniqueID. Another important aspect is that setting the Text property has the side effect of calling EnsureChildControls(), which creates the rest of the controls. If this didn't happen, the control would be created AFTER the designer renders it when it is first dropped on the page; the control would be created fine, but would appear to be empty. Setting a property that calls EnsureChildControls() solves this issue. If you are making a composite control that doesn't have a Text property to set, make a public member that calls EnsureChildControls(), which you can call from the GetDesignTimeHtml() override.
/*********************************************************************
*
* The control designer.
*
*********************************************************************/
namespace ChameleonLabs.CustomControls.WccCoolButton.Design
{
public class WccCoolButtonDesigner : ControlDesigner
{
/// <summary>
One thing that I had issues with was getting the icon to be associated with the control in the designer toolbox. Here is a link to the MSDN article that describes how to assign a custom icon to your control. It leaves out one crucial item: what to do if you change the namespace of your control. You must also change your project's default namespace to match your control's namespace. Note that this means that if you are developing two or more controls under one project, they must use the same namespace, or their icons will not associate properly. Here is why: when you compile your project, VS.NET helps you out by changing the name of your embedded icon to include the default namespace of your project. If the default namespace of your project is not the same as your class's namespace, then the framework can't find the icon to tie to your control in the toolbox, and gives up. So change that default namespace in your project settings, and it will work!
A Hex Editor reveals what happened to the bitmap name after VS.NET got done with it.
My goal for this example was to show you how to create a Web Custom Control that could do the following:
This was a very basic example, stay tuned for the PickList example where I will incorporate this WccCoolButton control into a multi-paned, data-bindable pick list control complete with sorting, filtering, and more fun design time support!
I would like to say thanks to everyone who asks/answers questions on UseNet, it was an invaluable stomping ground for research into Web Custom Control design. I would also like to thank The Code Project, where I found some of the best examples of Custom Control code (especially those awesome articles by Shawn Wilde).
1/17/2002 - Int.
- Part 9 - Combination Key Input
- Part 10 - Testing
- Part 11 - Netlify Deployment
It would be nice if users could input numbers into our calculator using their number pad on their keyboard.
We need to tell our program what to do when a user presses these keys. So we need to add some more messages to our application.
We will be using the package Gizra/elm-keyboard-event.
First we need to install the package so we can import it into our code. The ellie-app version of this chapter has the package installed. Open the side menu and look at the installed packages. You should see I have added 3 new packages as dependencies.
To install a package in ellie-app use the search bar.
Or you can install a package locally on the command line.
elm install Gizra/elm-keyboard-event
We will also need to import SwiftsNamesake/proper-keyboard which is a dependency of elm-keyboard-event. We need this because we will be using these types directly in our application.
elm install SwiftsNamesake/proper-keyboard
And lastly we need elm/json to decode the JSON messages coming from the browser for our events. Don't worry. This is not as scary as it sounds.
elm install elm/json
Refactor to use Elm subscriptions
Since these keyboard events are going to be coming from the browser, our Elm application needs to subscribe to these events. In order to do subscriptions we need to refactor our application.
Right now we have been using Browser.sandbox in our main function.

main : Program () Model Msg
main =
    Browser.sandbox
        { init = initialModel
        , view = view
        , update = update
        }
Change to Browser.element

We need to change it to Browser.element so that we can add subscriptions.
main : Program () Model Msg
main =
    Browser.element
        { view = view
        , init = \_ -> init
        , update = update
        , subscriptions = subscriptions
        }
This is going to impact our whole application. We'll let the compiler guide us through this refactor. The biggest change will be to our update function.

It will need to change from

update : Msg -> Model -> Model

to

update : Msg -> Model -> ( Model, Cmd Msg )
We need to return the model and a command message. We won't be using command messages in this application so don't worry about it. We just need to fix the code so it will compile again.
Most of the time we can just return the tuple ( model, Cmd.none ).

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        SetDecimal ->
            if String.contains "." model.currentNum then
                ( model, Cmd.none )
        ...
The init function also needs to return a command message.

init : ( Model, Cmd Msg )
init =
    ( { stack = []
      , ...
      }
    , Cmd.none
    )
Add subscriptions
Now we can work on adding the subscriptions. First we need to import some stuff.
import Browser.Events exposing (onKeyDown)
import Json.Decode as D
import Keyboard.Event as KE exposing (KeyboardEvent, decodeKeyboardEvent)
import Keyboard.Key as KK
We are now going to subscribe to the onKeyDown event in the browser. After our application gets the event, we need to decode it into something Elm can deal with. Since events come in from the browser as JSON messages, we need to decode the JSON into an Elm type.

Luckily for us, we don't need to worry about writing a decoder for these events. The Gizra/elm-keyboard-event package provides one for us.
subscriptions : Model -> Sub Msg
subscriptions model =
    onKeyDown (D.map HandleKeyboardEvent decodeKeyboardEvent)
This subscriptions function is going to return a subscription message, in this case a HandleKeyboardEvent message.

We need to add that constructor to our Msg type.

type Msg
    = ...
    | HandleKeyboardEvent KeyboardEvent
Handle the key events
Now we can pattern match on this message and handle it in our update function.

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        ...

        HandleKeyboardEvent event ->
            case event.keyCode of
                KK.Add ->
                    update (InputOperator Add) model

                KK.Subtract ->
                    update (InputOperator Sub) model

                KK.NumpadZero ->
                    update (InputNumber 0) model

                ...

                -- ignore anything else
                _ ->
                    ( model, Cmd.none )
I used a small trick here. When we get a key matching one of our cases, we just call update again with the appropriate message.
What I hope you take away from this chapter is how nice it is to refactor things in Elm. We made some sweeping changes to our application and the compiler was able to help us out.
Now that we have keyboard input, it would be nice if the user had some more options for deleting the stack frames. The next chapter will cover using key combinations such as ctrl-shift-delete to clear out the frames.
Dec 12, 2019 07:09 AM|Zerimar|LINK
I need to fill an object list with stuff, and later on I want it immutable for a certain part of the code that modifies it in a loop.
What I have now cannot do the job because I am ignorant of the immutable concept; I have never used it before, but I need it now.
How do I rework my code so that it's mutable while filling in data, and later on immutable for only a certain kind of code?
Here's my code:
public List<UBTDataTable> UBTDataList { get; set; }

public class UBTDataTable
{
    public decimal BALANCE { get; set; }
    public int TIME { get; set; }
    public string EXPIRATION { get; set; }
    public int EXPIRATIONTIME { get; set; }
    public int MONTH { get; set; }
    public int DAY { get; set; }
    public int YEAR { get; set; }
}
----------------------------------------------------
Here is what I found on Google, but I don't know what it means:
public class Writer1
{
    // Read-only properties.
    public string Name { get; }
    public string Article { get; private set; }

    // Public constructor.
    public Writer1(string authorName, string articleName)
    {
        Name = authorName;
        Article = articleName;
    }
}
Dec 12, 2019 08:30 AM|PaulTheSmith|LINK
Zerimari want it immutable for a certain part of a code that modifies it in a loop
Immutable means that something cannot be changed. You seem to be asking for something that cannot be changed to be changed.
Do you want it immutable or do you want to modify it? Can't have both.
Also, when you are thinking about your design you should be clear about whether you want an immutable list and/or immutable elements of the list.
That is, do you want to stop clients adding/removing entries from UBTDataList (have a look at ReadOnlyCollection<T>)
… or do you want to stop clients changing some/all of the properties of the elements of the list? Think about changing the setter access for particular properties.
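A minimal sketch of the ReadOnlyCollection<T> idea mentioned above. The list and element type names are taken from the question; everything else is illustrative:

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;

var ubtDataList = new List<UBTDataTable>();
// ... fill the list while it is still mutable ...

// Hand out a read-only wrapper: callers can read and enumerate it,
// but Add/Remove/Clear on the wrapper throw NotSupportedException.
ReadOnlyCollection<UBTDataTable> frozen = ubtDataList.AsReadOnly();

// Note: this freezes the *list*, not the elements. If UBTDataTable
// still has public setters, callers can still change its properties.
```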
Dec 12, 2019 09:56 AM|Zerimar|LINK
How do I explain this?
I want to build up my list with values, and in doing so modify them here and there.
Next, I want to test the values by modifying/reading them, but the test ends up with the modified results.
I want to create my list of values, test them by modifying them again, BUT keep the original values AFTER the test. Does that make sense?
Dec 12, 2019 10:37 AM|PatriceSc|LINK
Hi,
The idea is that you can only define property values when you create an object. So technically speaking you won't modify an existing object; instead you'll create a new object with updated values, i.e.:
var a = new Writer1("Verlaine", "How I wrote Les Misérables");
a.Name = "Hugo";  // doesn't work, as the property is read-only
var b = new Writer1("Hugo", a.Article);  // works; you have to create a new object
                                         // (possibly still keeping a reference to the old one)...
Is this enough? You should find articles, but from a quick search I find either basic stuff or maybe overkill libraries...
Edit: for code testing you could also consider using a method that returns a new known list and restarting from that for each test, or cloning a source list, etc.
Dec 13, 2019 06:33 AM|PaulTheSmith|LINK
Once you are happy with the state of your objects, you can create a new set of objects which can be altered without affecting the originals. A 'copy constructor' is quite useful here.
The idea is that you have a constructor which takes an existing object as the parameter and produces a new object whose property values are all the same.
Add a constructor to your class
public class UBTDataTable
{
    public decimal BALANCE { get; set; }
    public int TIME { get; set; }
    public string EXPIRATION { get; set; }
    public int EXPIRATIONTIME { get; set; }
    public int MONTH { get; set; }
    public int DAY { get; set; }
    public int YEAR { get; set; }

    // Keep the parameterless constructor, so existing code that does
    // new UBTDataTable() still compiles once the copy constructor exists.
    public UBTDataTable()
    {
    }

    public UBTDataTable(UBTDataTable original)
    {
        BALANCE = original.BALANCE;
        TIME = original.TIME;
        EXPIRATION = original.EXPIRATION;
        EXPIRATIONTIME = original.EXPIRATIONTIME;
        MONTH = original.MONTH;
        DAY = original.DAY;
        YEAR = original.YEAR;
    }
}
Once UBTDataList is in the state that you want, you can easily create a copy:

var myPlayground = UBTDataList.Select(d => new UBTDataTable(d)).ToList();

(The .ToList() call matters: without it the Select is evaluated lazily, and each enumeration would produce fresh copies, discarding your changes.) You can change myPlayground and its elements as much as you want, but the elements of UBTDataList will not be affected.
On May 12, 2009, at 10:53 AM, Thomas Lotze wrote:

> Buildout as well as a number of buildout recipes need to download files
> from the net. They need to take care of doing the actual download,
> using the download cache and honouring the offline option. A couple of
> years ago, we also wrote the gocept.download recipe that does all that
> as well as MD5 checks and some other stuff.
>
> In order to simplify this situation, we propose adding a download API
> to buildout itself. A function like this:
>
> def download(url, use_cache=True, md5sum=None)
>     ...
>     return filename
>
> might be sufficient to remove the download logic from recipes and make
> optional checksum testing available without needing to depend on a
> separate download recipe.
>
> As a consequence, zc.recipe.cmmi would be able to do MD5 checks, which
> would basically make gocept.cmmi redundant. Using gocept.download's
> capabilities is the one feature of it which still currently keeps us
> from dropping it in favour of zc.recipe.cmmi.
>
> If there's no objection against a download API being added to buildout,
> we'd like to implement one soon.
>
> Otherwise, we'd propose implementing MD5 testing in zc.recipe.cmmi,
> since we consider it a good thing to reduce the zoo of cmmi-related
> recipes. We do think that a reusable API would be the better choice,
> though.

+1

Thanks!

You should add a "namespace" option to the API. Right now we have at
least 2 namespaces, dist and cmmi. (My download cache has a "minitage"
directory. I wonder where that came from. :)

Jim

--
Jim Fulton
Zope Corporation
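For illustration, here is a rough sketch of what the proposed download function might look like. The signature follows the proposal above; the cache-directory and offline parameters, and all implementation details, are assumptions, not buildout's actual code:

```python
import hashlib
import os
import urllib.request
from urllib.parse import urlparse


def download(url, cache_dir, use_cache=True, md5sum=None, offline=False):
    """Fetch url into cache_dir, honouring the cache and the offline option.

    Returns the path of the local file, after an optional MD5 check.
    """
    name = os.path.basename(urlparse(url).path)
    filename = os.path.join(cache_dir, name)

    # Serve from the download cache when allowed and available.
    if use_cache and os.path.exists(filename):
        _check_md5(filename, md5sum)
        return filename

    # In offline mode a cache miss is a hard error: no network access.
    if offline:
        raise RuntimeError("offline mode: %s not found in cache" % url)

    os.makedirs(cache_dir, exist_ok=True)
    urllib.request.urlretrieve(url, filename)
    _check_md5(filename, md5sum)
    return filename


def _check_md5(filename, md5sum):
    # Optional checksum testing, as the proposal suggests.
    if md5sum is None:
        return
    with open(filename, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    if digest != md5sum:
        raise ValueError("MD5 mismatch for %s" % filename)
```

A recipe would then call `download(url, cache_dir, md5sum=...)` instead of carrying its own download logic, which is exactly the duplication the proposal wants to remove.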
Try to compile and run this program with your C compiler and with your C++ compiler:
#include <stdio.h>
struct empty {};
int main()
{
printf("nothing is %zu\n", sizeof(struct empty));
}
C gets this right: 0 bytes (empty structs are technically a compiler extension in C, but GCC accepts them). But C++ requires it to take 1 byte. That's so that various object-identity weirdnesses work out correctly in more complicated cases. But for struct empty it's just plain crazy...
As Gorgonzola points out, it gets crazier than this. Suppose I had to write a class that implements 10 interfaces. Each interface is an empty base class, so it has size 1. Does it follow that the derived class needs at least 10 bytes, just to store 10 copies of nothing?
No. The empty base-class optimization is to remove all storage for empty base classes. This is required for any reasonable quality of implementation, but also adds to the craziness. It is commonly assumed that if I declare
class C: public A, protected B { ... };
then sizeof(C) >= sizeof(A) + sizeof(B).
In C++, derived classes can be smaller than the sum of their base classes. But, for obvious reasons, a class cannot be smaller than the sum of its contents -- even if its contents are all empty.
Something interesting in C (but not C++):
#include <stdio.h>
struct empty {};
int main (void)
{
struct empty a[20], *i;
for(i=a; i<=&a[19]; ++i) {
printf("%p\n", i);
}
return 0;
}
This program will, if your compiler does not pad struct empty, be an infinite loop. Pointer addition in C and C++ works kind of like the following, where `type' can be any addressable type:
type *ptr_add(type *base, ptrdiff_t offset) {
unsigned long addr, b = (unsigned long)base;
addr = b + offset*sizeof(type);
return (type *)addr;
}
In the first program we had ++i, which in that context means i = ptr_add(i, 1).
Because sizeof(struct empty) == 0, offset*sizeof(type) == 0 and i remains unchanged. ++ applied to a pointer to struct empty will do nothing.
Even more weird:
#include <stdio.h>
struct empty {};
int main(void) {
struct empty x[20];
printf("%d\n", &x[10] - x);
return 0;
}
Pointer differences work like:
ptrdiff_t ptr_diff(type *minuend, type *subtrahend) {
    unsigned long a = (unsigned long)minuend;
    unsigned long b = (unsigned long)subtrahend;
    long diff = (long)(a - b) / (long)sizeof(type);
    return (ptrdiff_t)diff;
}
Computing pointer differences with a zero-byte structure thus involves division by zero, undefined behaviour in C. The C++ behaviour may seem strange at first, but it is to keep perfectly reasonable pointer arithmetic from failing for some types. As magicmanzach points out below, this is especially important because C++'s template system allows generic programming. It often makes sense to write generic code using pointer arithmetic, but it is safe to do so only for types with positive sizes. Rather than require template writers (or template users) to explicitly check for empty types, the designers of C++ instead declared that all types have positive size.
Such things happen when you try to cross a glorified macro assembler with Simula, in the meanwhile adding a (completely optional thanks to casts) type system almost as strict as that of Modula-2---so strict that many C programmers moving to C++ learn to ignore type-safety warnings from the compiler, especially when const is involved. Not to mention template processing that rivals PHP in power and unlambda in comprehensibility.
Hello,

My name is Simone Basso, and I study Computer Science at Politecnico di Torino, Italy. For the past three weeks I have been working on a project that involves Qemu, as part of an exam, together with another student, Gaspare Scherma.

The requested task was to add to Qemu 0.9.0 support for the AMBA ARM PL031 RTC chip, to avoid having the emulated system (it was Debian 4.0) always start at 1970-01-01, which may be _very_ annoying :-).

The patch that we attach provides the following features:

- The chip is recognized by the kernel;
- It is possible to read the current time from the RTC;
- It is possible to request an interrupt in the next 24 hours;
- It is possible to acknowledge such an interrupt, to lower the IRQ signal.

To implement this patch we used the following knowledge sources:

- The implementation of the rtc-pl031 driver in the Linux kernel;
- The emulated device hw/pl011.c in the Qemu sources;
- Documentation in the Linux kernel.

Needless to say, any error in the patch was certainly introduced by us, and does not depend on them :-).

The reasons why we used the Linux kernel driver as a source of documentation were two:

- We were not able to find complete documentation of the chip[1];
- Another part of the exam task was to study Linux's RTC framework, so we were already a bit familiar with the driver's code.

Having used the Linux kernel as the main source of information, we have also taken some code from the kernel itself, to implement some bits. Anyway, it should not be a problem, since the new file hw/pl031.c, added by the patch, is under the GPL.

I also need to point out that, to test the Linux kernel with our patch, you need to patch the kernel itself: in fact we found a problem in Linux's rtc-pl031 interrupt handler. For this reason, a _very_ little patch for the Linux kernel is also attached[2].

I write this email to make you, Qemu developers, aware of our effort, in the hope that the patch could be a base for PL031 support in Qemu.
If I have missed some information useful to review the patch, please tell me.

Regards,
-Simone Basso

--
Footnotes

[1] We have just found a document which says something about PL031 on page 222.

[2] The patch is for 2.6.21.3; however, I know for sure that it applies without any problem to 2.6.18 (I guess that it won't create any problem for any kernel >= 2.6.18, but I can't say anything about kernels older than .18). We have already sent the patch via email to the proper driver maintainer, who said that it seemed OK and that he would forward it upstream. This happened a week ago. However, at the moment, no stable kernel from kernel.org has the patch, therefore it is necessary to apply it.
--- linux-2.6.21.3/drivers/rtc/rtc-pl031.c.orig 2007-05-24 23:22:47.000000000 +0200 +++ linux-2.6.21.3/drivers/rtc/rtc-pl031.c 2007-06-06 22:07:53.000000000 +0200 @@ -49,9 +49,14 @@ static irqreturn_t pl031_interrupt(int irq, void *dev_id) { - struct rtc_device *rtc = dev_id; + struct pl031_local *ldata = dev_id; - rtc_update_irq(&rtc->class_dev, 1, RTC_AF); + rtc_update_irq(&ldata->rtc->class_dev, 1, RTC_AF); + + /* at page + 222 tells that "The interrupt is cleared by writing any data + value to the interrupt clear register RTCICR." */ + __raw_writel(1, ldata->base + RTC_ICR); return IRQ_HANDLED; } @@ -173,8 +178,11 @@ goto out_no_remap; } - if (request_irq(adev->irq[0], pl031_interrupt, IRQF_DISABLED, - "rtc-pl031", ldata->rtc)) { + /* Pass ldata to the interrupt handler, so we're able to write + the register that clears the interrupt: we need ldata->base + for that. */ + if (request_irq(adev->irq[0], pl031_interrupt, IRQF_DISABLED, + "rtc-pl031", ldata)) { ret = -EIO; goto out_no_irq; }
--- qemu-0.9.0/hw/versatilepb.c.orig 2007-06-06 21:45:40.000000000 +0200 +++ qemu-0.9.0/hw/versatilepb.c 2007-06-06 21:45:40.000000000 +0200 @@ -214,6 +214,9 @@ that includes hardware cursor support from the PL111. */ pl110_init(ds, 0x10120000, pic, 16, 1); + /* Add PL031 Real Time Clock. */ + pl031_init(0x101e8000,pic,10); + /* Memory map for Versatile/PB: */ /* 0x10000000 System registers. */ /* 0x10001000 PCI controller config registers. */ --- qemu-0.9.0/vl.h.orig 2007-06-06 21:45:40.000000000 +0200 +++ qemu-0.9.0/vl.h 2007-06-06 21:45:40.000000000 +0200 @@ -1307,6 +1307,9 @@ /* smc91c111.c */ void smc91c111_init(NICInfo *, uint32_t, void *, int); +/* pl031.c */ +void pl031_init(uint32_t base, void * pic, int irq); + /* pl110.c */ void *pl110_init(DisplayState *ds, uint32_t base, void *pic, int irq, int); --- qemu-0.9.0/Makefile.target.orig 2007-06-06 21:45:40.000000000 +0200 +++ qemu-0.9.0/Makefile.target 2007-06-06 21:45:40.000000000 +0200 @@ -399,7 +399,7 @@ endif ifeq ($(TARGET_BASE_ARCH), arm) VL_OBJS+= integratorcp.o versatilepb.o ps2.o smc91c111.o arm_pic.o arm_timer.o -VL_OBJS+= arm_boot.o pl011.o pl050.o pl080.o pl110.o pl190.o +VL_OBJS+= arm_boot.o pl011.o pl050.o pl080.o pl110.o pl190.o pl031.o VL_OBJS+= versatile_pci.o VL_OBJS+= arm_gic.o realview.o arm_sysctl.o VL_OBJS+= arm-semi.o --- qemu-0.9.0/hw/pl031.c.orig 2007-06-06 21:56:59.000000000 +0200 +++ qemu-0.9.0/hw/pl031.c 2007-06-06 21:52:15.000000000 +0200 @@ -0,0 +1,480 @@ +/* + * ARM AMBA PrimeCell PL031 RTC + * + * Copyright (c) 2007 Politecnico di Torino, Italy. + * Initially written by Simone M.Basso <address@hidden>, + * Gaspare Scherma <address@hidden> + * + * This file is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + * References to implement this emulated device were + * linux-2.6.22-rc3/drivers/rtc/{rtc-pl031.c,class.c,interface.c,rtc-lib.c} + * qemu-0.9.0/hw/{versatilepb.c,pl011.c,arm_timer.c,pl*.c} + * [pag. 222] + * + * PATCH's ChangeLog + * + * version_2 : fixed the problem with interrupts: it was a problem in + * linux 2.6.21.3 rtc-pl031.c driver. updated the code + * to have interrupt in next 24 hours working [2007/06/06] + * + * version_1 : the patch works, but crashes the kernel when an + * interrupt is triggered [2007/06/04] + */ + +#define PL031_DEBUG /* Print some debug messages */ + +#include"vl.h" + +#define RTC_DR 0x00 /* Data read register */ +#define RTC_MR 0x04 /* Match register */ +#define RTC_LR 0x08 /* Data load register */ +#define RTC_CR 0x0c /* Control register */ +#define RTC_IMSC 0x10 /* Interrupt mask and set register */ +#define RTC_RIS 0x14 /* Raw interrupt status register */ +#define RTC_MIS 0x18 /* Masked interrupt status register */ +#define RTC_ICR 0x1c /* Interrupt clear register */ + +struct pl031_state { + uint32_t base; /* Base I/O memory */ + void * pic; /* Reference to pic */ + int irq; /* Assigned IRQ */ + + uint32_t boot_time; + + struct { + uint32_t time; + uint32_t enabled; + uint32_t pending; + + QEMUTimer * _timer; + } alarm; +}; + +static const unsigned char pl031_id[] = { + 0x31, 0x10, 0x14, 0x00, /* Device ID */ + 0x0d, 0xf0, 0x05, 0xb1 /* Cell ID */ +}; + +static void pl031_interrupt(void * opaque) +{ + struct pl031_state * state = opaque; + + pic_set_irq_new(state->pic, state->irq, 1); +#ifdef PL031_DEBUG + fprintf(stderr, "qemu: pl031: tick!\n"); +#endif +} + +static inline uint32_t read_current_time(struct pl031_state * state, + int64_t *ticks) +{ + /* + * WARNING! state->boot_time is _approx_ the second since Epoch + * when the emulation was started. The vm_clock counts in ticks, + * as specified in vl.h, from the time when the vm was started. 
+ * So, if we sum these two values, we _approx_ the needed time, + * which is ok since RTC usually have a resolution of one second. + * + * state->boot_time is saved during pl031_init(). + */ + int64_t myTicks = qemu_get_clock(vm_clock); + uint32_t cur_time = state->boot_time; + + cur_time += (uint32_t)(myTicks / ticks_per_sec); + if (ticks) + *ticks = myTicks; + return cur_time; +} + +static uint32_t pl031_read(void * opaque, target_phys_addr_t offset) +{ + struct pl031_state * state = opaque; + + offset -= state->base; + + if (offset >= 0xfe0 && offset < 0x1000) + return pl031_id[(offset - 0xfe0) >> 2]; + /* + *_DR: + return read_current_time(state, NULL); + break; + + case RTC_MR: + return state->alarm.time; + break; + case RTC_IMSC: + return state->alarm.enabled; + break; + case RTC_RIS: + return state->alarm.pending; + break; + + case RTC_LR: + case RTC_CR: + case RTC_MIS: + case RTC_ICR: + fprintf(stderr, "qemu: pl031_read: Unexpected offset " + "0x%x\n", offset); + break; + + default: + cpu_abort (cpu_single_env, "pl031_read: Bad offset " + "0x%x\n", offset); + } + + return 0; +} + +static inline void write_current_time(struct pl031_state * state, + uint32_t value) +{ + /* + * XXX - We can store the offset between the virtual machine and + * the real one in a variable. But, how we can save such value + * to survive a machine restart? + */ + fprintf(stderr, "qemu: warning: pl031_write: can't change time\n"); +} + +static void rtc_time_to_tm(unsigned long time, struct tm *tm); +static int rtc_tm_to_time(struct tm *tm, unsigned long *time); + +/* + * WARNING! The Linux kernel, in driver rtc-pl031, sends us a time from a + * "strange" struct tm, which has the following parameters + * + * alarm.time.tm_mday = -1; + * alarm.time.tm_mon = -1; + * alarm.time.tm_year = -1; + * alarm.time.tm_wday = -1; + * alarm.time.tm_yday = -1; + * alarm.time.tm_isdst = -1; + * + * And has the right day, hour and minute. 
Such structure is then processed + * using the function that in this file is called ke_mktime(), which leads + * to the following value, as result, when day = hour = minute = 0; + * + * So, we subtract such value to the 32 bit number that we receive to have + * the right HHMMSS offset from Epoch. + */ +#define OFFSET 2051591296 + +static inline void save_alarm(struct pl031_state * state, uint32_t value) +{ + struct tm alarm, now; + + rtc_time_to_tm(read_current_time(state, NULL), &now); + value -= OFFSET; + rtc_time_to_tm(value, &alarm); + now.tm_hour = alarm.tm_hour; + now.tm_min = alarm.tm_min; + now.tm_sec = alarm.tm_sec; + + rtc_tm_to_time(&now, &state->alarm.time); +#ifdef PL031_DEBUG + fprintf(stderr,"qemu: pl031: alarm at %02d/%02d/%02d, %02d:%02d:%02d\n", + now.tm_mday, now.tm_mon +1, now.tm_year + 1900, + now.tm_hour, now.tm_min, now.tm_sec); +#endif +} + +static inline void enable_disable_alarm(struct pl031_state * state, + int disable) +{ + int64_t alarm; + uint32_t now; + + if (disable) { +#ifdef PL031_DEBUG + fprintf(stderr, "qemu: pl031: disabled alarm\n"); +#endif + qemu_del_timer(state->alarm._timer); + return; + } + + now = read_current_time(state, &alarm); +#ifdef PL031_DEBUG + fprintf(stderr, "qemu: pl031: current cpu ticks %lld\n", alarm); +#endif + alarm += ticks_per_sec * (state->alarm.time - now); +#ifdef PL031_DEBUG + fprintf(stderr, "qemu: pl031: alarm at cpu ticks %lld\n", alarm); +#endif + qemu_mod_timer(state->alarm._timer, alarm); + ++state->alarm.enabled; +} + +static void pl031_write(void * opaque, target_phys_addr_t offset, + uint32_t value) +{ + struct pl031_state * state = opaque; + + offset -= state->base; + + /* + *_LR: + write_current_time(state, value); + break; + + case RTC_MR: + save_alarm(state, value); + break; + case RTC_MIS: + enable_disable_alarm(state, value); + break; + + case RTC_ICR: + /* at + page 222 tells that "The interrupt is cleared by writing + any data value to the interrupt clear register RTCICR." 
*/ + pic_set_irq_new(state->pic, state->irq, 0); + break; + + case RTC_DR: + case RTC_CR: + case RTC_IMSC: + case RTC_RIS: + fprintf(stderr, "qemu: pl031_write: Unexpected offset " + "0x%x\n", offset); + break; + + default: + cpu_abort (cpu_single_env, "pl031_write: Bad offset " + "0x%x\n", offset); + } +} + +static CPUWriteMemoryFunc * pl031_writefn[] = { + pl031_write, /* To access byte (index 0) */ + pl031_write, /* To access word (index 1) */ + pl031_write /* To access dword (index 2) */ +}; + +static CPUReadMemoryFunc * pl031_readfn[] = { + pl031_read, /* To access byte (index 0) */ + pl031_read, /* To access word (index 1) */ + pl031_read /* To access dword (index 2) */ +}; + +void pl031_init(uint32_t base, void * pic, int irq) +{ + int offset; + struct pl031_state * state; + + state = qemu_mallocz(sizeof (*state)); + if (!state) + goto err_no_mem; + + offset = cpu_register_io_memory(0, pl031_readfn, pl031_writefn, state); + if (offset == -1) + goto err_no_offset; + + cpu_register_physical_memory(base, 0x00000fff, offset); + + state->base = base; + state->pic = pic; + state->irq = irq; + /* + * Synchronize emulated RTC with host RTC. The precision of a second + * will suffice, so it's not important that we have booted some + * milliseconds ago: we just need an amount of seconds to add to + * the number of ticks since startup. 
+ */ + state->boot_time = time(NULL); + + state->alarm._timer = qemu_new_timer(vm_clock, pl031_interrupt, state); + if (!state->alarm._timer) + goto err_no_mem; + return; + +err_no_mem: + cpu_abort(cpu_single_env, "pl031_init: Out of memory\n"); +err_no_offset: + cpu_abort(cpu_single_env, "pl031_init: Can't register I/O memory\n"); +} + +/* + * @from-file: linux-2.6.21.3/drivers/rtc/rtc-lib.c + * + * rtc and date/time utility functions + * + * Copyright (C) 2005-06 Tower Technologies + * Author: Alessandro Zummo <address@hidden> + * + * based on arch/arm/common/rtctime.c and other bits + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +static unsigned long +ke_mktime(const unsigned int year0, const unsigned int mon0, + const unsigned int day, const unsigned int hour, + const unsigned int min, const unsigned int sec); + +static const unsigned char rtc_days_in_month[] = { + 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 +}; + +static const unsigned short rtc_ydays[2][13] = { + /* Normal years */ + { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365 }, + /* Leap years */ + { 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366 } +}; + +#define LEAPS_THRU_END_OF(y) ((y)/4 - (y)/100 + (y)/400) +#define LEAP_YEAR(year) ((!(year % 4) && (year % 100)) || !(year % 400)) + +/* + * The number of days in the month. + */ +static +int rtc_month_days(unsigned int month, unsigned int year) +{ + return rtc_days_in_month[month] + (LEAP_YEAR(year) && month == 1); +} + +/* + * The number of days since January 1. (0 to 365) + */ +static +int rtc_year_days(unsigned int day, unsigned int month, unsigned int year) +{ + return rtc_ydays[LEAP_YEAR(year)][month] + day-1; +} + +/* + * Convert seconds since 01-01-1970 00:00:00 to Gregorian date. 
+ */ +static +void rtc_time_to_tm(unsigned long time, struct tm *tm) +{ + register int days, month, year; + + days = time / 86400; + time -= days * 86400; + + /* day of the week, 1970-01-01 was a Thursday */ + tm->tm_wday = (days + 4) % 7; + + year = 1970 + days / 365; + days -= (year - 1970) * 365 + + LEAPS_THRU_END_OF(year - 1) + - LEAPS_THRU_END_OF(1970 - 1); + if (days < 0) { + year -= 1; + days += 365 + LEAP_YEAR(year); + } + tm->tm_year = year - 1900; + tm->tm_yday = days + 1; + + for (month = 0; month < 11; month++) { + int newdays; + + newdays = days - rtc_month_days(month, year); + if (newdays < 0) + break; + days = newdays; + } + tm->tm_mon = month; + tm->tm_mday = days + 1; + + tm->tm_hour = time / 3600; + time -= tm->tm_hour * 3600; + tm->tm_min = time / 60; + tm->tm_sec = time - tm->tm_min * 60; +} + +/* + * Does the rtc_time represent a valid date/time? + */ +static +int rtc_valid_tm(struct tm *tm) +{ + if (tm->tm_year < 70 + || ((unsigned)tm->tm_mon) >= 12 + || tm->tm_mday < 1 + || tm->tm_mday > rtc_month_days(tm->tm_mon, tm->tm_year + 1900) + || ((unsigned)tm->tm_hour) >= 24 + || ((unsigned)tm->tm_min) >= 60 + || ((unsigned)tm->tm_sec) >= 60) + return 1; + + return 0; +} + +/* + * Convert Gregorian date to seconds since 01-01-1970 00:00:00. + */ +static +int rtc_tm_to_time(struct tm *tm, unsigned long *time) +{ + *time = ke_mktime(tm->tm_year + 1900, tm->tm_mon + 1, tm->tm_mday, + tm->tm_hour, tm->tm_min, tm->tm_sec); + return 0; +} + +/* + * @from-file: linux-2.6.21.3/linux/kernel/time.c + * + * Copyright (C) 1991, 1992 Linus Torvalds + * + * This file contains the interface functions for the various + * time related system calls: time, stime, gettimeofday, settimeofday, + * adjtime + */ + +/* Converts Gregorian date to seconds since 1970-01-01 00:00:00. + * Assumes input in normal date format, i.e. 1980-12-31 23:59:59 + * => year=1980, mon=12, day=31, hour=23, min=59, sec=59. 
+ * + * [For the Julian calendar (which was used in Russia before 1917, + * Britain & colonies before 1752, anywhere else before 1582, + * and is still in use by some communities) leave out the + * -year/100+year/400 terms, and add 10.] + * + * This algorithm was first published by Gauss (I think). + * + * WARNING: this function will overflow on 2106-02-07 06:28:16 on + * machines were long is 32-bit! (However, as time_t is signed, we + * will already get problems at other places on 2038-01-19 03:14:08) + */ +static unsigned long +ke_mktime(const unsigned int year0, const unsigned int mon0, + const unsigned int day, const unsigned int hour, + const unsigned int min, const unsigned int sec) +{ + unsigned int mon = mon0, year = year0; + + /* 1..12 -> 11,12,1..10 */ + if (0 >= (int) (mon -= 2)) { + mon += 12; /* Puts Feb last since it has leap day */ + year -= 1; + } + + return ((((unsigned long) + (year/4 - year/100 + year/400 + 367*mon/12 + day) + + year*365 - 719499 + )*24 + hour /* now have hours */ + )*60 + min /* now have minutes */ + )*60 + sec; /* finally seconds */ +} +
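As a sanity check, the Gauss-style mktime() formula above can be transcribed into Python and compared against the standard library. This sketch is for verification only and is not part of the patch; Python's calendar module is used as the reference.

```python
import calendar

def ke_mktime(year, mon, day, hour, minute, sec):
    """Python transcription of the kernel mktime() shown above."""
    # 1..12 -> 11,12,1..10: put February last since it carries the leap day
    mon -= 2
    if mon <= 0:
        mon += 12
        year -= 1
    days = (year // 4 - year // 100 + year // 400
            + 367 * mon // 12 + day + year * 365 - 719499)
    return ((days * 24 + hour) * 60 + minute) * 60 + sec

# The Unix epoch maps to zero seconds...
print(ke_mktime(1970, 1, 1, 0, 0, 0))  # 0
# ...and a leap-day timestamp agrees with calendar.timegm
print(ke_mktime(2000, 2, 29, 12, 34, 56)
      == calendar.timegm((2000, 2, 29, 12, 34, 56, 0, 0, 0)))  # True
```

The 2038 rollover mentioned in the comment also falls out of the formula: 2038-01-19 03:14:08 yields exactly 2^31 seconds.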
http://lists.gnu.org/archive/html/qemu-devel/2007-06/msg00210.html
How to: Run Tests from Microsoft Visual Studio
This topic is about how to use Visual Studio to run automated tests, which include unit tests, coded UI tests, ordered tests, generic tests, and load tests. You can run automated tests from both the Visual Studio integrated development environment (IDE) and at a command prompt. For more information about how to run tests at a command prompt, see Running Automated Tests from the Command Line.
Note
When you run one or more tests in Visual Studio, if the test contents are new or have been changed but not saved, they are automatically saved before the test is run. Similarly, if the code of a unit test has been edited but the project that contains the test has not been re-built, Visual Studio builds the project before you run the test.
However, if you want to plan out your testing effort and run your tests as part of a test plan, you can use Microsoft Test Manager. For more information about how to use Microsoft Test Manager, see Defining a Test Plan.
Note
Microsoft Test Manager is provided as part of Visual Studio Ultimate, Visual Studio Premium and Visual Studio Test Professional products.
Running Automated Tests in Visual Studio
Visual Studio provides different ways to run tests. You can choose the way that best suits your current needs:
Run Tests From Test Explorer. You can run automated tests including unit, coded UI, ordered, and generic in your solution from Test Explorer. Test Explorer easily lets you run and monitor the status of all the automated tests in your solution.
Run load tests from the Load Test Editor. Load tests and web performance tests are run from the Load Test Editor, the Web Performance Test Editor, or the Visual Studio Ultimate LOAD TEST menu. For more information, see Running Load and Web Performance Tests.
Run Tests From Your Source Code Files. By using the keyboard, you can run tests from any text-based file in your solution. In particular, you can run tests while editing a file that contains your code under test. This lets you change source code and immediately test it without using a window or a menu.
Run Tests From Files in Your Test Code Files. By using the mouse or the keyboard, you can run tests from the file that contains your test code. This lets you change a test and then run it immediately without using a window or a menu.
Note
After you run a test in Visual Studio, the results of all the tests that were executed in that run are saved automatically on your computer in a test run file. How many test runs are saved depends on a setting in the Options dialog box.
Run Tests In a Specific Order
You can also run tests in a specific order if you create an ordered test. For more information about ordered tests, see Setting Up Your Test Run Sequence Using Ordered Tests.
Run Tests from Test Explorer
To run tests from Test Explorer
In Test Explorer, choose Run All. Or, select the tests you want to run, right-click, and then choose Run Selected Tests.
The automated tests will run and indicate if they passed or failed.
Tip
You can also choose the drop-down list under Run for other options including Run Failed Tests, Run Not Run Tests, Run Passed Tests, Repeat Last Run, and Analyze Code Coverage.
Note
To view Test Explorer from the Test menu, point to Windows and then choose Test Explorer.
Run Tests from Your Source Code Files
To run tests from source code files in your solution, by using the keyboard
In Visual Studio, open a source code file anywhere in your solution.
You can use the following keyboard shortcuts to run tests from that file.
Note
You can use these shortcuts in your source code file that contains the test methods.
Run Tests from Files in Your Test Code Files
To run tests from your test code files, by using the keyboard
In Visual Studio, open the source-code file that contains your test methods.
Place the cursor in the file and press Ctrl + R, then press C.
To run tests from your test code files by using the mouse
In Visual Studio, open the source-code file that contains your test methods.
Right-click in a test method, in a test class, or outside the scope of a test class, and then choose Run Tests.
This command runs the tests in the current scope. That is, it runs the current test method, all the tests in the current test class, or all the tests in the current namespace, respectively.
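The same scope-based selection exists in most test frameworks. As a neutral illustration (Python's unittest rather than MSTest, so everything here is an analogy, not Visual Studio behavior), a loader can target a single test method or a whole test class:

```python
import unittest

class MathTests(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)
    def test_mul(self):
        self.assertEqual(2 * 3, 6)

# Selecting by scope, narrowest to widest -- analogous to running the
# current test method versus the current test class:
loader = unittest.TestLoader()
method_suite = loader.loadTestsFromName("test_add", MathTests)  # one test
class_suite = loader.loadTestsFromTestCase(MathTests)           # both tests

runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(class_suite)
print(result.testsRun)  # 2
```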
See Also
Tasks
How to: Debug while a Test is Running
Concepts
Running Automated Tests from the Command Line
Other Resources
Running Unit Tests with Test Explorer
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2012/ms182470(v=vs.110)
Created on 2015-12-24 20:40 by yan12125, last changed 2016-09-08 17:34 by steve.dower. This issue is now closed.
Originally reported at
Steps to reproduce:
1. Build 99665:dcf9e9ae5393 with Visual Studio 2015
2. Download and extract PsTools [1]
3. PsExec.exe -l python.exe
4. In Python, run:
import _ssl
_ssl.enum_certificates("CA")
_ssl.enum_crls("CA")
Results:
Python 3.6.0a0 (default, Dec 25 2015, 02:42:42) [MSC v.1900 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import _ssl
>>> _ssl.enum_certificates("CA")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
PermissionError: [WinError 5] Access is denied
>>> _ssl.enum_crls("CA")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
PermissionError: [WinError 5] Access is denied
>>>
Windows Vista and above have a security mechanism called "Low Integrity Level". [2] With that, only some specific registry keys are writable. In the original _ssl.c, both enum_certificates() and enum_crls() calls CertOpenSystemStore(). At least on my system CertOpenSystemStore() tries to open HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\CA with read/write permissions. (Observed with Process Monitor [3]) The request fails in Low Integrity Level processes as it's not in the range of writable registry keys.
Here I propose a fix: open certificate stores with the read-only flag. There are some points I'm not sure about in this patch:
1. CERT_STORE_PROV_SYSTEM_A: I guess strings are byte strings at the C level?
2. CERT_SYSTEM_STORE_LOCAL_MACHINE: In accounts of Administrators, CertOpenSystemStore() tries to open keys under HKLM only, while in restricted (standard) accounts, this function tries to open keys under HKCU with R/W permission and keys under HKLM read-only. I think opening system global stores is OK here.
A different perspective: Wine developers always open keys under HKCU in CertOpenSystemStore()
Environment: Windows 7 SP1 (6.1.7601) x86, an account in Administrators group. Tested with python.exe Lib\test\test_ssl.py both in a normal shell and within `PsExec -l`
Ref: issue17134, where these codes appear the first time
[1]
[2]
[3]
[4]
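For application code that has to tolerate this failure mode (for example when running at low integrity on an unpatched build), the call can be guarded. This is an illustrative sketch, not part of the patch; the helper name is made up here:

```python
import ssl
import sys

def system_ca_certs():
    """Return the Windows "CA" system store contents, degrading gracefully.

    ssl.enum_certificates() only exists on Windows; before this fix it
    also raised PermissionError inside low-integrity processes.
    """
    if sys.platform != "win32" or not hasattr(ssl, "enum_certificates"):
        return []  # API not available on this platform
    try:
        return ssl.enum_certificates("CA")
    except PermissionError:
        return []  # e.g. running under `PsExec -l` on an unpatched build

print(type(system_ca_certs()).__name__)  # list
```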
Looks good to me.
Is it worth dropping psexec.exe into the test suite so we can add a test for this (or maybe into tools so we can run it from a build without redistributing the exe)? It'll probably be helpful elsewhere too (symlink tests, for example).
psexec.exe can be run from the live server.
>>> subprocess.call(r'\\live.sysinternals.com\tools\psexec.exe -s whoami')
PsExec v2.11 - Execute processes remotely
Sysinternals -
nt authority\system
whoami exited on THISPC with error code 0.
0
But the executable could also be cached on the test system.
PsExec.exe does not seem to be redistributable. PAExec is an alternative but I've not tried it. [1] Another option is re-implementing a tiny program for lowering the integrity level based on example code provided in [2], which I've not tried yet, either. The latter option seems better to me as I didn't find code for lowering the integrity level in PAExec's source code. [3]
[1]
[2]
[3]
OK, I've just succeeded in creating a low integrity level process with my own code. Now the problem is: how can I integrate this tool into the test system? It seems the integrity level is per-process, while all tests are run in the same process.
Added tests for ssl.enum_certificates() and ssl.enum_crls() within low integrity processes.
Can we use ctypes in the test suite? That would probably be simpler here, as the C code is straightforward. (Question is for the other core devs, not Chi, as I know we're not supposed to use ctypes in the stdlib.)
ctypes in the test suite is fine, just be sure it's guarded properly (with either `ctypes = test.support.import_module("ctypes")` or `if sys.platform == 'win32': ...`).
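A minimal sketch of that guarding pattern (the helper below stands in for test.support.import_module, and the test body is illustrative only):

```python
import sys
import unittest

def import_module_or_skip(name):
    """Stand-in for test.support.import_module: import the dependency
    or skip the test that needs it."""
    try:
        return __import__(name)
    except ImportError:
        raise unittest.SkipTest("required module %r is missing" % name)

@unittest.skipUnless(sys.platform == "win32", "Windows-specific test")
class LowIntegrityTest(unittest.TestCase):
    def test_ctypes_available(self):
        ctypes = import_module_or_skip("ctypes")
        self.assertTrue(hasattr(ctypes, "windll"))
```

On non-Windows platforms the class is skipped rather than failing, which is the behavior the guard is meant to guarantee.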
Testing based on integrity level doesn't require creating a child process. I'm attaching a ctypes-based example that defines a context manager that temporarily sets the integrity level of the current thread's impersonation token.
To get the impersonation token, I initially tried using ImpersonateSelf / RevertToSelf, but I was unhappy with how it fails for nested contexts since RevertToSelf always switches back to the process token. I opted to instead call OpenThreadToken / OpenProcessToken, DuplicateTokenEx, and SetThreadToken.
I chose to use the WELL_KNOWN_SID_TYPE enum values to get the label SIDs via CreateWellKnownSid. Note that I omitted the GetLengthSid call when passing the size of the TOKEN_MANDATORY_LABEL to SetTokenInformation. It only needs the size of the primary buffer. The SID it points at is a sized structure (i.e. SubAuthorityCount).
Example:
import winreg
HKLM = winreg.HKEY_LOCAL_MACHINE
subkey = r'SOFTWARE\Microsoft\SystemCertificates\CA'
access = winreg.KEY_ALL_ACCESS
>>> key = winreg.OpenKey(HKLM, subkey, 0, access)
>>> print(key)
<PyHKEY:0x0000000000000178>
>>> key.Close()
Repeat with low integrity level:
>>> with token_integrity_level('low'):
... winreg.OpenKey(HKLM, subkey, 0, access)
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
PermissionError: [WinError 5] Access is denied
A context manager like this could be added to the test helper module that was proposed in issue 22080. It could also add the ability to impersonate with a restricted copy of the process token -- like what UAC creates. psexec -l does this by calling CreateRestrictedToken followed by SetInformationToken for the TokenIntegrityLevel and TokenDefaultDacl.
New changeset 8ff4c1827499 by Benjamin Peterson in branch '3.5':
merge 3.4 (closes #25939)
New changeset d6474257ef38 by Benjamin Peterson in branch 'default':
merge 3.5 (closes #25939)
The patch itself seems fine, so I committed that. It doesn't seem like how best to test this has been figured out, so leaving the issue open.
Thanks for accepting my patch. I'm curious: any reason not applying to 2.7 branch? We're building youtube-dl.exe with py2exe on Python 2.7 as py2exe on 3.x sometimes fails. ()
It was fixed in 2.7 -
The issue number wasn't in the commit, so it didn't appear here.
Didn't see it. Sorry for bothering.
Benjamin, Steve, can we close this ticket?
Yes, and done.
https://bugs.python.org/issue25939
Copy pixel data from one buffer to another

Function Type: Delayed Execution
#include <screen/screen.h>
int screen_blit(screen_context_t ctx, screen_buffer_t dst, screen_buffer_t src, const int *attribs)
ctx: A connection to screen.
dst: The buffer that the data will be copied to.
src: The buffer that the pixels will be copied from.
attribs: A list that contains the attributes that define the blit. This list must consist of a series of token-value pairs terminated with a SCREEN_BLIT_END token. The tokens used in this list must be of type Screen blit types.
This function requests that pixels from one buffer be copied to another. The operation is guaranteed not to be submitted until a flush is called, or until the application posts changes to one of the context's windows.
0 if the blit operation was queued, or -1 if an error occurred (errno is set).
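The token-value list shape can be modeled in a few lines. In this Python sketch the token names mirror QNX's Screen blit types, but the numeric values are made up for illustration; in screen.h they are enum constants:

```python
# Hypothetical numeric values; in screen.h these are enum constants.
SCREEN_BLIT_END = 0
SCREEN_BLIT_SOURCE_X = 1
SCREEN_BLIT_SOURCE_Y = 2

def parse_blit_attribs(attribs):
    """Model of reading a token-value list terminated by SCREEN_BLIT_END."""
    out = {}
    i = 0
    while attribs[i] != SCREEN_BLIT_END:
        token, value = attribs[i], attribs[i + 1]
        out[token] = value
        i += 2
    return out

print(parse_blit_attribs(
    [SCREEN_BLIT_SOURCE_X, 10, SCREEN_BLIT_SOURCE_Y, 20, SCREEN_BLIT_END]))
# {1: 10, 2: 20}
```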
http://www.qnx.com/developers/docs/qnxcar2/topic/com.qnx.doc.qnxcar2.screen/topic/screen_blit.html
TmVirtualTime reads the insamples information to advance the timer. More...
#include <TmVirtualTime.h>
TmVirtualTime reads the insamples information to advance the timer.
Definition at line 39 of file TmVirtualTime.h.
empty constructor. The read source MarSystem and control are zero values and must be updated using setReadCtrl(...) Given the default name: Virtual as in "TmSampleTime/Virtual"
Definition at line 26 of file TmVirtualTime.cpp.
named constructor. The read source MarSystem and control are zero values and must be updated using setReadCtrl(...). Given the default name: "TmSampleTime/name"
Definition at line 31 of file TmVirtualTime.cpp.
main constructor. Has identifier "TmSampleCount/Virtual"
Definition at line 36 of file TmVirtualTime.cpp.
copy constructor
Definition at line 41 of file TmVirtualTime.cpp.
Definition at line 46 of file TmVirtualTime.cpp.
Convert the given interval into a number of samples.
The interval must be expressed in sample time, which can include standard time units: us, ms, s, m, h, d. The sample rate used for this function is the value of the mrs_real/israte control of the source MarSystem.
Definition at line 83 of file TmVirtualTime.cpp.
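A rough Python model of that unit conversion, using the suffixes listed in the description above (the parsing details of the real Marsyas implementation may differ):

```python
def interval_to_samples(interval, srate):
    """Sketch: convert an interval like '100ms' or '2s' to samples at srate Hz."""
    units = {"us": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0, "d": 86400.0}
    # Try two-letter suffixes before one-letter ones so 'ms' is not read as 's'
    for suffix in sorted(units, key=len, reverse=True):
        if interval.endswith(suffix):
            return int(round(float(interval[:-len(suffix)]) * units[suffix] * srate))
    raise ValueError("no recognized time unit in %r" % interval)

print(interval_to_samples("100ms", 44100))  # 4410
```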
get the difference between the current source control value and its value since it was last read.
Definition at line 62 of file TmVirtualTime.cpp.
set the MarSystem that contains the read control.
This method sets the read source to the parameter without checking. It then attempts to get the MarControlPtr for the control unless the read source is NULL or the read ctrl path is "". No warnings are produced.
Definition at line 49 of file TmVirtualTime.cpp.
update timer values.
Allowable control values for this timer are: MarSystem/source, and mrs_string/control.
Reimplemented from TmTimer.
Definition at line 91 of file TmVirtualTime.cpp.
http://marsyas.info/doc/sourceDoc/html/classMarsyas_1_1TmVirtualTime.html
Every once in a while, I’ll encounter a developer that thinks that WCF is too complicated to use. Whereas in fact, the basics of WCF are incredibly simple. MSDN’s tutorial hides this fact by making their introduction to WCF a fairly long and painful 6-step process, but you need much less to write your first WCF application. In fact, you don’t need Visual Studio and you don’t even need svcutil. All you need is notepad, and an install of .NET 3.0.
I have three steps for you:
1. Copy the following code into a file called “WCF.cs” using your text editor of choice:
using System;
using System.ServiceModel;
class Program
{
static void Main()
{
ServiceHost host = new ServiceHost(typeof(Echo));
host.AddServiceEndpoint(typeof(IEcho), new BasicHttpBinding(), "");
host.Open();
ChannelFactory<IEcho> client = new ChannelFactory<IEcho>(new BasicHttpBinding(), "");
Console.WriteLine(client.CreateChannel().Echo("hello"));
}
}
[ServiceContract]
interface IEcho
{
    [OperationContract]
    string Echo(string s);
}

class Echo : IEcho
{
    string IEcho.Echo(string s) { return s + " echoooOOooo"; }
}
2. Compile the code by running the C# compiler csc.exe against your new WCF.cs file. You can usually just run this in a command prompt:
C:\Windows\Microsoft.NET\Framework\v2.0.50727\csc.exe WCF.cs
3. Still in the command prompt, run WCF.exe. You should get the output:
hello echoooOOooo
indicating that you started a service, created a client, and made a call to your service’s “Echo” method.
Voila! That’s all you need to use WCF. Not too bad, right?
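For comparison, the same exercise (a service and a client talking to it, all in one file) can be sketched with Python's standard library. This is an analogy to show how small the moving parts are, not WCF itself:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Service: a single Echo operation, bound to a loopback port chosen by the OS
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda s: s + " echoooOOooo", "Echo")
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client: call the service and print the reply
client = ServerProxy("http://%s:%d/" % (host, port))
reply = client.Echo("hello")
print(reply)  # hello echoooOOooo
server.shutdown()
```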
Great WCF example!
Couple of points..
a) Command prompt must be run in 'admin' mode (Vista and 7)
b) If you already have IIS running, try localhost:8080
Lovely - I wish MS would start with "minimal" examples.
Overengineered "best-practices" examples only serve to encourage cargo-cult programming; and MS's idea of best-practices are sometimes slightly strange!
My only comment, is that you are not closing your client connection. This is misleading to a new developer. They will run into the 10 connection timeout issue soon and be very confused. See the following link for a cheat sheet of WCF communication states.
In response to Raj, if you are running IIS7 you can decorate the class (Echo) with the following attribute and as long as your url is unique you will be able to run your app on port 80:
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
I had a friend once who wrote a hello world application in C. When he was done he said "man, C is easy. I don't see how anyone has trouble with this."
And there's the rub. The basics as you've shown them, are indeed quite simple but provide no value. Once you have to actually get any work done, WCF quickly becomes a huge, brittle, waste of time.
I executed the above code but the cmd prompt flashes and closes; I can't see what is on the screen.
Except that I get an AddressAccessDeniedException: HTTP could not register .... blah blah
http://blogs.msdn.com/b/youssefm/archive/2009/11/02/wcf-101-the-simplest-wcf-example-you-ll-ever-see.aspx
Create, Read, Update and Delete information are some of the main operations in all IT applications. Login, Search, Validations etc. are some common functionalities in all IT applications. For example http://irctc.com/displayServlet is a web application and you can check all operations and functionalities in this website. Three Tier Architecture is the common architecture used in modern IT applications. Presentation, Business Logic and Database will be in three different physical entities.

Different types of Applications

Desktop Application - is computer software designed to help the user to perform specific tasks. Eg: VLC Media Player, Microsoft Office.

Web Application - is an application that is accessed over a network such as the Internet or an Intranet. Eg: www.Amazon.com, www.facebook.com.

Mobile Application - Mobile application software is developed for small low-power handheld devices such as personal digital assistants, enterprise digital assistants or mobile phones. Eg: Weather Forecast, News Reader, Games, Whatsapp etc.

Problem solving:

What is a problem?

A problem is a difficulty or a challenge, or a situation that needs a solution.
Eg: Driving a car, Making tea, Solving a crossword puzzle

Sample problem scenario

We can discuss a problem scenario faced by the Railway department.

As part of service and development in the nation, lots of trains were added to the Railway department. As a result the passenger services also increased. Railway stations increased. Lakhs of passengers and customers started availing various services like regular ticket bookings, reservation of tickets, ticket cancellations, enquiries etc. Services for passengers and customers at railway stations resulted in long queues. This created unhappy passengers and customers. Railway employees were also overburdened with paper and manual work. Dissatisfied customers and overburdened employees was the end result of development.

Railway wants to improve stakeholder satisfaction. This was the problem scenario faced by the Railway department. This problem can be solved in 2 ways - manually or automated.

The Railway department hired new staff and increased the number of customer service counters. Passenger count and train services are steadily increasing, so this solution is not going to work for the long term. Manual errors are also creeping up during calculations, leading to customer dissatisfaction, and wastage of time to correct errors.

Management is now thinking why not automate the entire system:

Ticket booking and seat availability tracking
Passenger data management
Regular ticketing and so on...

They decided to come up with a computerized system that handles various day to day activities at their office.

Computer Programs

A computer is a machine. It needs to be instructed how to perform a task, how to handle data, where to store it and so on.

Programs - Set of instructions that is carried out by the computer
Software - Collection of computer programs and data that tells the computer what to do

In a computer based solution, the business requirement of a user ultimately gets translated to lines of code that instruct the computer to work in such a way as to meet the requirement.

How to solve a problem

Whenever a problem occurs, people tend to concentrate too much on the solution and sometimes forget the essence of the problem. It is essential for the individual to have the right approach to the problem itself in order to figure out the best solution.

People who are really good at solving problems go about it systematically. They have a way of placing the problem in context and do not jump to conclusions. They evaluate all alternatives.

Steps in Problem Solving

Step 1 - Analyzing the problem
This involves understanding the input and output, various alternatives to reach the output, and listing the assumptions.

Step 2 - Designing an algorithm
Defining a step by step procedure to solve the problem.

Step 3 - Testing the design for correctness
Checking the correctness of the algorithm by walking through the algorithm manually.

Step 4 - Implementing the solution (Translating to code)
Once the correctness of the algorithm is checked, it can be translated to any programming language.

Step 5 - Testing the program
Once the program is developed it needs to be tested against various test cases which check if the program is correct for all expected ranges of inputs.

Step 6 - Deployment and Maintenance
In this phase, the product or the output is given to the customer or end user for their purpose.

Key elements required for problem solving and programming:

Good analytical skills
Declarative and imperative knowledge about the solution
Ability to apply abstraction
Knowledge in a programming language
Following good programming practices

1. Good analytical skills

Analytical skill is the ability to use good reasoning in analyzing a situation and also the ability to solve the problem. Analytical skill can also be said to be the ability to organise a mass of data, draw proper correlations, and then interpret these trends in terms that are meaningful to others.

Suppose you have rain data for your town for the past thirty years, given to you at random, with no comments. You can organise the data chronologically, then draw a graph to demonstrate the data and then, by extending the graph along its closest fitting curve, you can make reasonable predictions about the extent of rain next year, assuming that all other factors remain STEADY.

2. Declarative and imperative knowledge about the solution

Declarative knowledge is the knowledge of what to do and imperative knowledge is the knowledge of how to do it. If you have a problem in hand and you know the solution of the problem then you have declarative knowledge. So "what is" type of knowledge is called declarative knowledge. Imperative knowledge is procedural knowledge, which is the knowledge exercised in the performance of some task. "How to" type of knowledge is called imperative knowledge.

Example:
Declarative Knowledge: "What is" type of knowledge is called Declarative Knowledge.
Imperative Knowledge: "How to" type of knowledge is called Imperative Knowledge.

3. Ability to apply abstraction

Abstraction will modularize the logic for a specific functionality in a computer program. It will also hide the complexity of the implementation details of a function behind a specific interface defined to invoke the function. It is recommended that programmers use abstractions whenever suitable in order to avoid duplication.

4. Knowledge in a programming language

A programming language is a formal language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behaviour of a machine and/or to express algorithms precisely.

Elements of a programming language
1. Syntax: structural elements of the language. Programs must be syntactically correct.
2. Grammar: defines how syntactical elements need to be combined to form programs
3. Semantics: defines the meaning of the code

5. Good programming practices

A good programming practice is related to writing efficient and readable code, a code which is easily maintainable.

C# intro

What is a variable?

A variable is nothing but a name given to a storage area that our programs can manipulate. Each variable in C# has a specific type, which determines the size and layout of the variable's memory, the range of values that can be stored within that memory, and the set of operations that can be applied to the variable.

The basic value types provided in C# can be categorized as follows.

In C#, variables are categorized into the following types:

Value types
Reference types

Value Types

Value type variables can be assigned a value directly. They are derived from the class System.ValueType.

The value types directly contain data. Some examples are int, char and float, which store numbers, alphabets and floating point numbers, respectively. When you declare an int type, the system allocates memory to store the value.

The following table lists the available value types in C# 2010.

To get the exact size of a type or a variable on a particular platform, you can use the sizeof method. The expression sizeof(type) yields the storage size of the object or type in bytes. Following is an example to get the size of the int type on any machine:

using System;

namespace DataTypeApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Size of int: {0}", sizeof(int));
            Console.ReadLine();
        }
    }
}

When the above code is compiled and executed, it produces the following result:

Size of int: 4

Reference Types

The reference types do not contain the actual data stored in a variable, but they contain a reference to the variables.

In other words, they refer to a memory location. More than one variable can point to the same memory location. If the data in the memory location is changed by one of the variables, the other variables automatically reflect this change in value. Examples of built-in reference types are: object and string.

Object Type

The Object Type is the ultimate base class for all data types in the C# Common Type System (CTS). Object is an alias for the System.Object class. So object types can be assigned values of any other types: value types, reference types, predefined or user-defined types.

Example:
object obj;
obj = 100;
obj = "Tom";

String Type

The String Type allows you to assign any string values to a variable. The string type is an alias for the System.String class. It is derived from object type. The value for a string type can be assigned using string literals as:

string str = "Tutorials Point";

The user-defined reference types are: class, interface, or delegate, which
shall be discussed later.
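The value-versus-reference distinction above can be seen in any language that has both behaviors. A small Python analogy (Python is not C#, so this only illustrates the sharing semantics, not C#'s type system):

```python
import copy

# Reference behavior: two names, one object
a = [1, 2, 3]
b = a            # b references the same list
b.append(4)
print(a)         # [1, 2, 3, 4] -- the change is visible through both names

# Value-like behavior: an explicit copy gives independent data
c = copy.copy(a)
c.append(5)
print(a)         # [1, 2, 3, 4] -- unchanged
print(c)         # [1, 2, 3, 4, 5]
```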
https://www.scribd.com/document/236617250/New-Rich-Text-Document
Non-native JS types (aka Scala.js-defined JS types)
A non-native JS type, aka Scala.js-defined JS type, is a JavaScript type implemented in Scala.js code. This is in contrast to native JS types, described in the facade types reference, which represent APIs implemented in JavaScript code.
Defining a non-native JS type
Any class, trait or object that inherits from js.Any is a JS type. Often, it will extend js.Object, which itself extends js.Any:
import scala.scalajs.js
import scala.scalajs.js.annotation._

// @ScalaJSDefined
class Foo extends js.Object {
  val x: Int = 4
  def bar(x: Int): Int = x + 1
}
Such classes are called non-native JS classes, and were previously known as Scala.js-defined JS classes.
All their members are automatically visible from JavaScript code.
The class itself (its constructor function) is not visible by default, but can be exported with @JSExportTopLevel.
Moreover, they can extend JavaScript classes (native or not), and, if exported, be extended by JavaScript classes.
Being JavaScript types, the Scala semantics do not apply to these classes. Instead, JavaScript semantics apply. For example, overloading is dispatched at run-time, instead of compile-time.
Restrictions
Non-native JS types have the following restrictions:
- Private methods cannot be overloaded.
- Qualified private members, i.e., private[EnclosingScope], must be final.
- Non-native JS classes, traits and objects cannot directly extend native JS traits (it is allowed to extend a native JS class).
- Non-native JS traits cannot declare concrete term members (i.e., they must all be abstract) unless their right-hand-side is exactly = js.undefined.
- Non-native JS classes and objects must extend a JS class, for example js.Object (they cannot directly extend AnyRef with js.Any).
- Declaring a method named apply without @JSName is illegal.
- Declaring a method with @JSBracketAccess or @JSBracketCall is illegal.
- Mixing fields, pairs of getter/setter, and/or methods with the same name is illegal. (For example, def foo: Int and def foo(x: Int): Int cannot both exist in the same class.)
Semantics
What JavaScript sees
- vals and vars become actual JavaScript fields of the object, so JavaScript sees a field stored on the object.
- defs with () become JavaScript methods on the prototype.
- defs without () become JavaScript getters on the prototype.
- defs whose Scala name ends with _= become JavaScript setters on the prototype.
In other words, the following definition:
// @ScalaJSDefined
class Foo extends js.Object {
  val x: Int = 5
  var y: String = "hello"
  def z: Int = 42
  def z_=(v: Int): Unit = println("z = " + v)
  def foo(x: Int): Int = x + 1
}
can be understood as the following ECMAScript 6 class definition (or its desugaring in ES 5.1):
class Foo extends global.Object {
  constructor() {
    super();
    this.x = 5;
    this.y = "hello";
  }
  get z() { return 42; }
  set z(v) { console.log("z = " + v); }
  foo(x) { return x + 1; }
}
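To see these semantics from the JavaScript side, here is a plain-JavaScript sketch of the desugared class (hand-written for illustration, not actual Scala.js compiler output; it extends Object implicitly rather than global.Object):

```javascript
// Hand-written JavaScript mirroring the desugaring above.
class Foo {
  constructor() {
    this.x = 5;        // val  -> instance field
    this.y = "hello";  // var  -> instance field
  }
  get z() { return 42; }                  // def z      -> getter
  set z(v) { console.log("z = " + v); }   // def z_=    -> setter
  foo(x) { return x + 1; }                // def foo(x) -> method
}

const f = new Foo();
console.log(f.x, f.y, f.z, f.foo(41)); // 5 hello 42 42
f.z = 1;                               // prints "z = 1" via the setter
```

Note that x and y live on the instance, while z and foo live on Foo.prototype, exactly as described above.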
The JavaScript names are the same as the field and method names in Scala by default.
You can override this with @JSName("customName").

private, private[this] and private[EnclosingScope] methods, getters and setters are not visible at all from JavaScript.
Private fields, however, will exist on the object, with unpredictable names.
Trying to access them is undefined behavior.
All other members, including protected ones, are visible to JavaScript.
super calls
super calls have the semantics of super references in ECMAScript 6. For example:

// @ScalaJSDefined
class Foo extends js.Object {
  override def toString(): String = super.toString() + " in Foo"
}

has the same semantics as:

class Foo extends global.Object {
  toString() {
    return super.toString() + " in Foo";
  }
}

which, in ES 5.1, gives something like:

Foo.prototype.toString = function() {
  return global.Object.prototype.toString.call(this) + " in Foo";
};
For fields, getters and setters, the ES 6 spec is a bit complicated, but it essentially “does the right thing”. In particular, calling a super getter or setter works as expected.
Non-native JS object
A non-native JS object is a singleton instance of a non-native JS class. There is nothing special about this, it's just like Scala objects.

Non-native JS objects are not automatically visible to JavaScript. They can be exported with @JSExportTopLevel, just like Scala objects: they will appear as a 0-argument function returning the instance of the object.
Non-native JS traits
Traits and interfaces do not have any existence in JavaScript. At best, they are documented contracts that certain classes must satisfy. So what does it mean to have native and non-native JS traits?
Native JS traits can only be extended by native JS classes, objects and traits. In other words, a non-native JS class/trait/object cannot extend a native JS trait. They can only extend non-native JS traits.
Term members (vals, vars and defs) in non-native JS traits must:

- either be abstract,
- or have = js.undefined as right-hand-side (and not be a def with ()).

// @ScalaJSDefined
trait Bar extends js.Object {
  val x: Int
  val y: Int = 5 // illegal
  val z: js.UndefOr[Int] = js.undefined

  def foo(x: Int): Int
  def bar(x: Int): Int = x + 1 // illegal
  def foobar(x: Int): js.UndefOr[Int] = js.undefined // illegal
  def babar: js.UndefOr[Int] = js.undefined
}
Unless overridden in a class or object, concrete vals, vars and defs declared in a JavaScript trait (necessarily with = js.undefined) are not exposed to JavaScript at all. For example, implementing (the legal parts of) Bar in a subclass:

// @ScalaJSDefined
class Babar extends Bar {
  val x: Int = 42
  def foo(x: Int): Int = x + 1
  override def babar: js.UndefOr[Int] = 3
}

has the same semantics as the following ECMAScript 2015 class:

class Babar extends global.Object { // `extends Bar` disappears
  constructor() {
    super();
    this.x = 42;
  }
  foo(x) { return x + 1; }
  get babar() { return 3; }
}
Note that z is not defined at all, not even as this.z = undefined. The distinction is rarely relevant, because babar.z will return undefined in JavaScript and in Scala.js if babar does not have a field z.
Static members
When defining a non-native JS class (not a trait nor an object), it is also possible to define static members.
Static members must be defined in the companion object of the class, and annotated with @JSExportStatic. For example:

// @ScalaJSDefined
class Foo extends js.Object

object Foo {
  @JSExportStatic
  val x: Int = 5
  @JSExportStatic
  var y: String = "hello"
  @JSExportStatic
  def z: Int = 42
  @JSExportStatic
  def z_=(v: Int): Unit = println("z = " + v)
  @JSExportStatic
  def foo(x: Int): Int = x + 1
}

defines a JavaScript class Foo with a variety of static members. It can be understood as if defined in JavaScript as:

class Foo extends global.Object {
  static get z() { return 42; }
  static set z(v) { console.log("z = " + v); }
  static foo(x) { return x + 1; }
}
Foo.x = 5;
Foo.y = "hello";
Note that JavaScript doesn’t have any declarative syntax for static fields, hence the two imperative assignments at the end.
Restrictions
- The companion object must be a Scala object, i.e., it cannot extend js.Any.
- lazy vals cannot be marked with @JSExportStatic.
- Static fields (vals and vars) must be defined before any other (non-static) field, as well as before any constructor statement.

As an example of the last bullet, the following snippet is illegal:

// @ScalaJSDefined
class Foo extends js.Object

object Foo {
  val x: Int = 5
  @JSExportStatic
  val y: String = "hello" // illegal, defined after `x` which is non-static
}

and so is the following:

// @ScalaJSDefined
class Foo extends js.Object

object Foo {
  println("Initializing Foo")
  @JSExportStatic
  val y: String = "hello" // illegal, defined after the `println` statement
}
Anonymous classes
Anonymous JS classes are particularly useful to create typed object literals, in the presence of a non-native JS trait describing an interface. For example:
// @ScalaJSDefined
trait Position extends js.Object {
  val x: Int
  val y: Int
}

val pos = new Position {
  val x = 5
  val y = 10
}
Use case: configuration objects
For configuration objects that have fields with default values, concrete members with = js.undefined can be used in the trait. For example:

// @ScalaJSDefined
trait JQueryAjaxSettings extends js.Object {
  val data: js.UndefOr[js.Object | String | js.Array[Any]] = js.undefined
  val contentType: js.UndefOr[Boolean | String] = js.undefined
  val crossDomain: js.UndefOr[Boolean] = js.undefined
  val success: js.UndefOr[js.Function3[Any, String, JQueryXHR, _]] = js.undefined
  ...
}

When calling ajax(), we can now give an anonymous object that overrides only the vals we care about:

jQuery.ajax(someURL, new JQueryAjaxSettings {
  override val crossDomain: js.UndefOr[Boolean] = true
  override val success: js.UndefOr[js.Function3[Any, String, JQueryXHR, _]] = {
    js.defined { (data: Any, textStatus: String, xhr: JQueryXHR) =>
      println("Status: " + textStatus)
    }
  }
})

Note that for functions, we use js.defined { ... } to drive Scala's type inference. Otherwise, it needs to apply two implicit conversions, which is not allowed. The explicit types are quite annoying, but they are only necessary in Scala 2.10 and 2.11. If you use Scala 2.12, you can omit all the type annotations (but keep js.defined), thanks to improved type inference for vals and SAM conversions:

jQuery.ajax(someURL, new JQueryAjaxSettings {
  override val crossDomain = true
  override val success = js.defined { (data, textStatus, xhr) =>
    println("Status: " + textStatus)
  }
})
Caveat with reflective calls
It is possible to define an object literal with the anonymous class syntax without the support of a super class or trait defining the API, like this:
val pos = new js.Object {
  val x = 5
  val y = 10
}
However, it is thereafter impossible to access its members easily. The following does not work:
println(pos.x)
This is because pos is a structural type in this case, and accessing x is known as a reflective call in Scala. Reflective calls are not supported on values with JavaScript semantics, and will fail at run-time.
Fortunately, the compiler will warn you against reflective calls, unless you use the relevant language import.
Our advice: do not use the reflective calls language import.
Run-time overloading
// @ScalaJSDefined
class Foo extends js.Object {
  def bar(x: String): String = "hello " + x
  def bar(x: Int): Int = x + 1
}

val foo = new Foo
println(foo.bar("world")) // choose at run-time which one to call
Even though typechecking will resolve to the first overload at compile-time to decide the result type of the function, the actual call will re-resolve at run-time, using the dynamic type of the parameter. Basically something like this is generated:
// @ScalaJSDefined
class Foo extends js.Object {
  def bar(x: Any): Any = {
    x match {
      case x: String => "hello " + x
      case x: Int    => x + 1
    }
  }
}
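The dispatch logic can also be sketched directly in JavaScript (a hypothetical hand-written illustration of how run-time overload resolution behaves, not the exact code emitted by Scala.js):

```javascript
// Hypothetical run-time overload dispatch for Foo.bar:
// one JavaScript method performs the type tests at run-time.
class Foo {
  bar(x) {
    if (typeof x === "string") return "hello " + x; // bar(x: String)
    if (typeof x === "number") return x + 1;        // bar(x: Int)
    throw new TypeError("no overload of bar matches " + x);
  }
}

const foo = new Foo();
console.log(foo.bar("world")); // hello world
console.log(foo.bar(41));      // 42
```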
Besides the run-time overhead incurred by such a resolution, this can cause weird problems if overloads are not mutually exclusive. For example:
// @ScalaJSDefined
class Foo extends js.Object {
  def bar(x: String): String = bar(x: Any)
  def bar(x: Any): String = "bar " + x
}

val foo = new Foo
println(foo.bar("world")) // infinite recursion
With compile-time overload resolution, the above would be fine, as the call to bar(x: Any) resolves to the second overload, due to the static type Any. With run-time overload resolution, however, the type tests are executed again, and the actual run-time type of the argument is still String, which causes an infinite recursion.
Goodies
js.constructorOf[C]
To obtain the JavaScript constructor function of a JS class (native or not) without instantiating it nor exporting it, you can use js.constructorOf[C], whose signature is:

package object js {
  def constructorOf[C <: js.Any]: js.Dynamic = <stub>
}

C must be a class type (i.e., such that you can give it to classOf[C]) and refer to a JS class (not a trait nor an object). The method returns the JavaScript constructor function (aka the class value) for C.
This can be useful to give to JavaScript libraries expecting constructor functions rather than instances of the classes.
js.ConstructorTag[C]
js.ConstructorTag[C] is to js.constructorOf[C] as ClassTag[C] is to classOf[C], i.e., you can use an implicit parameter of type js.ConstructorTag[C] to implicitly get a js.constructorOf[C]. For example:

def instantiate[C <: js.Any : js.ConstructorTag]: C =
  js.Dynamic.newInstance(js.constructorTag[C].constructor)().asInstanceOf[C]

val newEmptyJSArray = instantiate[js.Array[Int]]
Implicit expansion will desugar the above code into:
def instantiate[C <: js.Any](implicit tag: js.ConstructorTag[C]): C =
  js.Dynamic.newInstance(tag.constructor)().asInstanceOf[C]

val newEmptyJSArray = instantiate[js.Array[Int]](
    new js.ConstructorTag[js.Array[Int]](js.constructorOf[js.Array[Int]]))

although you cannot write the desugared version in user code because the constructor of js.ConstructorTag is private.
This feature is particularly useful for Scala.js libraries wrapping JavaScript frameworks expecting to receive JavaScript constructors as parameters.
news.digitalmars.com - c++

Dec 28 2005 [bug(s)] More ICEs (3)
Dec 26 2005 [bug] New to DMC++ 8.45 (2)
Dec 23 2005 [bug] DMC++ goes ga-ga when given different unions with similar definitions inside extern "C" (6)
Dec 20 2005 Crash in exception handler (2)
Dec 19 2005 [bug] Another static array size bug (1)
Dec 19 2005 [bug] Failed to take reference to array (1)
Dec 19 2005 Compilation problem (2)
Dec 16 2005 Watcom (1)
Dec 14 2005 program crashing...want to read and write to serial port in pure C functions (5)
Dec 14 2005 [bug] (4)
Dec 14 2005 error in compiling,... (2)
Dec 14 2005 Compiler in endless loop (1)
Dec 09 2005 [bug] DMC++ still not handling ADL (Koenig Lookup) correctly (1)
Dec 09 2005 C++ casts in std headers (3)
Dec 09 2005 [bug] Compiler flags multiple identical member type definitions as error; should be warning (8)
Dec 07 2005 [bug] Access scope specifier erroneously rejected (1)
Dec 07 2005 bug in dmc 8.42n (1)
Nov 29 2005 Cant write to fstream when file size is k*65535 (1)
Nov 29 2005 8.46.1 code gen bug (5)
Nov 28 2005 XMLSTL progress: Testers / users wanted in a week or two; some opinions wanted now (1)
Nov 16 2005 OPTLINK Warning 68: Too many relocs for EXEPACK (2)
Nov 10 2005 Symbol Undefined _snprintf (3)
Nov 09 2005 **WARNING** DMC 8.46 and STLport 4.6.2 (7)
Nov 09 2005 fstreams (1)
Nov 09 2005 Using unistd.h and getopt with dmc under win32 (1)
Nov 09 2005 Digital Mars C++ Linux Port (1)
Nov 04 2005 666'd (1)
Nov 02 2005 Help ! please with wain (1)
Nov 02 2005 Help ! please with wain (1)
Nov 02 2005 Help ! please with wain (1)
Nov 02 2005 Help ! please with wain (2)
Nov 01 2005 Optimizer Bug - bug.cpp (2)
Oct 23 2005 Problem with casts and conditional expressions (1)
Oct 21 2005 please help, patience appreciated. (7)
Oct 16 2005 386link.exe? (2)
Oct 14 2005 Using std::string? (6)
Oct 04 2005 Trouble using glut (2)
Oct 04 2005 A static attribute in a parent template class can't be created. - Example.zip (1)
Oct 03 2005 Compiler writing (3)
Sep 27 2005 Distribution (5)
Sep 26 2005 DMC 8.45 and new/delete (3)
Sep 25 2005 Internal error: el.c 1988 (1)
Sep 25 2005 link hello,,,user32+kernel32/noi; - sc.ini (3)
Sep 22 2005 Template linking bug. (1)
Sep 21 2005 DB8.45 and boost (1)
Sep 21 2005 DM8.45 and boost (2)
Sep 21 2005 DM8.45 and boost (1)
Sep 21 2005 DM8.45 and boost (1)
Sep 21 2005 DM8.45 and boost (1)
Sep 21 2005 DM8.45 and boost (10)
Sep 18 2005 Windows headers missing FILE_ATTRIBUTE_REPARSE_POINT (1)
Sep 15 2005 Constructor bug or me? (3)
Sep 15 2005 >64k Global types? (2)
Sep 10 2005 link error with /co (9)
Sep 10 2005 strange problem (2)
Sep 08 2005 DM C/C++ 8.45 (6)
Sep 08 2005 comparison operators !< !<> ... (3)
Sep 01 2005 Windows Error Code 0xc0000005 (4)
Aug 30 2005 error message: Fatal error: out of memory on a 768 MB machine? (3)
Aug 26 2005 STLSoft 1.8.7 beta 1 released (2)
Aug 24 2005 Odd behavior in toy program (4)
Aug 24 2005 Odd behavior in toy program (1)
Aug 18 2005 DM C++ and MIDL.EXE (1)
Aug 15 2005 CD update version 8.44 (3)
Aug 11 2005 STLSoft 1.8.6 released (1)
Aug 08 2005 link error with /co (1)
Aug 08 2005 link error with /co (2)
Aug 07 2005 Editor mangles source (12)
Aug 06 2005 [bug] throwing exception in destructor (1)
Aug 06 2005 [bug] two-phase name lookup (5)
Aug 05 2005 Broken using declaration when used on subscript operator (2)
Aug 02 2005 Late binding (1)
Aug 01 2005 STLSoft 1.8.5 released - fixes bug introduced with 1.8.4 (1)
Jul 28 2005 STLSoft 1.8.4 released (1)
Jul 25 2005 [bug] Linking to a library which depends onto a symbol in the exe (2)
Jul 25 2005 Literal Strings and tertiaries (1)
Jul 25 2005 Literal Strings as const char *? (1)
Jul 25 2005 Literal Strings as const char *? (3)
Jul 25 2005 Missing complex functions - oldcomplex/ctrig.cpp (1)
Jul 24 2005 Help needed! (3)
Jul 24 2005 Eliminate duplicated strings ... (4)
Jul 18 2005 80 bit long doubles, loss of precision (2)
Jul 16 2005 making a .dll file from .lib file (3)
Jul 15 2005 Linker error - not enough characters in symbol resolution?? (4)
Jul 14 2005 void * and C++ - aaggh! (13)
Jul 13 2005 How to ... (4)
Jul 03 2005 operator new() still returns NULL?! (3)
Jun 28 2005 [bug] string literals should be const (1)
Jun 27 2005 HOW DO I USE DM? (4)
Jun 26 2005 PSAPI & DMC++ (9)
Jun 26 2005 C++ & AppExpress (1)
Jun 24 2005 Compiler Error (1)
Jun 16 2005 Environment Variables (4)
Jun 15 2005 Compiler Error (1)
Jun 14 2005 Trying to use contracts (3)
Jun 13 2005 Internal error: token 879 (4)
Jun 10 2005 ShellExec (4)
Jun 08 2005 Preprocessor error: '#endif' found without '#if' (2)
May 31 2005 [bug] using namespace leaks into surrounding namespace. (1)
May 31 2005 [bug] Crash when generating debug info. (7)
May 30 2005 Processor optimizations available for Pentiums? (2)
May 30 2005 Win XP Home ed., Service Pak 2 OK? (2)
May 30 2005 [bug] Partial specialization & member function pointers (3)
May 25 2005 Trolltech Qt support for Digital Mars (1)
May 25 2005 Open-RJ 1.3.2 released (1)
May 23 2005 recls 1.6.2 released - recls's been on a diet (3)
May 22 2005 Open-RJ 1.3.1 released (16)
May 22 2005 STLSoft 1.8.3 released (3)
May 19 2005 pclint (2)
May 18 2005 CL translation needs to support /TP (2)
May 17 2005 Open a exe file to translate it - CienporCien_eCatalog.exe [3/3] (1)
May 17 2005 Open a exe file to translate it - CienporCien_eCatalog.exe [2/3] (1)
May 17 2005 Open a exe file to translate it - CienporCien_eCatalog.exe [1/3] (2)
May 17 2005 Environment issue with STLPort? (10)
May 14 2005 multi thread (3)
May 14 2005 finding struct offset (2)
May 14 2005 finding struct offset (2)
May 13 2005 Bug: sc.ini CFLAGS=fileName (4)
May 13 2005 Bug: sc.ini CFLAGS macro name length - Examples.txt (1)
May 12 2005 DMC Version question (2)
May 10 2005 Internal error: template 2643 (2)
May 09 2005 Fct ptr to exported member fct (2)
May 07 2005 bug? duplicate declaration? (7)
May 04 2005 MySQL API (2)
May 03 2005 -o quibble. (3)
Apr 30 2005 big file... (11)
Apr 30 2005 cd (3)
Apr 30 2005 problems with forward declarations (1)
Apr 29 2005 Some interesting impressions of DMC++, wrt warnings. (1)
Apr 26 2005 Bug: mistakes compile-time multiplication in template param list for VLA (3)
Apr 24 2005 can not open input file iostream (4)
Apr 24 2005 I've got a struct with a 12 bit variable and four 1 bit variables, (3)
Apr 21 2005 can't compile with iostream.h (4)
Apr 20 2005 typedef template: constant initializer expected error (1)
Apr 20 2005 _findfirt (2)
Apr 20 2005 286 or 386? (2)
Apr 19 2005 Bug: template explicit specialization with strcmp (2)
Apr 17 2005 Bug: failure to apply implicit conversion operator in subscript expression (3)
Apr 16 2005 explicit instantiation of specialized member template functions (3)
Apr 10 2005 afxtempl.h problems (2)
Apr 07 2005 DMC++ & ACE: anyone ever done anything on this? (4)
Apr 04 2005 Looking For Integration Test Tool (1)
Apr 01 2005 C bug: Incorrect handling of subscript operator (2)
Apr 01 2005 C++ bug: incorrect treatment of n-level member typedef access (3)
Apr 01 2005 Multi-faceted typedef conflict problem - any ideas?!? (5)
Apr 01 2005 Insufficient information in error (2)
Mar 28 2005 [bug] partial ordering of templates (5)
Mar 23 2005 Pointers? What is the use? (2)
Mar 23 2005 Bug: casts + scope symbol (4)
Mar 20 2005 Errorlevel after linking failure? (2)
Mar 18 2005 Error 42: Symbol Undefined _URLDownloadToCacheFileA 24 (9)
Mar 09 2005 Internal error: func 281 (1)
Mar 07 2005 Building Boost 1_32_0? (1)
Mar 07 2005 Template error only when windows.h is included (2)
Mar 07 2005 nest template class (4)
Mar 04 2005 How do I debug 16 bit Application (2)
Mar 02 2005 Okay, I'm stumped - Crashing on the first fprintf or fwrite after (3)
Feb 27 2005 Porting a 16-bit DOS program to windows, and inline asm which uses (4)
Feb 26 2005 widows error (3)
Feb 23 2005 WinSock and dmc (4)
Feb 22 2005 bug?: Error: reference must refer to same type or be const (2)
Feb 21 2005 running a script without the dos console showing (2)
Feb 14 2005 Missing init sequence in cpp (1)
Feb 14 2005 Internal error: cgcod 2247 (2)
Feb 10 2005 writing c code in a c++ app (6)
Feb 10 2005 writing c code in a c++ app (4)
Feb 06 2005 Multi-threaded library for linking? (5)
Feb 01 2005 template implements __uuidof (2)
Jan 28 2005 Eclipse IDE plugn for DMC (4)
Jan 27 2005 problem of embed struct ? (1)
Jan 25 2005 need help void(s::*member func)() (4)
Jan 24 2005 Need help linking (2)
Jan 23 2005 noob needs help (2)
Jan 22 2005 Internal Error type 600 (3)
Jan 19 2005 C style comments cause problem (4)
Jan 17 2005 Expression expected error (3)
Jan 14 2005 Eclipse plug-in input requested (7)
Jan 12 2005 Member Function Pointers and the Fastest Possible C++ Delegates (5)
Jan 12 2005 Minor Bug (6)
Jan 08 2005 linking with IMSLNT problems (1)
Jan 08 2005 Nested Functions (4)
Jan 06 2005 Regex (3)
Jan 03 2005 some help controlling excel file with C++ (1)
Jan 02 2005 Make headers??? (1)
Other years:
2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2004 2003 2002 2001
Inline assembly
Introduction
While reading source code in the Linux kernel, I often see statements like this:
__asm__("andq %%rsp,%0; ":"=r" (ti) : "0" (CURRENT_MASK));
Yes, this is inline assembly, or in other words, assembler code integrated in a high-level programming language. In this case the high-level programming language is C. Yes, the C programming language is not very high-level, but still.
If you are familiar with the assembly programming language, you may notice that inline assembly is not very different from normal assembler. Moreover, the special form of inline assembly which is called basic form is exactly the same. For example:
__asm__("movq %rax, %rsp");
or:
__asm__("hlt");
The same code (of course without the __asm__ prefix) you might see in plain assembly code. Yes, this is very similar, but not so simple as it might seem at first glance. Actually, GCC supports two forms of inline assembly statements:

- basic;
- extended.

The basic form consists of only two things: the __asm__ keyword and the string with valid assembler instructions. For example it may look something like this:

__asm__("movq $3, %rax\t\n"
        "movq %rsi, %rdi");
The asm keyword may be used in place of __asm__; however, __asm__ is portable whereas the asm keyword is a GNU extension. In further examples I will only use the __asm__ variant.
If you know the assembly programming language, this looks pretty familiar. The main problem is with the second form of inline assembly statements - extended. This form allows us to pass parameters to an assembly statement, perform jumps etc. It does not sound difficult, but it requires knowledge of special rules in addition to knowledge of the assembly language. Every time I see yet another piece of inline assembly code in the Linux kernel, I need to refer to the official documentation of GCC to remember how a particular qualifier behaves or what the meaning of =&r is, for example.
I've decided to write this part to consolidate my knowledge related to the inline assembly, as inline assembly statements are quite common in the Linux kernel and we may see them in linux-insides parts sometimes. I thought that it would be useful if we have a special part which contains information on more important aspects of the inline assembly. Of course you may find comprehensive information about inline assembly in the official documentation, but I like to put everything in one place.
Note: This part will not provide guide for assembly programming. It is not intended to teach you to write programs with assembler or to know what one or another assembler instruction means. Just a little memo for extended asm.
Introduction to extended inline assembly
So, let's start. As I already mentioned above, the basic assembly statement consists of the asm or __asm__ keyword and a set of assembly instructions. This form is in no way different from "normal" assembly. The most interesting part is inline assembler with operands, or extended assembler. An extended assembly statement looks more complicated and consists of more than two parts:
__asm__ [volatile] [goto] (AssemblerTemplate [ : OutputOperands ] [ : InputOperands ] [ : Clobbers ] [ : GotoLabels ]);
All parameters which are marked with square brackets are optional. You may notice that if we skip the optional parameters and the modifiers volatile and goto, we obtain the basic form.
Let's consider this in order. The first optional qualifier is volatile. This specifier tells the compiler that an assembly statement may produce side effects. In this case we need to prevent compiler optimizations related to the given assembly statement. In simple terms, the volatile specifier instructs the compiler not to modify the statement and to place it exactly where it was in the original code. As an example, let's look at the following function from the Linux kernel:

static inline void native_load_gdt(const struct desc_ptr *dtr)
{
    asm volatile("lgdt %0"::"m" (*dtr));
}
Here we see the native_load_gdt function which loads a base address from the Global Descriptor Table to the GDTR register with the lgdt instruction. This assembly statement is marked with the volatile qualifier. It is very important that the compiler does not change the original place of this assembly statement in the resulting code. Otherwise the GDTR register may contain a wrong address for the Global Descriptor Table, or the address may be correct but the structure has not been filled yet. This can lead to an exception being generated, preventing the kernel from booting correctly.
The second optional qualifier is goto. This qualifier tells the compiler that the given assembly statement may perform a jump to one of the labels which are listed in the GotoLabels. For example:
__asm__ goto("jmp %l[label]" : : : : label);
Now that we have covered these two qualifiers, let's look at the main part of an assembly statement body. As we have seen above, the main part of an assembly statement consists of the following four parts:
- set of assembly instructions;
- output parameters;
- input parameters;
- clobbers.
The first represents a string which contains a set of valid assembly instructions which may be separated by the \t\n sequence. Names of processor registers must be prefixed with the %% sequence in extended form and other symbols like immediates must start with the $ symbol. The OutputOperands and InputOperands are comma-separated lists of C variables which may be provided with "constraints", and the Clobbers is a list of registers or other values which are modified by the assembler instructions from the AssemblerTemplate beyond those listed in the OutputOperands. Before we dive into the examples we have to know a little bit about constraints. A constraint is a string which specifies the placement of an operand. For example, the value of an operand may be written to a processor register or read from memory, etc.
Consider the following simple example:
#include <stdio.h>

int main(void)
{
    unsigned long a = 5;
    unsigned long b = 10;
    unsigned long sum = 0;

    __asm__("addq %1,%2" : "=r" (sum) : "r" (a), "0" (b));
    printf("a + b = %lu\n", sum);
    return 0;
}
Let's compile and run it to be sure that it works as expected:
$ gcc test.c -o test
$ ./test
a + b = 15
Ok, great. It works. Now let's look at this example in detail. Here we see a simple C program which calculates the sum of two variables, placing the result into the sum variable, and in the end we print the result. This example consists of three parts. The first is the assembly statement with the addq instruction. It adds the value of the source operand together with the value of the destination operand and stores the result in the destination operand. In our case:

addq %1, %2

will be expanded to:

addq a, b
Variables and expressions which are listed in the OutputOperands and InputOperands may be matched in the AssemblerTemplate. An input/output operand is designated as %N where N is the number of the operand, from left to right, beginning from zero. The second part of our assembly statement is located after the first : symbol and contains the definition of the output value:

"=r" (sum)
Notice that sum is marked with two special symbols: =r. This is the first constraint that we have encountered. The actual constraint here is only r itself. The = symbol is a modifier which denotes an output value. This tells the compiler that the previous value will be discarded and replaced by the new data. Besides the = modifier, GCC provides support for the following three modifiers:

- + - an operand is read and written by an instruction;
- & - an output register shouldn't overlap an input register and should be used only for output;
- % - tells the compiler that operands may be commutative.
Now let's go back to the r qualifier. As I mentioned above, a qualifier denotes the placement of an operand. The r symbol means a value will be stored in one of the general purpose registers. The last part of our assembly statement:

"r" (a), "0" (b)

These are the input operands - the variables a and b. We already know what the r qualifier does. Now we can have a look at the constraint for the variable b. The 0, or any other digit from 1 to 9, is called a "matching constraint". With this a single operand can be used for multiple roles. The value of the constraint is the source operand index. In our case 0 will match sum. If we look at the assembly output of our program:

add %rdx,%rax
First of all, our values 5 and 10 will be put on the stack and then these values will be moved to two general purpose registers: %rdx and %rax. This way the %rax register is used for storing the value of b as well as for storing the result of the calculation. NOTE that I've used gcc 6.3.1, so the resulting code of your compiler may differ.
We have looked at the input and output parameters of an inline assembly statement. Before we move on to other constraints supported by gcc, there is one remaining part of the inline assembly statement we have not discussed yet - clobbers.
Clobbers
As mentioned above, the "clobbered" part should contain a comma-separated list of registers whose content will be modified by the assembler code. This is useful if our assembly expression needs additional registers for calculation. If we add clobbered registers to the inline assembly statement, the compiler takes this into account and the registers in question will not simultaneously be used by the compiler.
Consider the example from before, but we will add an additional, simple assembler instruction:
__asm__("movq $100, %%rdx\t\n"
        "addq %1,%2" : "=r" (sum) : "r" (a), "0" (b));

If we look at the assembly output:

mov $0x64,%rdx
add %rdx,%rax
we will see that the %rdx register is overwritten with 0x64, or 100, and the result will be 110 instead of 15. Now if we add the %rdx register to the list of clobbered registers:

__asm__("movq $100, %%rdx\t\n"
        "addq %1,%2" : "=r" (sum) : "r" (a), "0" (b) : "%rdx");

and look at the assembler output again:

mov -0x8(%rbp),%rcx
mov -0x10(%rbp),%rax
mov $0x64,%rdx
add %rcx,%rax
the %rcx register will be used for the sum calculation, preserving the intended semantics of the program. Besides general purpose registers, we may pass two special specifiers. They are:

- cc;
- memory.
The first, cc, indicates that the assembler code modifies the flags register. This is typically used if the assembly contains arithmetic or logic instructions:

__asm__("incq %0" : "+r" (variable) : : "cc");
The second specifier, memory, tells the compiler that the given inline assembly statement executes read/write operations on memory not specified by operands in the output list. This prevents the compiler from keeping memory values loaded and cached in registers. Let's take a look at the following example:

#include <stdio.h>

int main(void)
{
    unsigned long a[3] = {10000000000, 0, 1};
    unsigned long b = 5;

    __asm__ volatile("incq %0" :: "m" (a[0]));
    printf("a[0] - b = %lu\n", a[0] - b);
    return 0;
}
This example may be artificial, but it illustrates the main idea. Here we have an array of integers and one integer variable. The example is pretty simple, we take the first element of
a and increment its value. After this we subtract the value of
b from the first element of
a. In the end we print the result. If we compile and run this simple example the result may surprise you:
~$ gcc -O3 test.c -o test
~$ ./test
a[0] - b = 9999999995
The result is a[0] - b = 9999999995 here, but why? We incremented a[0] and subtracted b, so the result should be a[0] - b = 9999999996.
If we have a look at the assembler output for this example:
movabs $0x2540be400,%rax
mov    %rax,(%rsp)
...
incq   (%rsp)
movabs $0x2540be3fb,%rsi
we will see that the first element of a contains the value 0x2540be400 (10000000000). The last two lines of code are the actual calculations. We see our increment instruction, incq, but then just a move of 0x2540be3fb (9999999995) to the %rsi register. This looks strange.
The problem is that we passed the -O3 flag to gcc, so the compiler did some constant folding and propagation to determine the result of a[0] - 5 at compile time and reduced it to a movabs with the constant 0x2540be3fb, or 9999999995, at runtime.
Let's now add memory to the clobbers list:
__asm__ volatile("incq %0" :: "m" (a[0]) : "memory");
and the new result of running this is:
~$ gcc -O3 test.c -o test ~$ ./test a[0] - b = 9999999996
Now the result is correct. If we look at the assembly output again:
00000000004004f6 <main>:
  400404:	48 b8 00 e4 0b 54 02 	movabs $0x2540be400,%rax
  40040b:	00 00 00
  40040e:	48 89 04 24          	mov    %rax,(%rsp)
  400412:	48 c7 44 24 08 00 00 	movq   $0x0,0x8(%rsp)
  400419:	00 00
  40041b:	48 c7 44 24 10 01 00 	movq   $0x1,0x10(%rsp)
  400422:	00 00
  400424:	48 ff 04 24          	incq   (%rsp)
  400428:	48 8b 04 24          	mov    (%rsp),%rax
  400431:	48 8d 70 fb          	lea    -0x5(%rax),%rsi
we will see one difference here which is in the last two lines:
  400428:	48 8b 04 24          	mov    (%rsp),%rax
  400431:	48 8d 70 fb          	lea    -0x5(%rax),%rsi
Instead of constant folding, GCC now preserves the calculations in the assembly and places the value of a[0] in the %rax register afterwards. In the end it just subtracts the constant value of b from the %rax register and puts the result in %rsi.
Besides the memory specifier, we also see a new constraint here - m. This constraint tells the compiler to use the address of a[0] instead of its value. So, now we are finished with clobbers and we may continue by looking at other constraints supported by GCC besides r and m, which we have already seen.
Constraints
Now that we are finished with all three parts of an inline assembly statement, let's return to constraints. We already saw some constraints in the previous parts, like r, which represents a register operand, m, which represents a memory operand, and 0-9, which represent a reused, indexed operand. Besides these, GCC provides support for other constraints. For example, the i constraint represents an immediate integer operand with a known value:
#include <stdio.h>

int main(void)
{
    int a = 0;
    __asm__("movl %1, %0" : "=r"(a) : "i"(100));
    printf("a = %d\n", a);
    return 0;
}
The result is:
~$ gcc test.c -o test ~$ ./test a = 100
Or, for example, the I constraint, which represents an immediate 32-bit integer. The difference between i and I is that i is general, whereas I is strictly specified to 32-bit integer data. For example, if you try to compile the following code:
unsigned long test_asm(int nr)
{
    unsigned long a = 0;
    __asm__("movq %1, %0" : "=r"(a) : "I"(0xffffffffffff));
    return a;
}
you will get an error:
$ gcc -O3 test.c -o test
test.c: In function ‘test_asm’:
test.c:7:9: warning: asm operand 1 probably doesn’t match constraints
 __asm__("movq %1, %0" : "=r"(a) : "I"(0xffffffffffff));
         ^
test.c:7:9: error: impossible constraint in ‘asm’
when at the same time:
unsigned long test_asm(int nr)
{
    unsigned long a = 0;
    __asm__("movq %1, %0" : "=r"(a) : "i"(0xffffffffffff));
    return a;
}
works perfectly:
~$ gcc -O3 test.c -o test
~$ echo $?
0
GCC also supports the J, K and N constraints for integer constants in the range of 0-63, signed 8-bit integer constants, and unsigned 8-bit integer constants respectively. The o constraint represents a memory operand with an offsettable memory address. For example:
#include <stdio.h>

int main(void)
{
    static unsigned long arr[3] = {0, 1, 2};
    static unsigned long element;
    __asm__ volatile("movq 16+%1, %0" : "=r"(element) : "o"(arr));
    printf("%lu\n", element);
    return 0;
}
The result, as expected:
~$ gcc -O3 test.c -o test ~$ ./test 2
All of these constraints may be combined (so long as they do not conflict). In this case the compiler will choose the best one for a certain situation. For example:
unsigned long a = 10;
unsigned long b = 20;

void main(void)
{
    __asm__ ("movq %1,%0" : "=mr"(b) : "rm"(a));
}
will use a memory operand:
main:
	movq	a(%rip),b(%rip)
	ret
b:
	.quad	20
a:
	.quad	10
instead of direct usage of general purpose registers.
That's about all of the commonly used constraints in inline assembly statements. You can find more in the official documentation.
Architecture specific constraints
Before we finish, let's look at the set of special constraints. These constraints are architecture specific and, as this book is specific to the x86_64 architecture, we will look at constraints related to it. First of all, the a ... d and also the S and D constraints represent general purpose registers. In this case the a constraint corresponds to the %al, %ax, %eax or %rax register, depending on instruction size. The S and D constraints are the %si and %di registers respectively. For example, let's take our previous example. We can see in its assembly output that the value of the a variable is stored in the %eax register. Now let's look at the assembly output of the same assembly, but with another constraint:
int a = 1;

int main(void)
{
    int b;
    __asm__ ("movq %1,%0" : "=r"(b) : "d"(a));
    return b;
}
Now we see that the value of the a variable will be stored in the %rdx register, since the d constraint always names the d register:

0000000000400400 <main>:
  4004aa:	48 8b 15 6f 0b 20 00 	mov    0x200b6f(%rip),%rdx        # 601020 <a>
The f and t constraints represent any floating point stack register - %st - and the top of the floating point stack respectively. The u constraint represents the second value from the top of the floating point stack.
That's all. You may find more details about x86_64 and general constraints in the official documentation.
How can we use dynamic domains in new Odoo API?
I'm trying to do it: I have a Many2one field and a computed One2many field. The options selectable in the Many2one field should be those in this One2many field.
Here are my code lines:
days = fields.One2many('af.week.day', string='Class days', compute='_compute_days',
help='Week days in which this group has classes')
day_of_week = fields.Many2one('af.week.day', string='Day of week', domain="[('id','in',days)]")
In the new Odoo API it is the same as in Version 7.0. Look at this example:
@api.onchange('field')
def onchange_field(self):
if condition_a:
return {
'domain': {
'field_b': [('domain', '=', 'something')],
},
}
else:
return {
'domain': {
'field_b': [('domain', '=', 'something_else')],
},
}
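Applied to the question's fields, the onchange would recompute the domain for day_of_week from the computed days. The sketch below is a plain-Python stand-in (no Odoo runtime; the record shape is invented for illustration) showing only the shape of the value such an onchange returns:

```python
# Hypothetical sketch: build the {'domain': ...} dict an Odoo onchange
# on 'days' could return to restrict 'day_of_week'. Records are plain
# dicts here so this runs without the framework.
def onchange_days(day_records):
    day_ids = [rec['id'] for rec in day_records]
    return {'domain': {'day_of_week': [('id', 'in', day_ids)]}}

# Example: the group has classes on Monday (id 1) and Wednesday (id 3).
result = onchange_days([{'id': 1, 'name': 'Monday'},
                        {'id': 3, 'name': 'Wednesday'}])
print(result)  # {'domain': {'day_of_week': [('id', 'in', [1, 3])]}}
```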
Any idea? In the old API we can use something like this:

def onchange_type(self, cr, uid, ids, type, context=None):
    product_obj = self.pool.get('product.product')
    product_ids = product_obj.search(cr, uid, [('type','=',type)])
    return {'domain': {'product_id': [('id','in',product_ids)]}}

But in the new API... Thanks in advance!
No answers?
Post the real example of what you did on your side; here in the comment you have given the example of the onchange method you want to use. You cannot set a dynamic domain with an onchange method alone, so give more description of your requirement. A dynamic domain can be set by overriding the search method of that particular model.
Brian Mains, Apr 28, 2008
Brian Mains explains the benefits of a 3-tier architecture.
Brian Mains, Jan 09, 2008
Brian Mains talks about abstractions.
Keyvan Nayyeri, Dec 24, 2007
Review of the NDepend tool.
Karl Seguin, Dec 05, 2007
The Foundations of Programming series looks at a number of key concepts, techniques and tools specifically designed to help developers meet the growing complexity of enterprise systems. Based on proven principals like unit testing, domain driven design, dependency injection and O/R Mappers, the series is aimed at developers interested in helping themselves.
Granville Barnett, Oct 24, 2007
A review of ANTS Profiler 3 from Redgate software.
Steve Orr, Oct 08, 2007
Overview of the free Process Explorer utility provided by Microsoft.
Alessandro Gallo, Sep 30, 2007
Review of the book: Patterns of Enterprise Application Architecture (PoEAA).
Steve Orr, Jul 16, 2007
Learn how to secure your applications against hacker attacks with Microsoft's freely downloadable Threat Analysis and Modeling tool.
Luciano Terra, Jul 11, 2007
This article will show you how easy it is to create a data driven template using MyGeneration, a powerful, free and open source code generator.
Imran Nathani, Jun 25, 2007
Speednet ., May 30, 2007
SettingsManager is a JavaScript library that allows Windows Vista Sidebar gadgets to persist common settings that all gadget instances have access to.
Amr Elsehemy, May 28, 2007
Office 2007 offers great new features, one of them is the SuperTooltip which provides much more information about controls than standard old style tooltips. This article shows how to build tooltips in such a way.
Steve Orr, May 25, 2007
Embark on a tour of this free musical beat maker's source code. In the process you'll learn about user controls, embedded resources, and the VB.NET My namespace.
Steve Orr, May 07, 2007
Learn how to use MIDI to create dynamic music by touring the source code of this free musical beat maker sequencer program. In the process you'll learn about serialization, Xml, Windows API calls, and MIDI commands.
Luke Stratman, May 03, 2007
This article covers a general introduction to ORM concepts, the approach that .NET 3.5 takes, and how it compares to these other packages.
Granville Barnett, Feb 22, 2007
With the release of .NET 3.0 came four new technologies including Windows Presentation Foundation (WPF), Windows Communication Foundation (WCF), Windows CardSpace and Windows Workflow Foundation (WF).
Derek Smyth, Dec 27, 2006
Bitwise operations are a way to use the individual bits of an integer to represent 32 individual boolean flags. Combining this idea with enumerations and bitwise ORs can provide a more readable way to set each of the individual flags.
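The article targets VB.NET, but the idea carries to any language; as an illustrative aside (not taken from the article), Python's enum.Flag packages the same enumeration-plus-bitwise-OR pattern:

```python
from enum import Flag, auto

class Permission(Flag):
    READ = auto()     # bit 0 -> value 1
    WRITE = auto()    # bit 1 -> value 2
    EXECUTE = auto()  # bit 2 -> value 4

# A bitwise OR sets several flags in one integer-backed value.
mode = Permission.READ | Permission.WRITE

print(Permission.READ in mode)     # True
print(Permission.EXECUTE in mode)  # False
print(mode.value)                  # 3, i.e. bits 0 and 1 set
```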
Auto fill
Say I wanted to auto fill info into a website. Could I do that in a python script?
it depends, but usually, yes.
There are a few options, such as mechanize, requests, or webbrowser (using javascript to simulate clicks, etc). I tend to use bs4 along with requests.
Ok also, how do I open a website in UI
This should get you started.
import ui, time

w = ui.WebView()
w.frame = (0, 0, 570, 570)
w.load_url('')
w.present()
# make sure the page has finished loading
time.sleep(1)
while not w.eval_js('document.readyState') == 'complete':
    time.sleep(1.)
# using bs4 and requests to poke around, i know the textfield has a name of q
# i can set the value using javascript
w.eval_js('document.getElementsByName("q")[0].value="using javascript to fill in forms";')
# i happen to know the name of the form is "f", i will submit the form
time.sleep(1)
w.eval_js('document.getElementsByName("f")[0].submit()')
Other good searches might be "using bs4 and requests to fill in forms", or "how to use mechanize to fill in forms".
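For the requests/bs4 route those searches cover, the first step is usually discovering the form's field names. The sketch below makes the same discovery with only the standard library's html.parser instead of bs4, against an inline HTML string standing in for a fetched page, so the page content and field names here are placeholders:

```python
from html.parser import HTMLParser

class FormFieldFinder(HTMLParser):
    """Collect the name attribute of every <input> tag on a page."""
    def __init__(self):
        super().__init__()
        self.fields = []

    def handle_starttag(self, tag, attrs):
        if tag == 'input':
            attrs = dict(attrs)
            if 'name' in attrs:
                self.fields.append(attrs['name'])

# Stand-in for html = requests.get(url).text
html = '<form name="f"><input name="q"><input type="submit"></form>'

finder = FormFieldFinder()
finder.feed(html)
print(finder.fields)  # ['q']
```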
How many triangles (of all sizes) do you see in the equilateral triangle below?
Before we begin, let's get a bit of terminology out of the way. I'll be referring to each specific instance of this problem by the number of unit-triangles at its base. The unit-triangle is the smallest triangle in each image, and the base is the number of unit-triangles along the bottom edge. In the image above, there are five unit-triangles in the base. Let's call the number of unit-triangles in an image u(n), where n is the number of unit-triangles in the base, and the total number of triangles in an image T(n).
Starting at the beginning, we see that the simplest case consists of only one triangle.
Next we have a triangle that has two triangles at its base. Counting the unit-triangles we get four, plus the larger triangle they form, for a total of five.
Next we move up to a base-3 triangle. You can see from the image below that it's made up of 9 units.
In addition, there are three base-2 triangles (one in each corner, see images below), plus the base-3 triangle itself, for a total of 13.
At this point you should note that any base-n triangle consists of three base-(n-1) triangles, one for each corner. The only exception to this rule is at base-2, which we saw consists of four base-1 triangles, one for each corner and one in the middle. From here on, I won't illustrate the three corner base-(n-1) triangles.
A triangle with a base of four units consists of itself, 16 units, seven base-2 triangles, and the three base-3 triangles in each corner, for a total of 27.
I won't illustrate all seven of the base-2 triangles, but many people do miscount them, so I'll list them by their numbered units to help you visualize them.
{ 1, 2, 3, 4 }
{ 2, 5, 6, 7 }
{ 4, 7, 8, 9 }
{ 5, 10, 11, 12 }
{ 7, 12, 13, 14 }
{ 9, 14, 15, 16 }
{ 6, 7, 8, 13 }
Note that the last triangle listed, { 6, 7, 8, 13 }, is a downward-pointing triangle. This is important because there are more of these as you increase the size of the base, and because many people overlook them while counting. Make sure that you can see all the triangles listed before continuing on.
Finally, we come to the base-5 triangle that we set out to solve at the beginning. It's made up of 25 units, 13 base-2 triangles, six base-3 triangles, three base-4 triangles in the corners, and itself for a total of 48 triangles.
I won't go through and illustrate all 48 of them, but hopefully the visual cues I provided in the previous illustrations help you to arrive at the same count that I did.
A General Formula
Now that we have the solutions to several small instances of the triangle problem, we have enough information to search for a general formula. Other than the fact that for each instance the number of unit-triangles is the square of the base, I don't see any interesting patterns in the sequence, so I'll skip directly to searching for one. Searching for the sequence 1, 5, 13, 27, 48 in the Online Encyclopedia of Integer Sequences leads me to this resource.
The comments identify this problem as "Number of triangles in triangular matchstick arrangement of side n", which is a very general way of stating the original problem (where the problem was originally posed by laying out matchsticks on a table, rather than drawing triangles in an image editor).
The formula given for the sequence is:
T(n) = floor(n * (n + 2) * (2n + 1) / 8)
I could stop there, but I want to explain what the floor function is, and why it's needed in this case. The floor function in mathematics simply gives the largest integer less than or equal to its input. It simply rounds down to the nearest whole number (unless the input is already a whole number, in which case the output is exactly the same whole number as the input).
To see why we need to floor the results of this calculation, consider two things. First, when we're counting the triangles in the original image, I hope it's apparent that we will always come up with a whole number of triangles. That is, we'll never have some fractional part of a triangle left over. The answer will always be an integer value. Second, if we look at the results of the formula without using the floor function, we can see that it is needed.
f(n) = n * (n + 2) * (2n + 1) / 8
Note that when n is odd, there is always a fractional remainder left over from applying the formula. Flooring the result removes this residue, giving us the same whole number answer we would arrive at by counting.
Since the remainder is always exactly equal to 1/8, we could get the same result by using two different formulas, one for even inputs and one for odd inputs. This is the approach taken with the formulas given on Wolfram MathWorld's Triangle Tiling article.
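A quick script (mine, not from the post) makes the floor behaviour concrete by checking the formula against the counts worked out above and confirming the 1/8 remainder for odd bases:

```python
from math import floor

def T(n):
    """Total triangles: floor(n * (n + 2) * (2n + 1) / 8)."""
    return floor(n * (n + 2) * (2 * n + 1) / 8)

# Counts found by hand earlier in the post.
assert [T(n) for n in range(1, 6)] == [1, 5, 13, 27, 48]

# For odd n the unfloored value always carries exactly a 1/8 remainder;
# for even n it is already whole.
for n in range(1, 12):
    exact = n * (n + 2) * (2 * n + 1) / 8
    assert exact - T(n) == (0.125 if n % 2 else 0.0)

print("formula matches the hand counts")
```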
In Closing...
Congratulations to everyone who took the time to work through the solution to this problem. I'd like to thank everyone who left a comment on the original post. Your questions and comments made that one of the most fun (for me) posts I've written so far. I hope you enjoyed it as much as I did.
47 comments:
Epic post as usual.
It'd be an interesting exercise to write a program that counts triangles. Counting base-1 and base-2 is simple. Starting from base-3 things get hairy. The challenge is to come up with the efficient data structure that unrolls the biggest triangle into smaller ones.
Evgeni,
My immediate thought was some kind of tree, but there's so much overlap that I don't know right off how to make it efficient.
That would definitely be an interesting exercise. You could make it a full-blown project if you include an image capturing feature like Sudoku Magic.
I wrote a program in Python that calculates the number of triangles in one of these puzzles with just the number of base triangles (unit ones). But I used a completely different approach/formula. I felt very sad when I found out about this equation because I hoped I was the first person to find a solution. It seems that almost anything that's important or interesting has already been discovered *sigh*.
Anon,
Don't be discouraged. It's a good sign if you can come up with that formula on your own, even if you weren't the first. Keep exploring new problems and eventually you'll be the first to solve one.
I count this in a simple way. It's always 3 * n^2 + 2, where n is the number of horizontal lines inside the main outer triangle.
Anon,
Did you make a mistake typing in the function you're using? The function you posted in your comment gives the sequence
2, 5, 14, 29, 50, 77, 110, 149, ...
which is only correct for one term. It's close for several terms, which is what leads me to believe you made a simple error, but I can't figure out what it is.
Though this formula of floor{n*(n+2)*(2n+1)/8} does the job, I haven't been able to fully satisfy myself of the derivation of the above result, nor have I been able to find any conclusive proof. I myself have tried to derive it, but have failed leaving me frustrated, any sort of explanation will be helpful
hi. The derivation is pretty simple, i think. i'm working towards getting the expression/formula to get it for any given level n. Its mostly nested summations, which can be simplified to polynomial expressions, which i'm hoping will work out to the given expression!
my approach [in progress]:
k = 0
for b from 1 to n-1:
    for a from 1 to b:
        k += a
this gives the count of all upright triangles in the system. its a simple summation formula, but can't write it here, so using pseudocode notation!
any help would be appreciated :)
Hi, I found your blog because I also became interested in this problem. The solution I found was:
f(n) = (3n - p(n))/2 + (3(n-1)^2 + 2(n-1) - p(n-1))/4 + sum from i = 1 to n-1 of (3i^2 + 2i - p(i))/4

Where p(n) = n (mod 2) **** n even, p(n) = 0; n odd, p(n) = 1
Python Implementation
————————————

def p(n):
    return n % 2

def f(n):
    sum = 0
    for i in range(1, n):
        sum += (3*(i)**2 + 2*(i) - p(i)) // 4
    return (3*n - p(n)) // 2 + (3*(n-1)**2 + 2*(n-1) - p(n-1)) // 4 + sum

for i in range(10):
    print("A triangle of index " + str(i) + " has " + str(f(i)) + " inscribed triangles.")
Oh, yeah, the program returns the following:
IDLE 2.6.4 ==== No Subprocess ====
>>>
A triangle of index 0 has 0 inscribed triangles.
A triangle of index 1 has 1 inscribed triangles.
A triangle of index 2 has 5 inscribed triangles.
A triangle of index 3 has 13 inscribed triangles.
A triangle of index 4 has 27 inscribed triangles.
A triangle of index 5 has 48 inscribed triangles.
A triangle of index 6 has 78 inscribed triangles.
A triangle of index 7 has 118 inscribed triangles.
A triangle of index 8 has 170 inscribed triangles.
A triangle of index 9 has 235 inscribed triangles.
>>>
p(n) is the remainder for the division n/2
dualbus,
Excellent solution! Thanks for sharing it, and for providing the source code.
On the 4 base triangle, I continually come up with 31 triangles. Additional ones not listed are - the entire outline = 1
(1,2,3,4,5,6,7,8,9)= 2
(2,5,6,7,10,11,12,13,14)=3
(4,7,8,9,12,13,14,15,16)= 4
These in addition to the 27 listed equals 31 triangles for a 4 base grid.
Anonymous,
Those four are already included in the total count of 27.
"A triangle with a base of four units consists of itself, 16 units, seven base-2 triangles, and the three base-3 triangles in each corner, for a total of 27."
The ones you listed are what I'm calling "itself" (the entire base-4 triangle), and the "the three base-3 triangles in each corner." To illustrate that last part, what you listed as (1,2,3,4,5,6,7,8,9), I would call the base-3 triangle in the top corner.
I hope that helps.
Great Post!
What I tried was counting the triangles in each of the base n-1 triangles that use one of the vertices of the base n triangle.
If you do that you've double counted all the triangles in the three base n-2 triangles where the three n-1 triangles overlap.
I can subtract those but would need to add T(n-3) back to count where all three overlap.
Adding one for the big triangle:
3T(n-1) - 3T(n-2) + T(n-3) + 1
Plugging in 0 for T(x) when n-1, n-2, or n-3 are less than 1 makes this work for odd n, but it needs another +1 for even n. This makes sense, since whenever I tried counting this way with even n there was always an upside-down triangle that wasn't counted.
I imagine solving this recurrence gives the equation you posted, but I don't know how to do that.
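That recurrence is easy to check numerically against the closed form; the sketch below (added here for illustration) treats T(x) as 0 for x < 1 and adds the extra +1 for even n, exactly as the comment describes:

```python
def T_closed(n):
    # floor(n * (n + 2) * (2n + 1) / 8) via integer division
    return n * (n + 2) * (2 * n + 1) // 8

def T_rec(n, _memo={}):
    if n < 1:
        return 0
    if n not in _memo:
        t = 3 * T_rec(n - 1) - 3 * T_rec(n - 2) + T_rec(n - 3) + 1
        if n % 2 == 0:
            t += 1  # the otherwise-uncounted upside-down triangle
        _memo[n] = t
    return _memo[n]

# The inclusion-exclusion recurrence agrees with the closed form.
assert all(T_rec(n) == T_closed(n) for n in range(1, 100))
print(T_rec(5))  # 48
```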
So what am I missing? Why does Evgeni want us to use base-n(ything)? I see a polynomial; my calculus is a bit rusty, but from memory:
n * (n + 2) * (2n + 1) / 8
is the same as
n * (2n^2 + 4n + n + 2)/8
or
n * (2n^2 + 5n + 2 ) / 8
or
(2n^3 + 5n^2 + 2n) / 8
flame me if I am wrong, I would deserve it.
As for the floor function, computers always round down when converting from float to int, hence if the calculation is done as a float and converted to an int the correct result will be reached.
based on my old maths and in c:
#include <stdio.h>
#include <math.h>
int triangles (int base_triangles)
{
float res;
float n = base_triangles;
res = ((2 * pow(n, 3)) + (5 * pow(n, 2)) + (2 * n)) / 8;
return (int) res;
}
int main (void) {
int i;
for (i = 0; i < 7; i++)
printf("base_triangles = %d, total_triangles = %d\n", i, triangles(i));
}
It seems my calculus is not as rusty as I thought - increase 7 to what you like.
Nasty response thingy stole some of my text: you will need to include stdio.h (for printf) and math.h (for pow).
To compile I used cygwin - it's free - and the command:
gcc -o triangles triangles.c
and to run
triangles.exe
Enjoy, hopefully someone will learn something.
This is awesome. :)
Funny, I started out wanting to take a 5x5x5 Rubik's cube apart and ended up here! Amazing. My initial guess, before I went online, to the number of "parts" inside the cube was 248. I have yet to take it apart. Wish me luck.
Excel cell formula: =POWER(A1,3)/4+POWER(A1,2)*5/8+A1/4+(POWER(-1,A1)-1)/16
where you pop the length of the side in cell A1.
good one...
#include
#include
#include
int main(void)
{
int T,i,j;
long long int N,sum=0,num1,num2;
scanf("%d",&T);
if(T>10000)
{
exit(0);
}
while(T--)
{
scanf("%lld",&N);
sum=0;
if(N<1||N>1000000)
{
exit(0);
}
sum+=(N*N*N+3*N*N+2*N)/6;
sum+=((N*N-N)*((N/2)+1))/2;
sum+=((-2*N+1)*((N/2)*((N/2)+1)))/2;
sum+=(4*((N/2)*((N/2)+1)*(2*(N/2)+1))/6)/2;
printf("%lld\n",sum);
}
return 0;
}
Do I need to say that you are awesome while explaining things, Thanks a TON !!!
Hello...im a college prep student, just started on the 2 of july. anyway our math teacher gave us this problem as homework to try to find patterns. anyway i've found an alternate yet longer way to figure out how many triangles are in the triangle. now im not good at mathematical formulae, but i'll explain it the best i can.
since the number of unit triangles in the larger trangle is always the number or rows times itself(i.e. row 8 has 64 unit triangles, 8*8=64) if you take the number of unit triangles and subtract the row minus 1 you'll get the number of base 2 triangles in the next row. it goes on and on, until the number stagnates and will always be that number in that diagonal row. since i probably lost most of you here's a sort of graph:
as you can see, the number of triangles starts to diminish and finally stagnate as you go down diagonally. i've used my formula to get the number of triangles in a 20 row triangle. yes...it was very daunting...but it wasnt nearly as painful as counting the first ten rows and recording my findings by hand...anyways thanks for listening to a kid like me. if you find anything else out about this please post it.
ugh...srry that graph didnt come out like i wanted this should be better:-
i give up XD frakin hate computers somtimes...
Base 1 = 1
Base 2 = 1 + 4
Base 3 = 1 + 3 + 9
Base 4 = 1 + 3 + 7 +16
Base 5 = 1 + 3 + 6 +13+25
We could now assume that for a Base 6 triangle, the answer will be in the form of: 1 + 3 + ... +36. But how would one find the three numbers in between?
We should notice that the second number in the solutions, for Base 3 and greater is always going to be "3".
We also notice that the third number in the Base 4 sequence is 7, 2 less than the third number in the Base 3 sequence. The third number in the Base 5 sequence is 6, which is one less than the third number in the Base 4 sequence. Following the pattern set in the second and third sequence numbers for Base 5 triangles and less, the third number in the sequence will be 6 and remain 6 for all Base n triangles.
To summarize:
The first number when calculating any Base n triangles will be "1".
The second term for Base n sequence (where n is > 2) will be "3".
The third term for Base n sequence (where n is > 4) will always be "6".
The fourth term for Base n sequence (where n > 6) will always be "10".
The xth term for a Base n sequence (where n > 3x-2) will always be a "triangle number", i.e. 1,3,6,10,15,21....
Also, as many have noted, the last term in any Base n sequence will be n^2.
So there is a predictable sequence of terms where one could devise the answer to the "number of triangles" for any Base n without using the admittedly convenient formula and without tedious counting.
Base 6 : 1 + 3 + 6 +11 +21 +36
Base 7 : 1 + 3 + 6 +10 +18 +31 +49
Base 8 : 1 + 3 + 6 +10 +16 +27 +43 +64
Base 9 : 1 + 3 + 6 +10 +15 +24 +38 +57 + 81
Base10 : 1 + 3 + 6 +10 +15 +22 +34 +51 + 73 +100
Thanks for this site!
I solve it by counting the “up-triangles” (those pointing up) first, unit and non-unit.
Up(n) = n(n+1)(n+2)/6
then the down-triangles
Down(n) = n(n+2)(2n-1) – mod(n,2)/8
I saw a pattern then. Putting them together you get your result:
(n(n+2)(2n+1) – mod(n,2))/8
Oops
Down(n) = n(n+2)(2n-1)/24 – mod(n,2)/8
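Clearing the mod(n,2)/8 term to 3/24, both pieces of this comment reduce to integer arithmetic and can be checked against the closed form; a short verification sketch, written for this check:

```python
def up(n):
    # Upward-pointing triangles: n(n+1)(n+2)/6
    return n * (n + 1) * (n + 2) // 6

def down(n):
    # Downward-pointing: n(n+2)(2n-1)/24 - mod(n,2)/8, over a common /24
    return (n * (n + 2) * (2 * n - 1) - 3 * (n % 2)) // 24

def total(n):
    # (n(n+2)(2n+1) - mod(n,2)) / 8
    return (n * (n + 2) * (2 * n + 1) - n % 2) // 8

assert (up(5), down(5), total(5)) == (35, 13, 48)
assert all(up(n) + down(n) == total(n) for n in range(1, 200))
print("up + down matches the total for every base checked")
```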
Hi, this is John. Try this: [4n^3 + 10n^2 + 4n - 1 + (-1)^n]/16, if you want to know the total number of triangles, where n is the number of triangles in the base.
Here's the formula I figured out before I found your site.
sum^b_k{\lfloor \frac{b + k + 1}{\lceil \frac{1}{2} (b + k + 1) \rceil} \rfloor sum^k_i{i}},
where b is the number of unit triangles in the base. Sorry about the messy LaTeX; if you can't read it, here's the mathurl: mathurl.com/8r9vpmj
I really wish distinguishing between odd and even wasn't such a hassle for math notation. Really, all that mess of floor and ceiling in the middle of my formula does is simplify to either 1 or 2, depending on whether b and k are both odd, both even, or one is odd and the other even. It shouldn't be so complicated.
It works with an algorithm. I also thought that I had found a solution before I saw the formula. Let's say that sum(5) = 5+4+3+2+1, the base is broi1, and broi2 is broi1-1 (broi2 actually represents the triangles with the edge pointing downside).
$sum = 0;
while ($broi1 > 0)
{
$sum = $sum + sum($broi1);
if ($broi2 > 0)
{
$sum = $sum + makesum($broi2);
}
$broi1--;
$broi2 = $broi2 - 2;
}
And the $sum is the exact count of triangles, which you can find in the picture. :)
I found a formula for this problem: Triangles = (cos(pi*n) + 4n^3 + 10n^2 + 4n -1)/16
The formula you gave us is not so hard to calculate, it's the sum of: (1+2+3+4+...+n) + (1^2+2^2+3^2+...+n^2) +
Now we have the trick, for even:
+[1+3+5+...+(n-1)]+
[1^2+2^2+3^2+...+(n-1)^2]
Or for odd:
+[2+4+6+...+(n-1)] +
+[2^2+4^2+6^2+...+(n-1)^2]
If you want to know how you get to this step, it's simple, calculate first the triangles with the point up and after the ones with the point down.
Sorry, my mistake at something, at even the second sum is : 1^2+3^2+5^2+...+(n-1)^2
Yes, yes. Finding the number of triangles in an equilateral triangle comprised of equilateral triangle units is easy. The huge number of comments preceding mine have provided many formulae to define it.
Let's step it up.
How many triangles are there in a square grid of unknown size?
Link to PhysicsForums Topic
The floor function is nice!
Formula I came up with before I found this site:
1/4*(n*(n+1)^2 + 2*int(n/2)*int((n+1)/2))
Hey all ...
First I started counting the difference of triangles made up of one small triangle and I saw that it's the same as the function y = x^2, and calculated the difference in triangles of 4 small triangles as every row increases the number .... Finally there is a big relation sequenced between these differences. The table is amazing; if you need it add me or enter my page Ahmad zeinddein ... physics and mathematics devils
Thank you so much, this helped me understand this a lot more.
thanks man....for taking the time and effort
I just derived the formulae for the number of triangles and it matches exactly what you have given here. Would be glad to show my method if anyone's interested.
I found this formula (in the link) using integration and sequential corrections:
In this formula, x is the base of the triangle.
It give us the exact value of triangles. I used another equation, but it wasn't a closed equation.
It took me about 6 hours without stopping to reach on it. @__@
Here is the first equation that I got and I used it just to take some values. I've implemented it in a C software and it worked.
Q = (L-0)(L-0) + (L-1)(L-1) + (L-2)(L-3) + (L-4)(L-4) + (L-5)(L-5) + (L-6)(L-7) + (L-8)(L-8) + ... + (L- i)(L - j)
Which
L is the base of the triangle
i = L or j = L
so (L- i)*(L - j)=0
I derived different equations which also work, using number trees by separating out the odd and even number sequences and then deriving equations from those number trees. My formulae are:
1/2 (4n^3 - n^2 - n) for odd terms
1/2 (4n^3 + 5n^2 + n) for even terms.
Hi guys
Found the no.of triangles in a triangular grid.
It's
(1/6)n[(n+1)(n+2) + (n-1)(n-2) + 1]
where n= no of unit triangles on its one side.
Arrived at this using the triangular number pattern.
hope it helps.
Agenda
See also: IRC log
<trackbot> Date: 15 September 2009
<gpilz> I'll drown myself tonight in sangria, made with slice-up fruit and cheap marsala
<Bob> scribenick Katy
<Bob> scribenick: Katy
<scribe> Chair: considering schedule and deadline we would probably benefit from this.
UNKNOWN_SPEAKER: Follow tech
plenary start of November
... recommend registering by 31st Sept for early bird deadline. Also sign up quick for hotels
<Zakim> asir, you wanted to ask a question about Hursley F2F (when appropriate
Asir: Do we have enough at the F2F to do business
<scribe> Chair: Yes, 10 (with phone attendee) and possibly some who have not answered on ballot
<dug>
Doug: 2nd comment - talks about
missing space can't locate the problem, sounds like it's the
diffs tool
... Phrase of 3rd comment.
Valid Infoset for this specification are the one serializable in XML 1.0, hence the use of XML 1.0.
Agreed should be 'is' rather than 'are'.
<scribe> Chair: Have all issues that are incorporated in Sept 2nd review closable?
UNKNOWN_SPEAKER: no comment so we
will accept them as closed
... Hearing no objection, we shall publish these as heartbeat Working draft
Asir: Publication in transfer
specification has a dangling reference
... to fragment
<Zakim> asir, you wanted to ask a question
<scribe> Chair: We haven't updated other references and prior snapshots have included references to other specs
Asir: There are 2 references (one in middle and one in end). One in middle refers to dialect uri
<scribe> Chair: Does anyone have any objection to us publishing with these links still there?
Yves: We can link to the editors' copy if we want. If the link is auto gen'd to be part of the multi doc publication then we may need to do something else
Asir: That's fine if we can fix with pub rules
<scribe> Chair: No objections
RESOLUTION: Go ahead with heartbeat publications
<dug> wait? fruitful??? I thought it was a boondoggle!
<scribe> Chair: do we have agreement for folk to produce action items in time for F2F so F2F is fruitful
Asir: There is only one policy assertion action item open. Should we have more?
<scribe> Chair: The intention was to base other policy issues on 6403 - that was going to be a template
Asir: Could Doug and I work on the others as well prior to the F2F?
Doug: Yes, so long as we get the template one in place
<scribe> Chair: Perhaps we could look at 6403 during next week's meeting
<scribe> ACTION: Asir and Doug to aim to get 6403 proposal for next week's meet as template for other policy [recorded in]
<trackbot> Created ACTION-101 - And Doug to aim to get 6403 proposal for next week's meet as template for other policy [on Asir Vedamuthu - due 2009-09-22].
Whoops - I didn't realise that would actually create another action - sorry Asir and Doug!
<asir> That's fine - double reminder :-)
Ram: I would like to consider dropping the WS-Frag spec and merging RT towards Frag?
<dug> something == an apple pie
<dug> with whip cream
<dug> or ice cream
<Bob> and cheese
Ram: Reference to fragment goes away and fragment dialect is replaced by something similar in RT spec
Doug: This is more of a structural decision rather than semantics. (i.e. should frag be part of RT)
<scribe> Chair: This has been ongoing discussion since Raleigh F2F. It is likely that some of the RT spec may come under discussion (based on issues)
UNKNOWN_SPEAKER: we may be left with just Frag
<scribe> Chair: ... We could simply say, is there anything in the RT spec that we need to keep if Frag was in place
Doug: I would like one more week for this decision
<scribe> Chair: Next week is the deadline before RT issues are addressed
Ram: OK with me too
<scribe> ACTION: Katy to create new proposal for 7553 [recorded in]
<trackbot> Created ACTION-102 - Create new proposal for 7553 [on Katy Warr - due 2009-09-22].
RESOLUTION: 7553
opened
... 7554 opened
Gil: It should be clear to the
unsubscriber, you should receive invalid subscription fault but
during unsubscribe you should receive this unsubscribe
fault
... in order to indicate that the subscription is still active
<scribe> ACTION: Katy to create proposal for 7554 that considers 7553 [recorded in]
<trackbot> Created ACTION-103 - Create proposal for 7554 that considers 7553 [on Katy Warr - due 2009-09-22].
Issue 7586
Gil: It is overly complex to have xs:duration and xs:datetime for subscriptions
RESOLUTION: 7586 is opened
<scribe> Chair: Is this predecessor to 7478
Gil: They are kindof separate. It has an existing proposal
<scribe> Chair: Could you provide a link to that proposal in bugzilla for this issue?
Gil: yes will do
RESOLUTION: 7587
opened
... 7588 opened
7589
RESOLUTION: 7589 opened
<scribe> ACTION: Gil to produce proposal for 7589 [recorded in]
<trackbot> Sorry, couldn't find user - Gil
<scribe> ACTION: GIlbert to produce proposal for 7589 [recorded in]
<trackbot> Created ACTION-104 - Produce proposal for 7589 [on Gilbert Pilz - due 2009-09-22].
Asir: When do we stop new issues coming in
<scribe> Chair: After Last call draft, if we make substantive changes, it drops back to last call
UNKNOWN_SPEAKER: so I have no problem with new issues but substantive changes will drop us back
Doug: I agree that we need to
close issues down but now is the time to step through the specs
in detail and notice issues
... especially as we have spent more time on some specs than others
<scribe> Chair: If you have any substantive issues against RT, open them now
UNKNOWN_SPEAKER: as don't know the frag direction
<Ram> Revision to RFC 3987 that is in progress at IETF:.
Yves: Other specs support IRI, we should too. Or we should state that each time we use 'URI' we actually mean 'IRI'
<Ram> Just want to draw the attention of this WG to the work on the revision to RFC 3987.
Doug: This is basically a global replace?
Yves: Yes
Ram: Fine with Yves
suggestion.
... but want to point out that there is a revision being worked upon (3987). This does not change the discussion
... I would just like to draw people's attention to this
<scribe> Chair: Is there a subsection that we should be pointing to?
<Yves> section 5
<Yves> 5.3.1. Simple String Comparison . . . . . . . . . . . . 23
UNKNOWN_SPEAKER: We need to be
aware of code set and representation and that IRI is intended
to support many languages
... but simple string comparison is key here. We are saying that the 5.3.1 comparison may cause a false negative comparison
... so a comparison would be a code point by code point comparison to check for equivalence
... Are folks still ok with this resolution?
Asir: Previous specs did not do a
global replace. For example, namespace must stay as URI
... so we need to be careful
Yves: In 2003 namespaces were not
IRI enabled so back then we needed to be careful about
namespaces being URI rather than IRI
... but 2006 recommendation IRI-enables namespaces
Asir: but the 2006 recommendation has other issues such as XML 1.1
<scribe> Chair: Need a bit more work on this
<Bob> acl gp
UNKNOWN_SPEAKER: Yves, Doug and Asir discuss on this on public mailing list to get a proposal ready for next week
Gil: I am concerned about the test effort for this
Ram: I have sent a proposal to address the problems with the references in the group of proposals relating to references
<Ram> Proposed resolution:
<Ram> Proposed resolution:
<Bob> and issues 6569, 6570, 6571, and 6572
Ram: I have gone through each of references and 1) make them point to the latest standards 2) decided whether the references are normative or not
<scribe> Chair: Can we use the technique for citing references in xslt 2
<dug> yves - i'm getting blocked again
<dug> 129.33.49.251
<Yves>
<dug> any chance you could fix it (again) ?
<Yves> done ;)
<asir> here is an example
<asir> [XML1]
.
Doug: Do we want to point to the latest or the dated version
<scribe> Chair: would prefer that we link to a dated version rather than one that may not be yet written
RESOLUTION: Above issues 6568/69/70/71/72 accepted based on proposals from Ram. Dated version references and style guide here for editors
<asir> Bob said - s/Non-normative References/Informative References/g
<dug> blocked again!
<Yves> now on two machine!
RESOLUTION: Proposal for 7486 accepted and 7486 is resolved
<Bob>
Ram: The assumption that the eventing
spec has made is that the source decides what to send to the
subscriber and the subscriber must code defensively. The source may
have some resource constraints meaning that it cannot always
provide client subscriptions
... how should client deal with this? Send unsubscribe? Gil's proposal is one way to deal with this may be others.
Bob: Small server community tend not to have concept of timezones or persistence.
<dug> earth-based?? LOL
Bob: What should the client do if it
gets a notification but has no idea why, because it didn't
persist previous state? That is another scenario
... restricted capabilities regarding time for small clients
Doug: Subscriber should have control as its the one asking for subscription. Should not be given a random period of time by which the subscription may live.
<scribe> Chair: Please add any issues with this proposal and others to mailing list.
How to Modify Element or Elements of an ArrayList in Java
While working with ArrayList in Java, you might face a situation where your ArrayList holds a huge number of elements and you have to modify a particular element in your list.
So how to do that?
In this post, we are going to learn the easy way to update, modify, or replace an element as you wish in a Java program.
To understand this I will go with an example.
Assume you have an ArrayList named list.
The list contains the following elements:
[CodeSpeedy, ArrayList, Java]
But you need the list like this one:
[CodeSpeedy, ArrayList, J2EE]
So, you have to modify the last one. That means you need to change the last element, “Java”, whose index number is 2 (as index numbers start with 0).
How To Convert An ArrayList to Array In Java
Java Program To Update Element in an ArrayList:
import java.util.ArrayList;

public class Arraylistupdate {
    public static void main(String args[]) {
        ArrayList<String> list = new ArrayList<String>();
        list.add("CodeSpeedy");
        list.add("ArrayList");
        list.add("Java");
        System.out.println("before modify: " + list);
        list.set(2, "J2EE");  // replace the element at index 2
        System.out.println("after modify: " + list);
    }
}
Output:
run:
before modify: [CodeSpeedy, ArrayList, Java]
after modify: [CodeSpeedy, ArrayList, J2EE]
BUILD SUCCESSFUL (total time: 0 seconds)
How to Increase and Decrease Current Capacity (Size) of ArrayList in Java
Explanation:
list.set(2, "J2EE");
This is the main line which is responsible for updating an element.
In the set() method we pass two values separated by a comma.
The first is the index of the element you want to update.
The second is the new element you want inserted in the list in place of the old one.
Here 2 is the index of “Java”.
We then pass “J2EE” so that “Java” gets replaced by “J2EE”.
We added the double quotes because it is a String-type ArrayList.
In the case of Integer, we don’t need those double quotes.
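And if you ever need to modify every element instead of just one, Java 8 added replaceAll() on List, which applies an operator to each element in place. A small sketch (class name is mine):

```java
import java.util.ArrayList;
import java.util.List;

public class ArrayListReplaceAllDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        list.add("CodeSpeedy");
        list.add("ArrayList");
        list.add("Java");

        // set() updates a single element by index
        list.set(2, "J2EE");

        // replaceAll() transforms every element with one call (Java 8+)
        list.replaceAll(String::toUpperCase);

        System.out.println(list); // prints [CODESPEEDY, ARRAYLIST, J2EE]
    }
}
```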
Finding an ArrayList in Java is Empty or Not
We have compiled the most frequently asked SQL Server interview questions for .NET developers, which will help you at different expertise levels.
SQL Server Interview Questions and Answers PDF Free Download for .Net Developers
Question 1.
What is Microsoft SQL Server?
Answer:
The Microsoft relational database management system is a software product that primarily stores and retrieves data requested by other applications. These applications may run on the same or a different computer. SQL is a special-purpose programming language designed to handle data in a relational database management system. A database server is a computer program that provides database services to other programs or computers, as defined by the client-server model. Therefore, a SQL Server is a database server that implements the Structured Query Language (SQL). SQL Server is a central part of the Microsoft data platform. SQL Server is an industry leader in operational database management systems (ODBMS).
Question 2.
What is Database?
Answer:
A database is a collection of information that is organized so that it can be easily accessed, managed, and updated. Computer databases typically contain aggregations of data records or files, containing information about sales transactions or interactions with specific customers.
Question 3.
Write Advantage of DBMS?
Answer:
The following advantage of DBMS in SQL Server.
- The amount of redundancy in stored data can be reduced.
- Stored data can be shared by single or multiple users.
- Data abstraction, data independence, and data security
- The ability to swiftly recover from crashes and errors, including restorability and recoverability
- Simple access using a standard application programming interface (API)
- Uniform administration procedures for data
Question 4.
What is Normalization Explain?
Answer:
Normalization is a process that helps analysts or database designers to design table structures for an application. The focus of normalization is to reduce redundant table data to the very minimum. Through the normalization process, the collection of data in a single table is replaced by the same data distributed over multiple tables, with specific relationships set up between the tables.
First Normal Form (1NF):
A relation is in 1NF if every attribute contains only atomic values.
Ex: A sample Employee table that lists employees working in multiple departments within a single column would violate 1NF.
Second Normal Form (2NF):
A relation is in 2NF if it is in 1NF and all non-key attributes are fully functionally dependent on the primary key.
Ex: The entity should already be in 1NF, and all attributes within the entity should depend solely on the entity's unique identifier.
Sample Products table:
product table following 2NF:
products Category table:
Brand table:
Product Brand table:
Third Normal Form (3NF): A relation is in 3NF if it is in 2NF and no transitive dependency exists. The entity should already be in 2NF, and no column should depend on any column other than the table's key. If such a column exists, move it out into a new table. Once 3NF is achieved, the database is generally considered normalized; in 3NF (and BCNF) every table should have a single primary key.
Fourth Normal Form (4NF): Tables cannot have multi-valued dependencies on a primary key.
Fifth Normal Form (5NF): A relation is in 5NF if it is in 4NF, contains no join dependency, and all joins are lossless.
Question 5.
What is the difference between a duplicate table and a cloning/coping table in SQL Server?
Answer:
Cloning is a situation where you want to create an exact copy of an existing table to test or try something without affecting the original table. In ADO.NET terms, Copy() returns a DataTable with both the structure and the data, while Clone() copies only the structure. Ex:
DataTable dtCopy = dt.Copy();   // structure + data
DataTable dtClone = dt.Clone(); // structure only
The clone copies only the structure of the data, whereas the copy copies the complete structure as well as the data.
You can duplicate an existing table in SQL Server 2017 by using SQL
Server Management Studio or Transact-SQL by creating a new table
and then copying column information from an existing table.
Question 6.
What is the difference between the “BETWEEN” operator & the “IN” operator in SQL Server?
Answer:
The BETWEEN operator selects a range of data between two values; the values can be numbers, text, dates, etc. Ex: SELECT * FROM abc WHERE marks BETWEEN 50 AND 80;
The IN operator allows you to specify multiple values.
Ex: SELECT * FROM abc WHERE marks IN (89, 82);
Question 7.
What is the difference between Temp Table & Table variable in SQL Server?
Answer:
Question 8.
How to bulk data upload in SQL Server types of process Explain?
Answer:
BCP Utility: A command-line utility (Bcp.exe) that bulk exports and bulk imports data and generates format files.
BULK INSERT Statement: A Transact-SQL statement that imports data directly from a data file into a database table or nonpartitioned view.
SQL Server Import & Export Wizard: The wizard creates simple packages that import and export data between many popular data formats including databases, spreadsheets, and text files.
Question 9.
Difference between function & stored procedure in SQL Server.
Answer:
Question 10.
What is the SOLID Principle?
Answer:
SOLID is an acronym for five basic principles that help to create good software architecture:
- S stands for SRP (Single Responsibility Principle)
- O stands for OCP (Open/Closed Principle)
- L stands for LSP (Liskov Substitution Principle)
- I stands for ISP (Interface Segregation Principle)
- D stands for DIP (Dependency Inversion Principle)
Question 11.
What is Transaction in SQL Server?
Answer:
A transaction is a unit of work that is performed against a database. A transaction is the propagation of one or more changes to the database; for example, when you create, delete, or update records in a table, you are performing a transaction.
Transaction control:
- Commit: saves the changes.
- Rollback: rolls back the changes.
- Save Point: creates a point within a group of transactions to which you can roll back.
- Set Transaction: places a name on a transaction.
Question 12.
Write the difference between Clustered Index & Non-Clustered Index.
Answer:
A clustered index defines the order in which data is physically stored in a table. Data can be sorted in only one way; therefore, there can be only one clustered index per table. The primary key constraint automatically creates a clustered index on that column.
A non-clustered index doesn't sort the physical data inside the table. The non-clustered index is stored in one place and the table data in another, like a textbook whose content is in one place and whose index is in another. This allows more than one non-clustered index per table. A non-clustered index is slower than the clustered index.
Ex: CREATE NONCLUSTERED INDEX IX_Student_Name ON Student (name ASC)
It’s useful for improved query performance & searching data in SQL Server.
Question 13.
Difference between HAVING & WHERE in SQL Server?
Answer:
WHERE is applied before GROUP BY, while HAVING checks conditions after aggregation. WHERE is used for filtering rows and applies to every row individually, while HAVING filters groups.
Question 14.
What is common table expression(CTE)?
Answer:
A temporary named result set is known as a common table expression (CTE). It is defined within the execution scope of a single SELECT, INSERT, UPDATE, DELETE, or MERGE statement.
Ex: Syntax
WITH ExpressionName (Column1, Column2) AS
(
    CTE_query_definition
)
SELECT Column1, Column2 FROM ExpressionName;
Advantage of CTE:
- Can be used to create a recursive query.
- Can be substituted for a view.
- Can reference itself multiple times.
Question 15.
The difference between Primary Key, Unique Key, Foreign Key?
Answer:
Question 16.
How to Query optimize in SQL Server?
Answer:
The following method of SQL Query Optimize is given below:
• Do not use the * operator in your SELECT statements. Instead, use column names, because SQL Server scans for all column names and replaces the * with all the column names of the table(s) in the SQL SELECT statement.
• Do not use NOT IN when comparing with Nullable columns. Use NOT EXISTS instead.
• Do not use table variables in joins. Use temporary tables, CTEs (Common Table Expressions), or derived tables in joins instead. Even though table variables are very fast and efficient in a lot of situations, the SQL Server engine sees it as a single row. Due to this, they perform horribly when used in joins. CTEs and derived tables perform better with joins compared to table variables.
• Do not begin your stored procedure's name with sp_. When a procedure is named with the sp_ prefix, SQL Server always checks the system/master database first, even if the owner/schema name is provided.
• Avoid using a wildcard (%) at the beginning of a predicate. The predicate LIKE '%abc' causes a full table scan. For example: SELECT * FROM table1 WHERE col1 LIKE '%abc'
• Use inner join, instead of outer join if possible. The outer join should only be used if it is necessary. Using outer join limits the database optimization options which typically results in slower SQL execution
• Avoid using GROUP BY, ORDER BY, and DISTINCT as much as possible. When using GROUP BY, ORDER BY, or DISTINCT, the SQL Server engine creates a work table and puts the data on the work table. After that, it organizes this data in the work table as requested by the query, and then it returns the final result.
• Use SET NOCOUNT ON with DML operations. When performing DML operations (i.e. INSERT, DELETE, SELECT, and UPDATE), SQL Server always returns the number of rows affected. In complex queries with a lot of joins, this becomes a huge performance issue.
Question 17.
What is Identity? How many types of Identity are in SQL Server?
Answer:
An Identity column of a table is a column whose value increases automatically. The value in an identity column is created by the server. A user generally can’t insert a value into an identity column.
Syntax: Identity [(Seed, Increment)]
Seed – the starting value of the column; the default is 1.
Increment – the value added to the identity value of the previous row that was loaded; the default is 1.
- SET IDENTITY_INSERT ON – allows a user to insert explicit values into an identity column.
- SET IDENTITY_INSERT OFF – prevents a user from inserting explicit values into an identity column.
Question 18.
What are Triggers in SQL? Explain the type of triggers.
Answer:
A trigger is a special kind of stored procedure that automatically executes when an event occurs in the database server. A DML (Data Manipulation Language) trigger executes when a user tries to modify data through a DML event: INSERT, UPDATE, or DELETE.
DDL Triggers: SQL Server we can create triggers on DDL statements (like CREATE, ALTER, and DROP) and certain system-defined stored procedures that perform DDL-like operations.
DML Triggers: SQL Server we can create triggers on DML statements (like INSERT, UPDATE, and DELETE) and stored procedures that perform DML-like operations. DML Triggers are of two types. After Trigger (using FOR/AFTER CLAUSE) & Instead of Trigger (using INSTEAD OF CLAUSE)
CLR Triggers: CLR triggers are a special type of trigger based on the CLR (Common Language Runtime) in the .NET Framework. CLR integration of triggers was introduced with SQL Server 2008 and allows triggers to be coded in a .NET language like C#, Visual Basic, or F#.
Logon Triggers: Logon triggers are a special type of trigger that fires when the LOGON event of SQL Server is raised. This event is raised when a user session is being established with SQL Server, after the authentication phase finishes but before the user session is actually established.
Example of Insert Trigger:
CREATE TRIGGER abc ON [dbo].[mytable]
FOR INSERT
AS
BEGIN
    DECLARE @ID int;
    DECLARE @Name varchar(50);
    DECLARE @Audit_Action varchar(50);

    SELECT @ID = j.id FROM inserted j;
    SELECT @Name = j.name FROM inserted j;
    SET @Audit_Action = 'Inserted Record after trigger';

    INSERT INTO xyzBackupfile (Id, Name, Audit_Action, Audit_Time)
    VALUES (@ID, @Name, @Audit_Action, GETDATE());

    PRINT 'Trigger executed successfully';
END
Question 19.
What is Scaler function in SQL Server?
Answer:
A SQL Server scalar function takes one or more parameters and returns a single value. Scalar functions help you simplify your code. For example, you may have a complex calculation that appears in many queries. Instead of including the formula in every query, you can create a scalar function that encapsulates the formula and use it in the queries.
Ex:
CREATE FUNCTION [schema_name.]function_name (parameter_list)
RETURNS data_type
AS
BEGIN
    statements
    RETURN value
END
Question 20.
What is the difference between the # & @ table in SQL Server?
Answer:
#table_name refers to a local temporary table, visible only to the session that created it.
##table_name refers to a global temporary table, visible to all sessions.
@variable_name refers to a table variable, which can hold data depending on its declared size.
Question 21.
What is function Explain?
Answer:
A function is a database object in SQL Server. Basically, it is a set of SQL statements that accepts only input parameters, performs actions, and returns the result. A function can return only a single value or a table. We can't use a function to INSERT, UPDATE, or DELETE records in the database tables.
System Defined Function
Scalar function: A scalar function operates on a single value and returns a single value.
Example: rand, round, upper, lower, trim, convert
Aggregation function: SQL Server aggregate functions perform a calculation on a set of values and return a single value. With the exception of the COUNT aggregate function, all other aggregate functions ignore NULL values.
Example :
- min()
- max()
- avg()
- count()
User-defined function: These functions are created by the user in the system database or in a user-defined database. There are three types of user-defined functions: scalar functions, inline table-valued functions, and multi-statement table-valued functions.
Question 22.
What is SQL Injection?
Answer:
SQL Injection is a code injection technique that might destroy your database. SQL Injection is one of the most common web hacking techniques. SQL injection is the placement of malicious code in SQL statements, via web page input.
Question 23.
What are Constraints in SQL Server explain?
Answer:
Constraints are rules enforced on the data columns of a table. They are used to limit the type of data that can go into a table, which ensures the accuracy and reliability of the data.
Example:
- CHECK – Ensures that the values in a column satisfy a specific condition
- DEFAULT – Sets a default value for a column when no value is specified
- INDEX – Used to create and retrieve data from the database very quickly.
Question 24.
What is Rollback in SQL Server?
Answer:
We can undo a single write or delete operation, or even multiple consecutive ones, as long as they have not been committed. The ROLLBACK command is used for this in SQL Server.
Ex:
SQL> DELETE FROM CUSTOMERS WHERE AGE = 25;
SQL> ROLLBACK;
SQL> SELECT * FROM CUSTOMERS;
Question 25.
The difference between delete & truncate?
Answer:
Question 26.
What is PIVOT & UNPIVOT Explain?
Answer:
The PIVOT and UNPIVOT relational operators change a table-valued expression into another table. PIVOT rotates a table-valued expression by turning the unique values from one column in the expression into multiple columns in the output. PIVOT runs aggregations where they're required on any remaining column values that are wanted in the final output. UNPIVOT carries out the opposite operation to PIVOT, by rotating columns of a table-valued expression into column values.
Question 27.
What is Join Explain?
Answer:
The SQL Joins clause is used to combine records from two or more tables in a database. A JOIN is a means for combining fields from two tables by using values common to each.
There are three main types of joins:
- Inner Join,
- Outer Join,
- Cross Join.
1) Inner Join: Returns records that have matching values in both tables.
Syntax: SELECT column_name(s) FROM table1 INNER JOIN table2 ON table1.column_name = table2.column_name;
2) Outer Join:
• Left Outer Join: Returns all records from the left table, and the matched records from the right table
Syntax: SELECT column_name(s) FROM table1 LEFT JOIN table2
ON table1.column_name = table2.column_name;
• Right Outer Join: Returns all records from the right table, and the matched records from the left table
Syntax: SELECT column_name(s) FROM table1 RIGHT JOIN table2
ON table1.column_name = table2.column_name;
• Full outer Join: Returns all records when there is a match in either the left or the right table.
Syntax: SELECT column_name(s) FROM table1 FULL OUTER JOIN table2
ON table1.column_name = table2.column_name WHERE condition;
3) Cross Join (Cartesian Product): Returns a result set whose number of rows is the number of rows in the first table multiplied by the number of rows in the second table, if no WHERE clause is used along with the cross join.
Syntax: SELECT * FROM table1 CROSS JOIN table2;
Question 28.
How can SQL Injection be stopped?
Answer:
The following process of SQL Injection prevention is given below:
- Validate the SQL commands that are being passed by the front end.
- Validate the length and data type per parameter.
- Convert dynamic SQL to stored procedures with parameters.
- Prevent any commands from executing that contain any combination of the following: semi-colon, EXEC, CAST, SET, two dashes, apostrophe, etc.
- Based on your front end programming language determine what special characters should be removed before any commands are passed to SQL Server
- Depending on the language this could be a semi-colon, dashes, apostrophes, etc.
- Block traffic from particular IP addresses or domains, and see if email-based alerts can be sent when traffic comes from these sources
- Review the firewall settings to determine if SQL Injection attacks can be prevented
Question 29.
What do Database Encryption and Decryption mean?
Answer:
Database encryption is the process of converting data within a database from plain text format into meaningless ciphertext by means of a suitable algorithm.
Database decryption is converting the meaningless ciphertext into the original information using keys generated by the encryption algorithms.
Question 30.
What is a Symmetric-key in the Data encryption process?
Answer:
In the Symmetric cryptography system, the sender and the receiver of a message share a single, common key that is used to encrypt and decrypt the message. This is relatively easy to implement, and both the sender and the receiver can encrypt or decrypt the messages.
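The same shared-key idea can be sketched outside SQL Server in a few lines of Java using the standard javax.crypto API (the class name is mine; ECB mode is used only to keep the demo short, and production code should prefer an authenticated mode such as AES/GCM):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SymmetricKeyDemo {
    public static void main(String[] args) throws Exception {
        // One key, shared by sender and receiver
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        byte[] plain = "sensitive row data".getBytes(StandardCharsets.UTF_8);

        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] encrypted = cipher.doFinal(plain);     // sender encrypts

        cipher.init(Cipher.DECRYPT_MODE, key);
        byte[] decrypted = cipher.doFinal(encrypted); // receiver decrypts with the same key

        System.out.println(Arrays.equals(plain, decrypted)); // prints true
    }
}
```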
Question 31.
What key provides the strongest encryption in SQL Server DBA?
Answer:
Use AES (256-bit).
- If we choose a longer key, the encryption is stronger. However, there is a larger performance penalty for longer keys. DES is a relatively old and weaker algorithm than AES.
- AES: Advanced Encryption Standard
- DES: Data Encryption Standard
Question 32.
How to find the 2nd Highest salary in the table?
Answer:
SELECT DISTINCT salary FROM emp e1
WHERE 2 = (SELECT COUNT(DISTINCT salary) FROM emp e2 WHERE e2.salary >= e1.salary);
Or
(DENSE_RANK method)
SELECT a.name, a.salary
FROM (SELECT name, salary, DENSE_RANK() OVER (ORDER BY salary DESC) AS rk
      FROM emp) AS a
WHERE a.rk = 2;
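The "distinct salaries, ranked from the top" logic is easy to sanity-check outside the database. A small Java sketch of the same idea (class and method names are mine):

```java
import java.util.Comparator;
import java.util.OptionalInt;
import java.util.stream.IntStream;

public class SecondHighest {
    // Mirrors the SQL: take distinct salaries, sort descending, pick the 2nd
    static OptionalInt secondHighest(int[] salaries) {
        int[] distinctDesc = IntStream.of(salaries)
                .distinct()
                .boxed()
                .sorted(Comparator.reverseOrder())
                .mapToInt(Integer::intValue)
                .toArray();
        return distinctDesc.length >= 2 ? OptionalInt.of(distinctDesc[1])
                                        : OptionalInt.empty();
    }

    public static void main(String[] args) {
        // 300 is the highest, 200 the second highest (duplicates ignored)
        System.out.println(secondHighest(new int[]{100, 300, 300, 200}));
    }
}
```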
Question 33.
How to delete all duplicate records in SQL Server?
Answer:
DELETE FROM emp WHERE id IN (SELECT id FROM emp GROUP BY id HAVING COUNT(*) > 1);
Question 34.
How to update Male = Female & Female = Male in SQL Server;
Answer:
UPDATE emp SET gender = CASE gender WHEN 'male' THEN 'female' WHEN 'female' THEN 'male' ELSE gender END;
Question 35.
How do you find the last record in the table?
Answer:
SELECT * FROM emp WHERE id = (SELECT MAX(id) FROM emp);
Question 36.
How do you find the first 5 records from the table?
Answer:
SELECT * FROM emp WHERE ROWNUM <= 5;
(ROWNUM is Oracle syntax; in SQL Server use SELECT TOP 5 * FROM emp;)
Question 37.
How do you find the last 5 records from the table?
Answer:
SELECT * FROM (SELECT * FROM emp ORDER BY id DESC) WHERE ROWNUM <= 5;
Question 38.
How do you find the 3 highest salaries in the table?
Answer:
SELECT DISTINCT salary FROM emp e1 WHERE 3 >= (SELECT COUNT(DISTINCT salary) FROM emp e2 WHERE e1.salary <= e2.salary) ORDER BY e1.salary DESC;
Question 39.
How to create a clone table in SQL Server?
Answer:
CREATE TABLE ABC AS SELECT * FROM emp WHERE 1 = 2;
(In SQL Server: SELECT * INTO ABC FROM emp WHERE 1 = 2;)
Question 40.
How to delete all duplicate records from the table?
Answer:
DELETE FROM emp WHERE id IN (SELECT id FROM emp GROUP BY id HAVING COUNT(*) > 1);
Question 41.
How do you find the maximum salary of each department?
Answer:
SELECT Deptid, MAX(Salary) FROM emp GROUP BY Deptid;
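The per-department GROUP BY can be verified in SQLite with made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (deptid INT, salary INT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(1, 500), (1, 900), (2, 700), (2, 400)])

# One row per department, carrying that department's maximum salary.
rows = conn.execute(
    "SELECT deptid, MAX(salary) FROM emp GROUP BY deptid ORDER BY deptid"
).fetchall()
print(rows)  # [(1, 900), (2, 700)]
```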
Question 42.
How do I fetch only common records between 2 tables?
Answer:
SELECT * FROM emp INTERSECT SELECT * FROM emp1;
Question 43.
Find the monthly salary from the table?
Answer:
SELECT emp_name, salary/12 AS monthly_salary FROM emp;
Question 44.
Write the benefits of a Stored Procedure, with an example?
Answer:
The benefits are as follows:
- Reduces the amount of information sent to the database server.
- Compilation is required only once, so the stored procedure avoids repeated compilation.
- Code reuse: multiple clients can use the same code.
- Helps enhance system security, since we can grant users permission to execute the stored procedure rather than access the underlying tables.
- A good home for business logic.
- Saves coding time.
Example:
/* This store procedure is used to insert a value into the table “Mapa” */
CREATE PROCEDURE record (@Name varchar(20), @Age int, @Sex char(1)) AS BEGIN INSERT INTO Mapa (Name, Age, Sex) VALUES (@Name, @Age, @Sex) END
Question 45.
Write Example of Index with Syntax?
Answer:
Syntax of Index:
CREATE INDEX index_name ON table_name (column1, column2, ...);
Example, where the table name is Emp:
CREATE INDEX SalaryIndex ON Emp (Salary, Age);
Question 46.
Write Cluster Index Example?
Answer:
Example Table Name is EmpRecords:
CREATE TABLE [EmpRecord] ( [EmpKey] [int] NOT NULL PRIMARY KEY, [Name] [varchar](50) NOT NULL, [Email] [nvarchar](50) NULL, [Profession] [nvarchar](100) NULL, [YearlyIncome] [money] NULL )
The PRIMARY KEY creates a clustered index on EmpKey by default. Details can be checked using: EXECUTE sp_help EmpRecord
Question 47.
Write a Syntax of Constraint in SQL Server with Example?
Answer:
Constraint syntax: "column name" datatype(size) <constraint definition>
Example:
- Unique constraint: "column name" datatype(size) UNIQUE
- Not null constraint: "column name" datatype(size) NOT NULL
- Check constraint: "column name" datatype(size) CHECK (<logical expression>)
- Default constraint: "column name" datatype(size) DEFAULT <value>
Question 48.
Using date functions, write a query that adds 4 months to the current date?
Answer:
SELECT ADD_MONTHS(SYSDATE, 4) "Next Month" FROM dual;
(ADD_MONTHS/SYSDATE are Oracle syntax; in SQL Server use SELECT DATEADD(MONTH, 4, GETDATE());)
Output: Next Month: 01-Aug-19
Question 49.
Calculate the last day of the month in SQL Server?
Answer:
SELECT SYSDATE, LAST_DAY(SYSDATE) "NewDay" FROM dual;
(LAST_DAY is Oracle syntax; in SQL Server use SELECT EOMONTH(GETDATE());)
Output: SYSDATE: 01-Jul-19 & NewDay: 31-Jul-19.
Question 50.
Calculate the months between 01-May-19 and 30-Oct-19 in SQL Server?
Answer:
SELECT MONTHS_BETWEEN('30-OCT-19', '01-MAY-19') "Total Month" FROM dual;
(MONTHS_BETWEEN is Oracle syntax and returns a fractional value; in SQL Server use DATEDIFF(MONTH, '2019-05-01', '2019-10-30').)
Output: Total Month: 6 (approximately)
Question 52.
How to calculate Next day in SQL Server in Date function?
Answer:
SELECT NEXT_DAY('14-Oct-19', 'MONDAY') "Next Week" FROM dual;
Output: Next Week: 21-Oct-19
Question 53.
Write the syntax of a time zone function?
Answer:
NEW_TIME(date, zone1, zone2)
Example:
SELECT EID, EMPJoiningDate,
EMPJoiningDate AT TIME ZONE 'Pacific Standard Time' AS EMPJoiningDateTimeZonePST
FROM EMP.EMPEnterHeader;
Question 54.
What are used Rollup Operators?
Answer:
The ROLLUP operator is used to calculate aggregates and super-aggregates in the GROUP BY clause. The ROLLUP option adds extra rows that represent subtotals, commonly referred to as super-aggregate rows, along with the grand total row. ROLLUP (M1, M2, M3) creates only four grouping sets, assuming the hierarchy M1 > M2 > M3, as follows:
Syntax: SELECT M1, M2, M3, aggregate_function(M4) FROM table_name GROUP BY ROLLUP (M1, M2, M3);
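To see exactly which four grouping sets ROLLUP(M1, M2, M3) produces — (M1, M2, M3), (M1, M2), (M1), and the grand total — the aggregation can be simulated in plain Python over made-up rows:

```python
from collections import defaultdict

# Rows of (M1, M2, M3, measure) -- hypothetical data for illustration.
rows = [("A", "x", 1, 10), ("A", "x", 2, 20), ("A", "y", 1, 5), ("B", "x", 1, 7)]

# ROLLUP aggregates over each prefix of the column list, plus the grand total.
for depth in (3, 2, 1, 0):
    totals = defaultdict(int)
    for r in rows:
        totals[r[:depth]] += r[3]     # group key = first `depth` columns
    for key, total in sorted(totals.items()):
        print(key, total)
```

The final iteration (depth 0) groups everything under the empty key, which is the grand-total row ROLLUP appends.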
Question 55.
What is Sub Query Explain with Example?
Answer:
A subquery is a query nested inside another SQL statement: a query inside select, insert, update, and delete statements. A subquery can be used anywhere an expression is allowed, including in WHERE and HAVING clauses. The DISTINCT keyword and the COMPUTE and INTO clauses cannot be used with subqueries that include GROUP BY.
Example:
SELECT * FROM SalDetail
WHERE sal_add IN (SELECT dept_add FROM dept WHERE dept_name = 'finance');
Question 56.
Write Query of Not Exists in SQL Server?
Answer:
SELECT p.StudentID, p.ProductName FROM Student p WHERE NOT EXISTS (SELECT * FROM [Student Details] od WHERE p.StudentID = od.StudentID)
Question 57.
Write update and delete triggers in SQL Server?
Answer:
Update after-trigger example (assumes single-row updates):
CREATE TRIGGER trg_abc_update ON [dbo].[mytable]
AFTER UPDATE AS
BEGIN
Declare @id int;
Declare @name varchar(20);
Declare @Audit_Note varchar(60);
SELECT @id = j.Id FROM Inserted j;
SELECT @name = j.Name FROM Inserted j;
SET @Audit_Note = 'Updated Record - After Update Trigger';
INSERT INTO MPGbackup (id, Name, Audit_Note, Audit_Time) VALUES (@id, @name, @Audit_Note, GETDATE());
PRINT 'Trigger Updated Successfully'
END
-- mytable is the audited table, MPGbackup is the backup table
Delete after-trigger example:
CREATE TRIGGER trg_abc_delete ON [dbo].[mytable]
AFTER DELETE AS
BEGIN
Declare @id int;
Declare @name varchar(20);
Declare @Audit_Note varchar(60);
SELECT @id = j.Id FROM Deleted j;
SELECT @name = j.Name FROM Deleted j;
SET @Audit_Note = 'Record Deleted - After Delete Trigger';
INSERT INTO MPGbackup (id, Name, Audit_Note, Audit_Time) VALUES (@id, @name, @Audit_Note, GETDATE());
PRINT 'Trigger Deleted Successfully'
END
-- mytable is the audited table, MPGbackup is the backup table
Question 58.
Write Case Statement in SQL Server with Example?
Answer:Syntax:
CASE
WHEN condition1 THEN result1
WHEN condition2 THEN result2
WHEN condition3 THEN result3
WHEN condition4 THEN result4
WHEN conditionN THEN resultN
ELSE result
END;
Example:
SELECT EmpID, EmpAge,
CASE WHEN EmpAge > 30 THEN 'The age is greater than 30'
WHEN EmpAge = 30 THEN 'The age is 30'
ELSE 'The age is under 30'
END AS EmpText FROM EmployeeDetails;
Give an example of a function in SQL Server?
Answer:
Example 1: User Define a function
CREATE FUNCTION dbo.Stock1 (@ItemID int)
RETURNS int AS
-- Returns the store inventory level for the product.
BEGIN
DECLARE @ret int;
SELECT @ret = SUM(r.Quantity)
FROM Production.ProductInventory r WHERE r.ItemID = @ItemID AND r.LocationID = '6';
IF (@ret IS NULL)
SET @ret = 0;
RETURN @ret;
END;
Example 2: Using a scalar function
SELECT order_id, SUM(sales.udfNetSale(quantity, list_price, discount))
net_amount FROM sales.order_items
GROUP BY order_id
ORDER BY net_amount DESC;
Question 59.
Find 3rd Highest salary using Rank Method?
Answer:
SELECT * FROM (SELECT DENSE_RANK() OVER (ORDER BY salary DESC)
AS Rnk, E.* FROM Employee E) t WHERE t.Rnk = 3;
Question 60.
How to get a Distinct Record without using distinct keywords?
Answer:
SELECT * FROM Student a WHERE ROWID = (SELECT MAX(ROWID) FROM
Student b WHERE a.Student_ID = b.Student_ID);
Question 61.
How to remove duplicate rows from the table?
Answer:
SELECT EID FROM Employee a WHERE ROWID <>
(SELECT MAX(ROWID) FROM Employee b WHERE a.EID = b.EID);
Delete the duplicate rows:
DELETE FROM Employee a WHERE ROWID <>
(SELECT MAX(ROWID) FROM Employee b WHERE a.EID = b.EID);
Question 62.
How to fetch duplicate records from the table?
Answer:
SELECT EmpId, Name, Salary, COUNT(*)
FROM EmpSalary
GROUP BY EmpId, Name, Salary
HAVING COUNT(*) > 1;
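The GROUP BY / HAVING duplicate check runs unchanged in SQLite; the sample rows below are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EmpSalary (EmpId INT, Name TEXT, Salary INT)")
conn.executemany("INSERT INTO EmpSalary VALUES (?, ?, ?)",
                 [(1, "Ann", 100), (1, "Ann", 100), (2, "Bob", 200)])

# Rows whose full (EmpId, Name, Salary) combination appears more than once.
dupes = conn.execute("""
    SELECT EmpId, Name, Salary, COUNT(*) FROM EmpSalary
    GROUP BY EmpId, Name, Salary HAVING COUNT(*) > 1
""").fetchall()
print(dupes)  # [(1, 'Ann', 100, 2)]
```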
Question 63.
Write a SQL query to fetch top n records?
Answer:
Select Top N * from Manager Order By Salary DESC;
Question 64.
What is Cursor, Explain?
Answer:
SQL Server queries normally operate on a complete set, such as a SELECT statement filtered by a WHERE condition. Sometimes, however, you may want to process a data set on a row-by-row basis. This is where cursors come into play.
Type of cursor in SQL Server:
- Forward Only
- Static
- Keyset
- Dynamic
Syntax:
DECLARE cursor_name CURSOR [ LOCAL | GLOBAL ] [ FORWARD_ONLY | SCROLL ]
[ STATIC | KEYSET | DYNAMIC | FAST_FORWARD ]
[ READ_ONLY | SCROLL_LOCKS | OPTIMISTIC ]
[ TYPE_WARNING ] FOR select_statement
FOR UPDATE [ col1, col2, ... coln ]
-- define the columns that need to be updated
Question 65.
What is Collection in SQL Server?
Answer:
According to Microsoft's definition: "A collection is a list of objects that have been constructed from the same object class and that share the same parent object. The collection object always contains the name of the object type with the Collection suffix. For example, to access the columns in a specified table, use the ColumnCollection object type."
Question 66.
Explain the locks in SQL Server and type of locks?
Answer:
The Lock is used for security purposes and to prevent data from being corrupted by the database. The locks are used to implement concurrency control when multiple users access the Database to manipulate its data at the same time.
Lock Mode as given below:
• Shared(S) – The Shared Mode is used for reading operations that do not change or update data, such as a SELECT statement.
• Exclusive(X) – The Exclusive Mode used for data- modification operations, such as INSERT, UPDATE, or DELETE. Ensures that multiple updates cannot be made to the same resource at the same time.
• Intent – Intent locks are used to establish a lock hierarchy. The types of intent locks are: intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX).
• Bulk Update (BU) – used when bulk-copying data into a table with the TABLOCK hint.
Question 67.
Write a Query of Bulk(BCP) data uploaded in SQL Server?
Answer:
create table Customer (id int,
name varchar(60),
Age int);
Now we will load the data using the BULK INSERT statement; we have a CSV file with the bulk data:
BULK INSERT Customer FROM 'c:\sql\mycustomers.csv' WITH ( FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' )
Question 68.
Query Data Received from server Syntax with getting Method Example?
Answer:
<!DOCTYPE html> <html> <head> <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script> <script> $(document).ready(function() { $("button").click(function() { $.get("demo_test2.asp", function(data, status){ alert("Data: " + data + "\nStatus: " + status); }); }); }); </script> </head> <body> <button>Send an HTTP GET Method</button> </body> </html>
Question 69.
JQuery Method Post Method data in SQL Server Example?
Answer:
<!DOCTYPE html> <html> <head> <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script> <script> $(document).ready(function() { $("button").click(function() { $.post("demo_test1_post.asp", { name: "Tom", city: "Rome" }, function(data, status){ alert("Data: " + data + "\nStatus: " + status); }); }); }); </script> </head> <body> <button>Send an HTTP POST Method</button> </body> </html>
Question 70.
Write a JQuery Load Data Method?
Answer:
Just add the code in JQuery as per your txt file name.
<script> $(document).ready(function() { $("button").click(function() { $("#div1").load("demo_test.txt"); }); }); </script>
Question 71.
Write a Simple code for Ajax Data Request from Serve, Example?
Answer:
The Code as given below for changes in Script File.
<script> $(document).ready(function() { $("button").click(function() { $("#div1").load("demo_test1.txt"); }); }); </script>
Question 72.
Write List Method in LINQ in MVC?
Answer:
public static void Main( ) { // string collection IList<string> stringList = new List<string>() { "Dot NET", "MVC", "SQL", "AJAX", "JQuery" }; // LINQ query syntax var result = from s in stringList where s.Contains("SQL") select s; foreach (var str in result) { Console.WriteLine(str); } }
Question 73.
Write Group by Query in LINQ Example?
Answer:
string[ ] groupingQuery = { "Banana", "Apple", "Grape", "beans", "Mango" };
IEnumerable<IGrouping<char, string>> queryFoodGroups = from item in groupingQuery
group item by item[0];
Question 74.
Write an Inner Join Example in LINQ MVC?
Answer:
public static void Main( ) { // Color collections IList<string> strList1 = new List<string>() { "Red", "Yellow", "Green", "Blue" }; IList<string> strList2 = new List<string>() { "Red", "Yellow", "Orange", "Black" };
var innerJoinResult = strList1.Join( // outer sequence
strList2, // inner sequence str1 => str1, // outerKeySelector str2 => str2, // innerKeySelector (str1, str2) => str1); foreach (var str in innerJoinResult) { Console.WriteLine("{0} ", str); } }
Output: Red, Yellow
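The same inner-join-on-the-element-itself can be expressed in Python, preserving the outer sequence's order just as the LINQ Join does:

```python
list1 = ["Red", "Yellow", "Green", "Blue"]    # outer sequence
list2 = ["Red", "Yellow", "Orange", "Black"]  # inner sequence

# Keep only values from list1 that also occur in list2,
# using the element itself as the join key.
lookup = set(list2)
inner_join = [s for s in list1 if s in lookup]
print(inner_join)  # ['Red', 'Yellow']
```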
Question 75.
Write a Query to skip method in LINQ with the top 4 highest value skip?
Answer:
The LINQ example is given below:
int[ ] grades = { 58, 80, 68, 40, 100, 200, 150, 250 }; IEnumerable<int> lowerGrades = grades.OrderByDescending(g => g).Skip(4); Console.WriteLine("All grades except the top four are:"); foreach (int grade in lowerGrades) { Console.WriteLine(grade); } /*
Output:
All grades except the top four are:
80
68
58
40
*/
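The OrderByDescending + Skip(4) pipeline maps directly to a sort and a slice in Python:

```python
grades = [58, 80, 68, 40, 100, 200, 150, 250]

# Sort descending, then drop the four highest values (Skip(4)).
lower = sorted(grades, reverse=True)[4:]
print(lower)  # [80, 68, 58, 40]
```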
Question 76.
The difference between FirstOrDefault and First in LINQ?
Answer:
First() returns the first element of a sequence, or the first element matching a specified condition. It throws an InvalidOperationException when the sequence is empty or no element matches.
FirstOrDefault() behaves the same way, except that when there is no result it returns the type's default value (null for reference types) instead of throwing an exception.
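Python's next() mirrors this pair: without a default it raises (like First()), and with a default it returns that value instead (like FirstOrDefault()):

```python
numbers = [3, 8, 15]

# Like First(predicate): raises StopIteration if nothing matches.
first_even = next(n for n in numbers if n % 2 == 0)

# Like FirstOrDefault(predicate): returns the supplied default instead.
first_big = next((n for n in numbers if n > 100), None)

print(first_even)  # 8
print(first_big)   # None
```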
Question 77.
Write an Anonymous Function Example?
Answer:
The following is an example of an anonymous function:
delegate int func(int r, int s); static void Main(string[ ] args) { func f1 = delegate(int r, int s) { return r + s; }; Console.WriteLine(f1(1, 2)); }
Question 78.
How can we achieve Ajax method call using MVC?
Answer:
The Example as given below:
$.ajax({ type: 'POST', url: '@Url.Action("TestLogic", "MySearch")',
data: { listQuize }, success: function (response)
{ window.location.href = urldata; if (response != null)
{ alert("Search Working"); }
else { alert("Search Not Working"); } },
error: function (response) { } });
Question 79.
Give me Example of Lambda Expression in MVC?
Answer:
delegate int func(int m, int n); static void Main(string[] args) { func f1 = (m, n) => m + n; Console.WriteLine(f1(1, 2)); }
Question 80.
How to find out duplicate Array in the list?
Answer:
Example:
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace CodingAlgorithms { //Given an array of integers, find if the array contains any duplicates. //This function should return true if any value appears at least twice in the array, and it should return false if every element is distinct. public static class ArrayDuplicates1 { //Dictionary solution public static bool ContainsDuplicates(params int[ ] z) { Dictionary<int, int> d = new Dictionary<int, int>( ); foreach (int i in z) { if (d.ContainsKey(i)) return true; else d.Add(i, 1); } return false; } } }
Question 81.
Write a Program to Rotate array to the right given a pivot?
Answer:
Example:
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace CodingAlgorithm { public static class RotateArrayRight { //Rotate array to the right of a given pivot public static int[] Rotate(int[] y, int pivot) { if (pivot < 0 || y == null) throw new Exception("Invalid argument"); pivot = pivot % y.Length; //Rotate first half y = RotateSub(y, 0, pivot - 1); //Rotate second half y = RotateSub(y, pivot, y.Length - 1); //Rotate all y = RotateSub(y, 0, y.Length - 1); return y; } private static int[ ] RotateSub(int[ ] y, int start, int end) { while (start < end) { int temp = y[start]; y[start] = y[end]; y[end] = temp; start++; end--; } return y; } } }
Question 82.
How to remove duplicates from the sorted linked lists?
Answer:
Example:
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace LinkedListAlgorithm { partial class LinkedList1 { //Remove duplicates from a sorted linked list (Assumes nodes contain integers as data) //Source:- duplicates-from-sorted-list/ public void RemoveDuplicates( ) { Node lag = head; Node lead = head.Next; while (lead != null) { if ((int)lag.Data == (int)lead.Data) { lag.Next = lead.Next; lead = lag.Next; } else { lag = lag.Next; lead = lead.Next; } } } } }
Question 83.
What is next Number 5,10,19,32,49,70….?
Answer:
The solution is given below:
10 - 5 = 5
19 - 10 = 9
32 - 19 = 13
49 - 32 = 17
70 - 49 = 21
The differences increase by 4 each time, so the next difference is 25 and the next number is 70 + 25 = 95.
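The pattern in the differences can be confirmed with a few lines of Python:

```python
series = [5, 10, 19, 32, 49, 70]

# Consecutive differences: each gap grows by 4.
gaps = [b - a for a, b in zip(series, series[1:])]
print(gaps)       # [5, 9, 13, 17, 21]

next_term = series[-1] + gaps[-1] + 4
print(next_term)  # 95
```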
Question 84.
A trader buys tea for $1200 and sells it for $1500, per sack of tea he makes a profit of $50. How many sacks of tea did he have?
Answer:
Solution:
He had 6 sacks of tea:
$1500 - $1200 = $300
$300 / $50 = 6
Question 85.
We have a committee of 10 men whose average age is the same as it was 4 years ago, because an old member has been replaced by a young member. How much younger is the new member?
Answer:
Let T be the committee's total age 4 years ago. If nobody had been replaced, today's total would be T + 4 x 10 = T + 40.
Replacing the old member (age R today) with the young member (age Y today) gives a total of T + 40 - R + Y.
Since the average, and hence the total, is the same as 4 years ago: T + 40 - R + Y = T, so R - Y = 40.
The new committee member is therefore 40 years younger than the old member.
Question 86.
Find the next Series F21, S23, T25, T27, S29, M31,____?
Answer:
Solution: W02
F21: Friday the 21st.
S23: Sunday the 23rd.
T25: Tuesday the 25th.
T27: Thursday the 27th.
S29: Saturday the 29th.
M31: Monday the 31st.
Skipping every other day, the next term must be 'Wednesday the 2nd', i.e. 'W02'.
Question 87.
You are given a 6 by 6 grid and asked to start on the top left corner. Your aim is to get to the bottom right corner. You are only allowed to move either right or down, never diagonally or backward. How many ways are there to reach the destination?
Answer:
Solution: 252 Way
Think about how symmetric the problem is. Pick any cell: if there are n ways to reach the cell above it and m ways to reach the cell to its left, then there are n + m ways to reach that cell. Fill out the 6 by 6 grid with this rule and the bottom-right cell comes to 252, so there are 252 ways to reach the destination.
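The additive rule is exactly a dynamic-programming fill, which a few lines of Python can carry out:

```python
# ways(cell) = ways(cell above) + ways(cell to the left);
# the first row and first column each have exactly one path.
n = 6
ways = [[1] * n for _ in range(n)]
for i in range(1, n):
    for j in range(1, n):
        ways[i][j] = ways[i - 1][j] + ways[i][j - 1]
print(ways[-1][-1])  # 252
```

This matches the closed form C(10, 5) = 252: out of 10 moves, choose which 5 go right.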
Question 88.
You have 100 coins lying flat on a table, each with a head side and a tail side. 10 of them are heads up, 90 are tails up. You can’t feel, see or in any other way find out which side is up. Split the coins into two piles such that there is the same number of heads in each pile.
Answer:
Take any 10 coins to form one pile, leave the other 90 as the second pile, then flip over every coin in the 10-coin pile. If the small pile originally contained k heads, the large pile contains 10 - k heads; after flipping, the small pile also contains 10 - k heads, so the two piles match.
Question 89.
You have three boxes of fruit: one labeled Apples, one labeled Oranges, and one labeled Apples and Oranges, but every box is labeled incorrectly. By picking a single fruit from one box, how can you correctly relabel all three boxes?
Answer:
Let’s assume:
box 1 is labeled Oranges (O)
box 2 is labeled Apples (A)
box 3 is labeled Apples and Oranges (A+O)
and that ALL THREE BOXES ARE LABELLED
INCORRECTLY”
If you pick a fruit from box 1
and you pick an Orange:
- box 1's real label can only be O or A+O
- box 1's current label is O
- since ALL LABELS ARE INCORRECT, box 1's real label cannot be O
- box 1's new label must then be A+O by elimination
- since ALL LABELS ARE INCORRECT:
- box 2's label is changed to O
- box 3's label is changed to A
Question 90.
If you’re given a Mug/Gallon with a mix of fair and unfair coins, and you pull one out and flip it 3 times, and get the specific sequence heads tails, what are the chances that you pulled out a fair or an unfair coin?
Answer:
If you don't know how unfair the coins are, compare fair vs. unfair. A fair coin has probability 1/2 of heads on each flip, so the probability of the sequence HHT is (1/2)^3 = 1/8. An unfair coin with heads probability P gives P*P*(1-P) = P^2 - P^3 for HHT. Integrating over P from 0 to 1 gives 1/3 - 1/4 = 1/12. So with equal prior odds of fair and unfair, the probability the coin is fair given HHT is (1/8) / (1/8 + 1/12) = 3/5. If you knew the actual distribution of fair vs. unfair coins in the jar, you could refine this estimate using that distribution.
Question 91.
If you have 2 eggs, and you want to figure out what’s the highest floor from which you can drop the egg without breaking it, how would you do it? What’s the optimal solution?
Answer:
Drop the first egg from floors chosen so the worst-case total number of drops stays constant: floor 14 first, then (if it survives) 27, 39, 50, 60, 69, 77, 84, 90, 95, 99, 100. When the first egg breaks, step up one floor at a time with the second egg, starting just above the last safe floor. This guarantees an answer in at most 14 drops, which is optimal for 100 floors with 2 eggs.
Question 92.
The probability of a car passing a certain intersection in 20-minute windows is 0.9. What is the probability of a car passing the intersection in a 5-minute window? (Assuming a constant probability throughout)
Answer:
This is one of the basic probability questions. Treat the 20-minute window as four independent 5-minute windows, each with probability x:
0.9 = 1 - (1 - x)^4
(1 - x)^4 = 0.1
1 - x = 10^(-0.25)
x = 1 - 10^(-0.25) = 0.4377
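The arithmetic checks out in one line of Python:

```python
# Four independent 5-minute windows: 0.9 = 1 - (1 - x)**4
x = 1 - 0.1 ** 0.25
print(round(x, 4))  # 0.4377
```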
Question 93.
There are 3 light bulbs in a hidden room and 3 switches outside the room that correspond to them. How can you tell which switch controls which bulb if you may enter the room only once?
Answer:
Turn on switch 1 for a few minutes, then turn it off and turn on switch 2 before entering the room. The lit bulb belongs to switch 2, the warm but dark bulb to switch 1, and the cold dark bulb to switch 3.
Question 94.
How do you cut a cake into equal pieces with a limited number of straight cuts?
Answer:
Candidates often overlook the easier solution to this problem. Let's start with the easiest one: if you make one straight horizontal cut along the height of the cake, the resulting layers are of equal size. But this solution may not work so well on a cake with icing, so in that case rethink and use vertical cuts instead.
Question 96.
What is the output of the given expressions in JavaScript?
- alert('7' + 7 + 7)
- alert(7 + 7)
- alert(7 + 7 + '7')
- alert(7 + '7' + 7)
Answer:
In JavaScript, '7' is a string. Addition is evaluated left to right, and string + number = string, while number + number = number. Therefore:
- alert('7' + 7 + 7) shows 777
- alert(7 + 7) shows 14
- alert(7 + 7 + '7') shows 147
- alert(7 + '7' + 7) shows 777
Question 97.
Uses of the find() and closest() methods in jQuery?
Answer:
The find() method searches for descendant elements that match the specified selector. Syntax: $(selector).find(filter)
The closest() method travels up the DOM tree and returns the first ancestor (beginning with the element itself) that matches the selector. Syntax: $(selector).closest(filter)
Question 98.
How do you authenticate that data coming through an API is from the right source?
Answer:
We can use 3 basic approaches to verify that our data is coming from the right source:
• HTTP-based authentication: simple authentication checking the user ID and password of authorized users. We can also use OTP-based authentication.
• API-key-based authentication: we can verify authorized users by matching an API key. Most APIs require you to sign up for an API key in order to use the API. The API key is a long string that you usually include either in the request URL or in a request header.
• HMAC-based authentication: we can create an HMAC (hash-based message authentication code) signature to authenticate each request.
Question 99.
Write a LINQ Count Query in MVC?
Answer:
IEnumerable<string> items = new List<string> { "Soniya", "Lita", "Arohi" };
int count = items.Count();
// count: 3
Or
// Count with a condition:
IEnumerable<int> numbers = new List<int> { 10, 15, 18, 20, 30 };
int count2 = numbers.Count(x => x < 20); // count2: 3
int count3 = numbers.Where(x => x < 20).Count();
// count3: 3
Question 100.
Write Select Statement Example in LINQ?
Answer:
string[ ] daysArrayDotNet = { "Sunday", "Monday", "Tuesday" };
var days = from day in daysArrayDotNet select day;
// Example of a where condition in a select statement
var sundays = from day in daysArrayDotNet where day == "Sunday" select day;
Question 101.
Order By Example in LINQ Select Statement?
Answer:
var ordered = list.OrderByDescending(x => x.Delivery1.SubmissionTime);
Question 102.
Average Count Example in LINQ Statements?
Answer:
var list = new List<int> { 1, 3, 5, 7, 9 };
double result = list.Average(); // result: 5
Question 103.
Difference between Ref & Out Parameter?
Answer:
A ref parameter must already have a value before going into the function, while an out parameter has no required value before going into the function.
The ref keyword passes an argument to a method by reference; the out keyword also passes by reference, but the method is expected to assign the value.
A ref argument must be initialized before it is passed; an out argument does not need to be initialized before it is passed, but it must be assigned inside the method.
Example :
Out Parameter
int m;
Foo(out m);
Ref parameter
int n = 0;
Foo(ref n);
Scraping News Websites like CNN & NBC using Python
News websites contain a lot of data, and more is posted every day on the hottest topics around the world. They are a great source not only for news but also for topics like health, fashion, finance, tech, and gadgets. You can find new articles on almost any topic by scraping news websites.
In this tutorial we will scrape two news websites, CNN and NBC News. We will go to these two websites and scrape all the news articles related to COVID-19.
See the complete code below:
from bs4 import BeautifulSoup as soup
import requests
CNN:
from datetime import date today = date.today() d = today.strftime("%m-%d-%y") cnn_url="-{}-intl/index.html".format(d) html = requests.get(cnn_url) bsobj = soup(html.content,'lxml') for link in bsobj.findAll("h2"): print("Headline : {}".format(link.text))
Output:
for news in bsobj.findAll('article',{'class':'sc-jqCOkK sc-kfGgVZ hQCVkd'}): print(news.text.strip())
NBC News:
nbc_url='' r = requests.get('') b = soup(r.content,'lxml') for news in b.findAll('h2'): print(news.text)
Output:
links = [] for news in b.findAll('h2',{'class':'teaseCard__headline'}): links.append(news.a['href'])
for link in links:
    page = requests.get(link)
    bsobj = soup(page.content, 'lxml')
    for news in bsobj.findAll('div', {'class': 'article-body__section article-body__last-section'}):
        print(news.text.strip())
Output:
We can scrape news data not only from CNN and NBC but from other websites as well. If you need news content aggregation, our scraping services can serve your requirements.
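If you just want to see the headline-extraction step without a network call or BeautifulSoup, the same idea can be sketched with Python's standard library; the HTML string below is a made-up stand-in for a real page:

```python
from html.parser import HTMLParser

class HeadlineParser(HTMLParser):
    """Collect the text content of every <h2> element."""
    def __init__(self):
        super().__init__()
        self.in_h2 = False
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self.in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_h2 = False

    def handle_data(self, data):
        if self.in_h2 and data.strip():
            self.headlines.append(data.strip())

html = "<html><body><h2>Vaccine update</h2><p>...</p><h2>Market news</h2></body></html>"
parser = HeadlineParser()
parser.feed(html)
print(parser.headlines)  # ['Vaccine update', 'Market news']
```

In the article's code, bsobj.findAll("h2") performs this same traversal with much less ceremony.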
|
The Properties_impl class implements the Properties interface. More...
#include <properties_impl.h>
The Properties_impl class implements the Properties interface.
The key=value pairs are stored in a std::map. An instance can be created either by means of the default constructor, which creates an object with an empty map, or alternatively, it can be created by means of the static parse_properties function with a String_type argument. The string is supposed to contain a semicolon separated list of key=value pairs, where the characters '=' and ';' also may be part of key or value by escaping using the '\' as an escape character. The escape character itself must also be escaped if being part of key or value. All characters between '=' and ';' are considered part of key or value, whitespace is not ignored.
Escaping is removed during parsing so the strings in the map are not escaped. Escaping is only relevant in the context of raw strings that are to be parsed, and raw strings that are returned containing all key=value pairs.
Example (note \ due to escaping of C string literals): parse_properties("a=b;b = c") -> ("a", "b"), ("b ", " c") parse_properties("a\\==b;b=\\;c") -> ("a=", "b"), ("b", ";c")
get("a=") == "b" get("b") == ";c"
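The escape-aware parsing rules described above can be sketched outside of C++. The Python function below follows the documented semantics (split on unescaped ';', then on the first unescaped '=', removing escapes as it goes); it is an illustration, not MySQL's implementation:

```python
def parse_properties(raw: str) -> dict:
    # '\' escapes '=', ';' and itself; escapes are removed while parsing.
    props, key, current, escaped = {}, None, [], False
    for ch in raw + ";":                  # trailing ';' flushes the last pair
        if escaped:
            current.append(ch)
            escaped = False
        elif ch == "\\":
            escaped = True
        elif ch == "=" and key is None:   # first unescaped '=' ends the key
            key, current = "".join(current), []
        elif ch == ";":                   # unescaped ';' ends the pair
            if key is not None:
                props[key] = "".join(current)
            key, current = None, []
        else:
            current.append(ch)            # whitespace is NOT ignored
    return props

print(parse_properties("a=b;b = c"))      # {'a': 'b', 'b ': ' c'}
print(parse_properties("a\\==b;b=\\;c"))  # {'a=': 'b', 'b': ';c'}
```

Both outputs match the examples in the class description, including the preserved whitespace around "b " and " c".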
Additional key=value pairs may be added by means of the set function, which takes a string argument that is assumed to be unescaped.
Please also refer to the comments in the file properties.h where the interface is defined; the functions in the interface are commented there.
Remove all key=value pairs.
Implements dd::Properties.
Are there any key=value pairs?
Implements dd::Properties.
Check for the existence of a key=value pair given the key.
Implements dd::Properties.
Get the string value for a given key.
Return true if the operation fails, i.e., if the key does not exist or if the key is invalid. Assert that the key exists in debug builds.
Implements dd::Properties.
Insert key/value pairs from a different property object.
The set of valid keys is not copied, instead, the existing set in the destination object is used to ignore all invalid keys.
Implements dd::Properties.
Insert key/value pairs from a string.
Parse the string and add key/value pairs to this object. The existing set of valid keys in the destination object is used to ignore all invalid keys.
Implements dd::Properties.
Iterate over all entries in the private hash table.
For each key value pair, escape both key and value, and append the strings to the result. Use '=' to separate key and value, and use ';' to separate pairs.
Invalid keys are not included in the output. However, there should never be a situation where invalid keys are present, so we just assert that the keys are valid.
Implements dd::Properties.
Remove the key=value pair for the given key if it exists.
Otherwise, do nothing.
Implements dd::Properties.
Set the key/value.
If the key is invalid, a warning is written to the error log. Assert that the key exists in debug builds.
Implements dd::Properties.
Get the number of key=value pairs.
Implements dd::Properties.
Check if the submitted key is valid.
Implements dd::Properties.
|
JustLinux Forums
>
Community Help: Check the Help Files, then come here to ask!
>
Programming/Scripts
> More Qt Questions
PDA
Click to See Complete Forum and Search -->
:
More Qt Questions
Dun'kalis
02-04-2003, 11:34 PM
I've got some more Qt questions.
I got my text editor working (after scrapping it and starting over), and it works fine right now.
I have two questions.
First, the QSyntaxHighlighter (I want to implement syntax highlighting) docs are really vague, and they simply tell me to reimplement highlightParagraph. I'm not sure as to how I would go about doing this.
Another question: I want to implement some sort of configuration for this program. I know it wouldn't be hard to do, but I need to know how I would search for conditions in a QTextStream object, and then emit signals based on those.
These questions probably go hand in hand...
bwkaz
02-05-2003, 11:16 AM
The best thing I can suggest for syntax highlighting is to look through the Qt source for the current state of the QSyntaxHighlighter class' highlightParagraph function. That might tell you how to do the highlighting. It may be as "simple" as adding HTML tags to the paragraph, or it may be something completely different, I don't know. I could look into it, but it'd probably take me a couple of hours, and I'm lazy. :D
For the QTextStream, I assume you've looked at the class documentation at Trolltech? Other than that... you could look at how one of the KDE programs uses it. I'm not sure which ones do, but I'd guess most of them. Maybe just something in kdebase that handles persistence for everything? I'm not quite sure.
ariell
02-05-2003, 03:48 PM
Hi there,
I'm working with Qt for more than two years. If you could specify what excatly you want a t-stream to search for I think I can get you some hints.
Best,
ariell.
Dun'kalis
02-05-2003, 05:07 PM
Wow...highlightParagraph is pretty simple. Just like the rest of Qt.
As for the text streaming, I'd want to search for stuff like "tabWidth=4" or something, simple stuff. It would then assign the found value there to a variable that controls that sort of stuff. I could do the assignment, but I can't figure out how to find the string.
AFAIK, most KDE apps use XML for configuration files. I don't care much for XML.
ariell
02-05-2003, 08:13 PM
Hi again,
I hope I made it readable. My suggestion: Use an "invisible" editor to handle files/streams. As for config-files, that might be a cool approach.
/*
I think using an "adapter", such as QMultiLineEdit, is a very convenient way to handle text-streams.
Once you decide to go this way, it comes in handy to derive a new class from QMultiLineEdit that
will provide additional methods to handle file contents. It's less than 100 lines of code and makes your sources
more readable.
See Trolltech's html-doc to understand QMultiLineEdit. This class really provides everything you need to
deal with lots of text stored to a single string. "Redirecting" a text stream's content to an instance of
QMultiLineEdit "eases the pain" to hook up with your file.
Just a proposal.... Have fun.
*/
class myEditor: public QMultiLineEdit {
Q_OBJECT
public:
myEditor(QWidget * parent=0, const char * name=0); // just those args QMultiLineEdit is expecting
~myEditor();
public:
int searchKey(const char* key = 0); // checks a stream for existence of "key"
QString getKeyValue(int pos); // returns a key value from the file
};
// implementation
int myEditor::searchKey(const char* key) {
// searches current file for a certain key
// returns line number in which key was found, or (-1) if failed
if (numLines() < 1) return (-1);
int iLines = numLines(); /* cached to avoid frequent calls to "numLines()" when iterating */
// key holds what we are looking for
QString* sCurrent;
for (int iCurrentLine = 0; iCurrentLine < iLines; iCurrentLine++) {
// read this line; getString() returns a pointer into the editor's buffer
sCurrent = getString(iCurrentLine);
// if key is in the current line, find() will return its position,
// (-1) if not; on a hit, exit the function and return the line number
if (sCurrent->find(key, 0, false) > (-1)) return iCurrentLine;
}/*for*/
return (-1);
}
QString myEditor::getKeyValue(int pos) {
// reads a key-value from line pos in a text-based file and return as a QString
// if this method encounters problems it returns an empty string
QString sValue = "";
// move to line
// check for a valid range (lines are zero-based)
if (pos < 0 || pos >= numLines()) return sValue;
// get the line content; getString() returns a pointer, so no copy is needed
QString* sLine = getString(pos);
sValue = *sLine;
return sValue;
}
// example method
bool loadConfig(const char* filename) {
myEditor* ed = new myEditor(/*whatever you pass here*/);
// LOAD ed file
// construct a file-object
QFile f(filename);
if ( !f.open( IO_ReadOnly ) ) return false;
// clear editor buffer
ed->setAutoUpdate( false );
ed->clear();
// open a new stream
QTextStream t(&f);
while ( !t.eof() ) {
QString s = t.readLine();
ed->append(s);
}
// drop the file
f.close();
// re-configure editor
ed->setAutoUpdate( true );
ed->setEdited( false );
// ed file is now allocated in memory
// "invisible" editor object
bool ok = true; // as soon as there is something wrong , this value will be toggled to false
int iLineInConf = ed->searchKey("tabWidth"); // "iLineInConf" now has line number in ed file
// if ed->searchKey found the appropriate entry,
// the value of "iLineInConf" is greater than (-1)
QString s;
if (iLineInConf > (-1)) s = ed->getKeyValue(iLineInConf);
else ok = false;
/*
do whatever you want to do wtih "s",
split the string, search for "=" and take all that's right-handed
*/
return ok;
}
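Stripped of the Qt scaffolding, the logic of searchKey/getKeyValue/loadConfig above boils down to a few lines. Here is a sketch in plain Python (the function names are my own) mirroring the same find-the-key, read-the-value, split-on-"=" flow:

```python
def search_key(lines, key):
    """Return the index of the first line containing key (case-insensitive), or -1."""
    key = key.lower()
    for i, line in enumerate(lines):
        if key in line.lower():
            return i
    return -1

def get_key_value(lines, pos):
    """Return whatever is right of '=' on line pos, or '' on any problem."""
    if pos < 0 or pos >= len(lines):
        return ""
    _, sep, value = lines[pos].partition("=")
    return value.strip() if sep else ""

def load_config(text, key):
    """Mirror of loadConfig above: find a key in a config buffer, return its value."""
    lines = text.splitlines()
    return get_key_value(lines, search_key(lines, key))

print(load_config("autoIndent=true\ntabWidth=4\n", "tabWidth"))  # prints 4
```

The Qt version works the same way; passing false as the third argument of QString::find() gives the case-insensitive match.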
Dun'kalis
02-05-2003, 10:03 PM
/me bows.
Wow. That's a really good idea and implementation. I was thinking of using something like QString::find(), but this is really nice.
Much better than anything I could ever come up with.
Thank you!
Energon
02-06-2003, 02:13 AM
Just out of curiosity, is there an easy way to add line numbers to a QTextEdit? I've been looking for a way but haven't come across any simple leads like I thought there would be.
ariell
02-06-2003, 04:32 AM
You're very welcome, good to know it served the purpose...
As for QTextEdit I must confess to not knowing this class. Does it come with Qt 3? If so, I can't be of any assistance, I still use Qt 2.3.
best,
ariell.
bwkaz
02-06-2003, 10:56 AM
Yes, in Qt 3, there are QLineEdit and QTextEdit widgets, where in Qt 2 (I believe) there were QLineEdit and QMultiLineEdit widgets.
I don't know if it was more than just a change in class name, though.
Energon
02-06-2003, 12:40 PM
QMultiLineEdit is deprecated and inherits QTextEdit. So it's more than just a name change; the two interfaces are quite different.
ariell
02-06-2003, 12:44 PM
If it is, as you assume, just a change in class names, I understand QTextEdit to be a substitute for QMultiLineEdit. Then, however, there are several ways to add a line. The direct approach reads something like this:
QMultiLineEdit* ed = new QMultiLineEdit(...);
QString* s = new QString("some content");
ed->append(*s);
Consequently, they (Trolltech) should have called it QSingleLineEdit.
Anyway, thanks for your info. Qt is cool stuff, that is for sure.
Best,
ariell.
ariell
02-06-2003, 12:47 PM
append() returns void.
If you need to check whether it really worked or not, use:
ed->insert(ed->numLines()+1, *s);
or:
ed->insert(-1, *s);
since those methods return bool.
Best,
ariell.
Dun'kalis
02-06-2003, 06:09 PM
In either the QFile or QTextStream documentation, there is a snippet of code that does just that.
Dun'kalis
02-06-2003, 11:12 PM
OK, I understand how to implement the highlightParagraph() function, but I'm getting stuck on something.
From the QTextEdit, I want to get the contents of the current paragraph (in code, most likely the current line), and then the rest of the QSyntaxHighlighter takes over.
From what I can tell, there are no functions in the QTextEdit to do that.
I also tried taking the text() property and assigning it to a QString, but it told me that the object edit (that's what the instance of QTextEdit is called) is private in this context. It is declared as public in the main window class.
Dun'kalis
02-09-2003, 12:56 AM
Some more mucking around has shown me that I don't actually need to get the paragraph myself. Very nice.
Anyway, I'm now attempting to install the syntax highlighter into my text editor, and it continues to tell me 'edit is private in this context'
Why? The QTextEdit is declared as public...
Another thing: I was looking around the Qt Designer code for their syntax highlighter, and, oddly enough...they didn't use QSyntaxHighlighter.
bwkaz
02-09-2003, 09:25 AM
"edit is private"
Hmm... Could you post the full error? And the lines of code that it's referencing (plus 5-10 lines before and after)?
Energon
02-09-2003, 01:41 PM
QT Designer doesn't use QSyntaxHighlighter because it's a new class for Qt 3.1 and QT Designer was written way back in Qt 2.
ariell
02-09-2003, 05:46 PM
First of all, I don't use Qt 3, so details are "hidden" to me.
Anyway, referencing an instance of QTextEdit as public is one thing. Using its properties or methods (that might be private from "the class' view") is quite a different thing.
Could you post some more code?
Best,
ariell.
Dun'kalis
02-09-2003, 08:15 PM
/opt/qt/include/qsyntaxhighlighter.h: In constructor `HSyntax::HSyntax()':
/opt/qt/include/qsyntaxhighlighter.h:71: `QTextEdit*QSyntaxHighlighter::edit'
is private
syntax.cpp:16: within this context
make: *** [syntax.o] Error 1
Here's the constructor:
HSyntax::HSyntax()
:QSyntaxHighlighter(edit)
{}
This is supposed to install the syntax highlighter into QTextEdit *edit.
The relevant parts of the main window class (derived from QMainWindow)
Q_OBJECT
public:
HMainWindow();
~HMainWindow();
QTextEdit *edit;
The relevant part of the main window implementation:
edit = new QTextEdit(this, "Editor");
When I was trying to debug it, I tried using QTextEdit *edit =..., but that didn't help.
I subclassed QSyntaxHighlighter, but it doesn't have a Q_OBJECT macro in it, since it doesn't need one. No new signals or slots.
ariell
02-09-2003, 08:39 PM
I subclassed QSyntaxHighlighter, but it doesn't have a Q_OBJECT macro in it, since it doesn't need one. No new signals or slots.
...
Use Q_OBJECT. There's a good chance signals and slots inside your base class won't work otherwise. It actually depends on a lot of circumstances, but as a rule of thumb: in Qt, use this macro (in the end, everything IS derived from QObject).
Anyway, it's late at night here in Europe. I'll review your code tomorrow and get you an answer.
Best,
ariell.
Dun'kalis
02-11-2003, 11:27 PM
Tried the Q_OBJECT, and it gives me the EXACT same error.
As an aside, is it always a good idea to use Q_OBJECT, even when the class has no signals/slots?
bwkaz
02-11-2003, 11:46 PM
Post, from /opt/qt/include/qsyntaxhighlighter.h, lines 65-75 or so. Also, lines 10-20 or so from syntax.cpp would be helpful.
It might be that the QSyntaxHighlighter class defines its own "edit" variable that you can't change, and for SOME strange reason, the compiler is seeing that one first. Maybe if you change the variable name?
ariell
02-12-2003, 08:22 AM
Hi there again,
I downloaded Qt 3/X11 to review that problem.
I think the solution is quite simple.
As I understand, "your base class" QSyntaxhighlighter (QSHL) has a private data-member QTextEdit* edit.
QSHL's constructor takes an argument to know where the class is "installed on", namely an instance of QTextEdit. Ownership of class is being transferred to QTextEdit as soon as QSHL is constructed.
Thus: If you declare an instance of QTextEdit within your main window you can't call it edit, for there is no way for a compiler to "recognize" WHICH edit you're currently referring to.
Consider this: edit (declared in main window) can not be passed to QSHL as long as QSHL already HAS (a private data member) called edit, too.
Hope that sounds plausible. Look up header files, qsyntaxhighlighter.h in particular.
As for Q_OBJECT, as a rule of thumb: use it. Q_OBJECT's purpose is (indeed) to provide signals/slots. If you don't need signals/slots, you don't need Q_OBJECT. But what if you later derive one more class that does want to use signals/slots?
That moc-generated code is pretty fast; there's almost no "overhead" at all.
Hope that helped,
best,
ariell.
Dun'kalis
02-12-2003, 05:07 PM
Wow, thanks. I changed every instance of edit to editor, and it doesn't give me that error anymore.
However, I get the following:
syntax.cpp: In constructor `HSyntax::HSyntax()':
syntax.cpp:16: `editor' undeclared (first use this function)
syntax.cpp:16: (Each undeclared identifier is reported only once for each
function it appears in.)
The other parts compile properly.
This is the first complex application I've ever tried to write in Qt, so please forgive my ignorance. I'm still learning it...
If I add a declaration for the QTextEdit in the public section of the syntax class, it compiles, but (for obvious reasons) it doesn't highlight syntax.
ariell
02-12-2003, 05:54 PM
you're very welcome,
that's what we do in the unix-world: help each other...
As for the most recent error, I need to know more code.
Best,
ariell.
http://justlinux.com/forum/archive/index.php/t-88598.html
The question is this: can we separate this context from the monad, and reconstitute it later? If we know the monadic types involved, then for some monads we can. Consider the State monad: it's essentially a function from an existing state to a pair of some new state and a value. It's fairly easy then to extract its state and later use it to "resume" that monad:
import Control.Applicative
import Control.Monad.Trans.State

main = do
    let f = do { modify (+1); show <$> get } :: StateT Int IO String
    (x,y)   <- runStateT f 0
    print $ "x = " ++ show x    -- x = "1"
    (x',y') <- runStateT f y
    print $ "x = " ++ show x'   -- x = "2"
In this way, we interleave between StateT Int IO and IO, by completing the StateT invocation, obtaining its state as a value, and starting a new StateT block from the prior state. We've effectively resumed the earlier StateT block.
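The save-and-resume trick is not Haskell-specific. As a loose illustration (my own construction, not from the article), here is the same idea in Python, where a "stateful computation" is modeled as a function from a state to a (value, new state) pair, just like what runStateT returns:

```python
def f(state):
    # analogous to do { modify (+1); show <$> get }
    state = state + 1
    return str(state), state  # (value, new state), like runStateT's result pair

x, y = f(0)    # x == "1"; y captures the final state
x2, y2 = f(y)  # resuming from the captured state y: x2 == "2"
print(x, x2)
```

Resuming the computation is nothing more than feeding the captured state back in as the starting state of the next run.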
Nesting calls to the base monad
But what if we didn’t, or couldn’t, exit the
StateT block to run our
IO computation? In that case we’d need to use
liftIO to enter
IO and make a nested call to
runStateT inside that
IO block. Further, we’d want to restore any changes made to the inner
StateT within the outer
StateT, after returning from the
IO action:
import Control.Applicative
import Control.Monad.Trans.State
import Control.Monad.IO.Class

main = do
    let f = do { modify (+1); show <$> get } :: StateT Int IO String
    flip runStateT 0 $ do
        x <- f
        y <- get
        y' <- liftIO $ do
            print $ "x = " ++ show x        -- x = "1"
            (x',y') <- runStateT f y
            print $ "x = " ++ show x'       -- x = "2"
            return y'
        put y'
A generic solution
This works fine for StateT, but how can we write it so that it works for any monad transformer over IO? We'd need a function that might look like this:
foo :: MonadIO m => m String -> m String
foo f = do
    x <- f
    y <- getTheState
    y' <- liftIO $ do
        print $ "x = " ++ show x
        (x',y') <- runTheMonad f y
        print $ "x = " ++ show x'
        return y'
    putTheState y'
But this is impossible, since we only know that m is a Monad. Even with a MonadState constraint, we would not know about a function like runTheMonad. This indicates we need a type class with at least three capabilities: getting the current monad transformer's state, executing a new transformer within the base monad, and restoring the enclosing transformer's state upon returning from the base monad. This is exactly what MonadBaseControl, from the monad-control package, provides:
class MonadBase b m => MonadBaseControl b m | m -> b where
    data StM m :: * -> *
    liftBaseWith :: (RunInBase m b -> b a) -> m a
    restoreM :: StM m a -> m a
Taking this definition apart piece by piece:
The MonadBase constraint exists so that MonadBaseControl can be used over multiple base monads: IO, ST, STM, etc.
liftBaseWith combines three things from our last example into one: it gets the current state from the monad transformer, wraps it in an StM type, lifts the given action into the base monad, and provides that action with a function which can be used to resume the enclosing monad within the base monad. When such a function exits, it returns a new StM value.
restoreM takes the encapsulated transformer state as an StM value, and applies it to the parent monad transformer so that any changes which may have occurred within the "inner" transformer are propagated out. (This also has the effect that later, repeated calls to restoreM can "reset" the transformer state back to what it was previously.)
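As a rough mental model of those three capabilities, here is a loose Python analogue (entirely my own sketch, not part of monad-control or any library), again modeling a stateful computation as a function from a state to a (value, new state) pair:

```python
def lift_base_with(action, state):
    """Run `action` in the 'base' layer (plain Python), handing it run_in_base.

    run_in_base executes a stateful computation from the current state and
    returns (value, captured_state); that snapshot plays the role of StM.
    """
    def run_in_base(computation):
        return computation(state)  # -> (value, new_state), an "StM" snapshot
    return action(run_in_base), state  # outer state is untouched so far

def restore_m(stm, _outer_state):
    """Fold a captured (value, state) snapshot back into the enclosing layer."""
    value, new_state = stm
    return value, new_state

def f(state):
    # the stateful computation: increment, return the shown value
    state += 1
    return str(state), state

def foo(state):
    # run f down in the base layer, then restore its effects outward
    stm, outer = lift_base_with(lambda run: run(f), state)
    return restore_m(stm, outer)

value, final_state = foo(0)
print(value, final_state)
```

The point of the model: the base-level code never sees the transformer, only an opaque snapshot it can hand back, and restoring is a separate, explicit step.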
Using monad-control and liftBaseWith
With that said, here's the same example from above, but now generic for any transformer supporting MonadBaseControl IO:
{-# LANGUAGE FlexibleContexts #-}
import Control.Applicative
import Control.Monad.Trans.State
import Control.Monad.Trans.Control

foo :: MonadBaseControl IO m => m String -> m String
foo f = do
    x <- f
    y' <- liftBaseWith $ \runInIO -> do
        print $ "x = " ++ show x        -- x = "1"
        x' <- runInIO f
        -- print $ "x = " ++ show x'
        return x'
    restoreM y'

main = do
    let f = do { modify (+1); show <$> get } :: StateT Int IO String
    (x',y') <- flip runStateT 0 $ foo f
    print $ "x = " ++ show x'           -- x = "2"
One notable difference in this example is that the second print becomes impossible, since the "monadic value" returned from the inner call to f must be restored and executed within the outer monad. That is, runInIO f is executed in IO, but its result is an StM m String rather than an IO String, since the computation carries monadic context from the inner transformer. Converting this to a plain IO computation would require calling a function like runStateT, which we cannot do without knowing which transformer is being used.
As a convenience, since calling restoreM after exiting liftBaseWith is so common, you can use control instead of restoreM =<< liftBaseWith:
y' <- restoreM =<< liftBaseWith (\runInIO -> runInIO f)
-- becomes...
y' <- control $ \runInIO -> runInIO f
Another common pattern is when you don’t need to restore the inner transformer’s state to the outer transformer, you just want to pass it down as an argument to some function in the base monad:
foo :: MonadBaseControl IO m => m String -> m String
foo f = do
    x <- f
    liftBaseDiscard forkIO $ f
In this example, the first call to f affects the state of m, while the inner call to f, though it inherits the state of m in the new thread, does not restore its effects to the parent monad transformer when it returns.
Now that we have this machinery, we can use it to make any function in IO directly usable from any supporting transformer. Take catch, for example:
catch :: Exception e => IO a -> (e -> IO a) -> IO a
What we’d like is a function that works for any
MonadBaseControl IO m, rather than just
IO. With the
control function this is easy:
catch :: (MonadBaseControl IO m, Exception e) => m a -> (e -> m a) -> m a
catch f h = control $ \runInIO -> catch (runInIO f) (runInIO . h)
You can find many functions which are generalized like this in the lifted-base and lifted-async packages.
http://newartisans.com/2013/09/using-monad-control-with-monad-transformers/
|
What is OpenCV?
Computer vision is the discipline concerned with how images and videos are stored and manipulated, and with retrieving data from them. It is a part of artificial intelligence, and plays a major role in autonomous cars, object detection, robotics, and object tracking.
OpenCV
OpenCV is an open-source library mainly used for computer vision, image processing, and machine learning. It performs well on real-time data; with its help, we can process images and videos so that an implemented algorithm can identify objects such as cars, traffic signals, and number plates, as well as faces, or even human handwriting. Together with other data analysis libraries, OpenCV can process images and videos however one desires.
More information about OpenCV can be found in its official documentation.
The library we are going to use along with opencv-python is MediaPipe.
What is Mediapipe?
MediaPipe is a framework mainly used for building pipelines over multimodal audio, video, or any time-series data. With the help of the MediaPipe framework, an impressive ML pipeline can be built, for instance with inference models like TensorFlow and TFLite, as well as media processing functions.
Cutting edge ML models using Mediapipe
- Face Detection
- Multi-hand Tracking
- Hair Segmentation
- Object Detection and Tracking
- Objectron: 3D Object Detection and Tracking
- AutoFlip: Automatic video cropping pipeline
- Pose Estimation
Pose Estimation
Human pose estimation from video or a real-time feed plays a crucial role in various fields such as full-body gesture control, quantifying physical exercise, and sign language recognition. For example, it can be used as the base model for fitness, yoga, and dance applications. It finds its major part in augmented reality.
MediaPipe Pose is a framework for high-fidelity body pose tracking, which takes RGB video frames as input and infers 33 3D landmarks on the whole human body. Current state-of-the-art methods rely primarily on powerful desktop environments for inference, whereas this method achieves very good results in real time.
Pose Landmark Model
Now Let’s Get Started
First, install all the necessary libraries.
– pip install opencv-python
– pip install mediapipe
Download any kind of video, for example dancing or running; we will make use of it for our pose estimation. I am using the video provided in the link below.
To check whether MediaPipe is working, we will implement a small script using the video downloaded above.
import cv2
import mediapipe as mp
import time

mpPose = mp.solutions.pose
pose = mpPose.Pose()
mpDraw = mp.solutions.drawing_utils
# cap = cv2.VideoCapture(0)
cap = cv2.VideoCapture('a.mp4')
pTime = 0
while True:
    success, img = cap.read()
    imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    results = pose.process(imgRGB)
    print(results.pose_landmarks)
    if results.pose_landmarks:
        mpDraw.draw_landmarks(img, results.pose_landmarks, mpPose.POSE_CONNECTIONS)
        for id, lm in enumerate(results.pose_landmarks.landmark):
            h, w, c = img.shape
            print(id, lm)
            cx, cy = int(lm.x * w), int(lm.y * h)
            cv2.circle(img, (cx, cy), 5, (255, 0, 0), cv2.FILLED)
    # the fps counter described below (cTime/pTime/fps), shown in the corner
    cTime = time.time()
    fps = 1 / (cTime - pTime)
    pTime = cTime
    cv2.putText(img, str(int(fps)), (70, 50), cv2.FONT_HERSHEY_PLAIN, 3, (255, 0, 0), 3)
    cv2.imshow("Image", img)
    cv2.waitKey(1)
In the code above, you can see that using OpenCV we read the frames from the video named 'a.mp4', convert each frame from BGR to RGB, and then use MediaPipe to draw the landmarks on the processed frames; finally, we get a video output with landmarks, as shown below. The variables 'cTime', 'pTime', and 'fps' are used to calculate the number of frames read per second. You can see the frame count in the left corner of the output below.
The output in the terminal section is the landmarks detected by mediapipe.
Pose Landmarks
You can see a list of pose landmarks in the terminal section of the above image. Each landmark consists of the following:
· x and y: These landmark coordinates are normalized to [0.0, 1.0] by the image width and height, respectively.
· z: This represents the landmark depth, with the depth at the midpoint of the hips as the origin; the smaller the value of z, the closer the landmark is to the camera. The magnitude of z uses roughly the same scale as x.
· visibility: A value in [0.0, 1.0] indicating the probability of the landmark being visible in the image.
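The normalized x and y values are turned into pixel positions by the int(lm.x * w), int(lm.y * h) expressions in the code above; a small standalone helper (my own naming) makes the mapping explicit:

```python
def to_pixel(x_norm, y_norm, width, height):
    """Map normalized [0.0, 1.0] landmark coordinates to integer pixel coordinates."""
    return int(x_norm * width), int(y_norm * height)

# a landmark at (0.5, 0.25) in a 640x480 frame lands at pixel (320, 120)
print(to_pixel(0.5, 0.25, 640, 480))  # (320, 120)
```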
MediaPipe is up and running.
Let us create a module to estimate the pose, so that it can be reused in any further project related to pose estimation. You can also use it in real time with the help of your webcam.
Create a Python file with the name 'PoseModule.py'.
import cv2
import mediapipe as mp
import time

class PoseDetector:
    def __init__(self, mode=False, upBody=False, smooth=True,
                 detectionCon=0.5, trackCon=0.5):
        self.mode = mode
        self.upBody = upBody
        self.smooth = smooth
        self.detectionCon = detectionCon
        self.trackCon = trackCon
        self.mpDraw = mp.solutions.drawing_utils
        self.mpPose = mp.solutions.pose
        self.pose = self.mpPose.Pose(self.mode, self.upBody, self.smooth,
                                     self.detectionCon, self.trackCon)

    def findPose(self, img, draw=True):
        imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        self.results = self.pose.process(imgRGB)
        if self.results.pose_landmarks:
            if draw:
                self.mpDraw.draw_landmarks(img, self.results.pose_landmarks,
                                           self.mpPose.POSE_CONNECTIONS)
        return img

    def getPosition(self, img, draw=True):
        lmList = []
        if self.results.pose_landmarks:
            for id, lm in enumerate(self.results.pose_landmarks.landmark):
                h, w, c = img.shape
                cx, cy = int(lm.x * w), int(lm.y * h)
                lmList.append([id, cx, cy])
                if draw:
                    cv2.circle(img, (cx, cy), 5, (255, 0, 0), cv2.FILLED)
        return lmList

def main():
    cap = cv2.VideoCapture('videos/a.mp4')  # use VideoCapture(0) for a webcam
    pTime = 0
    detector = PoseDetector()
    while True:
        success, img = cap.read()
        img = detector.findPose(img)
        lmList = detector.getPosition(img)
        print(lmList)
        cTime = time.time()
        fps = 1 / (cTime - pTime)
        pTime = cTime
        cv2.putText(img, str(int(fps)), (70, 50), cv2.FONT_HERSHEY_PLAIN, 3,
                    (255, 0, 0), 3)
        cv2.imshow("Image", img)
        cv2.waitKey(1)

if __name__ == "__main__":
    main()
This is the code that you need for pose estimation. In the above there is a class named 'PoseDetector', inside which we define two methods, 'findPose' and 'getPosition'. The 'findPose' method takes the input frame and, with the help of MediaPipe's drawing utility mpDraw, draws the landmarks across the body; the 'getPosition' method collects the coordinates of the detected landmarks, and can also highlight any coordinate point.
In the main function, we will have a test run, you can take a live feed from the webcam by changing the first line in the main function to “cap = cv2.VideoCapture(0)”.
Since we created a class in the above file, we will make use of it in another file.
Now the final phase
import cv2
import time
import PoseModule as pm

cap = cv2.VideoCapture(0)
pTime = 0
detector = pm.PoseDetector()
while True:
    success, img = cap.read()
    img = detector.findPose(img)
    lmList = detector.getPosition(img)
    print(lmList)
    cTime = time.time()
    fps = 1 / (cTime - pTime)
    pTime = cTime
    cv2.putText(img, str(int(fps)), (70, 50), cv2.FONT_HERSHEY_PLAIN, 3,
                (255, 0, 0), 3)
    cv2.imshow("Image", img)
    cv2.waitKey(1)
Here the code will just invoke the above-created module and run the whole algorithm on the input video or the live feed from the webcam. Here is the output of the test video.
The complete code is available in the below GitHub link.
Link to the Youtube video:
If you have any queries please make use of the issue option in my GitHub repository.
Thank you
https://www.analyticsvidhya.com/blog/2021/05/pose-estimation-using-opencv/
|
Fabric: a System Administrator's Best Friend
A Brief Word on Application Deployment
Fabric is also used by development teams to deploy new code to production. It is used in much the same way system administrators use it (copy files, run a few commands, and so on), just in a more specific manner. Because Fabric is so automation-friendly, it's easy to incorporate into a continuous integration cycle and even to fully automate your deployment process.
Command-Line Arguments
-a, --no_agent — sets env.no_agent to True, forcing your SSH layer not to talk to the SSH agent when trying to unlock private key files.
-A, --forward-agent — sets env.forward_agent to True, enabling agent forwarding.
--abort-on-prompts — sets env.abort_on_prompts to True, forcing Fabric to abort whenever it would prompt for input.
-c RCFILE, --config=RCFILE — sets env.rcfile to the given file path, which Fabric will try to load on startup and use to update environment variables.
-d COMMAND, --display=COMMAND — prints the entire docstring for the given task, if there is one. It does not currently print out the task's function signature, so descriptive docstrings are a good idea. (They're always a good idea, of course, just more so here.)
--connection-attempts=M, -n M — sets the number of times to attempt connections. Sets env.connection_attempts.
-D, --disable-known-hosts — sets env.disable_known_hosts to True, preventing Fabric from loading the user's SSH known_hosts file.
-f FABFILE, --fabfile=FABFILE — the fabfile name pattern to search for (defaults to fabfile.py), or alternately an explicit file path to load as the fabfile (for example, /path/to/my/fabfile.py).
-F LIST_FORMAT, --list-format=LIST_FORMAT — allows control over the output format of --list. short is equivalent to --shortlist; normal is the same as simply omitting this option entirely (the default), and nested prints out a nested namespace tree.
-g HOST, --gateway=HOST — sets env.gateway to the given host string.
-h, --help — displays a standard help message with all possible options and a brief overview of what they do, then exits.
--hide=LEVELS — a comma-separated list of output levels to hide by default.
-H HOSTS, --hosts=HOSTS — sets env.hosts to the given comma-delimited list of host strings.
-x HOSTS, --exclude-hosts=HOSTS — sets env.exclude_hosts to the given comma-delimited list of host strings to keep out of the final host list.
-i KEY_FILENAME — when set to a file path, will load the given file as an SSH identity file (usually a private key). This option may be repeated multiple times. Sets (or appends to) env.key_filename.
-I, --initial-password-prompt — forces a password prompt up-front, at session start; useful when being prompted mid-session, or setting env.password in your fabfile, is undesirable.
-k — sets env.no_keys to True, forcing the SSH layer not to look for SSH private key files in one's home directory.
--keepalive=KEEPALIVE — sets env.keepalive to the given (integer) value, specifying an SSH keepalive interval.
--linewise — forces output to be buffered line by line instead of byte by byte. Often useful or required for parallel execution.
-l, --list — imports a fabfile as normal, but then prints a list of all discovered tasks and exits. Will also print the first line of each task's docstring, if it has one, next to it (truncating if necessary).
-p PASSWORD, --password=PASSWORD — sets env.password to the given string; it then will be used as the default password when making SSH connections or calling the sudo program.
-P, --parallel — sets env.parallel to True, causing tasks to run in parallel.
--no-pty — sets env.always_use_pty to False, causing all run/sudo calls to behave as if one had specified pty=False.
-r, --reject-unknown-hosts — sets env.reject_unknown_hosts to True, causing Fabric to abort when connecting to hosts not found in the user's SSH known_hosts file.
-R ROLES, --roles=ROLES — sets env.roles to the given comma-separated list of role names.
--set KEY=VALUE,... — allows you to set default values for arbitrary Fabric env vars. Values set this way have a low precedence. They will not override more specific env vars that also are specified on the command line.
-s SHELL, --shell=SHELL — sets env.shell to the given string, overriding the default shell wrapper used to execute remote commands.
--shortlist — similar to --list, but without any embellishment: just task names separated by newlines with no indentation or docstrings.
--show=LEVELS — a comma-separated list of output levels to be added to those that are shown by default.
--ssh-config-path — sets env.ssh_config_path.
--skip-bad-hosts — sets env.skip_bad_hosts, causing Fabric to skip unavailable hosts.
--timeout=N, -t N — sets the connection timeout in seconds. Sets env.timeout.
-u USER, --user=USER — sets env.user to the given string; it then will be used as the default user name when making SSH connections.
-V, --version — displays Fabric's version number, then exits.
-w, --warn-only — sets env.warn_only to True, causing Fabric to continue execution even when commands encounter error conditions.
-z, --pool-size — sets env.pool_size, which specifies how many processes to run concurrently during parallel execution.
http://www.linuxjournal.com/content/fabric-system-administrators-best-friend?page=0,2&quicktabs_1=0
|
Type: Posts; User: codelogman; Keyword(s):
I have a lot of respect for the work on Slackware in the tutorials section. This simple man has understood very well how the...
in agreement to nihil, the AV "industry" need food, need pay takes and employees.
where they i supose the "instant" worms appear?
the same AV enterprises free those codes on the net for make a...
OK OK for not sound "like a spam" in my post i decide publish the code and a little explanation here:
Hello this a little MD5 functionally code in c++
[md5.c]
i still thinking about publish a tutorial called " Introduction to unsecure a Wireless Network".
the only secure pass protection can...
EDITED!!
i make a serious mistake trying to answer to this post, some users take that for "spam" my own web site, so, i found a "edit" button and erase the content.
thanks for the...
Tonto, You can translate the DKEY2600 code published on my magazine and learn about that, i'm not able to "rewrite" the article here but you can read these article on:
any...
I has been published the MD5 code for fr33dom web page:
you can see that here:
and if you have question related about MD5 process or SHA polinterpolation...
Hi, i try to understand how i perform my sybase sql server machine server-client side.
My idea is: develop for myself a database server and client structure program for understand how it works.
...
Not only Java: Perl, PHP, C++, C# ("C for girls"), and all web-based object languages are capable of doing that.
This is a basic encoder using c for girls (C#)
public class...
http ftp certification code, like HTRegz says: "punch that into google and I get this from a registry dump".
The cert is a essential registry for www transactions, one of my virus work attack...
Windows sp provide only a "on demand" background process, i think personally, sp2 is a crashing bad malfunction code.
i give you a simple firewall if you decide boring sp2
for antionline...
good!
this schema is the most near from penetration web reality, so, exist another methods or another software? the schema is good.
greetz
Hi, I read everything that was asked in your SQL question.
You can try running directly against "stored procedures", and you won't have to worry about SQL injection for a long time.
A stored procedure like this:
Keep-alive? And HTTPVersion '1.0'? I tried with 1.1 and the results are good.
good code and explanation ;)
AzRaEL
An example of negotiation:
This is the communication method for WWW browsers.
Regards
Hi,
I'm with you, ic2, and this person needs to read more about reverse engineering. VB is not the best way, and his routines for encryption and decryption are quite poor.
I recommend my past post and ic2's post...
Why do you ask another person? To improve performance? Or to encode this source?
you see:
I don't really understand where the "cracking" section is.
Hi, my humble opinion on these texts:
First:
Think in terms of server structure:
a) server rack (as you say)
Yes, good question. I wrote a creation kit for polymorphic variants (assembler, C++, etc.), but the problem with that is:
a virus writer is a solitary, underground person; actually, I don't. I...
I'm an old virus writer, and some of my viruses don't run any more, but I don't understand malicious virus writers who release a virus only to harass people or to appear in the papers.
Basically this area...
Pk, yes, the idea is processing data, like image rendering, database request processing, or reconstructing data in a corrupt DB file.
Cracking passwords is a better idea, but what is the best...
Like data processing, or not?
Cracking passwords is an obscure task only for expert coders; also, is data storage a better idea for making and implementing a cluster scheme, or not?
Regards
Hi
This error is a consequence of:
First, can you regenerate the solution with a new web.config file? If yes, try it...
How can Microsoft destroy the internet?
That is the first and essential question for this article...
Viruses, worms, monopoly, Billi, crashing ICANN, and some unimportant matters have come up to ask...
Hi iron,
I have a young friend, a devian designer; this guy is an excellent graphic designer. Contact me and I will introduce him.
my msn contact is w32mydoom@hotmail.com
He designed the logo for...
Source: http://www.antionline.com/search.php?s=5fb3082500086c6b1e6ec6d12c8bf0b5&searchid=2324927
Revision: 2960
Author: joergl
Date: 2007-12-04 02:15:50 -0800 (Tue, 04 Dec 2007)
Log Message:
-----------
do not start readpipe in constructor and catch interupted system calls (thanks to Laurence Tratt and Eric Faurot)
Modified Paths:
--------------
trunk/pyx/CHANGES
trunk/pyx/pyx/text.py
Modified: trunk/pyx/CHANGES
===================================================================
--- trunk/pyx/CHANGES 2007-11-26 12:52:18 UTC (rev 2959)
+++ trunk/pyx/CHANGES 2007-12-04 10:15:50 UTC (rev 2960)
@@ -112,6 +112,9 @@
- config module:
- psfontmaps and pdffontmaps config options (TODO: documentation)
- config option for format warnings
+ - text module:
+ - fix two bugs in the read pipe of the texrunner (thanks to
+ Laurence Tratt and Eric Faurot)
0.10 (2007/10/03):
Modified: trunk/pyx/pyx/text.py
===================================================================
--- trunk/pyx/pyx/text.py 2007-11-26 12:52:18 UTC (rev 2959)
+++ trunk/pyx/pyx/text.py 2007-12-04 10:15:50 UTC (rev 2960)
@@ -21,7 +21,7 @@
# along with PyX; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
-import glob, os, threading, Queue, re, tempfile, atexit, time, warnings
+import errno, glob, os, threading, Queue, re, tempfile, atexit, time, warnings
import config, siteconfig, unit, box, canvas, trafo, version, attr, style
from pyx.dvi import dvifile
import bbox as bboxmodule
@@ -621,11 +621,18 @@
self.gotqueue = gotqueue
self.quitevent = quitevent
self.expect = None
- self.start()
def run(self):
"""thread routine"""
- read = self.pipe.readline() # read, what comes in
+ def _read():
+ # catch interupted system call errors while reading
+ while 1:
+ try:
+ return self.pipe.readline()
+ except IOError, e:
+ if e.errno != errno.EINTR:
+ raise
+ read = _read() # read, what comes in
try:
self.expect = self.expectqueue.get_nowait() # read, what should be expected
except Queue.Empty:
@@ -637,7 +644,7 @@
self.gotqueue.put(read) # report, whats read
if self.expect is not None and read.find(self.expect) != -1:
self.gotevent.set() # raise the got event, when the output was expected (XXX: within a single line)
- read = self.pipe.readline() # read again
+ read = _read() # read again
try:
self.expect = self.expectqueue.get_nowait()
except Queue.Empty:
@@ -897,6 +904,7 @@
self.texruns = 1
oldpreamblemode = self.preamblemode
self.preamblemode = 1
+ self.readoutput.start()
self.execute("\\scrollmode\n\\raiseerror%\n" # switch to and check scrollmode
"\\def\\PyX{P\\kern-.3em\\lower.5ex\\hbox{Y}\\kern-.18em X}%\n" # just the PyX Logo
"\\gdef\\PyXBoxHAlign{0}%\n" # global PyXBoxHAlign (0.0-1.0) for the horizontal alignment, default to 0
Source: https://sourceforge.net/p/pyx/mailman/pyx-checkins/?viewmonth=200712&viewday=4&style=flat
After having a good command over classes and objects, you must have understood how useful the concept of classes and objects can be.
In the previous chapter, we printed the area of a rectangle by making an object of the Rectangle class. If we have to print the area of two rectangles having different dimensions, we can create two objects of the Rectangle class, each representing a rectangle. Before moving to the concept of array of objects, let's first see an example of printing the area of two rectangles.
#include <iostream>
using namespace std;

class Rectangle {
public:
    int length;
    int breadth;
    Rectangle(int l, int b) {
        length = l;
        breadth = b;
    }
    int printArea() {
        return length * breadth;
    }
};

int main() {
    Rectangle rt1(7, 4);
    Rectangle rt2(4, 5);
    cout << "Area of first rectangle " << rt1.printArea() << endl;
    cout << "Area of second rectangle " << rt2.printArea() << endl;
    return 0;
}
Area of first rectangle 28
Area of second rectangle 20
We created two objects rt1 and rt2 of class Rectangle representing the two rectangles. Rectangle rt1( 7, 4 ); - created the object 'rt1' and assigned its length and breadth as 7 and 4 respectively. Similarly, Rectangle rt2( 4, 5 ); created the object 'rt2' and assigned its length and breadth as 4 and 5 respectively.
Now, suppose we have 50 students in a class and we have to input the name and marks of all the 50 students. Then creating 50 different objects and then inputting the name and marks of all those 50 students is not a good option. In that case, we will create an array of objects as we do in case of other data-types.
Let's see an example of taking the input of name and marks of 5 students by creating an array of the objects of students.
#include <iostream>
#include <string>
using namespace std;

class Student {
    string name;
    int marks;
public:
    void getName() {
        getline(cin, name);
    }
    void getMarks() {
        cin >> marks;
        cin.ignore(); // discard the trailing newline so the next getline() works
    }
    void displayInfo() {
        cout << "Name : " << name << endl;
        cout << "Marks : " << marks << endl;
    }
};

int main() {
    Student st[5];
    for (int i = 0; i < 5; i++) {
        cout << "Student " << i + 1 << endl;
        cout << "Enter name" << endl;
        st[i].getName();
        cout << "Enter marks" << endl;
        st[i].getMarks();
    }
    for (int i = 0; i < 5; i++) {
        cout << "Student " << i + 1 << endl;
        st[i].displayInfo();
    }
    return 0;
}
Student 1
Enter name
Jack
Enter marks
54
Student 2
Enter name
Marx
Enter marks
45
Student 3
Enter name
Julie
Enter marks
47
Student 4
Enter name
Peter
Enter marks
23
Student 5
Enter name
Donald
Enter marks
87
Student 1
Name : Jack
Marks : 54
Student 2
Name : Marx
Marks : 45
Student 3
Name : Julie
Marks : 47
Student 4
Name : Peter
Marks : 23
Student 5
Name : Donald
Marks : 87
Now let’s go through this code.
Student st[5]; - We created an array of 5 objects of the Student class where each object represents a student having a name and marks.
The first for loop is for taking the input of name and marks of the students. getName() and getMarks() are the functions to take the input of name and marks respectively.
The second for loop is to print the name and marks of all the 5 students. For that, we called the displayInfo() function for each student.
Hopefully, you are now ready to create arrays of objects.
An ounce of practice is worth more than tons of preaching.
-Mahatma Gandhi
Source: https://www.codesdope.com/cpp-array-of-objects/
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.0.1) Gecko/20021003
Description of problem:
After applying the March 19 glibc security update to Red Hat 8.0, attempts to
statically link programs using ncurses (5.2) fail with __ctype_b undefined.
Version-Release number of selected component (if applicable):
glibc.2.3.2-4.80.i686.rpm
How reproducible:
Always
Steps to Reproduce:
1.Create file t.c:
#include <curses.h>
main () { initscr(); }
2. Compile with:
gcc -static t.c -lncurses
3.
Actual Results:
/usr/lib/gcc-lib/i386-redhat-linux/3.2/../../../libncurses.a(lib_tparm.o): In
function `parse_format':
lib_tparm.o(.text+0x1112): undefined reference to `__ctype_b'
/usr/lib/gcc-lib/i386-redhat-linux/3.2/../../../libncurses.a(lib_tputs.o): In
function `tputs':
lib_tputs.o(.text+0x213): undefined reference to `__ctype_b'
/usr/lib/gcc-lib/i386-redhat-linux/3.2/../../../libncurses.a(lib_mvcur.o): In
function `_nc_msec_cost':
lib_mvcur.o(.text+0xa5): undefined reference to `__ctype_b'
collect2: ld returned 1 exit status
Expected Results: No link errors.
Additional info:
This problem did not exist prior to uploading the March 19 glibc patch.
This probably causes programs using ncurses that are dynamically linked to crash
(if so, I suppose that would raise the severity level).
__ctype_b is gone, replaced by *__ctype_b_loc(), because of the new locale model.
In libc.so, __ctype_b is exported as a compatibility symbol, but for the 8.0
errata it should be re-added to libc.a.
This bug can be seen in many other scenarios, too, for example when trying to
rebuild rpm from source libbz2 bails out:
/usr/lib/libbz2.a(bzlib.o): In function `bzopen_or_bzdopen':
bzlib.o(.text+0x27ce): undefined reference to `__ctype_b'
Using nm on /usr/lib/*.a gives ~700 instances of undefined __ctype_b. Probably
any more complex build process involving static libraries will hit this bug
(e.g. packaging rpms).
What is the right way to readd these symbols (is it only __ctype_b or maybe
more?) to libc.a?
Can you please try ?
*** Bug 87468 has been marked as a duplicate of this bug. ***
Fix verified in glibc-2.3.2-4.80.
Hello,
I get __ctype_b missing on Red Hat 9.0. Is there already a fix for RH 9.0?
Winfrid
On RHL9 this is not a bug. There is no binary compatibility for .a libraries
or .o files between distribution major releases (glibc only ensures
binary compatibility for programs and shared libraries using symbol versioning,
for static libraries this cannot work).
Hello Jakub,
Personally I do not have any problem with your answer that the new version is
not compatible with glibc 2.2.x, but currently the incompatibility causes
link trouble for Intel's Fortran 95 under RedHat 9.0. Intel has been informed
about the trouble, but they are very slow in fixing it.
Winfrid
The change was done in glibc on purpose. If __ctype_b etc. are used, programs
or libraries using uselocale(3) might be doing the wrong thing.
Can we have a fix for RH 9 please?
We definitely need a fix for RH9, otherwise nobody will upgrade
I have to agree with those people asking for a fix in RH9. Our lab has been
using RedHat for 5 years now, but this bug is causing people to actually
DOWNGRADE new machines to RedHat 8 or 7.3. Come December, when 8/7.x is EOL, we may
have to move to another distribution if this bug is not resolved by RedHat
and/or f95 compiler vendors (NAG and Intel).
Our high-end compilers (Lahey, Absoft, etc.) no longer work, nor does Tecplot.
These programs are crucial to our work.
I see mention of this bug on numerous lists, and some posts seem to say it is
fixed in the latest glibc updates (I am running RH 9, glibc-2.3.2-27.9), but I
have no such luck.
I am also running RH9, with similar problems. I noticed that there is a bug
numbered 91290 dealing with the same issue, except dedicated to RH9. It is
currently high priority, and I hope it gets fixed soon so I can compile my
Fortran once again.
I find these reports of problems interesting because when I upgraded to
RH9 and recompiled, the problems disappeared. While still using RH8.0, I
added the following "bug workaround" file and it appeared to solve the problem
(although it did not get a lengthy test since shortly thereafter I upgraded to
RH9 and no longer needed it). The bug workaround that I used on RH8 is provided
below in case it helps those of you having problems with RH9.
/*
* ctype_b.c
*
* This file has been added to compensate for a bug in
* version 2.3.2 of the glibc library for RH8.
*/
#define attribute_hidden
#define CTYPE_EXTERN_INLINE /* Define real functions for accessors. */
#include <ctype.h>
/*
#include <locale/localeinfo.h>
__libc_tsd_define (, CTYPE_B)
__libc_tsd_define (, CTYPE_TOLOWER)
__libc_tsd_define (, CTYPE_TOUPPER)
#include <shlib-compat.h>
*/
#define b(t,x,o) (((const t *) _nl_C_LC_CTYPE_##x) + o)
extern const char _nl_C_LC_CTYPE_class[] attribute_hidden;
extern const char _nl_C_LC_CTYPE_toupper[] attribute_hidden;
extern const char _nl_C_LC_CTYPE_tolower[] attribute_hidden;
const unsigned short int *__ctype_b = b (unsigned short int, class, 128);
const __int32_t *__ctype_tolower = b (__int32_t, tolower, 128);
const __int32_t *__ctype_toupper = b (__int32_t, toupper, 128);
Recompiled what? glibc? Recompiling a library is not a valid (IMO) answer to
fix a bug in a _binary_ distribution, if I wanted to do that I'd use a source
distribution.
However, if I have the time I will try it out (I'm not saying it's a _bad_
solution, just not one that RH should accept ;)), but I've found
patches/workarounds for most of the programs we use here, still looking for the
rest.
My problem is with Eiffel Software's Eiffel compiler. I can't link any programs
at all.
So I'm having to await delivery of a new machine, so I can install RedHat 8.0 on it.
Sorry about the lack of clarity in #18. I recompiled the program I had written
that I could not compile with the -static option on RH8 (a linux tax preparation
program). The program compiled fine on RH9. I did not recompile glibc.
Ahh, okay. Did the '-static' option still work? One of the compilers we use
(Lahey F95) had a patch on their site for glibc 2.3.x support and after applying
this it would compile, but not statically. The other compilers still have problems.
Is this going to be fixed on RH 9, so I can upgrade?
On RHL9 and later it is going to stay the way it is, ie. forcing all incompatible
objects to be recompiled.
The reason is e.g. libstdc++, which extensively uses uselocale these days,
and __ctype_b using objects don't work together with uselocale (that's why
__ctype_b_loc etc. were introduced).
Binary compatibility is maintained only for shared libraries and binaries
across releases, so what you're trying to accomplish was never guaranteed
to work between any releases.
According to the official
RedHat 9 box sold in the stores contains glibc 2.3.2-5, which seems to provide
the __ctype_b symbol and some other symbols that have been dropped for 2.3.2-11.
Using this environment based upon glibc 2.3.2-5 and the appropriate development
libraries, would it be possible to build a static or dynamic library that cannot
be linked on a RedHat 9 system with glibc-2.3.2-11 or later installed?
Regards
Harri
The following article seems to solve my problem (rm/cobol compiler)
MGL
Does anyone know where I can download glibc 2.3.2-5?
I have it now.
Now, does anyone know how to downgrade from glibc-2.3.2-95.27 to
glibc-2.3.2-5?
Another way to solve my problem would be to have multiple versions of
glibc installed. I have tried using rpm with the --relocate and --prefix
options, but the package I have is not relocatable.
Dan
When I'm compiling my program I get this: (.text+0x8daf): undefined reference to `__ctype_b'
collect2: ld returned 1 exit status
I'm using Fedora release 7. Please, what can I do in this case? Which version of glibc must I download?
Thank you. Best regards!
(In reply to comment #29)
> when I'm compliling my program I have this : (.text+0x8daf): undefined
> reference to `__ctype_b'
> collect2: ld returned 1 exit status
Check the dates and the bug history, this bug was fixed 6 years ago!
> I'm using Fedora release 7. Please, what can I do in this case? Which version of
> glibc must I download?
Upgrade to a newer Fedora, currently supported releases are Fedora 9 and Fedora 10, I recommend the latter.
If the problem still exists open a new bug. It is unlikely that this 6 year old bug still persists unnoticed. And even if then the data in this report is very outdated to be in context with a current glibc release, so please file a fresh bug after verifying that the bug exists on Fedora 10. Don't forget to attach a minimal test program, the command you used to invoke the build and the screen for reproduction of the error.
As explained earlier, the above error with contemporary glibc just means that you are trying to link an object file (or *.a library) compiled against a 6+ years old glibc, which is not supported. You either have to compile/link everything against the glibc the binary only files were compiled against, or you need the source.
Source: https://bugzilla.redhat.com/show_bug.cgi?id=86465
I'm running Processing 2.2.1 along with SimpleOpenNI 1.96 (based on instructions at). I modified the DepthImage example sketch and added file writing (code below).
I'm trying to output the depth data from Kinect to .txt files in a folder of my choice. In the Processing IDE, the sketch runs fine; the depth is written to the files correctly.
However, I would like this functionality in an .exe file so that another program can run this .exe and read from those files at run time. The export functionality in the Processing IDE runs without errors, and I get both win32 and win64 application folders. But if I execute the .exe present in either of them, nothing happens; I cannot see any errors anywhere. Even if I select "Present Mode" while exporting, only a gray screen appears and I cannot see any files being written to the path I supply. Toggling the various selections (Present Mode / Export Java) in the Export options window hasn't helped.
I also tried to use savePath() because someone else (with a non-Kinect application) was able to write data into folders using it, but it did not work for me.
Following is my sketch that works correctly in the IDE:
import SimpleOpenNI.*;

SimpleOpenNI context;
int[] dmap;
int dsize;
float[] dmapf;
PrintWriter output;
int fitr;
String path;

void setup() {
  size(640*2, 480);
  fitr = 1;
  context = new SimpleOpenNI(this);
  if (context.isInit() == false) {
    println("Can't init SimpleOpenNI, maybe the camera is not connected!");
    exit();
    return;
  }
  // mirror is by default enabled
  context.setMirror(false);
  // enable depthMap generation
  context.enableDepth();
  // enable ir generation
  context.enableRGB();
  path = savePath("E:\\SYMMBOT\\DepthReading");
}

void draw() {
  // update the cam
  context.update();
  dmap = context.depthMap();
  //dmapf = dmap.array();
  output = createWriter(path + "\\depth" + fitr + ".txt");
  fitr++;
  int itr = 0;
  for (int i = 0; i < 480; i++) {
    for (int j = 0; j < 640; j++) {
      output.print(dmap[itr] + " ");
      itr++;
    }
    output.println();
  }
  output.flush();
  output.close();
  //dsize = context.depthMapSize();
  background(200, 0, 0); //<>//
  // draw depthImageMap
  image(context.depthImage(), 0, 0);
  // draw irImageMap
  image(context.rgbImage(), context.depthWidth() + 10, 0);
}
Source: https://forum.processing.org/two/discussion/7042/processing-s-export-functionality-does-not-work-with-simpleopenni-kinect-application
drand48, erand48, jrand48, lcong48, lrand48, mrand48, nrand48, seed48, srand48 - generate uniformly distributed pseudo-random numbers
#include <stdlib.h> double drand48(void); double erand48(unsigned short int xsubi[3]); long int jrand48(unsigned short int xsubi[3]); void lcong48(unsigned short int param[7]); long int lrand48(void); long int mrand48(void); long int nrand48(unsigned short int xsubi[3]); unsigned short int *seed48(unsigned short int seed16v[3]); void srand48(long int seedval);
This family of functions generates pseudo-random numbers using a linear congruential algorithm and 48-bit integer arithmetic.
The drand48() and erand48() functions return non-negative, double-precision, floating-point values, uniformly distributed over the interval [0.0, 1.0).
The lrand48() and nrand48() functions return non-negative long integers, uniformly distributed over the interval [0, 2^31).
The mrand48() and jrand48() functions return signed long integers, uniformly distributed over the interval [-2^31, 2^31).
The srand48(), seed48() and lcong48() functions are initialisation entry points, one of which should be invoked before either drand48(), lrand48() or mrand48() is called. (Although it is not recommended practice, constant default initialiser values will be supplied automatically if drand48(), lrand48() or mrand48() is called without a prior call to an initialisation entry point.) The erand48(), nrand48() and jrand48() functions do not require an initialisation entry point to be called first, because the calling program provides the storage for the successive Xi values in the array argument xsubi; this also allows separate modules of a large program to generate several independent streams of pseudo-random numbers, since the sequence of numbers in each stream will not depend upon how many times the routines are called to generate numbers for the other streams.
The initialiser function srand48() sets the high-order 32 bits of Xi to the low-order 32 bits contained in its argument. The low-order 16 bits of Xi are set to the arbitrary value 330E (hexadecimal).
The initialiser function seed48() sets the value of Xi to the 48-bit value specified in the argument array. The previous value of Xi is copied into a 48-bit internal buffer, and a pointer to this buffer is returned by seed48(); this pointer can be used to retrieve and store the last Xi value, and then to re-initialise via seed48() when the program is restarted.
The initialiser function lcong48() allows the user to specify the initial Xi, the multiplier value a, and the addend value c. After lcong48() has been called, a subsequent call to either srand48() or seed48() will restore the standard multiplier and addend values, a and c, specified above.
The drand48(), lrand48() and mrand48() interfaces need not be reentrant.
As described in the DESCRIPTION above.
No errors are defined.
None.
None.
None.
rand(), <stdlib.h>.
Derived from Issue 1 of the SVID.
Source: http://pubs.opengroup.org/onlinepubs/7990989775/xsh/drand48.html
Linked List Data Structure in Java
In this Java tutorial, we are going to discuss the linked list data structure. We will learn the following things in this Java Linked List tutorial:
- Advantages of LinkedList in Java
- Disadvantages of LinkedList in Java
- Implementation of LinkedList in Java
- Representation of LinkedList in Java
What is Linked List?
Linked list is a linear data structure in which elements are not stored in contiguous memory locations. Elements in LinkedList are linked to each other using pointers.
A linked list consists of nodes, where each node holds a data field and a pointer to the next node. The starting node of a linked list is known as the head node.
Representation of a Linked List
Each node of a linked list consists of two parts:-
1.) Data field:- stores the data of the node.
2.) Pointer to next node:- Each node holds the address of the next node; each node in a linked list is connected to the next node using this pointer.
In Java, the linked list is represented as one class and the node as a separate class:
class Codespeedy {
    node head;

    class node {
        int value;
        node next;
        node(int x) {
            value = x;
        }
    }
}
Advantages of using Linked List
1.) Dynamic Size:- Memory allocation to linked list elements is done dynamically. So, we can add more and more elements in a linked list easily as compared to arrays. There is an upper limit on the number of elements in array as the size of array is fixed.
2.) Ease of Insertion and Deletion:- Insertion and deletion of an element in a linked list is very easy as compared to arrays and can be done in O(1). In array insertion of an element needs room for that element and shifting of elements.
Disadvantages of using Linked List
1.) Extra memory space for pointers:- As each node in a linked list consists of pointers which hold the address of next node, so an extra memory space for a pointer is required with each element of linked list .
2.) Random access not allowed:- Random access of an element in a linked list is not allowed. We have to traverse the linked list from starting to access an element in linked list. While in arrays accessing an element can be done in O(1).
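The O(1) insertion claim above can be sketched as follows (the Node class and the insertAfter method here are illustrative helpers, not part of the tutorial's code):

```java
// Minimal node type for the sketch.
class Node {
    int value;
    Node next;
    Node(int x) { value = x; }
}

public class InsertDemo {
    // Insert a new node right after 'prev' -- a constant number of
    // pointer updates, independent of the list length: O(1).
    static Node insertAfter(Node prev, int value) {
        Node fresh = new Node(value);
        fresh.next = prev.next;
        prev.next = fresh;
        return fresh;
    }

    public static void main(String[] args) {
        Node head = new Node(1);
        insertAfter(head, 3);   // list: 1 -> 3
        insertAfter(head, 2);   // list: 1 -> 2 -> 3
        StringBuilder sb = new StringBuilder();
        for (Node n = head; n != null; n = n.next) {
            sb.append(n.value).append(" ");
        }
        System.out.println(sb.toString().trim());  // prints "1 2 3"
    }
}
```

Contrast this with an array, where inserting in the middle requires shifting every later element.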
Implementation of LinkedList in Java
import java.util.Scanner;

public class LinkedList {
    static node head;
    static node p;   // reference to the last node created so far

    static class node {
        int value;
        node next;
        node(int x) {
            value = x;
            next = null;
        }
    }

    public void show() {
        node newnode = head;
        while (newnode != null) {
            System.out.print(newnode.value + " ");
            newnode = newnode.next;
        }
    }

    public static void main(String[] args) {
        LinkedList list = new LinkedList();
        Scanner scan = new Scanner(System.in);
        System.out.println("Enter value for head node :");
        int Head = scan.nextInt();
        head = new node(Head);
        p = head;                // the head is also the last node so far
        char ch = 'Y';
        while (ch != 'N') {
            System.out.println("Enter value for adding node :");
            int add = scan.nextInt();
            node addnode = new node(add);
            p.next = addnode;    // link the previous last node to the new one
            p = addnode;         // the new node is now the last node
            System.out.println("Do you want to continue :-");
            ch = scan.next().charAt(0);
        }
        list.show();
    }
}
In the above code, we use a LinkedList class and a separate node class. We have to declare head, p and the class node as static so that they can be accessed from the main() function. The class node consists of a variable 'value', which holds the data for each node, and a reference to the next node. We assign a value to each node starting from the head node, link the previous node to the current node using the reference p, and leave the next of the last node as null. In this way, we create nodes one by one, asking the user whether they want to continue.
In the show() function, we start from the head node and print each node one by one till the last node. The criterion for detecting the last node is that its next pointer is null.
Also learn,
Source: https://www.codespeedy.com/linked-list-data-structure-in-java/
- A First Look at VB 7.0 and the .NET Framework
- A New Ballgame
- Summary
Abstract: Dan Fox takes a look at what is on the horizon for the next version of Visual Basic, and how it relates to Microsoft's new .NET initiative.
Regular readers of InformIT have no doubt already noticed the coverage of Microsoft's newest language, C#. As a result, you're aware that C# will debut in the next release of Microsoft Visual Studio, version 7.0, tentatively referred to as Visual Studio .NET. What you may not be aware of is that much of the power and the features found in C# will also make their way into VB 7.0 (or VB .NET, if you prefer). In this article, I'll take you through a few of those features and show you how VB takes advantage of the Microsoft .NET Framework. Before I get started, I should note that the code in this article was built with the PDC Tech Preview version of VS 7.0, the core of which can be downloaded from MSDN at. As a result, any of the specifics discussed here are liable to change before the product is released.
Sometimes Less Is More
To be sure, the most interesting aspect of VB 7.0 is that it derives its core features from the .NET Framework (formerly known as Next Generation Windows Services or NGWS). Simply put, the .NET Framework provides a set of class libraries and a run-time engine that multiple languages share. From this perspective, you can think of Visual Basic as one of many possible wrappers around the .NET Framework, with very little, other than syntax and program structure, that is VB-specific. However, as you'll see, far from restricting VB, this architecture allows VB to extend itself all the way across the enterprise.
Perhaps the most important feature of the .NET Framework is its class libraries. The class libraries are grouped together in namespaces and are supplied to allow consistent access to system services. The class libraries encapsulate and abstract everything from the low-level operating system services such as IO to high-level services such as messaging using MSMQ. This architecture should make life easier by providing a well-debugged and reusable set of classes. This feature alone should be a boon to VB developers who frequently amass their own code libraries to deal with thorny Win32 issues or services such as string manipulation, sorting and searching, and hashing.
Using a class library also affords simplicity, since a developer needn't use a combination of Win32 API calls and COM objects, each with its own set of rules. A particularly good example of classes that provide these benefits is found in the System.Net namespace. This namespace includes classes that provide services to allow applications to communicate over the Internet using HTTP and FTP. An additional benefit that should be apparent is that by releasing new classes, VB developers can easily take advantage of new system services. For example, the incorporation of the ASP+ technologies into class libraries allows VB to easily create web applications without reliance on proprietary schemes such as the Web classes and Dynamic HTML projects introduced in VB 6.0. The classes within the libraries are also object oriented and, as a result, can be extended by a developer through inheritance when implementing specific features. The PDC version of the .NET Framework ships with references to 87 different namespaces that comprise hundreds of classes. A small sampling of these namespaces (adapted from Jeffrey Richter's article on MSDN at) can be seen in the following table.
The second core feature of the .NET Framework is the use of a common run-time engine. All applications compiled to use the run-time produce "managed code", as opposed to "unmanaged code" produced by the current generation of the development tools. VB 7.0 will only produce code that uses the run-time and hence only managed applications. In other words, instead of VB apps calling into the infamous VB runtime (msvbvm60.dll for example), they will instead call into a run-time engine shared by VC++, C#, JScript, and Visual FoxPro for starters, although among these, VC++ will also be able to produce unmanaged executables. Microsoft will also release a standard dubbed the Common Language Specification (CLS) that will allow other vendors to produce compilers that work with the .NET run-time.
The architecture of the .NET run-time (found in MSCorEE.dll) is significantly different from what VB developers are used to. For example, rather than produce a native executable, VB 7.0 will produce a Portable Executable (PE) file that contains a CPU-independent machine language called MSIL. Although I can almost hear the crashing hopes of many seasoned VB developers as they recall the time and sweat that it took for native compilation to finally make it into VB 5.0, rest assured the benefits of an intermediate language executable are significant. For starters, as expected MSIL provides the benefits of hardware abstraction and security that developers can take advantage of if the .NET run-time is ported to other platforms and new versions of Windows (64 bit). However, the .NET run-time also uses Just-In-Time (JIT) compilation to offset the performance penalty normally invoked when compiling an application as it is loaded. To further increase performance, the JIT compiler can also take advantage of particularities in the machine state (such as CPU type) or patterns in program data when the application runs to increase performance at run-time. Expect improvements such as these to be made incrementally as Microsoft refines the run-time engine.
Aside from performance, the run-time also provides several other features. For example, since the .NET run-time is shared by multiple languages, it also provides the ability for those languages to reuse code from one another. For example, a class created in C# can be extended through inheritance in VB to provide additional functionality. This integration also makes it possible to do cross-language debugging more easily through a common debugger. The run-time is also object oriented and provides object creation and management facilities (referred to as the Virtual Object System or VOS) exposed through the System.Object class. The VOS is in many ways analogous to the COM system libraries that are currently a part of Windows, although VOS's inclusion in the run-time makes them more portable and self-contained. Other general benefits of the run-time include features that VB developers are familiar with such as type safety and garbage collection.
While the run-time provides the underlying architecture, the .NET Framework also changes the way you'll view applications. In an effort to eliminate the most persistent support headaches, the scheme for packaging and deploying applications has altogether changed. An application is now deployed as a collection of files referred to as an "assembly" that includes its own self-referencing "manifest", which contains references to all the resources, down to the particular version, that the application needs. This scheme allows the run-time to load the appropriately versioned classes for the application. As a result, all applications built in VB 7.0 will have everything they need to run correctly when deployed to a different machine. Although there are subtle variations, by and large applications can be deployed simply by copying a directory structure to a new machine. This scheme also alleviates the "DLL Hell" problem that occurs when an application overwrites a DLL upon which another application is dependent. In addition, the inclusion of the VOS means that applications no longer need to rely on the registry for COM component creation, a continual source of confusion for developers and users. While some hail this as the "death of COM", it is still possible to call unmanaged COM components from VB 7.0.
As mentioned above, VB can be thought of merely as packaging over the .NET Framework. In the remainder of this article, we'll explore the packaging to see how VB exposes the features of the .NET Framework.
Core Perl: A Book Review
Title: Core Perl
Author: Reuven M. Lerner
Publisher: Prentice-Hall, Inc.
ISBN: 0-13-035181-4
Price: US$44.99
Every Linux Journal reader is probably familiar with Reuven Lerner's "At The Forge" column, where each month he describes web programming techniques and technologies. If you are a fan of Perl (like me), you probably mourned Reuven's recent shift to Java as the programming language around which he structures his discussions. The monthly Perl fix in LJ has been missing for some time now, and Java appears to be everywhere. What kept me going was Reuven's standard signature sign-off teasing that his book, Core Perl, was to be published "real soon now" by Prentice-Hall. The book was finally published in January 2002, and, as I'm always on the lookout for a good all-around Perl text to recommend to my students, I was keen to take a look at it.
Core Perl is part of the Prentice-Hall PTR Core Series, a set of programming books aimed at the professional programmer. The Core Series is most closely associated with Core Java by Horstmann & Cornell. Although primarily a Java-focused collection, a small selection of Core Series books cover other programming technologies, namely Python, C++, PHP and now Perl. Tagged as the Serious Developer's Guide to Perl, Reuven's book is positioning itself not as a book for newbie programmers, but as a book for already practicing Perl programmers or for programmers of another language who want to learn Perl.
Overall, Reuven does a pretty good job of meeting the needs of these two audiences. In its 565 pages, Core Perl includes some 18 chapters, each devoted to a specific Perl topic. In the preface, the structure of the book is presented, together with a list of Perl topics not covered by the book. These would be considered advanced Perl technologies, and they include GUI development (using Tk), linking to C/C++ using XS, compilation and threads programming. If you are interested in these Perl topics, you will need to look elsewhere. The rest of the book can be roughly split into three main sections (although they aren't identified as such in the text): Perl as a programming language (chapters 1 through 10, and chapter 13), working with databases (chapters 11 through 12) and working with the web (chapters 14 through 18).
After describing what Perl is, what it isn't and how to get it in Chapter 1 ("What is Perl?"), things really get going in Chapter 2 ("Getting Started"), and the reader is introduced to the basic variable building blocks of Perl--scalars, arrays, lists, hashes and references. As this is a book aimed at practicing programmers, I was surprised to see descriptions of what HEX and OCTAL numbers are and how they work. Can there be a programmer out there that doesn't know what a HEX number is?! Of more interest (and importance) is the coverage of the way scoping works in Perl at the end of this chapter, where the differences between global and lexical variables are described. In Chapter 3 ("Extending your Perl Vocabulary"), coverage of the syntax of the language continues. This material includes basic I/O, the conditional constructs, operators and loops. A number of the more heavily used built-in functions are also described. The fork and eval functions are briefly described at the end of this chapter. I didn't really like the accompanying descriptions and would have preferred some additional, expanded discussion and review of these important functions.
Chapter 4 ("Subroutines"), is all about, well, subroutines. The discussion about signals toward the end of the chapter would have benefitted from providing a list of signals that Perl does and doesn't support. However, in general, this material is fine. Chapter 5 ("Text Patterns") is a good tutorial on Perl's regular expression, pattern-matching technology. Chapter 6 ("Modules") continues discussing the mechanics of Perl programs, describing the creation of namespaces with the package command. The steps required to create a standalone module file are also discussed. The latter half of this chapter presents a brief collection of some of the third-party add-on modules available for Perl from the CPAN repository. The creation of documentation using POD is also covered. Following the chapter on modules, Chapter 7 ("Objects") discusses using Perl as an object-oriented programming technology. Although the material presented is fine, I found the examples to be somewhat simplified. If I was moving to Perl from another OO language and reading this book, I'd be asking myself "is that it?". This chapter was also far too short, but what's there is good.
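For readers wondering what the "Objects" chapter's material looks like in practice, the core idiom it builds on is a package whose constructor blesses a reference. The sketch below is my own minimal illustration of that pattern, not an example taken from the book:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Minimal Perl OO: a package, a constructor that blesses a hashref,
# and method calls via the arrow operator.
package Counter;

sub new {
    my ($class, %args) = @_;
    my $self = { count => defined $args{start} ? $args{start} : 0 };
    return bless $self, $class;
}

sub increment { my ($self) = @_; $self->{count}++; return $self }
sub count     { my ($self) = @_; return $self->{count} }

package main;

my $c = Counter->new(start => 10);
$c->increment->increment;
print $c->count, "\n";   # prints 12
```

Nothing here is specific to any one Perl release; the same `package`/`bless` mechanics apply to the 5.6.x Perl the book covers.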
Reuven redeems himself somewhat in the next chapter, "Tying". The tie and untie subroutines are presented, together with a really excellent description of how to tie variables of differing types to scalars, hashes, DBM files and arrays. I have always struggled a little with tying in Perl, but by the time I had finished this chapter I was saying to myself "now I get it!".
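For readers who have not met tying before, the mechanism the chapter explains boils down to mapping ordinary variable accesses onto method calls. Here is my own minimal sketch (not an example from the book) of a tied scalar that counts how often it is read:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A tied-scalar class: assignments call STORE, reads call FETCH,
# and we keep a counter of how many reads have happened.
package CountedScalar;

sub TIESCALAR {
    my ($class) = @_;
    return bless { value => undef, reads => 0 }, $class;
}

sub STORE {
    my ($self, $value) = @_;
    $self->{value} = $value;
}

sub FETCH {
    my ($self) = @_;
    $self->{reads}++;
    return $self->{value};
}

package main;

tie my $answer, 'CountedScalar';
$answer = 42;                    # calls STORE behind the scenes
print "value: $answer\n";        # interpolation calls FETCH
print "value again: $answer\n";  # FETCH again
print "reads: ", (tied $answer)->{reads}, "\n";   # prints: reads: 2
```

The same TIESCALAR/FETCH/STORE shape extends to arrays, hashes and filehandles via TIEARRAY, TIEHASH and TIEHANDLE, which is the ground the chapter covers.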
Chapter 9 ("Working with Files"), returns to the topic of performing I/O with Perl. One of the largest chapters in the book, this material details how to work with files, directories, and the underlying operating system from within your Perl programs. As expected, this material is presented very much from the perspective of the Perl programmer working with UNIX or Linux. Chapter 10 ("Networking and Interprocess Communication"), is another large chapter. Here the techniques required to program anonymous and named pipes are described, together with an introduction to network programming with the Socket API. In the first half of this chapter, a small collection of clients and a server are presented. The discussion of this material is marred by some pretty poor descriptions of what is going on. Coupled with some incorrect comments in the program code, the result is disappointing coverage of this material. The second half of the chapter describes a collection of Internet add-on modules available for Perl. Example protocols programmed include FTP, Telnet, SMTP (e-mail) and HTTP (the web). HTML parsing is also briefly described.
Having disappointed in the last chapter, Reuven again redeems himself with another great chapter. Chapter 11 ("Relational Databases") is about as good an introduction to databases and SQL as you are likely to find in a book of this kind. I really liked this material, even though I tend to find the subject of databases to be pretty dull stuff. The standardized Perl API to databases (DBI) is also introduced in this chapter. Note that the RDBMS used by Reuven is the open-sourced PostgreSQL. Chapter 12 ("Building Database Applications") expands upon the previous chapter and builds a complete database application using Perl, DBI and PostgreSQL. Additional material covers debugging DBI. Again, I found the material in this chapter to be very good and well presented.
Chapter 13 ("Maintenance and Security") serves as a separator between the database section of the book and the web section, and it includes an overview of the help Perl gives to the programmer when dealing with warnings and errors. Debugging is also covered (both command-line and graphical), as is benchmarking and tainting (Perl's paranoid mode). Providing only a two-page description of tainting was, in my opinion, not enough coverage (especially in light of the book's next chapter, which introduces CGI).
The final four chapters of the book (together with the database chapters) have the most to offer practicing Perl programmers. After a lightning-fast introduction to HTTP, HTML and URLs, a brief tour of Perl's support for CGI is presented in Chapter 14 ("CGI Programming"). Apache is the host web server for the CGIs presented in this chapter. I would have liked to have seen some screenshots in this (and the subsequent) chapter(s), as I believe they would have enhanced the discussion and explanation of the CGI programs presented. Chapter 15 ("Advanced CGI Programming") expands on the introduction in Chapter 14 and presents more complete CGI example programs. These include setting up a registration system, programming cookies, dynamically creating images and the basics of working with HTML templating technology. The examples are good, but some screenshots would have made them better.
Chapter 16 ("Web/Database Applications"), combines the database techniques from Chapters 11 and 12 with the CGI techniques from Chapters 14 and 15. This is another great chapter (although it too suffers somewhat from a lack of screenshots), and it contains some quite long and involved programming examples. These examples include working with stock data stored in a web-hosted database, a web-based electronic postcard sending system and personalization technologies that combine a web-hosted database with cookies.
The final two chapters of the book are targeted at heavy-weight web programmers. Chapter 17 ("mod_perl") discusses the popular Perl add-on module to Apache. Coverage includes installation, configuration and use of the mod_perl module. Three simple handlers are presented in order to demonstrate the basic mechanism, then the latter half of the chapter presents a collection of standard mod_perl modules (including Apache::DBI for connecting mod_perl handlers to a back-end database). The final chapter of the book, titled "Mason", presents a more detailed description of HTML templating technology. This is a good, introductory description of HTML::Mason. I tend not to do much dynamic web page development; however, if you spend any portion of your day creating web sites with HTML and Perl, this technology looks like something you really should be into. The single appendix (Bibliography and Recommended Reading) presents Reuven's recommended reading-list. This is a nice, small collection of some of the most important works in this area. A detailed 33-page index completes the book.
The best advice I could give any programmer reading Core Perl would be to work through the book with your computer and Perl by your side. To get the most from any programming book, you need to try out the examples to solidify your understanding of the topic being discussed. This is especially true of Core Perl. It's not that the explanations of what's going on are poor; it's that there are very few occasions where the output generated by the example programs is shown, so you'll need to run the code snippets through the Perl interpreter to really see what's going on. Thankfully, the book's web site includes all of the source code used in the book as a downloadable, compressed tar archive.
Despite using Perl as my primary programming language for the last few years, and despite having read more Perl books than I care to mention, I was pleasantly surprised to find that Core Perl taught me a few new things about the language. We all know Reuven knows his Perl. In Core Perl, he proves it. I was also pleased to see Reuven describing release 5.6.x of Perl, which means that his description of the language is up-to-date.
In addition to some screenshots in the latter chapters, I'd suggest a few additions to a future edition of the book. It would be nice to have some extra appendices covering in sufficient detail the acquisition, installation and configuration of some of the book's core technologies, for example, PostgreSQL, Apache, mod_perl, and HTML::Mason. Also, in Chapter 17 ("mod_perl"), it would have been more useful to see a larger httpd.conf extract, as opposed to the collection of small snippets scattered throughout the chapter.
I have a few more gripes.
Although the writing style is informal (and very like the familiar "At The Forge" columns), I was struck by Reuven's heavy reliance on forward references. Naturally, a certain number of these are unavoidable, but there are an awful lot of them. This is especially noticeable in the earlier chapters. For readers already familiar with Perl, such a style is not a huge problem. However, for a reader from another programming background, it may become quite tiring to constantly jump forward in the text to read about a feature that is used earlier in the book but not discussed in detail until later. A particularly bad example of the problems this can cause is on page 107, where zombie processes are being discussed. The reader is referred to Section 4.7.4 for more details. Section 4.7.4 refers back to page 107, before referring the reader to the perlipc on-line documentation.
I was also surprised (and a little disappointed) by the number of errors I found in the text. Without wishing to be unkind, the first printing of Core Perl is riddled with errors. After reading the book, I had close to 70 queried errors to send to Reuven. Some are trivial typos (or typesetting errors), others are coding errors and yet others are problems with some of the explanations. To Reuven's credit, he has an errata on the book's web site, and he has included a description of the errors I found as well as a few others on the errata page. Also, Reuven tells me that any errors uncovered in the first printing will be fixed by Prentice-Hall in subsequent printings. If you do purchase a first printing, print out the errata and keep it close while working through Core Perl. Note that despite these problems, I do intend to recommend Core Perl to my students. The material is good, even though I have some problems with how it is presented.
Paul Barry lectures at The Institute of Technology, Carlow in Ireland. He is the author of Programming the Network with Perl, published by John Wiley & Sons.
#include <sys/types.h>
#include <sys/buf.h>

void biodone(struct buf *bp);
Architecture independent level 1 (DDI/DKI).
Pointer to a buf(9S) structure.
The biodone() function signals that a buffer I/O request is complete. It is typically called by the driver once the transfer described by bp has finished.
The biodone() function can be called from user, interrupt, or kernel context.
read(2), strategy(9E), biowait(9F), ddi_add_intr(9F), delay(9F), timeout(9F), untimeout(9F), buf(9S)
Writing Device Drivers for Oracle Solaris 11.2
After calling biodone(), bp is no longer available to be referred to by the driver. If the driver makes any reference to bp after calling biodone(), a panic may result.
Drivers that use the b_iodone field of the buf(9S) structure to specify a substitute completion routine should save the value of b_iodone before changing it, and then restore the old value before calling biodone() to release the buffer.
I'm having a small problem with Pyglet. Building the following source code will show "Running" in the console window, but will not show the Pyglet main window. It works fine when running via the command line.
import pyglet
game_window = pyglet.window.Window(800, 600)
if __name__ == '__main__':
print 'Running'
pyglet.app.run()
I'm on Windows XP and trying to find the problem. Not sure if it's Sublime Text or Pyglet at the moment. Anyone here know what could be happening?
Thanks.
I believe that Sublime Text does something weird with its python and it's hard to get it to show any sort of GUI.
I created a plugin to use an external console window. This fixes the problem.
github.com/joeyespo/sublimetext-console-exec
Rob's Blog 2019-09-04T19:37:55+00:00 Rob Bos raj.bos@gmail.com Use Stryker for .NET code in Azure DevOps 2019-09-04T00:00:00+00:00 <p>Recently I was at a customer where they were testing mutation testing with <a href="">Stryker</a>. Stryker makes small changes to your code, called <code class="highlighter-rouge">mutations</code>, and then checks if they <code class="highlighter-rouge">survive</code> the unit tests or not. Nice play on words there 😄.</p> <p>Of course this triggered me to see how this works with .NET code and if we can integrate this in Azure DevOps!</p> <p><img src="/images/20190829/suzanne-d-williams-VMKBFR6r_jg-unsplash.jpg" alt="Hero image" /></p> <h5 id="unsplash-logophoto-by-suzanne-d-williams">Photo by Suzanne D. Williams on Unsplash</h5> <h2 id="setting-up-an-example">Setting up an example</h2> <p>To have a SUT (system under test) and a set of unit tests for this, I have set up a small C# library with .NET Core and some unit tests to run. I created a solution that only contains what we need:</p> <p><img src="/images/20190829/2019-08-29_SolutionExplorer.png" alt="Example of Visual Studio Solution explorer with the two projects" /></p> <p>To have something to unit test I added a simple class that is instantiated with a string, which we load as a boolean value into a property of the class.</p> <p><img src="/images/20190829/2019-08-29_StrykerDemo.Class1.png" alt="Layout of class1" /></p> <p>To cover both code paths I added unit tests for the values "True" and "False".</p> <p><img src="/images/20190829/2019-08-29_StrykerDemo.UnitTests.png" alt="Example of unit tests for both value 'True' and 'False'" /></p> <p>I’ve already set up an Azure DevOps build to trigger on any pushes to the repo, with the default .NET Core template to restore the NuGet packages, run a Build and then run the Unit Tests:<br /> <img src="/images/20190829/2019-09-04_StrykerAzureDevOps.png" alt="" /></p> <p>The Azure DevOps build is green with the current set of tests:</p> <p><img src="/images/20190829/2019-08-29AzureDevOpsBuild.png" alt="Azure DevOps build pipeline is green with 
these tests" /></p> <p>Using <a href="">Stryker for .NET</a> can be done on the CLI by installing the <code class="highlighter-rouge">dotnet tool</code> with this command:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>dotnet tool install -g dotnet-stryker </code></pre></div></div> <p>I’ve run into some configuration issues with an older version of .NET Core that I have documented <a href="">here</a>.</p> <p>When you run the tool from the CLI, you can see the results immediately and also see the changes Stryker has made to your code.</p> <p><img src="/images/20190829/2019-08-29WindowsTerminalInstallStryker.png" alt="Commands to install and run Stryker" /></p> <p>More information about the available options can be found <a href="">here</a>.</p> <h2 id="running-stryker-in-the-cli">Running Stryker in the CLI</h2> <p>Running Stryker against the example solution produces the following output:</p> <p><img src="/images/20190829/2019-08-29_TerminalStrykerRun.png" alt="Executing Stryker" /></p> <h2 id="mutations">Mutations</h2> <p>As you can see in the screenshot above, Stryker searches the original code for boolean expressions, strings and other things it can ‘<a href="">mutate</a>’.</p> <p>The first mutation in this run was changing the line <code class="highlighter-rouge">if (isOpen == "true")</code> into <code class="highlighter-rouge">if (isOpen == "")</code> (a string mutation). 
This mutation is caught by the first unit test and therefore marked as ‘killed’.</p> <p><img src="/images/20190829/2019-09-04StrykerMutation.png" alt="Mutation example" /></p> <h2 id="stryker-report">Stryker report</h2> <p>Adding an HTML report parameter to the Stryker command will write an HTML file to your disk that can be used for finding the mutations that either survived or were killed.</p> <h3 id="summary-view">Summary view</h3> <p><img src="/images/20190829/2019-08-29StrykerReport.png" alt="Stryker report for the tests" /></p> <h3 id="details-view">Details view</h3> <p><img src="/images/20190829/2019-08-29StrykerReportDetails.png" alt="Stryker detailed report for the tests" /></p> <h2 id="adding-stryker-to-your-azure-devops-pipeline">Adding Stryker to your Azure DevOps pipeline</h2> <p>How to install and run the tool in a pipeline is described in my post <a href="">Running dotnet tools in Azure DevOps</a>.<br /> Do note the specific arguments I pass into the Stryker command here: my mutation tests were scoring 54%, so I needed custom thresholds to actually fail the build.</p> <p><img src="/images/20190829/2019-09-04_StrykerAzureDevOpsConfig.png" alt="Azure DevOps Steps to run Stryker in the build pipeline" /></p> <h2 id="failed-build-result">Failed build result</h2> <p>Running Stryker on my current set of tests will actually fail the build because of the custom threshold. This way you can validate your unit tests and actually check to see if there are any outliers that you missed while creating the tests for your code.</p> <p><img src="/images/20190829/2019-09-04_AzureDevOpsFailedBuild.png" alt="Fail the Azure DevOps build" /></p> <p><strong>Note:</strong> mutating your code and running the unit tests again means that your tests will run multiple times. This <em>can</em> add up to quite some additional time that your build needs to run!</p> <h2 id="next-step">Next step</h2> <p>The next step is to include the HTML report in your build pipeline and upload it as an artifact. 
You can then download it if you need to check it.</p> <p>All code used in this post can be found in this <a href="">repository</a>.</p> Fixing error in .NET Core tool installation 2019-09-03T00:00:00+00:00 <p>Last week I was testing some .NET tooling and wanted to install a tool locally instead of globally. To do so you run this command:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>dotnet tool install dotnet-stryker </code></pre></div></div> <p>While running (either locally or in an <a href="">Azure DevOps</a> task) I got this error message:</p> <pre><code class="language-cmd"> </code></pre> <p><img src="/images/20190903/20190903-dotnet-core-logo.png" alt=".NET Core logo" /></p> <p>Searching around on the internet for the file it was searching for (it searches the whole folder tree), I found that you need to run this command to create a local manifest:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>dotnet new tool-manifest </code></pre></div></div> <p>Yet doing so resulted in the following error message and the default prompt to choose a template to run.</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>No templates matched the input template name </code></pre></div></div> <p>Apparently the command to generate the manifest wasn’t available on my machine. 
Further searching led to this 
Further searching lead to this <a href="">GitHub issue</a> that pointed out this was recently added in .NET Core 3.0, so it seemed that it could be coming from an old preview version?</p> <p>Checking the runtimes I had installed with <code class="highlighter-rouge">dotnet --list-runtimes</code>, pointed out that I was indeed running on on older version of the preview for .NET Core 3.0.</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Microsoft.NETCore.App 2.2.4 <span class="o">[</span>C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.0.0-preview-27113-06 <span class="o">[</span>C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.0.0-preview-27114-01 <span class="o">[</span>C:\Program Files\dotnet\shared\Microsoft.NETCore.App] </code></pre></div></div> <p>Downloading and installing the <a href="">latest preview</a> (3.0.0-preview8-28405-07 at the time of writing) fixed the issue and I could carry on with figuring out my other steps that I was actually working on 😄.</p> Running .NET Core tools in Azure DevOps 2019-09-03T00:00:00+00:00 <p>I wanted to run .NET Core tools in Azure DevOps and ran into some installation issues. I tried installing the tool I needed globally, yet the agent could not find it.</p> <h2 id="local-tools-to-the-rescue">Local tools to the rescue</h2> <p>In the latest versions of .NET Core 3.0 (currently still in preview), you can install tools <code class="highlighter-rouge">locally</code>. This means that you can install the tool in the current folder, with its own version and thus independent from other tools or versions on your machine. More information can be read <a href="">here</a>.</p> <h2 id="calling-the-installation-of-the-tool-in-azure-devops">Calling the installation of the tool in Azure DevOps</h2> <p>To actually install the tool (locally) through the .NET Core tasks you need to run the command in a specific way. 
It took quite some testing to figure this out. I wish this was documented a little better, so here it is for myself in the future 😁:</p> <p><img src="/images/20190903/20190903_ToolInstall.png" alt="Example of the configuration in Azure DevOps" /></p> <p>Do note that the <code class="highlighter-rouge">custom command</code> to run is just <code class="highlighter-rouge">tool</code> and the parameter input gets the name of the action and the tool.</p> <p>In this case I am installing <a href="">Stryker</a> to start with mutation testing.</p> <h3 id="error-cannot-find-any-manifests-file">Error: ‘Cannot find any manifests file’</h3> <p>Just running the installation in the work folder will give the error below. .NET Core wants a config file for local tools. Find more information about that <a href="">here</a>.</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code> </code></pre></div></div> <p>To get a new manifest, add an extra .NET Core task and run this custom command:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>dotnet new tool-manifest </code></pre></div></div> <h2 id="run-the-net-core-tool">Run the .NET Core Tool</h2> <p>After setting up a manifest and the installation itself, you can now run the .NET Core tool itself by using a custom command again:</p> <p><img src="/images/20190903/20190903_ToolRun.png" alt="Running the .NET Core tool in Azure DevOps" /></p> Lets Encrypt: Manually get a certificate on Windows for an Azure App Service 2019-08-27T00:00:00+00:00 <p>Recently I had to refresh a <a href="">Let’s Encrypt</a> certificate for an Azure App Service after the first certificate had expired. Of course, refreshing a certificate should be done by some tooling, either in a CI/CD pipeline or another service. I tried setting up the <a href="">Lets Encrypt Extension</a> first!</p> <p>There will be steps in here that can be executed easier. 
If you have any tips, please let me know!</p> <p><img src="/images/20190827/johnny-chen-CE1_qYPbMBU-unsplash.jpg" alt="Underwater photo of a school of fish" /></p> <h5 id="unsplash-logophoto-by-johnny-chen">Photo by Johnny Chen on Unsplash</h5> <h2 id="lets-encrypt-process">Let’s encrypt process</h2> <p>Let’s Encrypt validates that you own the domain by requesting a file from it at <code class="highlighter-rouge">\.well-known\acme-challenge\unique_file_name</code>. It checks to see if a specific set of characters is in the file. If you can set that information up on the domain, it proves to Let’s Encrypt that you are the domain owner and they can generate a certificate for you.</p> <h2 id="windows-subsystem-for-linux">Windows Subsystem for Linux!</h2> <p>I installed Certbot in the Windows Subsystem for Linux by following the <a href="">documentation</a>.<br /> After the installation I can now run the certbot with:</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>certbot certonly <span class="nt">--manual</span> </code></pre></div></div> <p>The bot will then ask you a couple of questions, like the domain(s) you want to get the certificate for, your email address so they can e-mail you when the certificate is about to expire and if you are OK with logging your IP-address.<br /> <img src="/images/20190827/2019-08-27_CertBot.png" alt="Example of Certbot commands in WSL" /></p> <pre><code class="language-dos"> </code></pre> <h2 id="uploading-the-challenge-files-to-an-app-service">Uploading the challenge files to an App Service</h2> <p>When creating the challenge file, paste its content with the mouse. <strong>Trying to do this with the keyboard will send a key to the shell and will be seen as a ‘return’ key-press!</strong> When this happens, Let’s Encrypt will try to validate the file and you can start over again! 
Luckily it will rotate the expected file and content, so after two or three tries you will be back at the initial values 😉.</p> <p>Create a new file in the Azure App Service with the correct name through Kudu:<br /> <img src="/images/20190827/2019-08-27_Kudu.png" alt="Example of opening Kudu in the Azure Portal" /><br /> You can only do this with the editor in Kudu, since the App Service Editor will only enable you to create or edit files within the <code class="highlighter-rouge">/site/</code> folder. The acme-challenge lives outside of it.</p> <h2 id="extracting-the-lets-encrypt-files">Extracting the Let’s Encrypt files</h2> <p>After Let’s Encrypt validates the domain, the CertBot will write down a couple of files that you can use for the certificate. It will tell you that it wrote the files in the following location: <code class="highlighter-rouge">etc\letsencrypt\live\your_domain_here</code>. To get to those files from Windows, you need to find out where WSL saves its local files. The root of the file store is <code class="highlighter-rouge">%userprofile%\AppData\Local\Packages</code>. As I am using Ubuntu, the folder for that subsystem is <code class="highlighter-rouge">CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc</code> from where I can navigate to <code class="highlighter-rouge">LocalState\rootfs\etc\</code> to find the root file system and then the rest of the path.<br /> <img src="/images/20190827/2019-08-27_ETC.png" alt="Example of Certbot result files" /></p> <p><strong>Be aware</strong> the files in this directory only contain links to an archive folder! <img src="/images/20190827/2019-08-27_Links.png" alt="Links to other files in the archive folder" /><br /> Get the actual files from that archive path; you can see they are all suffixed with the number one in the filename.</p> <h2 id="converting-the-pem-files-to-a-pfx">Converting the pem files to a pfx</h2> <p>Converting the pem files to a pfx by following this <a href="">Stack Overflow question</a> seemed rather straightforward. 
Unfortunately <a href="">OpenSSL</a> does not provide the binaries for the different platforms anymore. You can only download the Git repository and try to build it from there. Luckily I found the binaries hosted <a href="">here</a> and I used them to execute the next steps.</p> <p>Navigate to the OpenSSL path and execute this command to generate a pfx based on the pem files Let’s Encrypt generated:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>.\openssl pkcs12 -inkey <span class="s2">"C:\Users\RobBos\Desktop\GDBC Challenges\privkey1.pem"</span> -in <span class="s2">"C:\Users\RobBos\Desktop\GDBC Challenges\fullchain1.pem"</span> -certfile <span class="s2">"C:\Users\RobBos\Desktop\GDBC Challenges\cert1.pem"</span> -export -out <span class="s2">"C:\Users\RobBos\Desktop\GDBC Challenges\gdbc_challenges.pfx"</span> </code></pre></div></div> <p>It will request a password that can be left empty for usage in Windows itself, but Azure App Service requires a password on it. <img src="/images/20190827/2019-08-27_CovertPEM.png" alt="Powershell command to convert the pem files to a pfx" /></p> <h2 id="uploading-the-new-certificate">Uploading the new certificate</h2> <p>Uploading a certificate to Azure App Service can be done in just a few steps. 
Upload the new certificate and bind it with an SNI binding to the correct domain.</p> <p><img src="/images/20190827/2019-08-27_UploadCert.png" alt="Upload certificate" /></p> Run .NET Core programs in Azure DevOps 2019-08-17T00:00:00+00:00 <p>Recently I wanted to build and run a .NET core console application in Azure DevOps and found out you cannot do that with the default .NET core tasks.</p> <p><img src="/images/20190817/sam-truong-dan--rF4kuvgHhU" alt="Sam Truong Dan" /><br /> <a href=""><span>Sam Truong Dan</span></a></p> <p>The default tasks in Azure DevOps and <a href="">tutorials</a> are more geared towards web development and publishing a zip file that can be used with a WebDeploy command.</p> <p>For an application, I would have thought that you could run the compiled assembly by calling <code class="highlighter-rouge">dotnet run path-to-assembly</code> on it. Turns out that the run command is used to run the code from a project, not from a compiled assembly (see the <a href="">docs</a>).</p> <p>You can just call <code class="highlighter-rouge">dotnet path-to-assembly</code>, but the .NET core tasks in Azure DevOps will not let you do that: you can select a custom command, but you cannot leave that command empty, for example.</p> <h2 id="option-1-publish-the-application-to-self-contained">Option 1: Publish the application to self-contained</h2> <p>Publishing the application as a self-contained deployment gives you an executable in the <code class="highlighter-rouge">publish</code> folder.<br /> <img src="/images/20190816/20190816_06_AzureDevOpsBuild.png" alt="Azure Build Pipeline overview" /></p> <p>You can then run it in a release.
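</p>

<p>In YAML form, the run step from option 1 boils down to a <code class="highlighter-rouge">PowerShell@2</code> task calling the published executable. A sketch (the artifact name <code class="highlighter-rouge">drop</code> and the executable name <code class="highlighter-rouge">MyConsoleApp.exe</code> are assumptions; adjust to your own artifact layout):</p>

```yaml
# Hypothetical release step: run the self-contained executable
# from the extracted build artifact.
steps:
- task: PowerShell@2
  inputs:
    targetType: inline
    script: |
      & "$(System.ArtifactsDirectory)/drop/MyConsoleApp.exe"
```

<p>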
The release just consists of extracting the build artefact, overwriting the application settings with an <a href="">Azure DevOps Extension</a> and running the executable.</p> <p><img src="/images/20190816/20190816_06_AzureDevOpsRelease.png" alt="Azure Release Pipeline Task running the executable" /></p> <h2 id="option-2-run-the-assembly">Option 2: Run the assembly</h2> <p>An even easier way to run the assembly is to call the dotnet command on the assembly itself; just do it in a PowerShell task:</p> <p><img src="/images/20190817/20190817_01_AzDo-Run-dll.png" alt="Azure Release Pipeline with Task calling the assembly" /></p> Azure DevOps Marketplace News - Or Imposter Syndrome for developers? 2019-08-16T00:00:00+00:00 <p>I.</p> <p><img src="/images/20190816/clem-onojeghuo-JUHW6hAToY4-unsplash.jpg" alt="Picture of shoes on a forest background" /></p> <h5 id="photo-by-clem-onojeghuo-on-unsplash">Photo by Clem Onojeghuo on Unsplash</h5> <p>So this post is meant as an example of battling against <a href="">imposter syndrome</a>, to openly document the development process and some key decisions along the way, and to give insight into the stuff I need to look up (hint: <strong>it is a lot!</strong>).</p> <p>The Git repository with the source for the tool is open source and can be found on <a href="">GitHub</a>.</p> <p>All in all I have spent around 6 hours in an editor (measured with the <a href="">WakaTime extension</a>) over the course of 2 weeks to get a working Twitter account that checks something every three hours and tweets about it if needed.</p> <h2 id="reason-this-project-exists">Reason this project exists</h2> <p>The reason I started this project is that I always wondered if there could be a way to stay up to date on Azure DevOps extensions on the <a href="">Marketplace</a>.</p> <h2 id="functional-requirements">Functional requirements</h2> <p>My own functional requirements were quite simple:</p> <ol> <li>Check the marketplace data periodically for
changes.</li> <li>Tweet about them on a new account when found.</li> </ol> <p>Seems straightforward and not that hard!</p> <h2 id="research">Research</h2> <p>First I checked whether I could call the Marketplace API with <a href="">Postman</a> and actually get some results: that way I knew that I didn’t need to log in to an API or send in some magic cookies or something.</p> <p><img src="/images/20190816/20190816_05_Postman.png" alt="Postman example result" /></p> <h2 id="starting-point">Starting point</h2> <p>My starting point:</p> <ul> <li>Git repository</li> <li>C# with .NET core</li> <li>Azure DevOps</li> </ul> <h2 id="first-real-commit">First real commit</h2> <p>Starting in Visual Studio with File –> New –> .NET Core Console Application after creating a new Git repo, I quickly created an outline to load the information from the API.</p> <p>In the <a href="">first commit</a> you can see the thinking process and coding style that I use when building an MVP (minimal viable product).</p> <p><img src="/images/20190816/20190816_02_FirstRealCommit.png" alt="First real commit" /></p> <p>Everything is still in the Program class with a lot of to-do’s in it.
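</p>

<p>The change-detection requirement boils down to diffing two snapshots of the marketplace. In shell terms the idea looks like this (the extension ids here are made up for the demo; the real tool does this in C# over the marketplace JSON):</p>

```shell
# Two snapshots of extension ids, one id per line, sorted.
cd "$(mktemp -d)"
printf 'ext-a\next-b\n'        > snapshot_old.txt
printf 'ext-a\next-b\next-c\n' > snapshot_new.txt

# comm -13 keeps lines that only appear in the second file:
# exactly the newly published extensions.
new_extensions=$(comm -13 snapshot_old.txt snapshot_new.txt)
echo "new extensions: $new_extensions"
```

<p>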
Some data classes like <code class="highlighter-rouge">ExtensionDataResult.cs</code> are separated and just filled with the awesome Visual Studio feature “Paste JSON as Classes”.</p> <p><img src="/images/20190816/20190816_03_PasteSpecial.png" alt="Paste JSON as Classes" /></p> <p>At this point I was mostly trying to get the download working and store the results in a list so I could easily start a diff method next.</p> <h2 id="searching-for-code">Searching for code</h2> <p>To give a feel for the amount of searching around I did to actually create a tool around my own needs:</p> <ul> <li>Working with HttpClient to send calls to the Marketplace API: looked it up in a <strong>different project</strong>.</li> <li>Serializing the result with <a href="">Newtonsoft.Json</a>: <strong>googled it.</strong></li> <li>Sending a tweet through the Twitter API: found an <strong>example on</strong> <a href="">Stack Overflow</a></li> <li>Setting up a developer account on Twitter so that I can tweet: googled it.</li> <li>Exporting a list to CSV: <strong>googled it.</strong></li> <li>Storing the data in an Azure Storage Account using blobs: looked it up in a <strong>different project</strong>.</li> </ul> <h2 id="flow">Flow</h2> <p>The commit history shows the flow of the project, including exporting the results to <a href="">CSV</a>.</p> <p><img src="/images/20190816/20190816_01_Commits.png" alt="GitHub commit history" /></p> <h3 id="running-locally">Running locally</h3> <p>Locally everything still runs from the Program <a href="">class</a>.</p> <p>I also thought it would be nice to tag the publishers in the tweets themselves if I could find their Twitter accounts. The first publishers have been added to a <a href="">hard-coded list</a>.</p> <p>See how this has gradually progressed?
Everything has been added when I actually had a need, and refactored into separate classes when those parts felt ready.</p> <p><img src="/images/20190816/20190816_04_VisualStudioExplorer.png" alt="Visual Studio Explorer pane of the current solution" /></p> <h2 id="unit-tests">Unit tests</h2> <p>I only recently added my first unit test to the project, because I wanted to test if the Azure DevOps publish tags would work correctly: that is a comma-separated string in the JSON object and I created some <a href="">tests</a> to make sure those would work.</p> <h2 id="automated-runs">Automated runs</h2> <h3 id="preparation">Preparation</h3> <h1 id="azure-pipelines">Azure Pipelines</h1> <h2 id="building-the-solution">Building the solution</h2> <p>The first step is building the solution. I could run it from the build, but that seems like overkill: the solution will not change that often (after the first few iterations) and adding a release for something in .NET should not be that much work.</p> <p>I started with the default ASP.NET core web template. That uses the flag <code class="highlighter-rouge">Publish Web Projects</code> to publish a zip file with a WebDeploy package in it. Since we cannot use that, I changed the publish step.</p> <p><img src="/images/20190816/20190816_06_AzureDevOpsBuild.png" alt="Azure Build Pipeline overview" /></p> <p>Later on I found out that the release cannot handle the normal .NET core DLL that is generated instead of a .NET executable: the .NET core tasks do not support executing the dll with the .NET <code class="highlighter-rouge">run</code> command; it wants the .NET core project folder so it can first build and then run the solution. I had to work around it by publishing a full .NET core app targeting the Windows platform.
That way I have an executable that I can trigger with a PowerShell task.</p> <h2 id="release-or-run-pipeline">Release or run pipeline</h2> <p>The release just consists of extracting the build artefact, overwriting the application settings with an <a href="">Azure DevOps Extension</a> and running the executable.</p> <p><img src="/images/20190816/20190816_06_AzureDevOpsRelease.png" alt="Azure Release Pipeline" /></p> <h2 id="full-circle">Full circle</h2> <p>I started out with a Git repo, pushed that to <a href="">GitHub</a>, built and ran it in Azure DevOps, and then reported the status of the build and release through badges in Azure DevOps and included those in the readme of the repository:</p> <table> <thead> <tr> <th>Step</th> <th>Latest execution</th> </tr> </thead> <tbody> <tr> <td>Build</td> <td><a href=""><img src="" alt="Build status" /></a></td> </tr> <tr> <td>Release/Run status</td> <td><img src="" alt="Release status" /></td> </tr> </tbody> </table> <p>By the way, I am using <a href="">Azure Pipelines from the GitHub marketplace</a> to run the CI/CD triggers for the project.</p> <p>For the scheduling part I am using a scheduled trigger that will run the release definition every three hours. Somewhat irksome to add so many (there is no cron notation), but if it works….</p> <h2 id="conclusion">Conclusion</h2> <p>I hope this gives insight into the development process for anyone who is curious, and maybe helps to lower some of the <a href="">imposter syndrome</a> that some developers seem to have (myself included!). With a good mindset you can figure things out or just reach out and ask for help. With that, you can get anything done!</p> Using Chrome Personas to split identities 2019-07-13T00:00:00+00:00 <p>As a consultant, I get to work in a lot of different settings and environments.
For most of my customers these days, that means working on my own laptop and in the cloud with SaaS applications.</p> <p>Logging in to all those customers can be a messy thing: I’ve seen people having an identity picker in Azure (or any other Azure Active Directory backed system) that they have to <strong>scroll</strong> through to get to the identity they want to use 😱. Seriously!</p> <p><strong>Note</strong>: It did get better over the years; previously you actually had to log out and back in as a different identity to make this work, but still… we can do this better!</p> <p><img src="/images/20190713/jesse-orrico-unsplash.jpg" alt="Hero image" /></p> <h5 id="unsplash-logojesse-orrico-on-unsplash"><a href=""><span>Jesse Orrico on Unsplash</span></a></h5> <h2 id="solution">Solution</h2> <p>I have a helpful way of keeping all those personas separate from each other, with some additional benefits.</p> <p>First and foremost, I do this to keep everything nice and tidy: separating them helps my mind in compartmentalizing the status and context of the tasks that I am doing.</p> <h1 id="chrome-personas">Chrome Personas</h1> <p>I have used Chrome personas for several years to accomplish this, after learning this feature was available in Firefox and that Google had copied the functionality. It is a feature that lets you separate parts of your browser into its own (hosted) process and keep everything in that compartment. I noticed that Chrome does this with:</p> <ul> <li>Browsing history</li> <li>Tab history</li> <li>Plugins</li> <li>Stored Passwords</li> </ul> <h3 id="benefits">Benefits</h3> <p>So by doing this, I can separate my personas and with it my identity! I make a new persona for every customer that I get to (and others for personal account separation).
I let Chrome store the URLs, tabs and passwords that I need for that customer, and when I leave, I just remove the persona from my system!</p> <p>In each persona I have logged in to different Azure accounts for example, together with different Azure DevOps accounts, Office365 logins and other services that I need (tooling, CRMs, other SaaS offerings). This saves a lot of switching.</p> <p>Some notes on this:</p> <ul> <li>I don’t sync the personas currently, as I mainly use one laptop, but you can if you want to. This is a persona-by-persona setting.</li> <li>Any account information that I need for later is stored inside a <a href="">KeePass</a> file backed up in my Office365 OneDrive folder.</li> </ul> <h2 id="cool-now-how-do-you-do-this">Cool! Now how do you do this?</h2> <p>To get started, click on the user circle on the top right. You get a flyout with all your personas. Mine is quite long:<br /> <img src="/images/20190713/20190713_01_MyPersonas.png" alt="Persona's fly out in Chrome" /></p> <p>Noticed the persona <code class="highlighter-rouge">Faebook</code>? I wanted a persona with my Facebook account in it, to prevent my personal account leaking around everywhere when I don’t want it to. I use that product only on my phone for family stuff, so I did not want to log in to it in any of my personas. Unfortunately <strong>you cannot rename</strong> them (I can’t find it in the UI anyway)!</p> <h3 id="manage-people">Manage people</h3> <p>Click on ‘manage people’ to open a window where you can create new accounts and delete existing ones:<br /> <img src="/images/20190713/20190713_03_PersonasAdmin.png" alt="Manage people window in Chrome" /></p> <p>Notice that you can use a lot of different icons and images to link to your persona. I usually use the company’s color for my customer personas and something else for personal stuff.</p> <p>Now you can use the new persona. Notice how it opens a new Chrome window that’s also separated on the taskbar?
That means you can pin it!</p> <p>My most active accounts are always within reach. <img src="/images/20190713/20190713_02_Taskbar.png" alt="Windows Taskbar with the different personas" /></p> <h2 id="edge-dev">Edge Dev</h2> <p>You can do the same thing in <a href="">Edge Dev</a>, since the functionality is in the <a href="">Chromium</a> underpinnings that both Edge Dev and Chrome build on top of.</p> <p>I wanted to use Edge Dev for this because of one big benefit: <img src="/images/20190713/20190713_04_OpenAsInChrEdge.png" alt="Open link in different personas in Edge Dev" /><br /> This means that you no longer have to copy and paste links into a different browser/persona window to use a link! You can just use the built-in functionality.</p> <p>The main reason I haven’t started moving my personas there is listed in the Downside section below. I was a little disappointed by this and got sidetracked with other stuff. Writing about this is getting me to think about doing it again…</p> <h2 id="downside">Downside</h2> <p>The only real downside I have with this setup is that Chrome does not let me <strong>pin my main persona</strong> (the one that I did not create) to the taskbar. Edge Dev has the same issue, so I suspect that this stems from the Chromium underpinnings.</p> <h3 id="main-persona">Main persona</h3> <p>The main persona seems to be the one that you logged in to Chrome with: I use that as my Gmail account and let that persona sync across other devices (iPad and iPhone). It is the user account on Google’s back-end, so to speak.</p> <h3 id="the-issue-with-the-main-persona">The issue with the main persona</h3> <p>The main persona is attached to the first Chrome pin on the taskbar.
When you switch to that persona in Chrome, suddenly it is updated with the image for that persona:<br /> <img src="/images/20190713/20190713_05_Taskbar2.png" alt="Taskbar with main persona icon filled in" /></p> <p>Note the difference with the screenshot before, when that persona was not loaded. Notice the 6th icon here? It was empty before:<br /> <img src="/images/20190713/20190713_02_Taskbar.png" alt="Windows Taskbar without the main persona loaded" /></p> <p>When you try to pin that persona, it is pinned as the default process! There is no way to have that persona pinned on the taskbar. If you close that session, the icon is clear again. If you click on it <strong>it will open the last opened persona</strong>!</p> <h2 id="help-me-out">Help me out</h2> <p>If you have a way to fix this (I have even tried to mess around in <code class="highlighter-rouge">C:\Users\RobBos\AppData\Roaming\Microsoft\Internet Explorer\Quick Launch\User Pinned\</code> with manual shortcuts, but none of it worked), please reach out!</p> <p>That would solve the only issue that I have with this setup: fixing this would make it complete!</p> <h2 id="update-also-tested-with-launching-chrome-manually">Update: also tested with launching Chrome manually</h2> <p>After a tip from <a href="">Jasper</a> I tried to see if I could launch Chrome.exe from the command line with the correct persona. You can do so by providing it an extra parameter pointing at a profile folder under <code class="highlighter-rouge">C:\Users\RobBos\AppData\Local\Google\Chrome\User Data\</code>, for example:<br /> <code class="highlighter-rouge">.\chrome.exe --profile-directory="Guest Profile"</code></p> <p>Or look in the registry editor:<br /> <img src="/images/20190713/20190713_06_ChromeManually.png" alt="Registry editor window with Chrome settings open" /></p> <p>I’ve tried them all before finding out that the Default profile is indeed called…. the Default profile! Unfortunately it does the same thing. I also found two extra personas that might have been deleted at some point.
After launching them, they are also visible again in the Chrome UI!</p> <p>By the way, storing the shortcut like this on the desktop and launching it does work. But I don’t like to switch to the desktop each time I want to launch into this profile.</p> Fixing GitHub Pages Syntax Highlighting 2019-07-12T00:00:00+00:00 <p>Today I noticed that my syntax highlighting was not working on this blog. Here is how I fixed it!</p> <p>I am using Jekyll on GitHub Pages as I wrote <a href="">before</a>.</p> <p><img src="/images/20190712.02/zach-reiner-unsplash.jpg" alt="Hero image" /></p> <h4 id="unsplash-logophoto-by-zach-reiner"><a href=""><span>Photo by Zach Reiner on Unsplash</span></a></h4> <p>Looking at the generated HTML indicated that there was some parsing done during the build of the page, but there was no CSS available for the added classes:<br /> <img src="/images/20190712.02/20190712_02.png" alt="Showing correctly generated HTML with extra tags" /></p> <p>I tried searching for documentation about this issue and found some <a href="">basic stuff</a>.<br /> This hinted that I needed to set up a highlighter in my <code class="highlighter-rouge">_config.yml</code>:</p> <div class="language-yml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">highlighter</span><span class="pi">:</span> <span class="s">rouge</span> </code></pre></div></div> <p>There is another highlighter (<a href="">Pygments</a>), but this is not supported. It even seems that <code class="highlighter-rouge">rouge</code> is just the default, so you do not need to set it at all!</p> <p>I found a <a href="">Stack Overflow question</a> that indicated I needed to include a CSS file with the highlighting I want myself.</p> <p>Lazy as I am, I searched around and found a <a href="">gist</a> with an SCSS setup in it.
I modified that to be just CSS and added it to my <code class="highlighter-rouge">head.html</code> like so:</p> <div class="language-html highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt"><link</span> <span class="na">href=</span><span class="s">"/css/syntax.css"</span> <span class="na">rel=</span><span class="s">"stylesheet"</span><span class="nt">></span> </code></pre></div></div> <p>I also needed to add the syntax name that I am using in lowercase to get it all to work: so <code class="highlighter-rouge">powershell</code> instead of <code class="highlighter-rouge">PowerShell</code>.</p> <h3 id="cool-feature-of-github-pages">Cool feature of GitHub Pages</h3> <p>GitHub Pages is already cool by itself, but did you know they actually send you an e-mail if there is an issue with your setup that prevents the page build from working? <br /> <img src="/images/20190712.02/20190712_01.png" alt="E-mail error from GitHub with Page Build Warning" /></p> Using Azure CLI with PowerShell: error handling explained 2019-07-12T00:00:00+00:00 <p>I found myself searching the internet again on how to use the Azure CLI from PowerShell so that I can use it in Azure Pipelines to create new Azure resources. The reason I want to do this with PowerShell is twofold:</p> <ol> <li>Azure Pipelines has a task for using the Azure CLI, but this only has the options to use the command line (.cmd or .com files) or bash (.sh). I don’t like them that much, I want to use PowerShell (personal preference)!</li> <li>Running the Azure CLI from PowerShell has the issue that it was not created specifically for use with PowerShell. You’ll need to do some extra work.</li> </ol> <p>I’ve fixed it before, but it took a while to find it again.
That is why I am documenting it here, to save me some yak shaving in the future.</p> <p><img src="/images/20190712/Yak.jpg" alt="Yak to shave" /></p> <h2 id="the-issue-in-powershell">The issue in PowerShell</h2> <p>Running this Azure CLI command in PowerShell will result in an error, because storage accounts cannot have a dash or capitals in the name:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az storage account create -n <span class="nv">$StorageAccountName</span> -g <span class="nv">$ResourceGroup</span> -l <span class="nv">$location</span> --sku Standard_LRS </code></pre></div></div> <p>Result:<br /> <img src="/images/20190712/20190712_02_Error.png" alt="Error displayed in PowerShell" /></p> <p>Seems like an error, so what’s the issue then?<br /> Well, adding error handling like you’d expect from PowerShell will not work!</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">try</span> <span class="o">{</span> az storage account create -n <span class="nv">$StorageAccountName</span> -g <span class="nv">$ResourceGroup</span> -l <span class="nv">$location</span> --sku Standard_LRS <span class="nb">Write-Host</span> <span class="s2">"Just continues"</span> <span class="o">}</span> <span class="k">catch</span> <span class="o">{</span> <span class="nb">Write-Host</span> <span class="s2">"An error occurred!"</span> <span class="o">}</span> </code></pre></div></div> <p>You can see that PowerShell doesn’t notice the error and just continues:<br /> <img src="/images/20190712/20190712_03_ErrorHandling.png" alt="Error handling will not do anything with the error" /></p> <p>Even adding -ErrorAction will not work.</p> <h2 id="how-to-add-error-handling-yourself">How to add error handling yourself</h2> <p>The Azure CLI runs on JSON: it will try to give you JSON results on every call, so we can use that to see if we got any data back from the call.
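</p>

<p>The same guard can be sketched in plain shell; a stand-in command plays the failing <code class="highlighter-rouge">az</code> call here, since the point is only the empty-output check:</p>

```shell
# Stand-in for a failing `az ...` call: reading a file that does not
# exist produces no stdout, just like a failed CLI call produces no JSON.
output=$(cat /nonexistent-file 2>/dev/null)

if [ -z "$output" ]; then
  echo "Error creating storage account" >&2
  creation_failed=1
else
  creation_failed=0
fi
echo "creation_failed=$creation_failed"
```

<p>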
After converting the result, we can test to see if it is empty:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">if</span> <span class="o">(!</span><span class="nv">$output</span><span class="o">)</span> <span class="o">{</span> <span class="nb">Write-Error</span> <span class="s2">"Error creating storage account"</span> <span class="k">return</span> <span class="o">}</span> </code></pre></div></div> <p>Do remember to wrap <strong>every</strong> call you need to run with this setup, and return to prevent PowerShell from continuing with the next statement.</p> <p>Writing the error to the output helps with:</p> <ol> <li>Displaying the error correctly</li> <li>Blocking the release in Azure DevOps, which is where I needed this the most.</li> </ol> <p><img src="/images/20190712/20190712_04_ErrorHandlingCorrectly.png" alt="Error handled and shown correctly" /></p> <h3 id="shorthand">Shorthand</h3> <p>There is a shorthand version of this code that you can use if you don’t care about the information in the output (Thanks <a href="">Rasťo</a>!). You can use PowerShell’s <a href="">automatic variable</a> <code class="highlighter-rouge">$?</code> to just check if the result was successful or not:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">if</span> <span class="o">(!</span><span class="nv">$?</span><span class="o">)</span> <span class="o">{</span> <span class="nb">Write-Error</span> <span class="s2">"Error creating storage account"</span> <span class="k">return</span> <span class="o">}</span> </code></pre></div></div> <p>As stated in the documentation:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Contains the execution status of the last operation. It contains TRUE if the last operation succeeded and FALSE if it failed. </code></pre></div></div> <p>I am just not sure yet as to why this works.
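</p>

<p>As an aside, POSIX shells have a <code class="highlighter-rouge">$?</code> of their own, which holds the numeric exit code of the last command directly rather than a boolean:</p>

```shell
# In bash/sh, $? is the exit code of the previous command.
false
status_after_false=$?   # `false` always exits with 1

true
status_after_true=$?    # `true` always exits with 0

echo "after false: $status_after_false, after true: $status_after_true"
```

<p>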
I suspect <code class="highlighter-rouge">$?</code> is looking at the <code class="highlighter-rouge">$LastExitCode</code>, since after a failing call that variable will print out <code class="highlighter-rouge">1</code>.</p> <h3 id="why-am-i-using-the-azure-cli">Why am I using the Azure CLI?</h3> <p>After posting this, I got asked why I am using the CLI for this at all. Surely Azure PowerShell or ARM templates would be sufficient?</p> <p>Here is why:</p> <ol> <li>Azure PowerShell is not idempotent, so not so great to use in Azure Pipelines.</li> <li>The CLI is much terser than ARM, although it feels like you need to do a little more work linking resources together.</li> <li>Read <a href="">Pascal Naber</a>’s post: <a href="">Stop using ARM templates! Use the Azure CLI instead</a>.</li> <li>I am looking into doing this with <a href="">Terraform</a>, because the declaration is a lot shorter as well. It is a new paradigm to learn, though, and needs installation in your Azure Pipeline. The CLI was already available.</li> </ol> <h2 id="why-not-use-bash">Why not use bash?</h2> <p>The reasons for not using bash are:</p> <ol> <li>It will not work on a Windows Azure Pipelines agent (and that is what I am using here).</li> <li>You need to include <a href="">JQ</a> as a library to be able to parse the JSON. This seems like extra work to me.
I also find the JQ syntax not that straightforward.</li> </ol> <p>To be complete, here is a shell example that makes sure you are connected to the correct <a href="">Azure</a> subscription:</p> <div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Switch to the correct subscription</span> az account <span class="nb">set</span> <span class="nt">--subscription</span> <span class="k">${</span><span class="nv">SUBSCRIPTION_ID</span><span class="k">}</span> <span class="nv">output</span><span class="o">=</span><span class="k">$(</span>az account show | jq <span class="s1">'.'</span><span class="k">)</span> <span class="o">[[</span> <span class="nt">-z</span> <span class="s2">"</span><span class="nv">$output</span><span class="s2">"</span> <span class="o">]]</span> <span class="o">&&</span> <span class="nb">printf</span> <span class="s2">"</span><span class="k">${</span><span class="nv">FAILURE</span><span class="k">}</span><span class="s2">Error using subscriptionId, halting execution</span><span class="k">${</span><span class="nv">NEUTRAL</span><span class="k">}</span><span class="se">\n</span><span class="s2">"</span> <span class="o">&&</span> <span class="nb">exit </span>1 <span class="nv">subscriptionId</span><span class="o">=</span><span class="k">$(</span><span class="nb">echo</span> <span class="nv">$output</span> | jq <span class="nt">-r</span> <span class="s1">'.id'</span><span class="k">)</span> </code></pre></div></div> GDBC: Link overview 2019-07-07T00:00:00+00:00 <p>Last month we got the opportunity to organize the Global DevOps Bootcamp (<a href="">link</a>) and it was a blast!</p> <p><img src="/images/20190618/2019-06-18_01_GDBC_Logo.png" alt="GDBC Logo" /></p> <p>I wanted to create an overview of all blogposts that I could find about the event, so here it is.</p> <h1 id="links">Links</h1> <h2 id="pre-event-registration">Pre-event registration</h2> <p><a href="">Jasper Gilhuis</a> wrote down how
he handled the pre-event registration of venues and enabled them to register the attendees. Read about it <a href="">here</a>.</p> <h2 id="azure-learnings">Azure Learnings</h2> <p>A post by myself about all the stuff I learned while creating the automation to roll out all the resources we needed in Azure: <a href="">link</a>.</p> <h2 id="monitoring-the-event">Monitoring the event</h2> <p>Posted by <a href="">Michiel van Oudheusden</a> on the things he did for all the <a href="">monitoring</a> of the infrastructure for the event.</p> <h2 id="48-hours-running-the-global-event">48 hours running the global event</h2> <p>A post by myself about the day leading up to the event and the event day itself. We ran all the infrastructure we needed as a team. You can read it <a href="">here</a>.</p> <h2 id="application-insights">Application Insights</h2> <p><a href="">Michiel van Oudheusden</a> shares how he enabled Application Insights to track dependencies in our infrastructure <a href="">here</a>.</p> <h2 id="challenges">Challenges</h2> <p>The GDBC team has opened up the challenges website we used during the event itself, so everyone can go through the challenges (and behind-the-scenes videos) to keep on learning! Available under the Creative Commons (non-commercial) license on <a href="">gdbc-challenges.com</a></p> <h2 id="looking-back-at-the-event-day">Looking back at the event day</h2> <p>A lot of people blogged about their day at the event and how they found out about it at all; those posts are awesome to read.</p> <p>Here is an overview:</p> <p><img src="/images/20190707/20190707_01_DonovanBrown.jpeg" alt="Donovan Brown in Sweden for GDBC" /></p> <ul> <li><a href="">Thomas Rümmler</a> helped organize an event in Stuttgart, Germany and wrote about the experience <a href="">here</a>.</li> <li>In Stockholm, Sweden, <a href="">Antariksh Mistry</a> was at a venue organized by <a href="">Solidify</a>.
This was his first GDBC and you can read about his experience <a href="">here</a>.</li> <li>Some venues even created a video of their day! <a href="">This one</a> is from a venue in Bogotá, Colombia and <a href="">this one</a> is from Vancouver, Canada. This one is from <a href="">Quebec, Canada</a>.</li> <li><a href="">Dmitry Larionov</a> walked into a venue without really knowing anything about GDBC! Find out <a href="">here</a> what he thought of it 😄.</li> <li>In Vancouver, Canada, <a href="">Willy-Peter</a> wrote down their <a href="">feedback</a>.</li> <li><a href="">Hannupekka Sormunen</a> visited a venue in Helsinki, Finland and posted his whole diary <a href="">here</a>.</li> <li>In Zaragoza (Spain), <a href="">Veronica Rivas</a> helped to organize their second GDBC and wrote about it <a href="">here</a>.</li> <li>Read <a href="">here</a> about the event in Sofia, Bulgaria, which they organized for the third time (perfect score!).</li> <li>The organizers in Toronto, Canada blogged <a href="">here</a> about the event.</li> <li><a href="">David Gardiner</a> helped organize the third edition in Adelaide, Australia. Read about their day <a href="">here</a>.</li> </ul> <h2 id="behind-the-scenes">Behind the scenes</h2> <p>If you want to see what the team created to run the event, you can find all the behind-the-scenes videos here: <a href=""></a>.</p> GDBC: Azure learnings from running at scale 2019-06-23T00:00:00+00:00 <p>On the 15th of June we got the opportunity to organize the Global DevOps Bootcamp edition of 2019 (see <a href="">link</a>) and we had a blast!</p> <p><img src="/images/20190618/2019-06-18_01_GDBC_Logo.png" alt="GDBC Logo" /></p> <p>For the 2018 edition we created challenges for the attendees to set up their CI/CD pipelines to push a web application into Azure.
You can read up on the setup for that edition <a href="">here</a>.</p> <h2 id="next-level">Next level</h2> <p>Last year attendees still had to create the Azure resources themselves with the <a href="">Azure CLI</a>; this year we wanted to make it even easier: what if we set up <strong>everything</strong> for the teams?</p> <h2 id="team-setup">Team setup</h2> <p>So we would start them with a working web application running in Azure, including a database connection and Application Insights to boot. We also wanted to set them up with a complete <a href="">Azure DevOps</a> setup, so everything from a Git repository, CI/CD pipelines, working service connections and package management: the whole deal.</p> <h2 id="find-out-more">Find out more</h2> <p>If you want to know more about the whole setup, check out the YouTube playlist the team has made around the event <a href="">here</a>. I helped create the Azure DevOps scripts (together with <a href="">René</a>, <a href="">Jasper</a> and <a href="">Sofie</a>) and created the Azure setup myself. You can also look at this video explaining the Azure DevOps and Azure parts:</p> <iframe width="900" height="506" src="" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe> <p>If you really want to find out more, I would love to come talk about all of it at conferences or meetups! You can find my session <a href="">here</a>. I am sending this session in for conferences, either alone or together with <a href="">René</a>.</p> <h1 id="size">Size</h1> <p>We decided to cap the venue count this year at 100 venues. This would give us a limit on the amount of resources we needed to create, and we could calculate an indication of the costs of those resources. We got 4 sponsored Azure subscriptions from <a href="">Microsoft</a> with limited budgets on them.
From the tickets the venue organizers added in Eventbrite we could guess the scale we needed.</p> <p>In the end we created:</p> <ul> <li>1340 App Services,</li> <li>96 SQL servers,</li> <li>1340 SQL databases,</li> <li>1340 Azure DevOps Teams in 7 Azure DevOps organizations (each with a full setup)</li> <li>96 AAD Venue Team groups and users,</li> <li>1340 AAD Team groups and users,</li> <li>1340 AAD Team Service Principals</li> <li>1340 team role assignments in 1340 + 96 resource groups</li> </ul> <h1 id="learnings">Learnings</h1> <p>There is a <a href="">blogpost</a> about how to enable all the providers for a subscription. In our case I decided not to use that, and enabled the providers we needed by hand in the four subscriptions we had. The main reason behind this was that we needed to make all the venue organizers <strong>and</strong> the team accounts <code class="highlighter-rouge">Owner</code>.</p> <h2 id="azure-learning-cloud-resources-are-not-always-finite">Azure learning: Cloud resources are not always finite</h2> <p>I found out that Azure has limits on the amount of available resources you can create in each <a href="">region</a>.</p> <p>Imagine my surprise when I tested with creating SQL Servers in India:<br /> <img src="/images/20190623/20190623_01_SQL_Server_India.png" alt="SQL Servers not available in India" /></p> <p>We ended up using <a href="">Southeast Asia</a> instead.</p> <h2 id="azure-learning-portal-search-is-not-great">Azure learning: Portal search is not great</h2> <p>When you have a large Azure Subscription with a lot of resources in it, the search just is not that great. Searching with a wildcard at the beginning of a name isn’t supported, so searching for <code class="highlighter-rouge">%brisbane%</code> was not possible.
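</p>

<p>One workaround is to pull the resource names through the API and filter them client-side — a hedged sketch (the resource names and naming scheme here are hypothetical, not the real GDBC ones):</p>

```python
# The portal cannot search with a leading wildcard, but once the names are
# fetched through the API they can be filtered locally.
from fnmatch import fnmatchcase

def find_resources(names, pattern):
    """Case-insensitive wildcard match over a list of resource names."""
    return [n for n in names if fnmatchcase(n.lower(), pattern.lower())]

# Hypothetical resource names for illustration.
names = ["gdbc-brisbane-team-01", "gdbc-sydney-team-01", "rg-Brisbane-venue"]
print(find_resources(names, "*brisbane*"))
# ['gdbc-brisbane-team-01', 'rg-Brisbane-venue']
```

<p>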
That meant a lot of copy and pasting if I needed to find a team’s resources to check them for something.</p> <p><img src="/images/20190623/20190623_05_PortalSearch.png" alt="Azure Portal" /></p> <h2 id="azure-learning-azure-sql-server-limits">Azure learning: Azure SQL server limits</h2> <p>You probably know that Azure has quotas, for example on the <a href="">CPU cores</a> you can allocate. There is a support process around it where you can ask to increase those quotas.</p> <p>That reminded me to check the quotas for SQL Servers, Databases and App Service Plans that we needed to create. Luckily I did: there is a soft limit of 20 Azure SQL Servers per region in a subscription that can be increased with support tickets and a <a href="">hard limit</a> of 200 Azure SQL Servers <strong>per subscription</strong>! <br /> Each venue got its own <em>server</em>, since you cannot set ACLs on the database level. Note: we did create SQL Server user accounts for the databases as well, so we had different passwords per team.</p> <h2 id="azure-learning-application-insights-not-available-in-all-regions">Azure learning: Application Insights not available in all regions</h2> <p>You would assume Application Insights is available in all <code class="highlighter-rouge">standard</code> or <code class="highlighter-rouge">big regions</code>. This is not the case! So the advice is: always check! We got tipped beforehand by an observant venue organizer in the last week before the event and we moved those web applications to East US.</p> <h2 id="azure-learning-2000-role-assignments-per-subscription">Azure learning: 2000 role assignments per subscription</h2> <p>At first, I was making role assignments on a user level, based on the fact that I had that call working. There is a limit on how many assignments you can make per <strong>subscription</strong>! Read more about it in the <a href="">docs</a>.</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Azure supports up to 2000 role assignments per subscription.
</code></pre></div></div> <h2 id="azure-learning-portal-only-shows-the-first-2000-resources-in-any-pane">Azure learning: portal only shows the first 2000 resources in any pane</h2> <p>This sort of makes sense and the same limit is present in the REST API, but the portal ‘only’ shows 2000 resources in the listings. This is at least the case in the <code class="highlighter-rouge">All resources</code> view.</p> <p><img src="/images/20190623/20190623_04_AzurePortalLimits.png" alt="Azure Portal showing only 2000 resources" /></p> <h2 id="azure-learning-always-check-the-defaults-or-pick-your-own">Azure learning: always check the defaults or pick your own</h2> <p>At some point, <strong>the default changed</strong>!! Azure now picked a <code class="highlighter-rouge">Gen 5 with 4 vCores</code>.<br /> You can find out more information about pricing <a href="">here</a> and in <a href="">another post</a>.</p> <h2 id="azure-learning">Azure learning:</h2> <p>Re-running the Service Principal creation when it already existed <strong>would not</strong> work! I quickly added a check on it that would just delete the Application so the next run would recreate it.<br /> Not sure if this is caused by the <a href="">Fluent SDK</a> that I was using.</p> <div class="language-c# highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">servicePrincipal</span> <span class="p">=</span> <span class="k">await</span> <span class="n">AzureClientsFactory</span><span class="p">.</span><span class="n">AADManagementClient</span><span class="p">.</span><span class="n">ServicePrincipals</span> <span class="p">.</span><span class="nf">Define</span><span class="p">(</span><span class="n">spnName</span><span class="p">)</span> <span class="p">.</span><span class="nf">WithNewApplication</span><span class="p">(</span><span class="n">spnName</span><span class="p">)</span> <span class="p">.</span><span class="nf">DefinePasswordCredential</span><span class="p">(</span><span class="s">"ServicePrincipalPassword"</span><span class="p">)</span> <span class="p">.</span><span
class="nf">WithPasswordValue</span><span class="p">(</span><span class="n">spnPassword</span><span class="p">)</span> <span class="p">.</span><span class="nf">Attach</span><span class="p">()</span> <span class="p">.</span><span class="nf">CreateAsync</span><span class="p">().</span><span class="nf">ConfigureAwait</span><span class="p">(</span><span class="k">true</span><span class="p">);</span> </code></pre></div></div> <h1 id="azure-devops-learnings">Azure DevOps learnings</h1> <p>I encountered the Azure DevOps REST APIs again and, as in previous times, I am <strong>very</strong> grateful for all the experience and knowledge my colleagues <a href="">René</a>, <a href="">Jasper</a> and <a href="">Jesse</a> have with them! These APIs are <a href="">documented</a> but we seem to always need some exotic variant or a specific thing that is hard to find, for example setting default repository permissions on your Azure DevOps Organization, as documented by Jesse <a href="">here</a>.<br /> We knew that we needed to set up 1200+ team projects and the GDBC team has excellent contacts with the Azure DevOps product group, so we arranged 7 different Azure DevOps organizations.
This would enable us to:</p> <ul> <li>spread the load on the service; we hit some weird quotas and functionality issues last year,</li> <li>keep the latency low for the teams, since we could place them in the organization closest to them,</li> <li>run the CI/CD pipelines that would create the App Services and other resources as close as possible to the start of the event, so we could keep the costs low.</li> </ul> <h2 id="dossing-the-azure-devops-service">DOSsing the Azure DevOps service</h2> <p>Since we had 7 different organizations, <a href="">René</a> created a nice pipeline to start all the CI/CD pipelines after we created all the other setup (Git repos, Service connections, etc.).</p> <p><img src="/images/20190623/20190619_07_MassivePipeline.png" alt="Azure DevOps Full pipeline" /></p> <p>Getting the sponsoring from Microsoft also meant an increase in available <a href="">Microsoft Hosted Pipelines</a>.</p> <p><img src="/images/20190623/20190623_02_AzureDevOps_Pipelines.png" alt="Azure DevOps concurrent pipelines" /></p> <p>This would mean scheduling <strong>all 400 CI builds</strong> when we wanted, which would all kick off their CD release on completion. This eventually meant rapid scaling of the pipelines in the region the Azure DevOps organization was linked to. Two of these regions had some serious issues handling that load!</p> <p><img src="/images/20190623/20190623_03_AzureDevOpsOutage.png" alt="Azure DevOps outage in Brazil and Australia" /></p> <p>Eventually this was all sorted out by SREs in the Azure DevOps team, with great, direct support for us. I think they learned something about their own service in the end!</p> <h2 id="azure-devops-functionality-view-in-progress-jobs-bug">Azure DevOps functionality View in-progress jobs (bug)</h2> <p>Regarding the fly-out of the ‘View in-progress jobs’: there is a call to the backend to load all the running jobs.
The fly-out will only show <em>after</em> that call completes :grin:.</p> <p>Another small issue I have with this fly-out is that in more and more places, this fly-out can be closed by clicking on the area outside of it (for example in the build progress view). This fly-out hasn’t gotten this treatment yet.</p> Before you know, it is in production 2019-06-19T00:00:00+00:00 <p>When I am working on something, usually software, I know from experience that a simple tool to test something out (e.g. a POC, Proof of Concept) can be in production in no time. That is also when we start to ignore things:</p> <ul> <li>don’t write unit tests, it is only a POC;</li> <li>we don’t need to make this resilient, it is only to prove this will work;</li> <li>just name the project yyyyMMddd.TestProject.exe, we will never need to find it again, if this works;</li> <li>we don’t need to make this scalable yet, we’ll figure that out in the future.</li> </ul> <p>I am just as guilty of this, hence this post for later reference :smile::</p> <h2 id="guilty-as-charged">Guilty as charged!</h2> <p><img src="/images/20190619/20190619_08_TheCauseOfThis.png" alt="The Cause of this" /></p> <h1 id="case-in-point">Case in point</h1> <p>Recently I was a member of the core team for running the infrastructure for <a href="">Global DevOps Bootcamp</a>.<br /> <img src="/images/20190618/2019-06-18_01_GDBC_Logo.png" alt="GDBC Logo" /></p> <p>When we started, Jasper Gilhuis was trying to automate everything we needed to create the accounts and pages for the venues in Eventbrite (read more about his experience <a href="">here</a>) and he could not find an API for making the venue organizers a Co-admin in our Eventbrite event.</p> <p>We needed this because we wanted the venue organizers to have visibility into their registered attendees and be able to send them in-mails from the platform. First we created the global event and added the local venues as sub-events.
Then we wanted to add their accounts as co-admins. Do check out Jasper’s post for the whys behind this setup.</p> <h1 id="every-tool-is-a-hammer">Every tool is a hammer</h1> <p>I believe that every tool is a hammer: even if you know you shouldn’t do it this way, you know you can do it with your tool, and before you know it, you are hammering in a nail with a phone (check YouTube, it happens!).</p> <p><img src="/images/20190619/20190619_01_Every_tool_is_a_hammer.jpg" alt="Image of Adam Savage's book: every tool is a hammer" /></p> <h2 id="selenium-is-on-of-my-hammers">Selenium is one of my hammers</h2> <p>Seeing that the Eventbrite API was not able to do what we wanted, and that the flow in the website didn’t seem that hard, I made a new console application that used Selenium to click through the website. My first contribution to GDBC this year! As this was a tool to help with the Eventbrite automation, this ended up in the Eventbrite repository.</p> <p><img src="/images/20190619/20190619_02_FirstCommit.png" alt="First commit in GDBC repository" /></p> <h1 id="we-need-to-store-some-state">We need to store some state!</h1> <p>After a while we needed to store the state of the venue registrations: which venues were already mailed, which had checked into our Slack channel, etc. We tried to do this with a Table in an Azure Storage Account, but found out the hard way that you need to update the full document, otherwise you will only see the columns you just updated: the rest <em>will be gone</em>. So, having a lot of experience with SQL Server, we used the tool we knew would work: Azure SQL (another hammer!).
Switching to that immediately triggered me to start with Entity Framework to make the communication as easy for me as it could be (hammertime!).</p> <p><img src="/images/20190619/20190619_03_DBContext.png" alt="GDBC Db Context project added" /></p> <h1 id="we-need-to-show-the-venues-on-the-website">We need to show the venues on the website!</h1> <p>A team member asked to use the data we had to update the website and show the locations, registration URLs and the number of venues on our main <a href="">website</a>. Since this was already in the ConsoleApp, I added this as an extra startup action: run the application like this and it will update a blob storage container with the latest version of a data file that the website could then use to show the data 👇.</p> <div class="language-c# highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">ConsoleApp</span><span class="p">.</span><span class="n">exe</span> <span class="p">-</span><span class="n">exp</span> </code></pre></div></div> <p>Since I did not want to run this application by hand but scheduled every night, I found … another tool to do so!</p> <p><img src="/images/20190619/20190619_04_VenueUpdatePipeline.png" alt="Azure DevOps Venue Update Pipeline" /></p> <h1 id="provisioning-azure-resources">Provisioning Azure Resources</h1> <p>After a couple of weeks we needed to start rolling out Azure Infrastructure (for more information, check out this <a href="">YouTube video</a> where I explain what we did). All the resources we needed to create had a link back to the venue and teams that were in the database… And so I added just another startup action to the tool to start running the steps to provision everything in Azure, based on the information in the database.
Can you see where this is going???</p> <h1 id="startup-actions-for-when-you-want-to-run-the-tool-locally">Startup actions for when you want to run the tool locally</h1> <p>Initially this started out as a tool that Jasper or I could run locally, to see all the Eventbrite screens fly by (later headless); over time other actions were added.</p> <p>Today, this looks like this (what a mess!):<br /> <img src="/images/20190619/20190619_05_Actions.png" alt="Startup actions of the executable" /><br /> I wanted to have multiple options for the user to choose from and be able to run action 2 first and, if that was executed correctly, run action 3 next.</p> <h1 id="running-the-action-in-a-pipeline">Running the action in a pipeline</h1> <p>After I added more and more actions for the tool to do, I needed to make it run inside of a pipeline, read some parameters that were passed in and then close the application. For this I always fall back to the <a href="">Mono.Options NuGet package</a>.</p> <p>This is the way those settings look today:<br /> <img src="/images/20190619/20190619_06_Actions.png" alt="Mono actions of the executable" /></p> <p>So is this still the same application!? Unfortunately, it is… 😲.
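</p>

<p>The startup-action-plus-parameters pattern looks roughly like this — a sketch in Python’s argparse rather than the actual C# Mono.Options code, with hypothetical action names:</p>

```python
# Sketch of the startup-action pattern: one flag selects which action to run,
# so the same executable can be driven locally or from a pipeline.
import argparse

# Hypothetical action names; the real tool had many more startup actions.
ACTIONS = {
    "update-venues": lambda: "venues updated",
    "provision-azure": lambda: "azure provisioned",
    "export-website-data": lambda: "website data exported",
}

parser = argparse.ArgumentParser(description="GDBC provisioning tool (sketch)")
parser.add_argument("--action", choices=sorted(ACTIONS), required=True,
                    help="which startup action to run")
args = parser.parse_args(["--action", "provision-azure"])  # args come from the pipeline
result = ACTIONS[args.action]()
print(result)
```

<p>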
Every decision in this application was a combination of:</p> <ul> <li>YAGNI (You ain’t going to need it),</li> <li>This is just to run once,</li> <li>Let’s make this work quickly</li> </ul> <p>In the end, it got the job done, as it is included in every stage of our main provisioning pipeline:</p> <p><img src="/images/20190619/20190619_07_MassivePipeline.png" alt="Massive Azure Pipeline" /></p> <h1 id="conclusion">Conclusion</h1> <ol> <li>Even if it is just a tool to test: write the code and name things like it will end up in production.</li> <li>Every tool is a hammer and will be used as a hammer at some point.</li> </ol> <p><img src="/images/20190619/20190619_02_AdamSavageOneDayBuild.jpg" alt="Adam Savage One day build" /></p> <h3 id="check-out-the-video-from-this-one-day-build-on-youtube">Check out the video from this one day build on <a href="">YouTube</a></h3> GDBC: 48 hours in the life of a team member 2019-06-18T00:00:00+00:00 <p>Last weekend we got the opportunity to organize the Global DevOps Bootcamp (<a href="">link</a>) and it was a blast!</p> <p><img src="/images/20190618/2019-06-18_01_GDBC_Logo.png" alt="GDBC Logo" /></p> <p>Thanks to <a href="">René van Osnabrugge</a>, <a href="">Marcel de Vries</a> and <a href="">Mathias Olausson</a> for coming up with the idea to create GDBC and sticking with the team to get this idea off the ground!<br /> Without them and our sponsors (<a href="">Xpirit</a>, <a href="">Solidify</a>, <a href="">Microsoft</a>) we could not have started with the event!</p> <h1 id="team-work">Team work!</h1> <p>To set everything up we sent out a call to everyone who helped last year and also to their friends. In the end we had a team with around 15 members, each picking up tasks they could handle (or trying something new!).
Without all that countless effort from them we would not have been able to pull this off!</p> <h1 id="the-week-leading-up-to-gdbc">The week leading up to GDBC</h1> <p>During the last few months, we had a Monday call at 9:00 PM with the entire core team and of course René, Marcel and Mathias. In it, we discussed the progress and the challenges we were facing, and asked for help if needed. On and off there were more people involved (special mentions for <a href="">Sofie</a> and <a href="">Niels</a>, who supported us heavily in the last week!!!). Usually we spent half an hour to an hour keeping each other up to date.</p> <p><img src="/images/20190618/2019-06-18_02_CoreTeam.png" alt="GDBC Core Team" /></p> <p>We all do have day jobs to feed our families, so we actually worked as much as we could during the regular working days and then switched to GDBC when we could. For most of us that meant as soon as the kids or our partners were asleep. I’ve seen a lot of input, commits and feedback after midnight, so everyone was fully committed to this cause.</p> <p>We lived the event through <a href="">Slack</a>, where the whole team would communicate with everyone involved: from sponsors and core team to the local venue organizers.</p> <h1 id="wednesday-and-thursday">Wednesday and Thursday</h1> <p>Although we got a sponsorship from <a href="">Microsoft</a>, our funds were limited, so we started provisioning all the infrastructure we needed for the event on Wednesday.
I will dive into that setup later on, but you can see my explanation here:</p> <iframe width="900" height="506" src="" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe> <h2 id="azure-devops-pipelines">Azure DevOps Pipelines:</h2> <p><img src="/images/20190618/2019-06-18_40-pipelines.png" alt="Azure DevOps Pipelines" /></p> <h2 id="resources-for-azure-subscription-1">Resources for Azure Subscription 1:</h2> <p><img src="/images/20190618/2019-06-18_41_Azure.png" alt="Azure Resources for subscription 1" /></p> <p>This didn’t go as easily as we hoped; luckily we had anticipated throttling, availability issues of resources in the Azure regions we needed, etcetera, so we had plenty of time to work around them. I will blog about the lessons we learned here separately and update this post with a link to it. Fortunately we had four different Azure Subscriptions to use so we could spread the load!</p> <p>As most of our infrastructure became available, we enabled the venue organizers to pick one of the teams available for them and start testing our infrastructure end-to-end with it. We already did that with some specific demo venues, so we got some confidence that this would mostly work. We needed their feedback, especially because there is no better place to test than production!</p> <h1 id="thursday-call">Thursday call</h1> <p>On Thursday we had an extra call: around 24 hours before the event would start, so we could cross the t’s and dot the last i’s.
There were some last-minute challenges that some of us were facing, like the flaky connection that had issues with the way the Parts Unlimited website now used the connection strings, and a call to <a href="">Azure DevOps</a> that wasn’t tested with 1300 team projects :smile:.</p> <h2 id="firewall-rules-for-challenge">Firewall rules for challenge</h2> <p>To handle the firewall rules changes I spent the rest of the evening with <a href="">Jakob</a> testing and updating all the SQL servers that we had provisioned so that the challenge could work.<br /> <img src="/images/20190618/2019-06-18_03_FlakyConnections.png" alt="" /></p> <h2 id="more-teams-needed">More teams needed!</h2> <p>During the day an organizer checked their setup and asked us for more teams! They checked the participant registrations, did the calculations and were worried they would not have enough resources to place everybody in the group size they had in mind. The only other option was making the teams bigger. We had been discussing that we would communicate a stop on any changes, but decided to help this organizer out. We communicated the stop on changes a couple of minutes later to prevent us from scrambling with changes at the last minute.</p> <p><img src="/images/20190618/2019-06-18_04a.png" alt="Azure extra resource groups" /> <img src="/images/20190618/2019-06-18_04b.png" alt="Azure DevOps extra team projects" /></p> <h2 id="a-team-is-missing">A team is missing!</h2> <p>During the call that evening another organizer pinged us letting us know that he missed team-05 out of the 16 teams we provisioned for them. In Azure the resource group and the team users were created, but the team project was not available in Azure DevOps. As they also indicated they had sold out all the tickets, we needed to fix this. So I scheduled a new run for that venue as well.</p> <h2 id="if-it-hurts-do-it-more-often">If it hurts, do it more often</h2> <p>It is really true: if it hurts, do it more often.
Especially in a DevOps team: do the things that hurt more often and you’ll get good at them, and you have the chance to automate them! René already pressed me on this last year: we needed to have pipelines set up for as much as we could, so we wouldn’t be locked to a specific developer and their laptop to kick off updates. Since we were creating a lot of stuff, these runs would also take hours before they’d be finished, so being tied to one laptop wasn’t helpful. This meant that I had to set up pipelines for everything I did in Azure, otherwise René would not stop bothering me this year to do so, as he should! :kissing_heart:! He, <a href="">Jasper</a> and <a href="">Sofie</a> spent their time setting up a pipeline for Azure DevOps, so we had that part automated from front to back.</p> <p>That meant that I could run through these steps in <strong>15 minutes</strong>:</p> <ul> <li>Update the database with teams we needed to create or update</li> <li>Create everything for them in Azure</li> <li>Create everything for them in Azure DevOps</li> <li>Kick off their CI/CD pipelines, so the web shops would be deployed</li> <li>Update their App Services to set up the correct DNS entries and SSL Certificates</li> </ul> <h1 id="48-hours-running-the-infrastructure-for-the-event">48 hours: running the infrastructure for the event</h1> <p>After having a late night on Thursday I had trouble getting to sleep. The following morning I was up at 5:30 AM still rushing with excitement: this was the big day! Last year was phenomenal, hopefully this year would be as awesome as then! I checked the last results for the pipelines that were running to verify that every resource we needed was in place (all green!).</p> <p>My oldest son (9 years old) had a presentation to give at his school and he wanted to practice it one more time, so I spent the hours before school with him so he could do so. After that I drove to the office to check in with the team there, for the final checks.
I left at 4:00 PM to be home early, spent that time with the kids until they were asleep and went to bed at 9:15 PM to try to sleep for some hours. My alarm clock woke me at 10:45 PM: time to log in and join the Call Bridge we set up for it in Microsoft Teams!</p> <p>During the first checks my youngest son (4 years old) woke up and I tried to get him back to sleep. He needed a hug and a stuffed animal to keep him company.</p> <p><img src="/images/20190618/2019-06-18_00_SRE.png" alt="" /></p> <h2 id="11-pm-christchurch-new-zealand-starts-the-event">11 PM: Christchurch New Zealand starts the event!</h2> <p>There was a venue in Christchurch that would start the event officially and we decided to be online for them to see if everything was running as expected and offer help if needed. They had to go through the keynote presentations first, select the teams and then start logging in to the challenges website with the account we created. The team that decided to be on standby was me, <a href="">René</a> and <a href="">Michiel</a>.</p> <h3 id="btw-the-challenges-has-been-open-sources-under-the-creative-commons---non-commercial-license-here">Btw: the challenges have been open sourced under the Creative Commons - Non Commercial license <a href="">here</a>.</h3> <p>We already pinged them to let them know we were available and asked them to keep us up to date on how the first hours went.</p> <p><img src="/images/20190618/2019-06-18_05_Christchurch.png" alt="Christchurch attendees were arriving" /></p> <p>The keynote was finished at 12:00 AM and we watched the infrastructure with hawk eyes! And … everything worked! Crazy thing was that some organizers in Mexico and Redmond were actively testing with one of their teams, so they were on the scoreboard!
Luckily <a href="">Geert</a>, <a href="">Niels</a> and <a href="">Chris</a> left us documentation in our wiki on how to remove them.</p> <p><img src="/images/20190618/2019-06-18_06_StartEvent.png" alt="Christchurch communication" /></p> <p>The teams in Australia were running by then and everything still looked great on our end. Besides looking for issues there wasn’t really anything to do but check the social media channels and respond there as well (we got sponsoring from <a href="">Walls_io</a> to host a tweet wall <a href="">here</a>).</p> <h2 id="200-am-crashing-team-members">2:00 AM Crashing team members!</h2> <p>At 2:00 AM both René and Michiel really needed to go to bed and sleep to be worth anything in the morning. I was still rushing with excitement and stayed online in case anything came up. I managed to do that until 4:00 AM: <br /> <img src="/images/20190618/2019-06-18_07_LastCall.jpg" alt="Last call" /></p> <p>So I messaged that to the venue organizers: <br /> <img src="/images/20190618/2019-06-18_06_SleepyTime.png" alt="Sleepy time" /></p> <p>Gave a last status update to the team: <img src="/images/20190618/2019-06-18_06_StatusUpdate.png" alt="Status update" /><br /> And went to bed, to find my youngest son lying on my side of the bed! Seems like he needed more than just a hug and a stuffed animal, and my wife had laid him in our bed.</p> <h1 id="20190615-global-devops-bootcamp-cest">2019/06/15 Global DevOps Bootcamp (CEST)</h1> <p>I actually managed to sleep until 6:30 AM, even with the youngest kid taking up all the available space in bed (my side only :grinning:!). Quick shower and off to the office:<br /> <img src="/images/20190618/2019-06-18_08_FirstCall.jpg" alt="Brushing my teeth" /></p> <p>In Hilversum we planned to set up the Command Center / War Room for the event. Everything was planned around this because we knew from last year that West Europe and the Americas would take up the most resources and chatter to handle.
Luckily for us, <a href="">Jasper</a> had taken it upon himself to host our own venue there. As usual, our hospitality team has the perfect setup to host a venue like this.</p> <p>At the office, we have a <a href="">Ripple maker</a> to print on your cappuccinos and <a href="">Jesse</a> needed a laptop to upload new GDBC images for it, so I helped him out with them:<br /> <img src="/images/20190618/2019-06-18_09_WakingUp.jpg" alt="Coffee first" /></p> <p>We had 10 colleagues from Xpirit in the office to help during the day with receiving guests, proctoring the teams and running the venue in Hilversum.</p> <h2 id="930-am-started-the-day-with-attendees-arriving">9:30 AM Started the day with attendees arriving</h2> <p><img src="/images/20190618/2019-06-18_10_Keynote.jpg" alt="Arriving attendees in Hilversum" /></p> <h2 id="1000-am-keynote-started">10:00 AM Keynote started</h2> <p>We started the keynote in Hilversum! After a welcome note from René, Marcel and Mathias it was time to learn something about DevOps and SRE from <a href="">Niall Murphy</a>, Director of Engineering for Azure Cloud Services and Site Reliability Engineering.<br /> <img src="/images/20190618/2019-06-18_11_Keynote.jpg" alt="Keynote started" /></p> <p>We later found out his reaction to speaking in front of 10.000 attendees:<br /> <img src="/images/20190618/2019-06-18_15.png" alt="Response Niall Murphy: Wow."
/></p> <h2 id="we-kept-monitoring-during-the-keynote">We kept monitoring during the keynote:</h2> <p><img src="/images/20190618/2019-06-18_12.jpg" alt="" /></p> <h5 id="image-courtesy-by-jesse-houwing-find-more-here">Image courtesy of Jesse Houwing, find more <a href="">here</a></h5> <h2 id="and-during-the-rest-of-the-day">And during the rest of the day</h2> <p><img src="/images/20190618/2019-06-18_13.jpg" alt="" /></p> <h5 id="image-courtesy-by-jesse-houwing-find-more-here-1">Image courtesy of Jesse Houwing, find more <a href="">here</a></h5> <p><img src="/images/20190618/2019-06-18_14.jpg" alt="" /></p> <h5 id="image-courtesy-by-jesse-houwing-find-more-here-2">Image courtesy of Jesse Houwing, find more <a href="">here</a></h5> <h2 id="closing-off-in-hilversum-with-part-of-the-team">Closing off in Hilversum with part of the team</h2> <p>At 4:00 PM we closed the venue in Hilversum and sent our own attendees home.<br /> Some of us decided to stay at the office for a while and help the rest of the teams in the Americas that were still busy with their day. We needed to stay around our laptops and decided to get some pizza delivered instead of going to a restaurant. After the pizza we all drove home to check our feeds from there. <br /> <img src="/images/20190618/2019-06-18_28_PizzaTime.jpg" alt="Pizza time" /></p> <h2 id="920-pm-closing-off">9:20 PM Closing off</h2> <p>I stayed online for a while at home until I could no longer. <a href="">Niels</a> volunteered to shut down the lights and man the fort until the last venue was done. My hero! After 7 hours of sleep in 48 hours I really needed to get some rest!!!<br /> <img src="/images/20190618/2019-06-18_29_LateNight.jpg" alt="Closing off" /></p> SonarQube setup on Azure App Service 2019-04-29T00:00:00+00:00 <p>As noted in a <a href="">previous post</a>, you can host a <a href="">SonarQube</a> installation on an Azure App Service, thanks to <a href="">Nathan Vanderby</a>, a Premier Field Engineer from Microsoft.
He created an ARM template to run the SonarQube installation behind an Azure App Service with a Java host. This saves you a lot of the steps mentioned above! You can find the scripts for it on <a href="">GitHub</a> or deploy it from the big <code class="highlighter-rouge">deploy on Azure</code> button.</p> <p><img src="/images/20190429/vikram-sundaramoorthy-1351879-unsplash.jpg" alt="" /> Photo by Vikram Sundaramoorthy</p> <p>Several benefits you get from hosting SonarQube on an App Service:</p> <ul> <li>You don’t have to manage a Virtual Machine anymore.</li> <li>Setting up SSL becomes easier, thanks to the Azure App Service SSL being simpler (and free if you run on <code class="highlighter-rouge">*.azurewebsites.net</code>).</li> </ul> <h1 id="set-up">Set up</h1> <p>After creating the basic SonarQube App Service from GitHub or the ARM template, you need to create a new SQL database. Do note that you need to set the database to the correct collation for it to work: <code class="highlighter-rouge">SQL_Latin1_General_CP1_CS_AS</code>.</p> <h2 id="creating-a-user">Creating a user</h2> <p>There are two options to create a new user in the database. You can create one like you would do in a full MSSQL server installation, by using the <code class="highlighter-rouge">master</code> database. Running this way on an Azure SQL DB has a couple of downsides:</p> <ol> <li>You now have a dependency on the master database. Every new connection will have to do a lookup in the master database and then route you through to the database you want to connect to.
This makes the master database a potential bottleneck if you are running a lot of connections or a lot of databases on the server.</li> <li>Moving the database to another server takes more configuration, since the user configuration is not inside the database (see option two), but in the server itself.</li> </ol> <h3 id="creating-a-contained-user">Creating a contained user</h3> <p>A contained user is a user account created <strong>inside the database itself</strong>, making it easier to move the database if needed. Run this statement in a query editor connected to your database. Of course, you can make the user a <code class="highlighter-rouge">data_reader</code>, <code class="highlighter-rouge">data_writer</code> or something else. Since the application using this connection also needs to create the tables, stored procedures, etc., I gave it DBO rights in this database.</p> <pre><code class="language-SQL">CREATE USER [MyUser] WITH PASSWORD = 'Secret'; EXEC SP_ADDROLEMEMBER N'db_owner', N'MyUser' </code></pre> <h3 id="creating-a-regular-mssql-user">Creating a regular MSSQL user</h3> <p>If you do not want the user contained in the database (see above), you can create a regular MSSQL user by running this on the <code class="highlighter-rouge">Master</code> Database:</p> <pre><code class="language-SQL">CREATE LOGIN SonarQubeUI WITH password='<make a new secure password here>'; </code></pre> <p>Then create a user from the new login by running this statement <code class="highlighter-rouge">on the new database</code> (you cannot use a <code class="highlighter-rouge">use database_name</code> statement in Azure SQL database, so you need to switch to it in the UI of your query editor):</p> <pre><code class="language-SQL">CREATE USER SonarQubeUI FROM LOGIN SonarQubeUI </code></pre> <h1 id="settings">Settings</h1> <p>Copy these settings from the previous steps for later use:</p> <ul> <li>SQL Server address (<code 
class="highlighter-rouge">mysqlserver.database.windows.net</code>)</li> <li>DatabaseName</li> <li>MSSQL Username and password</li> </ul> <h2 id="app-service-update">App Service update</h2> <p>In the <a href="https://portal.azure.com">Azure Portal</a>, find your new App Service and navigate to Advanced Tools –> Kudu <br /> <img src="/images/20190429/2019-04-29-01_AdvancedTools.png" alt="Advanced tools" /></p> <p>Open a debug console:<br /> <img src="/images/20190429/2019-04-29-02-Kudu%20Services.png" alt="" /></p> <p>Navigate to the configuration folder of your SonarQube installation and open the file <code class="highlighter-rouge">sonar.properties</code>:</p> <pre><code class="language-dos">D:\home\site\wwwroot\sonarqube-7.7\conf> </code></pre> <p>Update these properties:</p> <pre><code class="language-dos">sonar.jdbc.username=SonarQubeUI sonar.jdbc.password=<your sql server user password> sonar.jdbc.url=jdbc:sqlserver://mysqlserver.database.windows.net:1433;databaseName=SonarQubeDb </code></pre> <p>Note the port number and the <code class="highlighter-rouge">databaseName</code> property instead of <code class="highlighter-rouge">database</code> (this was changed in SonarQube > 5.3). Also note that the database name you enter is CASE-SENSITIVE! SonarQube runs on Java, hence the sensitivity.</p> <p>After setting these properties, 
Stop and Start the App Service again: SonarQube only reads this file at startup and then initializes the new database.</p> <h1 id="set-an-admin-password">Set an admin password!</h1> <p>By default, the admin login is admin/admin, so you want to change that ASAP (after setting up the database).</p> <h1 id="errors">Errors?</h1> <p>If the SonarQube server doesn’t load after the changes, you can find the errors in these files in the SonarQube folder:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/logs/sonar.log /logs/web.log </code></pre></div></div> <p>Remember to restart the App Service after changing anything in the config!</p> <h1 id="azure-active-directory">Azure Active Directory</h1> <p>Setting up authentication for the users with Azure Active Directory is very easy, thanks to the work of the <a href="">ALM Rangers</a>. Follow the setup <a href="">here</a>.</p> <p>Do note the Server base URL setting in SonarQube; I missed it the first time.<br /> By default, this is empty!</p> <p>Go to <code class="highlighter-rouge">Administration</code>, <code class="highlighter-rouge">Configuration</code>, <code class="highlighter-rouge">General</code> –> <code class="highlighter-rouge">Server base URL</code> to set the URL to match the URL of the App Service.</p> <h1 id="blocking-anonymous-users">Blocking anonymous users</h1> <p>If you don’t want to show the state of your projects to the entire world, don’t forget to change the default setting for this. Since the free edition is available for open source projects, this setting is <code class="highlighter-rouge">on</code> by default. You can find it under Administration:</p> <p><img src="/images/20190429/2019-05-04SecureSonarQubeServer.png" alt="" /></p> Webservice Plan Scaling in Azure Machine Learning Studio 2019-02-25T00:00:00+00:00 <p>I recently found that I had a web service plan running for my Machine Learning Studio (MLS) workspace in Azure. 
I was hosting some test web services on it from an earlier research session. The web service plan was not doing anything for me, but I did incur some costs running it. Since the default tier it picks during creation is already an S1, the costs can build up if you are paying for the subscription yourself.</p> <p><img src="/images/20190225/hero-photo-1508962061361-bcb4d4c477f8.jpg" alt="" /></p> <h5 id="photo-by-agê-barros">Photo by <a href="">Agê Barros</a></h5> <p>Finding that web service plan started with a look at the resource group the MLS workspace was created in.</p> <p>The Azure Resource Group looks like this: <img src="/images/20190225/01-ResourceGroup.png" alt="" /></p> <p>You can see the plan, but when you select it you have no extra options. No insight into the cost and no option to scale.</p> <p>To do any of this, you need to look inside the workspace in MLS. To find the web services you deployed, open <code class="highlighter-rouge">Web Services</code>:</p> <p><img src="/images/20190225/02-AzMLS.png" alt="" /></p> <p>Here, you can only see the actual deployed services from the experiments you made (I don’t have any running here). 
You still cannot see the web service plan!</p> <h2 id="the-trick">The Trick</h2> <p>To actually find the web service plan and act on it, you need to go to a different interface for Azure ML:<br /> Go to: <a href=""></a> to manage your deployed services, then go to “Plans” and there you can administer your plans.</p> <p><img src="/images/20190225/03-MLS-Plans.png" alt="" /></p> <p>Select the plan and click on <code class="highlighter-rouge">Upgrade/Downgrade plan</code>:<br /> Now, you can actually scale the plan to your needs.<br /> <img src="/images/20190225/04-MLS-Scale.png" alt="" /></p> <p>If you aren’t using the service that much, there is even a Free plan!</p> Fixing Azure Function Error RunResolvePublishAssemblies 2019-02-21T00:00:00+00:00 <p>I ran into an issue with a new Azure Function I created: when I tried to run it, I got an error message about a <code class="highlighter-rouge">RunResolvePublishAssemblies</code> build target.</p> <h1 id="the-target-runresolvepublishassemblies-does-not-exist-in-the-project">The target “RunResolvePublishAssemblies” does not exist in the project</h1> <p><img src="/images/20190221/2019_02_21_01_PowerShell_Example.png" alt="" /></p> <p>Digging around the internet did not give an indication of where to look. Most examples pointed to years-old <a href="">issues</a> that linked this message to dotnet core version 1.0. 
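</p> <p>As a quick aside (illustrative commands, assuming the dotnet CLI is on your path), you can check which SDKs are installed and which version a project folder resolves to:</p> <pre><code class="language-dos">dotnet --list-sdks
dotnet --version
</code></pre> <p>The second command honors a <code class="highlighter-rouge">global.json</code> when one is present in or above the current folder.</p> <p>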
I am running a preview version of 3.0, so that could be the issue.</p> <p>Creating another Function with Visual Studio had the same result: the error occurred there as well.</p> <h1 id="finding-the-issue">Finding the issue</h1> <p>Eventually I grabbed another working project with Azure Functions that we had been running for a couple of months and went through all its settings to figure out what the issue could be.</p> <p>Checked items:</p> <ul> <li>The project SDK had been set correctly to <code class="highlighter-rouge">Microsoft.NET.Sdk</code></li> <li>The <code class="highlighter-rouge">host.json</code> pointed to a correct function host version (<code class="highlighter-rouge">2.0</code>)</li> </ul> <p>Eventually I found the culprit! Because no <code class="highlighter-rouge">global.json</code> was present, the dotnet core SDK version wasn’t pinned, so the build used a version that doesn’t work.</p> <h1 id="the-fix">The fix</h1> <p>Add a <code class="highlighter-rouge">global.json</code> file to the root of the project folder with this content, pinning the SDK version to one that works for Azure Functions, and build the project again.</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w"> </span><span class="s2">"sdk"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="s2">"version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2.1.502"</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="p">}</span><span class="w"> </span></code></pre></div></div> <p><img src="/images/20190221/2019_02_21_01_PowerShell_Working.png" alt="" /></p> Setting up Azure Monitor to trigger your Azure Function 2019-01-21T00:00:00+00:00 <p>I wanted to trigger an Azure Function based on changes in the Azure Subscription(s) we were monitoring. 
The incoming data can then be used for interesting things: keeping track of who does what, seeing new resources being deployed or old ones being deleted, etc. Back when I started working on this, there was no <a href="">Event Grid</a> option to use in Azure Functions, so I started with linking it to <a href="">Azure Monitor</a> events. I haven’t checked the current options, so I cannot compare them yet.</p> <p>In this blog I want to show how you can do this, both by using the Azure Portal and the <a href="">Azure CLI</a>.</p> <h2 id="architecture">Architecture</h2> <p>To get <a href="">Azure Monitor</a> to send in the changes that we need to see, I use this architecture:<br /> <img src="/images/2019_01_21_Azure_Monitor_Architecture.png" alt="" /></p> <h2 id="steps">Steps</h2> <p>To configure Azure Monitor to send all activities into an EventHub and then into our Function, you’ll need to execute several steps.</p> <ol> <li>Create a new event hub</li> <li>Configure the Activity Monitor to send the changes into the event hub</li> <li>Configure the Event Hub to send the messages to the changes function</li> </ol> <h1 id="use-the-azure-portal">Use the Azure Portal</h1> <p>How to do this manually via the Azure Portal is described below.</p> <h3 id="create-a-new-event-hub-to-send-the-activity-log-to">Create a new event hub to send the Activity Log to:</h3> <p><img src="/images/2019_01_21_Azure_Monitor_CreateEventHub.png" alt="" /><br /> The default setting of 1 throughput unit is enough for this setup, as mentioned in Microsoft’s documentation.</p> <h3 id="create-the-export-of-the-activity-log">Create the export of the Activity Log</h3> <p>Go to Activity Log and hit the <code class="highlighter-rouge">export</code> button:</p> <p><img src="/images/2019_01_21_Azure_Monitor_Activity_log_Configuration.png" alt="" /></p> <p><strong>PICK ALL REGIONS!!</strong> Most activities we want to see are GLOBAL and those would be missed otherwise.</p> <p><img 
src="/images/2019_01_21_Azure_Monitor_LinkEventHubToActivityLogExport.png" alt="" /></p> <h2 id="configure-the-event-hub-to-send-the-messages-to-the-azure-function">Configure the event hub to send the messages to the Azure Function</h2> <p>Choose the EventHub entity you picked for the Azure Monitor to export the activities to:<br /> <img src="/images/2019_01_21_Azure_Monitor_EventHubChooseHub.png" alt="" /></p> <p>You can add a consumer group to it, named <code class="highlighter-rouge">$Default</code> by default.</p> <h3 id="get-the-access-policy">Get the Access Policy</h3> <p>Go to <code class="highlighter-rouge">Shared Access Policies</code> and create a policy. We only need the <code class="highlighter-rouge">Listen</code> right so we can listen to incoming events.<br /> <img src="/images/2019_01_21_Azure_Monitor_EventHubAccessPolicy.png" alt="" /></p> <p>Copy either one of the <code class="highlighter-rouge">Connection strings</code> and configure the Azure Function host with it:</p> <p><img src="/images/2019_01_21_Azure_Monitor_AzureFunctionEventHub.png" alt="" /></p> <p>No need to restart the function app: the platform does that for you.</p> <h1 id="use-the-azure-cli">Use the Azure CLI</h1> <p>How to do this via the Azure CLI is described below.</p> <h2 id="step-1-create-an-event-hub">Step 1: Create an Event Hub</h2> <p>First we need to check if there already is an Event Hub and if we can use it. 
Use these commands to list the available namespaces in the current subscription:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az login <span class="c1"># login with an account that has the correct access rights to deploy resources on subscription level </span> az eventhubs namespace list </code></pre></div></div> <p>To search for the namespaces in a specific resource group and subscription, you can also add those parameters:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$resourceGroup</span> <span class="o">=</span> <span class="s2">"<resourceGroupName>"</span> <span class="nv">$subscription</span> <span class="o">=</span> <span class="s2">"<subscriptionId>"</span> az eventhubs namespace list --resource-group <span class="nv">$resourceGroup</span> --subscription <span class="nv">$subscription</span> </code></pre></div></div> <p>If there are no namespaces, you’ll get back <code class="highlighter-rouge">[]</code>, an empty JSON array.</p> <p>If there is an existing namespace, you can use that to create the Log Profile. Of course, you’ll need to check how this namespace is being used and where it sends the messages!</p> <h3 id="create-a-new-namespace-with-the-cli">Create a new namespace with the CLI</h3> <p>Use this command to create a new namespace:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az eventhubs namespace create --resource-group <span class="nv">$resourceGroup</span> --subscription <span class="nv">$subscription</span> --name EventHubNameSpaceCLI </code></pre></div></div> <p>This will take a couple of minutes to complete and will return the newly created namespace. 
You can save the JSON response in an object with this command:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$namespace</span> <span class="o">=</span> az eventhubs namespace show --resource-group <span class="nv">$resourceGroup</span> --subscription <span class="nv">$subscription</span> --name EventHubNameSpaceCLI | <span class="nb">ConvertFrom-JSON</span> </code></pre></div></div> <h3 id="create-a-new-eventhub-in-the-namespace">Create a new EventHub in the namespace</h3> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$eventhub</span> <span class="o">=</span> az eventhubs eventhub create --subscription <span class="nv">$subscription</span> --resource-group <span class="nv">$resourceGroup</span> --namespace-name <span class="nv">$namespace</span>.Name --name EventHubCLI | <span class="nb">ConvertFrom-Json</span> </code></pre></div></div> <h3 id="create-a-new-authorization-rule-in-the-eventhub">Create a new authorization rule in the EventHub</h3> <p>To be able to connect to the new EventHub, we need an authorization rule. 
That rule’s Id will be used below when creating the log profile.</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ruleName</span> <span class="o">=</span> <span class="s2">"<authorization rule name>"</span> <span class="nv">$rule</span> <span class="o">=</span> az eventhubs eventhub authorization-rule create --resource-group <span class="nv">$resourceGroup</span> --namespace-name <span class="nv">$namespace</span>.Name --eventhub-name <span class="nv">$eventhub</span>.Name --subscription <span class="nv">$subscription</span> --name <span class="nv">$ruleName</span> --rights Listen | <span class="nb">ConvertFrom-Json</span> <span class="c1"># The Id of the rule that you need:</span> <span class="nv">$rule</span>.Id </code></pre></div></div> <h2 id="step-2-create-an-azure-monitor-log-profile">Step 2: Create an Azure Monitor Log Profile</h2> <h3 id="reconnaissance">Reconnaissance</h3> <p>First, connect the CLI and check if there already is a profile available:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az login <span class="c1"># login with an account that has the correct access rights to deploy resources on subscription level </span> az monitor log-profiles list <span class="c1"># list the current log-profiles</span> </code></pre></div></div> <p>Note: this will run against the currently selected (or default) subscription. You could get one of two results:</p> <ol> <li>No profile set for this subscription: <code class="highlighter-rouge">[]</code></li> <li>There already is a profile for this subscription:<br /> <img src="/images/2019_01_21_Azure_Monitor_AzureMonitorExportProfile.png" alt="" /></li> </ol> <p>If there already is a profile, carefully read through the results to see if it contains everything we need.<br /> <em>Note</em>: there can only be one profile per subscription. 
If there is only one subscription with a profile and that is set to export to the correct EventHub, that is fine. From top to bottom, this is the information you see in the image above:</p> <ul> <li>Profile categories: We need all of these, so if necessary, add the missing categories.</li> <li>The SubscriptionId for this profile</li> <li>The locations for which this profile will export activities. We need at least Global and the regions that your resources have been deployed to.</li> <li>The ServiceBusRuleId will indicate the namespace and authorization rule the activities will be sent to. The name of this setting comes from the fact that EventHubs are built on top of Azure Service Bus. We need this to be set correctly to send the activities to the EventHub that the function will listen to.</li> <li>The StorageAccount indicates whether this profile also exports the activities to a Storage Account. This can be used for accountability reporting, for instance: it contains all the information to find out which account executed which actions within the Azure Subscription.</li> </ul> <h2 id="existing-profile">Existing profile</h2> <p>If there already is an export configured to a Storage Account (but no ServiceBusRuleId), check the locations and categories; if those are sufficient, you can update the profile to also have a ServiceBusRuleId set to send the data into the EventHub you need.</p> <p>If the locations or categories are <strong>not</strong> sufficient, you need to check the Storage Account that is being used to see if it hurts to send more information there. This depends on the setup that is used on top of that information.</p> <h3 id="cannot-update-the-existing-profile">Cannot update the existing profile?</h3> <p>If changing this profile is an issue, you are toast. 
You can create a new profile on a different subscription, but all activities on the default subscription will only be sent to the profile on that subscription!</p> <h2 id="no-profile">No profile</h2> <p>If there is no profile set up, you can create a new profile with the necessary settings like this:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az monitor log-profiles create --name <span class="s2">"default"</span> --location null --locations <span class="s2">"global"</span> <span class="s2">"eastus"</span> <span class="s2">"westus"</span> <span class="s2">"westeurope"</span> <span class="s2">"northeurope"</span> --categories <span class="s2">"Delete"</span> <span class="s2">"Write"</span> <span class="s2">"Action"</span> --enabled <span class="nb">true</span> --days 1 --service-bus-rule-id <span class="s2">"/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.EventHub/namespaces/<nameSpaceName>/authorizationrules/RootManageSharedAccessKey"</span> </code></pre></div></div> <p>Important parameters for us:</p> <ul> <li>Locations: the locations you want to send the activities for: always include <code class="highlighter-rouge">global</code></li> <li>Categories: what are the activity categories you want to receive?</li> <li>Service-bus-rule-id: the EventHub rule to send the activities to.</li> </ul> <h3 id="finding-the-service-bus-rule">Finding the service bus rule</h3> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az login <span class="c1"># login to the account</span> az account <span class="nb">set</span> --subscription <span class="s2">"<SubscriptionId you want to use>"</span> <span class="c1"># switch to the correct subscription</span> <span class="nv">$resourceGroup</span> <span class="o">=</span> <span class="s2">"<resource group name>"</span> <span class="c1"># name of the resourcegroup the EventHub is in</span> az 
eventhubs namespace list -g <span class="nv">$resourceGroup</span> <span class="c1"># list all the eventhub namespaces in the given resourcegroup</span> </code></pre></div></div> <p>Find the name of the namespace you want to use and use that in the next set of commands:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$NameSpace</span> <span class="o">=</span> az eventhubs namespace show -g <span class="nv">$resourceGroup</span> --name <span class="s2">"<name of the namespace>"</span> | <span class="nb">ConvertFrom-Json </span>az eventhubs eventhub list -g <span class="nv">$resourceGroup</span> --namespace-name <span class="nv">$NameSpace</span>.Name </code></pre></div></div> <p>Find the name of the event hub in the list and use that in the next set of commands:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$EventHub</span> <span class="o">=</span> az eventhubs eventhub show -g <span class="nv">$resourceGroup</span> --namespace-name <span class="nv">$NameSpace</span>.Name --name <span class="s2">"<name of the eventhub>"</span> | <span class="nb">ConvertFrom-Json</span> <span class="c1"># find all authorization rules</span> az eventhubs eventhub authorization-rule list --resource-group <span class="nv">$resourceGroup</span> --namespace-name <span class="nv">$NameSpace</span>.Name --eventhub-name <span class="nv">$EventHub</span>.Name <span class="c1"># search for the rule with the name you want</span> <span class="nv">$ruleName</span> <span class="o">=</span> <span class="s2">"<authorization rule 
name>"</span> <span class="nv">$rule</span> <span class="o">=</span> az eventhubs eventhub authorization-rule show --resource-group <span class="nv">$resourceGroup</span> --namespace-name <span class="nv">$NameSpace</span>.Name --eventhub-name <span class="nv">$EventHub</span>.Name --name <span class="nv">$ruleName</span> | <span class="nb">ConvertFrom-Json</span> </code></pre></div></div> <h2 id="linking-the-eventhub-connection-to-the-azure-function">Linking the EventHub connection to the Azure Function</h2> <p>The Azure Function needs to be told about the connection to the EventHub that it needs to listen on. For that, you need the connection string of the authorization rule that we created in the previous section.</p> <p>You can retrieve that connection string from the rule’s keys:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># The connection string of the rule that you need:</span> <span class="nv">$keys</span> <span class="o">=</span> az eventhubs eventhub authorization-rule keys list --resource-group <span class="nv">$resourceGroup</span> --namespace-name <span class="nv">$NameSpace</span>.Name --eventhub-name <span class="nv">$EventHub</span>.Name --name <span class="nv">$ruleName</span> | <span class="nb">ConvertFrom-Json</span> <span class="nv">$keys</span>.primaryConnectionString </code></pre></div></div> <p>And then you can set that into the corresponding app setting for it:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az functionapp config appsettings <span class="nb">set</span> --resource-group <span class="nv">$resourceGroup</span> --name <span class="nv">$functionapp</span> --settings <span class="nv">EventHubConnectionAppSetting</span><span class="o">=</span><span class="nv">$keys</span>.primaryConnectionString </code></pre></div></div> <p>The name of the setting we are changing here (<code class="highlighter-rouge">EventHubConnectionAppSetting</code>) needs to match the name of the Connection you gave the parameter on the function:</p> <div class="language-c# highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">[FunctionName("EventHubActivitiesFunction")]</span> <span class="k">public</span> <span class="k">static</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">Run</span><span 
class="p">(</span> <span class="p">[</span><span class="nf">EventHubTrigger</span><span class="p">(</span><span class="s">"insights-operational-Logs"</span><span class="p">,</span> <span class="n">Connection</span> <span class="p">=</span> <span class="s">"EventHubConnectionAppSetting"</span><span class="p">)]</span> <span class="n">EventData</span> <span class="n">eventHubMessage</span><span class="p">,</span> <span class="n">ILogger</span> <span class="n">log</span><span class="p">)</span> <span class="p">{</span> <span class="p">...</span> <span class="p">}</span> </code></pre></div></div> <p>For more documentation about the Event Hub binding, check <a href="">docs.microsoft.com</a>.</p> Docker for Windows: fix unauthorized errors 2019-01-16T00:00:00+00:00 <p>After installing <a href="">Docker for Windows</a> (recently renamed to Docker Desktop) I could not get the basic command <code class="highlighter-rouge">docker run hello-world</code> working. I checked my install, read more docs, and got confused whether it was my Hyper-V setup, the networking stack in it, or something else. Finally a light bulb went off and I found the solution!</p> <h1 id="the-issue">The issue</h1> <p>After installation, Docker presents you with a login screen. Since that login worked just fine, I’d expect that everything is set up correctly.</p> <p>The next step is then to verify that you can run at least <code class="highlighter-rouge">hello-world</code>:</p> <p>That message: ‘Error response from daemon: Get: unauthorized: incorrect username or password.’ can send you down a wild goose chase to figure out what is wrong!!!</p> <p>Since it tells you the call back to the docker registry is not authorized, you think you need to log in again, even though Docker tells you that you are logged in…. 
Hmm, already strange, let’s try that nonetheless.</p> <h1 id="logging-in-again">Logging in again</h1> <p>If I run <code class="highlighter-rouge">docker login</code>, it shows me the currently logged-in user, the login I did via the login interface after the installation:</p> <p><img src="/images/2019_01_16_Docker_For_Windows_Login.png" alt="" /><br /> Notice that I am using my e-mail address here. The login is just fine:</p> <p><img src="/images/2019_01_16_Docker_for_Windows_Email_Logged_In.png" alt="" /></p> <p>Still, calling <code class="highlighter-rouge">docker run hello-world</code> gives me the same error:</p> <p>After testing all sorts of stuff, reinstalling Docker (reboots required) and searching around some more, I <em>finally</em> got to a comment somewhere on another issue… And the issue is…. I am logged in with my <strong>e-mail</strong> address and <strong>not</strong> with my “insert curse” username!</p> <h1 id="fixing-it">Fixing it</h1> <p>Switching to the <strong>username</strong> in Docker will switch the login for the session, and all of a sudden it just works!</p> <p><img src="/images/2019_01_16_Docker_for_windows_logged_in_user.png" alt="" /><br /> Very odd that Docker accepts both for the login here, while the e-mail address does not work. Somebody probably enabled that for the web front-end and didn’t test the CLI.</p> <p>Maybe someone else will find this post when hitting the same issue and it saves them a lot of time!</p> Missing Azure Functions Logs 2019-01-13T00:00:00+00:00 <p>I was testing with our Azure Function and had set the cron expression on the timer trigger to <code class="highlighter-rouge">"0 0 */2 * * *"</code>, based on the example from the Microsoft <a href="">documentation</a>. When I went to the logs a day later, I noticed that some of the runs weren’t there! 
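</p> <p>For reference, the six fields of an NCRONTAB expression are <code class="highlighter-rouge">{second} {minute} {hour} {day} {month} {day-of-week}</code>, so <code class="highlighter-rouge">"0 0 */2 * * *"</code> fires at the top of every second hour. A minimal sketch of such a timer function (illustrative names, trimmed like the other snippets in these posts, not our actual code):</p> <pre><code class="language-c#">[FunctionName("SampleTimerFunction")]
public static void Run([TimerTrigger("0 0 */2 * * *")] TimerInfo timer, ILogger log)
{
    // One log entry every two hours; these are the entries that went missing
    log.LogInformation($"Timer fired at {DateTime.Now}");
}
</code></pre> <p>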
<br /> Photo by Emily Morter</p> <h2 id="missing-logs-">Missing logs ??</h2> <p>I added a red line where I noticed some of the logs missing. <img src="/images/20190113_01_Every_2_hours.png" alt="" /></p> <p>At first, I thought that the trigger wasn’t firing, or maybe something was wrong with my cron expression. I tested several other expressions and redeployed the function, but to no avail.</p> <h2 id="the-search">The search</h2> <p>Eventually, I found a comment deep down in a <a href="">GitHub issue</a> that actually pointed me in the right direction!</p> <h2 id="application-insights-sampling">Application Insights Sampling</h2> <p>The logs you see in an Azure function are provided by Application Insights. Due to large data ingestion during our testing period (we had the trigger fire every minute to test with), I had enabled <strong>sampling</strong> on the Application Insights instance! That change was made after seeing a bill going upwards of € 500,- during the testing period 😄.</p> <h2 id="finding-the-sampling-setting">Finding the sampling setting</h2> <p>To correct the sampling settings (running once every two hours is significantly less data than every minute!), you need to go to the Application Insights instance.</p> <p>Go to <code class="highlighter-rouge">Usage and estimated costs</code> and click on the <code class="highlighter-rouge">data sampling</code> button: <img src="/images/20190113_03_Settings.png" alt="" /></p> <p>You can now change the sampling setting: <img src="/images/20190113_02_Sampling.png" alt="" /></p> <p>Wait for a couple of runs to execute and you can verify that it now shows all the logs again:<br /> <img src="/images/20190113_04_Fixed.png" alt="" /></p> Azure Functions Connection Monitoring 2018-10-24T00:00:00+00:00 <p>Last week I noticed our Azure Function wasn’t running anymore and I got a pop-up in the <a href="">Azure Portal</a> indicating that we reached the limit on our open connections. 
The popup message contains something like <code class="highlighter-rouge">Azure Host thresholds exceeded: [Connections]</code> and links to this <a href="">documentation page</a>. The documentation already hints at the usual suspects: HttpClient holds on to connections longer than you’ll usually need. Since the whole Azure Functions sandbox has several hard limits, usage of an HttpClient in the default pattern is a common way to hit the Connection Count limit. The documentation also notes an example for a DocumentClient and SqlClient, although the latter already uses connection pooling.</p> <p><img src="/images/20181024/2018_10_24_nick-seliverstov-516549-unsplash.jpg" alt="Header image" /><br /> Photo by Nick Seliverstov</p> <h2 id="searching-around">Searching around</h2> <p>When searching for this pattern, you can find a lot of examples that show you how the HttpClient does things and how to fix it (declare the client as static so it isn’t disposed on every function execution). You can even find examples from <a href="">Troy Hunt</a> and from the <a href="">ALM Rangers</a>.</p> <h2 id="monitoring">Monitoring</h2> <p>Since we are using the <a href="">Azure Fluent SDK</a> to retrieve information from an Azure Subscription, and that instantiates several HttpClients inside of it, I wanted to start monitoring the connection count first, to see if it was a gradual ramp-up (the first time we found it was 24 hours <strong>after</strong> deployment of a change) or something else.</p> <p>Monitoring would give a better idea overall about the issue and it’s part of the DevOps practice: you cannot improve things if you aren’t monitoring them first.</p> <h2 id="searching-around-for-monitoring">Searching around for monitoring</h2> <p>Since I couldn’t directly find how to monitor the connection count, I even opened an <a href="">issue</a> on the documentation repository to see if someone could help. 
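</p> <p>As an aside, the static-client fix mentioned earlier boils down to this (a minimal sketch with made-up names, not our production code):</p> <pre><code class="language-c#">// Created once per host instance, so connections are reused across executions
private static readonly HttpClient httpClient = new HttpClient();

[FunctionName("SampleFunction")]
public static async Task Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
{
    // Reusing the static client keeps the sandbox connection count low
    var response = await httpClient.GetAsync("https://example.org/");
    log.LogInformation($"Status: {response.StatusCode}");
}
</code></pre> <p>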
Sure enough, someone responded with the location of the metric.</p> <p>Since it took me a lot of searching (and I overlooked the first selection option), I am documenting the full process here.</p> <h2 id="finding-the-metric">Finding the metric</h2> <p>Of course, the information is only available when you are running <a href="">Application Insights</a>.</p> <ol> <li>Go to the Application Insights instance connected to the Function</li> <li>Go to ‘metrics’</li> <li><strong>Change the resource</strong> from your ‘Application Insights’ instance to ‘App Service’ to connect to the App Service that is hosting the function.</li> <li>There is only one ‘metric namespace’ to choose from, and it is already selected.</li> <li>Select the ‘Connections’ metric: <img src="/images/20181024/2018_10_24_01_Metrics.png" alt="Azure Metrics" /></li> </ol> <h2 id="fixing-the-used-connection-count">Fixing the used connection count</h2> <p>After monitoring, we changed the instantiation of the clients we used from the Azure Fluent SDK to static instances, and you can see that the connection count has improved a lot: <img src="/images/20181024/2018_10_24_02_Metrics.png" alt="Improvement" /></p> SonarQube setup for Azure DevOps 2018-10-20T00:00:00+00:00 <p>During installation and setup of a <a href="">SonarQube</a> server for usage in an Azure DevOps Build, I found some things that I didn’t remember from previous installations and wanted to log them in this post, so the next time I have one place to find these things.</p> <h2 id="updated-5-1-2019">Updated 5-1-2019</h2> <p>Read the last section of this post if you want to use an even easier way of setting up and maintaining SonarQube: run it behind an Azure App Service!</p> <p><img src="/images/2018_10_20_Zach_Lucero_Dog.png" alt="Cool dog by Zach Lucero" /> Photo by Zach Lucero</span></a></p> <h2 id="installation-on-azure">Installation on Azure</h2> <p>For installation on an Azure environment I used the same <a 
href="">Azure QuickStart ARM template</a> I used before. Somehow, each time I need this template, something has changed underneath and I get to <a href="">fix the template</a>. This time the download URL for the SonarQube installer had changed and a new version had been released. This has now been included in the template: because it is open source, I could find the places that needed to be updated and send in a <a href="">pull request</a> to Microsoft with the fix. I love open source! Such a pleasure to find an issue, look at the code and present a fix to the repository maintainer!</p> <p><img src="/images/2018_08_12_SonarQube.png" alt="SonarQube logo" /></p> <p>You can follow the usual steps from the ARM template: download and install the Java Development Kit on the SonarQube server, restart the SonarQube service and you’re up and running with the <strong>server side</strong>.</p> <h2 id="things-to-know-for-next-time">Things to know for next time</h2> <p>There are a couple of things that you need to think of when starting an installation yourself. The ARM template is already a great help, but you need to think of some other things. Those are mostly client side, so on the Azure DevOps Agent.</p> <h3 id="bring-a-valid-certificate">Bring a valid certificate</h3> <p>As noted <a href="">before</a>, the template uses a self-signed certificate, which will not work with Azure DevOps: the tasks from the marketplace need a valid certificate that they trust for the connection with the server. 
Therefore you need to provide a valid certificate and set up a DNS entry to match the URL in the certificate.</p> <h3 id="download-or-install-the-sonarqube-extension">Download or install the SonarQube extension</h3> <p>Go to the <a href="">marketplace</a> and download or request installation in your Azure DevOps subscription.</p> <h3 id="the-run-analysis-task-has-java-as-a-requirement">The ‘Run Analysis Task’ has Java as a requirement</h3> <p>This was a gotcha that I forgot this time: the <code class="highlighter-rouge">Run Analysis Task</code> has a demand requirement that it needs Java (specifically the Java Runtime Environment 8.0) <strong>on the agent</strong>. This also means that you cannot run it on a hosted agent: those do not have the JRE installed! Only the JDK is installed, which doesn’t add support for the <code class="highlighter-rouge">java</code> demand. I raised an <a href="">issue</a> on the hosted agent with a request for it.</p> <h3 id="building-on-a-new-agent">Building on a new agent?</h3> <p>When you have a new agent you could install <a href="">Visual Studio Community edition</a> on it; that will provide you with the <code class="highlighter-rouge">msbuild</code> capability on the agent. This does <strong>not</strong> give you the tools to run the unit tests on the server, which is needed for the <code class="highlighter-rouge">Run unit test</code> task. These tools provide SonarQube with the necessary information it needs to do, well basically, anything! You can install a licensed Visual Studio Enterprise on it, but then you need to update that license every once in a while. 
You probably don’t want that, because of the occasional error it will give you, for which you need to log in to the server.</p> <p>To help fix this, you can use the <a href="">Visual Studio Test Platform Installer</a> task to install all the tools VSTest needs to run.</p> <h3 id="want-css-analysis-from-sonarqube">Want CSS analysis from SonarQube</h3> <p>Out of the box, SonarQube can scan your CSS files for issues with over 180 available <a href="">rules</a>. To do this, the agent needs to have <a href="">Node.js</a> installed. This is already available on a hosted agent, but you cannot use that yet because of the dependency on the JRE!</p> <h3 id="sonarqube-css-issue-on-large-solution">SonarQube CSS issue on large solution</h3> <p>Currently there is an <a href="">issue</a> on SonarQube with larger solutions or CSS files. The process seems to run out of memory somewhere. In the Azure DevOps Build log, you’ll see these as the last steps being logged:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>INFO: Quality profile for cs: Sonar way INFO: Quality profile for css: Sonar way INFO: Quality profile for js: Sonar way INFO: Quality profile for xml: Sonar way INFO: Sensor SonarCSS Metrics [cssfamily] WARNING: WARN: Metric 'comment_lines_data' is deprecated. Provided value is ignored. INFO: Sensor SonarCSS Metrics [cssfamily] (done) | time=1937ms INFO: Sensor SonarCSS Rules [cssfamily] </code></pre></div></div> <p>For now the recommended fix is: <strong>do not use the CSS analysis</strong>, which isn’t great, but better than the alternative: currently the <code class="highlighter-rouge">Run Analysis Task</code> just hangs until the maximum runtime of your build has been reached. All that time, your build server will run at 100% CPU (if you have 1 CPU available; 2 CPUs got me 50% utilization)! 
It took me quite some searching around to find this one, so it’s good to document it here.</p> <p>The current fix is to start the analysis task with a parameter that redirects the CSS files to a non-existing analyzer:<br /> <code class="highlighter-rouge">/d:sonar.css.file.suffixes=.foo</code> or do that globally for your entire SonarQube server via the settings on the CSS analysis there (which would be easier if you have multiple projects with this issue).</p> <h1 id="update-05-01-2019-run-it-in-an-azure-app-service">Update (05-01-2019) Run it in an Azure App Service!</h1> <p><a href="">Nathan Vanderby</a>, a Premier Field Engineer from Microsoft, has created an ARM template to run the SonarQube installation behind an Azure App Service with a Java host. This saves you a lot of the steps mentioned above! You can find the scripts for it on <a href="">GitHub</a>.</p> <p>Several benefits you get this way:</p> <ul> <li>You don’t have to manage a Virtual Machine anymore.</li> <li>Setting up SSL becomes easier, thanks to the Azure App Service SSL handling being simpler.</li> </ul> Azure DevOps Pipeline for GitHub Open Source Projects 2018-09-10T00:00:00+00:00 <p>Microsoft <a href="">announced</a> today that they have a ‘new’ product: Azure DevOps! With that announcement came another one: Azure DevOps pipelines for GitHub open source projects with unlimited minutes! I wanted to see what the integration with GitHub would look like, so I tried it out.</p> <p>Note: of course, you could already create pipelines for GitHub repos, but only inside of a VSTS account and not <strong>with unlimited build/release minutes!</strong> If you had your own private agent installation, you could build with that.</p> <p>Now all open source projects can utilize the Azure DevOps pipelines!</p> <p>Here are the steps you’d take to add a new pipeline from a GitHub repository.</p> <p>From your GitHub repo, go to <code class="highlighter-rouge">MarketPlace</code> and search for Azure. 
The marketplace entry is named <code class="highlighter-rouge">Azure Pipelines</code>: <br /> <img src="/images/2018_09_10-01-GitHub-Marketplace.png" alt="" /></p> <p>Click on <code class="highlighter-rouge">Set up a plan</code>:<br /> <img src="/images/2018_09_10-02-Setup-a-plan.png" alt="" /></p> <p>Choose <code class="highlighter-rouge">Free</code>:<br /> <img src="/images/2018_09_10-03-Azure-Pipelines.png" alt="" /></p> <p>Choose which repositories you want to enable a pipeline for. I have picked a <a href="">.NET tool</a> that I created to talk to VSTS.</p> <p>One more authentication in GitHub to make sure you have the rights to enable the marketplace integration and you get sent to Azure DevOps. <br /> <img src="/images/2018_09_10-04-Installing-Azure-Pipelines.png" alt="" /></p> <p>You are now in an editor to set up a new Azure DevOps account. If you already have one, you can choose that: <img src="/images/2018_09_10-05-Create-Azure-DevOps-project.png" alt="" /><br /> Next, Azure DevOps starts creating your project. <br /> <img src="/images/2018_09_10-06-Signup.png" alt="" /></p> <p>It will ask you which repository you want to use.<br /> <img src="/images/2018_09_10-07-New-pipeline-Pipelines.png" alt="" /> <br /> It detects the kind of project in the repository and gives you a default option based on that. You can choose other options, but I’ll just go with the .NET desktop build for this project, because that is what this project is.<br /> <img src="/images/2018_09_10-08-New-pipeline-Pipelines.png" alt="" /></p> <p>A yaml file is created for you (named <code class="highlighter-rouge">azure-pipelines.yml</code>) which will be saved inside of your repository: that also means you now have a two-way binding between GitHub and Azure DevOps: it can also write changes back to the GitHub repository. 
<img src="/images/2018_09_10-10-New-pipeline.png" alt="" /></p> <p><code class="highlighter-rouge">Save and run</code> does exactly what it says, and if left with the default setting, will commit the new yml file to the chosen repository: <br /> <img src="/images/2018_09_10-11-New-pipeline.png" alt="" /><br /> Now we have a running pipeline (free of charge! how nice is that!). <br /> <img src="/images/2018_09_10-11-Running.png" alt="" /></p> <p>After the build is done and has succeeded, you get a summary page:<br /> <img src="/images/2018_09_10-12-rajbos.VSTSClient.png" alt="" /></p> <h2 id="status-badge">Status badge</h2> <p>If you want to show your build status on a project page, you can also get the build status badge for the project, right from Azure DevOps.<br /> Note: you could do that for any project before.</p> <p>Go to the build definition, find the menu, go to <code class="highlighter-rouge">status badge</code>: <br /> <img src="/images/2018_09_10-12-Status-Badge-Create.png" alt="" /></p> <p>And copy the necessary code/markdown into the place you want to show it.<br /> <img src="/images/2018_09_10-12-Status-badge.png" alt="" /></p> <p>And the end result inside the readme of my repository:<br /> <img src="/images/2018_09_10-View-badge.png" alt="" /></p> <h2 id="email-notification">Email notification</h2> <p>Because I have an email address connected to my GitHub account, I also receive pipeline notifications by email:<br /> <img src="/images/2018_09_10-Email-notification.png" alt="" /></p> <h2 id="triggers">Triggers</h2> <p>You have all the regular options for your pipeline available. 
For example: the new Azure DevOps project has been provisioned with a continuous integration trigger: this way, each commit will trigger a new build and will let the committer know how the build went.<br /> <img src="/images/2018_09_10-13-Triggers.png" alt="" /></p> <h2 id="want-more">Want more?</h2> <p>Since you now have an Azure DevOps project available, you can also enable other features for this project. This way you can leverage the powerful options Azure DevOps gives you.</p> <p>Go to <code class="highlighter-rouge">Project settings</code> -> <code class="highlighter-rouge">Services</code> and enable the services you’d like to use. <img src="/images/2018_09_10-15Settings·Services.png" alt="" /></p> <p>Reload the page and the enabled services are available. Here I enabled the <code class="highlighter-rouge">Boards</code> service: <br /> <img src="/images/2018-09_10-Work-Items-Boards.png" alt="" /></p> Azure error setting up export from Activity Log to Event Hub 2018-09-05T00:00:00+00:00 <p>While working to set up an export from Activity Log to an Event Hub, I got no response on a save action. It took some time to figure out why this happened, so I thought it could be helpful for someone else.</p> <p><img src="/images/adam-solomon-472458-unsplash.jpg" alt="" /></p> <h4 id="photo-by-adam-solomon-on-unsplash">Photo by <a href="">Adam Solomon on Unsplash</a></h4> <h2 id="issue-when-saving">Issue when saving</h2> <p>When saving the export setting via this blade:<br /> <img src="/images/2018_09_05_Export_activity_log_failure_setup.png" alt="" /></p> <p>I got this error:<br /> <img src="/images/2018_09_05_Export_activity_log_failure_setup_notification.png" alt="" /></p> <p>After scratching my head a little I checked the browser’s console log:<br /> <img src="/images/2018_09_05_Export_activity_log_failure_setup_consolelog.png" alt="" /></p> <p>Well, what do you know! 
Apparently the resource provider <code class="highlighter-rouge">microsoft.insights</code> hasn’t been registered yet! Would have been a nice message inside of the Portal itself, but at least now I can fix it!</p> <h2 id="the-fix-using-the-portal">The fix using the portal</h2> <p>Go to your subscriptions, pick the correct one and navigate to resource providers:<br /> <img src="/images/2018_09_05_Export_activity_log_failure_setup_register.png" alt="" /></p> <p>Register the <code class="highlighter-rouge">microsoft.insights</code> provider and save the export option again. Problem solved!</p> <h2 id="the-fix-using-powershell">The fix using PowerShell</h2> <p>You can also fix this via PowerShell, as you can read on <a href="">Pascal Nabers</a> blog.</p> GDBC DevOps pipelines in VSTS 2018-09-02T00:00:00+00:00 <h2 id="global-devops-bootcamp">Global DevOps BootCamp</h2> <p>In June 2018 I was part of the team behind <a href="">Global DevOps BootCamp</a> (or GDBC for short). The goal of the boot camp is to create a worldwide event where everyone could get a taste of DevOps on the Microsoft Stack. It is an amazing combination of getting your hands dirty and sharing experience and knowledge around VSTS, Azure and DevOps with other community members. This year’s edition was great, with 75 registered venues worldwide and 8.000 registered attendees! The feedback we got was wonderful: a true community event where everybody was asking for more!</p> <p><img src="/images/2018_08_31_Unsplash.jpg" alt="" /></p> <h4 id="photo-by-rawpixel-on-unsplash">Photo by <a href="">rawpixel on Unsplash</a></h4> <h2 id="challenges">Challenges</h2> <p>During the GDBC, participants could try to complete 60+ challenges that the GDBC team had created. Those challenges were created by a team of more than 15 people (global organizers and venue organizers helped out), so they had quite a lot of changes during the preparation period. If you want to see and dive into the challenges, you can! 
The challenges have been open sourced for the community and can be found here: <a href="">GitHub/XpiritBV/GDBC2018-Challenges</a>.</p> <p>The challenges would be created as work items inside a VSTS team, which participants would be assigned to, so they could collaborate on the challenges there. By dragging the work items to the “done” column, they’d indicate that they finished the challenge and received points for it, which they could follow on a leaderboard (more info in this <a href="">post</a>).</p> <h2 id="pipelines-for-the-challenges">Pipelines for the Challenges</h2> <p>Early on, it was decided to save all the challenges in a git repository in VSTS, practicing what we wanted the participants to use (and because we are nerds!). The challenge definitions were saved as markdown files, with a yaml header to indicate if the challenge was a bonus challenge, if the challenge had additional help available and other properties. In 2017, the team only found issues with the templates when the organizers ran their scripts to create the actual work items in VSTS.</p> <p>To help the team with this, I started setting up a build and release pipeline in VSTS. Since there were a lot of moving parts with a complete team helping out, it would be helpful to run several checks during the changes to make sure all challenges were set up correctly.</p> <p>Eventually I even created another pipeline to push the changes into a VSTS team, so the challenge maintainers could see the end result as soon as possible. This really helped fix issues quickly.</p> <h2 id="pipeline-on-push">Pipeline on push</h2> <p>We had a tool available to convert the markdown, read the yaml headers, parse all links and images in the markdown and eventually create work items from that. After adding additional checks to it, it could be used to check the challenges for completeness. 
The tool, the PowerShell scripts and the challenges all had their own git repositories, so we could plug them into their own build pipelines: the tool was .NET core, the challenges would be published as artifacts and the same for the PowerShell scripts. You can see them linked as dependencies in the screenshot below. <img src="/images/2018_08_31_Release.png" alt="Check challenges from PR" /><br /> The environment had just one task: run the checks from the .NET core tooling: <img src="/images/2018_08_31_ReleasePR_Detail.png" alt="PR Detail" /><br /> The trigger for this was a pull request (PR) being created for the challenges repository, so the person creating the PR would get an email indicating the level of correctness of the challenges.</p> <h2 id="pipeline-to-production">Pipeline to production</h2> <p>After the pull request was checked by the pipeline described above, then merged into master, the second pipeline would be triggered. This time a couple more steps were involved.</p> <p>The artifacts were the same: <br /> <img src="/images/2018_08_31_ReleasePipeline.png" alt="Release to production" /></p> <p>The tasks were as follows:</p> <p><img src="/images/2018_08_31_ReleasePipelineDetails.png" alt="Provision stories" /></p> <ol> <li>Convert the markdown file to json<br /> This is the same tool that performs the checks and saves a json for easy formatting.</li> <li>Zip up the help directories<br /> The help needs to be a link to a single file that the participants could request and open when they needed it.</li> <li>Save the zip files into a DropBox share and get unique links for them. 
The unique link couldn’t be guessed, to prevent anyone from cheating.</li> <li>Update the database for the scoreboard, with the correct points and help links for the challenges.</li> <li>Clear the test team’s sprint from previous versions of the work items.</li> <li>Create new work items based on the new challenge content.</li> </ol> <p>After this pipeline completed, the whole team could check the end result inside of VSTS, by checking the setup for the test team.</p> <p><img src="/images/2018_08_31_TestTeamWorkItems.png" alt="" /></p> <p>These pipelines helped the team find issues early on, so we could make sure the quality would be where we wanted it to be.</p> <p>Hopefully this post gives you more insights into some of the work we did to help a team out. If there are parts of this pipeline that you’d want more detail of, please let me know.</p> Retrieving AppSettings for an App Service with the Fluent SDK 2018-08-23T00:00:00+00:00 <p>I am using the <a href="">Azure Fluent SDK</a> to retrieve information about the Azure setup and I wanted to retrieve the AppSettings from an App Service (or function app, or logic app). The simple solution didn’t work and searching around didn’t reveal any information about it. 
Eventually I found something that did work (initial testing with a different service principal didn’t change the results), so here we are…</p> <p><img src="/images/2018_08_23_Research.jpg" alt="" /></p> <h6 id="photo-by-osman-rana-on-unsplash">Photo by <a href="">Osman Rana on Unsplash</a></h6> <h2 id="expected-simple-solution">Expected simple solution</h2> <p>Because all other information I needed came from a WebApp, retrieved by the default call, I expected that loading the AppSettings from the SiteConfig would deliver all settings:</p> <div class="language-c highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">IAzure</span> <span class="n">AzureConnection</span> <span class="o">=</span> <span class="n">GetAzureConnection</span><span class="p">();</span> <span class="n">var</span> <span class="n">webApp</span> <span class="o">=</span> <span class="n">await</span> <span class="n">AzureConnection</span><span class="p">.</span><span class="n">AppServices</span><span class="p">.</span><span class="n">WebApps</span><span class="p">.</span><span class="n">GetByIdAsync</span><span class="p">(</span><span class="n">webAppResourceId</span><span class="p">);</span> <span class="n">var</span> <span class="n">settings</span> <span class="o">=</span> <span class="n">webApp</span><span class="p">.</span><span class="n">Inner</span><span class="p">.</span><span class="n">SiteConfig</span><span class="o">?</span><span class="p">.</span><span class="n">AppSettings</span><span class="p">;</span> <span class="c1">// settings is null because SiteConfig is null </span></code></pre></div></div> <p>Unfortunately, the SiteConfig stays completely empty. As mentioned, I checked and tested it with another service principal with more rights, but that wasn’t the issue. 
Searching around on the web didn’t yield any new information.</p> <h3 id="research-the-http-traffic">Research the Http traffic</h3> <p>So next I checked the Http traffic being sent to see if the information came back from the HttpCall. [Note: the Fluent SDK is a wrapper around a generated HttpClient against the REST APIs]</p> <h4 id="traffic-from-the-fluentsdk-calls">Traffic from the FluentSDK calls</h4> <p>First up was seeing what Http calls were being made. For this I used <a href="">Fiddler</a>.</p> <p>You can see the loading of the WebApp information and next the loading of the config for that WebApp.</p> <h4 id="powershell">PowerShell</h4> <p>Next I wanted to see if I could load the settings from a different stack, e.g. PowerShell. Lo and behold, the PowerShell call <strong>did yield AppSettings!!!! WTF?!</strong></p> <p>From PowerShell:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$webApp</span> <span class="o">=</span> Get-AzureRmWebApp -ResourceGroupName <span class="nv">$resourceGroup</span> -Name <span class="nv">$webAppName</span> <span class="nv">$webAppSettings</span> <span class="o">=</span> <span class="nv">$webApp</span>.SiteConfig.AppSettings </code></pre></div></div> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>POST /subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroupName}/providers/Microsoft.Web/sites/WebAppName/config/appsettings/list
POST /subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroupName}/providers/Microsoft.Web/sites/WebAppName/config/connectionstrings/list </code></pre></div></div> <p>Here you can see the same calls to the WebApp and the Config, but then all of a sudden there are two <strong>other</strong> calls into the config endpoint! You can see it’s loading both the AppSettings and the ConnectionStrings.</p> <p>Luckily the Fluent SDK is <a href="">open source</a> (hurray for Microsoft!) 
so I started searching for the class that would handle the json that was being retrieved from the REST call.</p> <h2 id="solution">Solution</h2> <p>Searching in the Fluent SDK’s source code, I eventually stumbled onto a new ManagementClient: the <code class="highlighter-rouge">WebSiteManagementClient</code>, which specifically helps with the settings for those websites!</p> <p>So the final code came down to this:</p> <div class="language-c# highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kt">var</span> <span class="n">WebSiteManagementClient</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">WebSiteManagementClient</span><span class="p">(</span><span class="n">AzureCredentials</span><span class="p">)</span> <span class="p">{</span> <span class="n">SubscriptionId</span> <span class="p">=</span> <span class="n">SubscriptionId</span><span class="p">};</span> <span class="kt">var</span> <span class="n">settings</span> <span class="p">=</span> <span class="k">await</span> <span class="n">WebSiteManagementClient</span><span class="p">.</span><span class="n">WebApps</span><span class="p">.</span><span class="nf">ListApplicationSettingsAsync</span><span class="p">(</span><span class="n">resourceGroupName</span><span class="p">,</span> <span class="n">appServiceName</span><span class="p">);</span> <span class="kt">var</span> <span class="n">appSettings</span> <span class="p">=</span> <span class="n">settings</span><span class="p">.</span><span class="n">Properties</span><span class="p">;</span> </code></pre></div></div> Using a self signed certificate on a SonarQube server with VSTS/TFS 2018-08-12T00:00:00+00:00 <p>Recently I got a customer request to help them with provisioning a <a href="">SonarQube</a> server hosted in Azure. 
Fortunately there is an <a href="">ARM template</a> available for it: <a href="">link</a>.</p> <p>I ran into some issues with the ARM template at first and then tried to use the new SonarQube server within VSTS.</p> <h2 id="tldr">TL;DR</h2> <p>I didn’t manage to get the SonarQube VSTS Tasks working with the self-signed certificate. I think it’s probably possible, but you’ll be much better off using a valid certificate.</p> <p><img src="/images/2018_08_12_SonarQube.png" alt="SonarQube logo" /></p> <h1 id="arm-template-issues">ARM template issues</h1> <p>At my first go with it, it took some time to figure out that the reason we couldn’t connect to it came from the way the self-signed certificate was created: the template didn’t create the certificate with a <a href="">fully qualified domain name</a>. A couple of years ago that would’ve worked, but with the tighter security rules in browsers that doesn’t work anymore. Luckily I changed that with a small adjustment in the script, and a <a href="">pull request</a> later, that problem was fixed.</p> <p>A little later I found out that SonarQube updated their download links, deprecating the older TLS 1.2 versions, which gave another issue. Another <a href="">pull request</a> later and that was also fixed.</p> <h1 id="java-installation">Java installation</h1> <p>Now that those issues have been handled, you’ll probably find that you’ve missed the comments in the readme: the ARM template <strong>cannot</strong> provision the Java JDK needed for installation, because Oracle will not let you download it from an open folder (like SonarQube does)!</p> <p>For this step you’ll need to RDP into the server and install the JDK by hand. 
<br /> <strong>Do note</strong>: don’t forget to change the default passwords for logging in to the SonarQube installation!!</p> <p>After that you’ll find out that the template provisions an IIS installation to host the SSL certificate and then act as the proxy for the SonarQube server.</p> <p><img src="/images/2018_08_12_SonarQube_Project_page.png" alt="SonarQube project page" /></p> <h1 id="using-the-sonarqube-server-in-vsts--tfs">Using the SonarQube server in VSTS / TFS</h1> <p>When you have the server up and running, you’ll want to use it in VSTS. If you start adding the necessary steps to your build (find out more about it <a href="">here</a>), you’ll find out that the builds will fail with some obscure messages connecting to the SonarQube server. If you are using a <a href="">private agent</a>, you can log into the server and try to remediate these issues.</p> <p><img src="/images/2018_08_12_SonarQube_VSTS.png" alt="SonarQube Tasks" /></p> <p>First, you’ll hit it in the “Prepare analysis on SonarQube” step. Since it runs on the agent server on Windows, you can trust the server’s certificate in your local certificate store, using the <a href="">certificate snap-in</a>. Double check the user the agent is running as, or trust the certificate machine-wide. Don’t forget to check your proxy configuration if you have one in between!</p> <p>Unfortunately: this doesn’t work! Next, you double check and find out a part of this step actually creates a local Java Virtual Machine <a href="">JVM</a> that has its own version of the local certificate store. To add your own certificate in it, you can follow the steps from <a href="">here</a>.</p> <p>Next, you’ll find that NodeJS is used to send the requests to the SonarQube server! Great, now that also has its own version of the trust chain setup…</p> <h1 id="end-result">End result</h1> <p>And this is where we couldn’t figure out how to include the certificate. 
Given the time that it already cost to get here and the expectation that a hosted agent could still be needed (where you cannot trust your own certificates and need to have an official one), we stopped searching around for the solution and got an actual certificate. That prevented all these errors and also enabled the usage of a hosted agent.</p> VSTS Personal Access Token for an Agent: Revoke after use 2018-08-03T00:00:00+00:00 <p>Today I was listening to <a href="">RadioTFS episode 163</a> on my commute, with guests <a href="">Wouter de Kort</a> and <a href="">Henry Been</a>. During the show Wouter mentioned that he always revoked his <a href="">VSTS Personal Access Token</a> after using it, especially when used for a Build Agent.</p> <p><img src="/images/2018_08_03_VSTS.png" alt="" /></p> <p>Apparently the PAT is only used for the initial authentication to VSTS/TFS and after that it isn’t needed anymore! That indeed means that you can revoke the token after it has been used and that you don’t need to keep the token around. Until this day, I had always copied the token into <a href="">keepass</a> for safekeeping, but I don’t need to do that anymore. This will also save me from seeing the expiration date and wondering if I need to update this token!</p> <p>If I ever need to register another agent, I can always create a new one. Revoking the old one also means that it cannot be used anymore, so that’s also less of a security concern then :-).</p> <p>Of course I had to test it first to see if it actually works, but it did :-). So I wanted to save this for later in this post.</p> <p><img src="/images/2018_08_03_PAT.png" alt="" /></p> Chocolatey Server in Azure 2018-07-20T00:00:00+00:00 <p>Recently I wanted to demo an example of how you can roll out <a href="">Chocolatey</a> packages via your own choco server. Sometimes we cannot save every binary in VSTS to use it in a pipeline as an artifact and therefore I needed a different artifact server. 
Chocolatey provides a NuGet wrapper around binaries, so you can easily track different versions.</p> <p>Since that worked out and I now have a local document with the necessary steps to do so, I wanted to share it here for later reuse.</p> <p><img src="/images/chocolatey.png" alt="chocolatey" /></p> <p>Of course, Microsoft just announced that they started working on a different artifact server in VSTS (called <a href="">Universal Package Management</a>), next to the already available NuGet / npm / Maven <a href="">package management</a>.</p> <p>Since I needed something to show, I started researching how you can do this with your own Chocoserver. Unfortunately, the installation for Choco.Server on Windows consists of <a href="">multiple steps</a> that can take up some time. That just doesn’t feel right, so I wanted to see if I could wrap that inside of a PowerShell script being triggered from an <a href="">ARM template</a> and Azure Automation DSC. I couldn’t seem to find the time to really start on it, but fortunately enough, my colleague <a href="">Reinier van Maanen</a> needed a bit of a challenge and picked this up. You can find his ARM template <a href="">here</a>, together with the necessary PowerShell script.</p> <p>Below you can find the steps to get a working Choco.Server and Client up and running with a first basic package.</p> <h1 id="install-a-chocoserver">Install a Choco.Server</h1> <p>First, you’ll need a server to host the packages. You can host that with a couple of steps thanks to <a href="">Reinier</a>.</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Git clone cd arm.chocolateyserver Connect-AzureRmAccount Open parameters.json - change dnsNameForPublicIP (max. 15 characters!) - change allowRdpFromThisIpAddress .\Deploy-AzureResourceGroup.ps1 -ResourceGroupLocation "West Europe" -StorageAccountName "ChocoARM" </code></pre></div></div> <p><del>Do note that the ARM template will report failure on the DSC step.
To still get a working server you’ll need to log in to the new VM and execute the last 5 lines by hand.</del> Update: thanks to <a href="">Reinier</a> this is now fixed! The ARM template now uses 2 steps to make sure the IIS management cmdlets are available for the second step. You can read how he fixed that <a href="">here</a>.</p> <p>To check it, you can navigate to <a href=""></a>. We are aware of the use of http; we still need to add that step to the script. We decided we could live with it for a small demo, combined with restricting access to your own IP address.</p> <p><img src="/images/2018_07_20_Choco_Server_packagelistexample.png" alt="Chocopackagelisting" /></p> <h1 id="create-package-on-different-machine">Create package on different machine</h1> <p>Install choco (elevated PowerShell) with this command:</p> <div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">Set-ExecutionPolicy </span>Bypass -Scope <span class="k">Process</span> -Force; <span class="nb">iex</span> <span class="o">((</span><span class="nb">New-Object </span>System.Net.WebClient<span class="o">)</span>.DownloadString<span class="o">(</span><span class="s1">''</span><span class="o">))</span> </code></pre></div></div> <p>You are now ready to create new packages and push them to the Choco.Server. I did this on my own laptop, so therefore I am using the public IP address of the Azure server. To wrap the files Chocolatey uses the well-known <a href="">NuSpec</a> file, known from its use in <a href="">NuGet</a>.</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>choco new testpackage creates the default nuspec in that folder change it, add notepad.exe into the \tools directory remove ps1 files from the tools directory choco pack on the folder containing the nuspec file.
choco push testpackage.{your.version.number.here}.nupkg --source http://{public.ip.of.chocoserver}/chocolatey --api-key={please update your key in the web.config} -force </code></pre></div></div> <p>Note: the <code class="highlighter-rouge">-force</code> is necessary because we are using http instead of https.</p> <h1 id="client-machine-in-the-same-resourcegroupnsgnetwork">Client machine (in the same resourcegroup/nsg/network!)</h1> <p>For demo purposes I created another Azure Windows Server in the same resourcegroup, connected to the same network so the servers could easily connect to each other.<br /> On that extra server, verify that it can see the Choco.Server by navigating to <a href=""></a> to see if you get a valid result.<br /> Then you can install the chocolatey client, link your own server as a source and use that server to install your own package.</p> <p>Steps:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Install choco with the PowerShell command (see 'create package') choco source add --name=internal_machine --source= disable Internet Explorer Enhanced Security Configuration (IE ESC) to prevent the http page from loading (you are on a server). This step should be simpler when you are using https (as we should). choco install testpackage Files are copied into C:\ProgramData\chocolatey\bin\ by default </code></pre></div></div> <p>All in all, I found the process pretty straightforward. Now to start using the Choco.Server and push new versions to it!</p> GDBC DevOps on the Leaderboard 2018-06-16T00:00:00+00:00 <h2 id="global-devops-bootcamp">Global DevOps Bootcamp</h2> <p>On the 16th of June 2018, <a href="">Xpirit</a> and <a href="">Solidify</a> organised a global event around the topic of DevOps and improving your release cadence.
It is an ‘out of the box’ event with a lot of self-organisation, where people around the globe gathered on their free Saturdays to learn something new about DevOps.</p> <p>People interested in hosting a local venue went to the site <a href=""></a> and started from there. Anybody, anywhere could host an event. Eventually, 76 venues registered with 8,000 participants!</p> <p>The teams from Xpirit and Solidify provided completely configured <a href="">VSTS</a> accounts, with challenges, webhooks, users and a filled git repository:</p> <p><img src="/images/2018_06_16_GDBC_Challenges.png" alt="Challenges" /></p> <h2 id="showing-how-we-worked">Showing how we worked</h2> <p>We got several questions during the event about how we organised the leaderboard application, and some participants were astonished that we used the same tools for this as they had been working with that day! That’s why I wanted to share some of the stuff we did and what happened during the day!</p> <h2 id="getting-points-for-work-items">Getting points for work items</h2> <p>Every time a work item’s state changed, a preconfigured webhook was triggered. When the team moved a work item to state ‘done’, they would get points for that work item. Points depended on the amount of work necessary to complete the challenge. The team could also request help by adding a tag to the work item. Doing so would cost them half of the points for that work item, but also provided a link to a zipfile containing step-by-step instructions.</p> <h2 id="leaderboard-application">Leaderboard application</h2> <p>To organise all this, <a href="">Peter Groenewegen</a> and <a href="">Geert van de Cruijsen</a> created a leaderboard application for last year’s event.
You can find the source for it on GitHub:<br /> <a href="">github.com/xpiritbv</a></p> <p>We built on that for this year’s version, where Peter added the webhook callback so VSTS could tell us when a work item changed.</p> <p>Of course, this .NET Core application is hosted in Azure on an App Service instance backed by an Azure Sql Database. The code is on GitHub and we created a build and release pipeline in VSTS:<br /> <img src="/images/2018_06_16_GDBC_Build.png" alt="Build" /></p> <p>This build would trigger when a pull request got merged into master and, after successfully running all unit tests, would trigger a release. <img src="/images/2018_06_16_GDBC_Release.png" alt="Build" /></p> <h1 id="devops-for-the-leaderboard-application">DevOps for the leaderboard application</h1> <p>To check the application during the event, we created a dashboard to monitor the performance of the application and the database. <img src="/images/2018_06_16_GDBC_Dashboard.png" alt="Dashboard" /></p> <h2 id="during-the-event">During the event</h2> <p>The event started everywhere at 10:00 AM local time, so New Zealand and Australia got to be the first to use the application. We were asleep during most of that timeframe, but we checked at the start to see if there were any errors. Luckily, that wasn’t the case!</p> <p><img src="/images/2018_06_16_GDBC_By_Jesse_Houwing.jpg" alt="DevOps with Geert van der Cruijsen by Jesse Houwing" /><br /> <em>Checking issues together with Geert, image by <a href="">Jesse Houwing</a>.</em></p> <h3 id="emea-region-starting-with-the-challenges">EMEA region starting with the challenges</h3> <p>When we started in Hilversum, The Netherlands, 10:00 AM CET, we noticed the average page load time climbing up and up. Apparently, a lot of venues in the EMEA region were using the leaderboard and were updating the work items, causing some load on the webhook as well!</p> <p>We quickly scaled the App Service Plan and the Azure Sql Database to fix the page loads.
This was important, because the webhooks were also on the same endpoint. When a webhook fails a couple of times, it will be disabled! That would mean teams not getting new points for the challenges they would complete!</p> <p>Thanks to the power of Azure and our team being enabled to fix things while running, we mitigated the issue.</p> <h4 id="failing-webhooks">Failing webhooks</h4> <p>A couple of hours later, someone spotted errors in Application Insights for the calls into the webhook. Checking the callstacks and exception messages, we found the culprit. We made sure we checked the work items coming in to find their correct tags so we could find the points they were gathering, but we didn’t anticipate our participants creating their own work items and tasks to distribute the work between them! This meant the webhook was being called with work items that didn’t have <strong>any</strong> tags! So: a simple edge case we missed in our unit tests!</p> <p>A commit, push, pull request, review and merge later, the CI/CD pipeline we created kicked in and the application was pushed to production! Just like the teams were learning to use today!</p> <h4 id="other-issues">Other issues</h4> <p>Somehow, some teams managed to trigger the webhook in such a way that we got a duplicate record in the database. We found out about this, again through Application Insights, and fixed the issue quickly.
How they managed to trigger this is something we will look into before using the application again.</p> <h2 id="can-you-spot-where-we-scaled-the-database">Can you spot where we scaled the database?</h2> <p><img src="/images/2018_06_16_GDBC_WebApp.png" alt="WebApp" /></p> <h2 id="usage-throughout-the-day">Usage throughout the day</h2> <p>There was a noticeable bump for the period the EMEA region was live: <img src="/images/2018_06_16_GDBC_Users.png" alt="Users" /></p> <h2 id="closing">Closing</h2> <p>All in all, I think we managed to keep the leaderboard and the webhooks in the air without our users noticing much of these issues. Great to see what a team can do when they have control over their entire pipeline, even into production!</p> <p>That’s why we need to keep repeating the message: empower your teams!</p> DevOps and Telemetry: Support on the supporting systems 2018-04-02T00:00:00+00:00 <p><strong>Note</strong></p> <h2 id="ssl-certificates---validity">SSL certificates - validity</h2> <h3 id="ssl-certificates-industry-standards-on-encryption">SSL certificates: industry standards on encryption</h3> <p><a href="">SSLLabs</a></p> <h3 id="security-headers">Security headers</h3> <p><a href=""></a></p> <blockquote class="twitter-tweet"><p lang="en" dir="ltr">Hi <a href="">@Scott_Helme</a>, I was looking for an API on SecurityHeaders and found: <a href=""></a> But it is returning 500's.
Any info on that?</p>— Rob Bos (@RobBos81) <a href="">April 5, 2018</a></blockquote> <script async="" src="" charset="utf-8"></script> DevOps and Telemetry: Supporting systems 2018-03-22T00:00:00+00:00 <p><strong>Note</strong>: This is part 2 of this series.</p> <p><img src="/images/20180329_03_Dashboard.png" alt="Dashboard example" /></p> <h1 id="telemetry-on-supporting-systems">Telemetry on supporting systems:</h1> <p>For a SaaS application running on an <a href="">Azure Web Service Plan</a>, you can use a lot of components, so I’ll focus on things I have used in the past and that are most commonly used:</p> <ul> <li>App Service Plan</li> <li>Sql Database</li> <li>Blob Storage</li> </ul> <p>This should be enough to give a broad overview of the standard monitoring options.</p> <h2 id="available-information-from-application-insights">Available information from Application Insights</h2> <p>I’ll start at the point from the previous post where I added <a href="">Application Insights</a>. <img src="" alt="Application Insights Map" /></p> <h2 id="app-service-plan">App Service Plan</h2> <p>The default telemetry on the App Service Plan level is pretty basic. You can see CPU and memory usage of the hosted web server, with data in and out. <img src="/images/20180329_04_AppServicPlan.png" alt="App Service Plan" /><br /> <em>Note</em>: this menu item keeps moving around. I think this is the third location I’ve seen this item appear. Search around if you cannot find it.</p> <h2 id="app-service">App Service</h2> <h2 id="database">Database</h2> <p>One example of the additional telemetry data that I got from Application Insights was for the Azure Sql Database that was used. Thanks to Application Insights, you’ll get the following (very handy!) information:</p> <ul> <li>Query duration</li> <li>Number of times a query has been run against the database</li> <li>Where that query has been called from (e.g.
in your application code)<br /> Some of this information is also visible from the Azure Sql Database itself, but I think Application Insights provides nicer reporting on it.<br /> This came in very handy when searching for performance issues throughout the application. I made this information available in a separate dashboard to be used for hunting down those issues, because I didn’t need it for the operational overview that I made sure to have always visible on a separate screen.<br /> <em>Note:</em> some of this information is also available from <a href="">Query Insights</a>.</li> </ul> <p><img src="/images/20180329_QueryPerformanceInsights.png" alt="Query Performance Insights" /></p> <h3 id="database-transaction-units-dtu">Database Transaction Units (DTU)</h3> <p>Next up is the standard database telemetry that Azure already logged for us: the most interesting parts are database size, maximum database size and, even more important, DTU measurements! For a more in-depth explanation of DTUs, check Microsoft’s explanation <a href="">here</a>.</p> <p!).</p> <p><img src="/images/20180329_02_DTU.png" alt="DTU" /></p> <p>Anyway: make sure to test with a representative load before running it in production! Seeing a chart like the one above is not great in production.<br /> Changing the service tier is possible, but you really do not want to do this during a long-running transaction. This will only make your performance issue last longer :-).</p> <h2 id="blob-storage">Blob Storage</h2> <p>On blob storage you’ll want to start with monitoring these metrics: Ingress (traffic <strong>into</strong> the storage account, so uploads), Egress (traffic <strong>out of the</strong> storage account, so downloads).
Next up will be the latency and number of requests.<br /> <a href="">here</a>.</p> <h2 id="next-up">Next up:</h2> <p.</p> <p>I’ll update this post with a link when Part 3 is available.</p> DevOps and Telemetry: Insights into your application 2018-02-23T00:00:00+00:00 <p>I’.</p> <p.</p> <p>This series of posts will go through my own journey in this aspect of DevOps.</p> <p><img src="/images/20180314_02.jpeg" alt="insights" /></p> <p>When I was the team lead for a multitenant SaaS product we hosted for our customers, I made sure we enabled our product owner, management <strong>and the dev team</strong> to get important insights into the availability, performance and usage of our systems.<br /> <a href="">SSLLabs</a> for validity and key strength), SAS Key validity and more.</p> <h2 id="series">Series</h2> <ul> <li>Part 1 - This post, my journey with telemetry and starting with logging</li> <li><a href="2018-03-29-DevOps-and-Telemetry-Insights-supporting-systems">Part 2</a> - Supporting systems and how to gather that information</li> <li>Part 3 - ?</li> </ul> <h3 id="part-1-in-the-series">Part 1 in the series</h3> <p.</p> <h3 id="system">System</h3> <p.</p> <h1 id="part-1">Part 1</h1> <p>What information do you need to check?</p> <h2 id="are-we-up">Are we up?</h2> <p>The first information we started gathering was about the incoming requests to the web application: raw number of requests and request errors, together with information like server-side duration, user agent, request path, tenantid and userid. <img src="/images/20180314_health.png" alt="health" /><br /></p> <p>We decided to test and later implemented logging that same information into <a href="">table storage</a>, with a sensible keying strategy that would effectively shard that information inside the table storage.
That way, inserting and reading that data would no longer hit our database performance.</p> <p).</p> <h4 id="issue-with-this-method">Issue with this method</h4> <p!</p> <h3 id="next-step---rolling-our-own-solution">Next step - rolling our own solution</h3> <p.</p> <h3 id="a-better-way-for-us">A better way (for us)</h3> <p>Since rolling your own solution can take quite some time to get things right, we also looked into other available ways to get the necessary insights into our application. Since we were running on Azure, and even on ASP.NET, implementing <a href="">Application Insights</a> was a low-friction step. These days, you can start by adding Application Insights as an <a href="">extension</a>.</p> <p>After the next rollout, we were gathering the (basic) data so that Application Insights could give us:</p> <ul> <li>Easy and high-level tracking of the performance of our application.</li> <li>Finding errors in our own requests, but also in our dependencies like SQL server, blob storage, key vault, etc.</li> <li>A very easy way to initiate alerts to our admins: even sending an email to a distribution group is a simple way to get started.</li> </ul> <p><img src="/images/20180314_03.png" alt="insights" /></p> <p>With this information available, we created dashboards inside the Azure portal that we’d share with the team members who had access to the Azure subscription.</p> <p!).</p> <h4 id="next-step-log-additional-information">Next step: log additional information</h4> <p <a href="">here</a>.</p> <h4 id="next-step-long-term-data-retention">Next step: long term data retention</h4> <p.</p> <p>A way to do so in Power BI is described <a href="">here</a>.</p> <h2 id="next-up">Next up:</h2> <p <a href="2018-03-29-DevOps-and-Telemetry-Insights-supporting-systems">here</a></p> VSTS Bulk Change WorkItemType 2018-02-23T00:00:00+00:00 <h2 id="update-proces-templates">Update process templates</h2> <p>Recently I had a customer request to update their
process definition in Visual Studio Team Services (VSTS). They had 30+ different processes migrated from TFS (Team Foundation Server), so they were all Hosted XML processes.</p> <p>Somehow they had the process set up like this:</p> <ul> <li>Epic –> Product Backlog Item</li> </ul> <p>They requested me to convert it to this:</p> <ul> <li>Epic –> Feature –> Product Backlog Item</li> </ul> <p>In TFS you would grab <a href="">witadmin</a> and change the process in a pretty straightforward manner (once you’ve figured out all the places you need to update!). Unfortunately, you cannot run the change commands in witadmin against VSTS; only the list and export commands will work: see <a href="">here</a>. Microsoft is working on a REST API to perform administrative tasks against VSTS. Luckily for me, they have also wrapped the REST calls in a nice C# NuGet package (<a href="">link</a>)!</p> <p>Making changes to the process template isn’t possible (yet?), although there is a strange method available named ‘UpdateWorkItemTypeDefinitionAsync’ in the ‘WorkItemTrackingClient’. The only info I can find about this is <a href="">here</a>, which seems to indicate that you can only update (maybe add) a specific Work Item Type. Since I also needed to update the tree structure in the ProcessConfiguration.xml file, I still need to export the process, make the necessary changes in the xml files, zip it back up and upload the file back into VSTS.</p> <p><img src="/images/20180226_01.png" alt="VSTS screenshot" /></p> <h2 id="updating-work-items-to-the-new-type">Updating work items to the new type</h2> <p>After doing so, the request was to convert <strong>all</strong> the old Epics to the new Features. Of course, you can do this by using a query on the Epic work item type and using the UI to change them to Features, but this would take a lot of manual actions.</p> <p>Luckily you can do this with the <a href="">TeamFoundationServer.Client</a> NuGet package!
When you start looking into this, I can highly recommend using Microsoft’s GitHub repo containing a lot of samples <a href="">here</a>.</p> <p>It took me some figuring out to get a good workflow in an application, so I have made the tool’s source available on my <a href="">github account</a>.</p> Using excerpts in Jekyll 2017-12-29T00:00:00+00:00 <p>I wanted to include at least some more information in the index page of my blog instead of just the publish date and title, so I searched around for a way to include an excerpt in Jekyll and found a solution on <a href="">this</a> blog.</p> <p>The solution was very straightforward, but I’ll include it here for future reference.</p> <h4 id="index-page">Index page</h4> <p>In the index page, you can search the content of a post, check for specific tags and use the text between them:</p> <div class="language-html highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"><!-- index.html --></span> <span class="nt"><p</span> <span class="na">class=</span><span class="s">"post-excerpt"</span><span class="nt">></span> <span class="nt"></p></span> </code></pre></div></div> <p>Note: if the specified tags aren’t found in the content, the first <strong>20</strong> words will be used.</p> <h4 id="posts">Posts</h4> <p>In a post, you can now include the excerpt tags to add a specific excerpt:</p> <div class="language-html highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"><!-- _posts/some-random-post.html --></span> <span class="nt"><p></span> Here's all my content, and <span class="c"><!--excerpt.start--></span>here's where I want my summary to begin, and this is where I want it to end<span class="c"><!--excerpt.end--></span>. <span class="nt"></p></span> </code></pre></div></div> Trying out Jekyll 2017-12-17T00:00:00+00:00 <p>Trying out Jekyll on top of GitHub pages as a new blogging platform.
For now, I just needed a simple way to create posts, while adding some things I am missing in my current method (<a href="">WithKnown</a>), like RSS and Google Analytics.</p> <p>So far I like the easy setup (like: no installation whatsoever!) and the fact that it uses Jekyll to generate static pages.</p> <p>I started with the excellent guide by <a href="">Jonathan McGlone</a>.</p> Azure App Service - a quick way to take your app Offline 2017-11-19T00:00:00+00:00 <p>After searching for the third time on how to do this, I thought it would be time to write about this here 😬.</p> <p>If you have an Azure App Service that for some reason should just display a message to the user, indicating that it isn’t available, there is a quick way to do this.</p> <p>I have had several reasons to do this:</p> <ul> <li>a single app service host, without a deployment slot, and a big db update (> 10 minutes)</li> <li>a db hitting a spending limit and no wish to update the limit</li> <li>moving dns names and certs between app service plans (they were recreated with a better name)</li> </ul> <h3 id="app_offlinehtm">App_offline.htm</h3> <p.</p> DotNetCore: Adding HTTPS to your MVC webapp 2016-08-20T00:00:00+00:00 <p> I wanted to use https in my dotnetcore application (v. 1.0.0-rc2-final) and had to dig around the web quite a bit to find the most recent and working method to accomplish this. Eventually a link in the MVC github site led to an example of how to fix this (<a title="link" href="" target="_blank">link</a>).
</p> <p> First, the most easy way I've found to do this, is to add some custom middleware for redirecting all http requests to https: </p> <div class="language-c# highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">class</span> <span class="nc">HttpsRedirectMiddleware</span> <span class="p">{</span> <span class="k">readonly</span> <span class="n">RequestDelegate</span> <span class="n">_next</span><span class="p">;</span> <span class="k">public</span> <span class="nf">HttpsRedirectMiddleware</span><span class="p">(</span><span class="n">RequestDelegate</span> <span class="n">next</span><span class="p">)</span> <span class="p">{</span> <span class="n">_next</span> <span class="p">=</span> <span class="n">next</span><span class="p">;</span> <span class="p">}</span> <span class="k">public</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">Invoke</span><span class="p">(</span><span class="n">HttpContext</span> <span class="n">context</span><span class="p">)</span> <span class="p">{</span> <span class="k">if</span> <span class="p">(!</span><span class="n">context</span><span class="p">.</span><span class="n">Request</span><span class="p">.</span><span class="n">IsHttps</span><span class="p">)</span> <span class="p">{</span> <span class="nf">HandleNonHttpsRequest</span><span class="p">(</span><span class="n">context</span><span class="p">);</span> <span class="p">}</span> <span class="k">else</span> <span class="p">{</span> <span class="k">await</span> <span class="nf">_next</span><span class="p">(</span><span class="n">context</span><span class="p">);</span> <span class="p">}</span> <span class="p">}</span> <span class="k">void</span> <span class="nf">HandleNonHttpsRequest</span><span class="p">(</span><span class="n">HttpContext</span> <span class="n">context</span><span class="p">)</span> <span class="p">{</span> <span class="c1">// only redirect for GET requests, otherwise 
the browser might not propagate the verb and request</span> <span class="c1">// body correctly.</span> <span class="k">if</span> <span class="p">(!</span><span class="kt">string</span><span class="p">.</span><span class="nf">Equals</span><span class="p">(</span><span class="n">context</span><span class="p">.</span><span class="n">Request</span><span class="p">.</span><span class="n">HttpContext</span><span class="p">.</span><span class="n">Request</span><span class="p">.</span><span class="n">Method</span><span class="p">,</span> <span class="s">"GET"</span><span class="p">,</span> <span class="n">StringComparison</span><span class="p">.</span><span class="n">OrdinalIgnoreCase</span><span class="p">))</span> <span class="p">{</span> <span class="n">context</span><span class="p">.</span><span class="n">Response</span><span class="p">.</span><span class="n">StatusCode</span> <span class="p">=</span> <span class="m">403</span><span class="p">;</span> <span class="p">}</span> <span class="k">else</span> <span class="p">{</span> <span class="kt">var</span> <span class="n">newUrl</span> <span class="p">=</span> <span class="kt">string</span><span class="p">.</span><span class="nf">Concat</span><span class="p">(</span> <span class="s">"https://"</span><span class="p">,</span> <span class="n">context</span><span class="p">.</span><span class="n">Request</span><span class="p">.</span><span class="n">Host</span><span class="p">.</span><span class="nf">ToUriComponent</span><span class="p">(),</span> <span class="n">context</span><span class="p">.</span><span class="n">Request</span><span class="p">.</span><span class="n">PathBase</span><span class="p">.</span><span class="nf">ToUriComponent</span><span class="p">(),</span> <span class="n">context</span><span class="p">.</span><span class="n">Request</span><span class="p">.</span><span class="n">Path</span><span class="p">.</span><span class="nf">ToUriComponent</span><span class="p">(),</span> <span 
class="n">context</span><span class="p">.</span><span class="n">Request</span><span class="p">.</span><span class="n">QueryString</span><span class="p">.</span><span class="nf">ToUriComponent</span><span class="p">());</span> <span class="n">context</span><span class="p">.</span><span class="n">Response</span><span class="p">.</span><span class="nf">Redirect</span><span class="p">(</span><span class="n">newUrl</span><span class="p">,</span> <span class="n">permanent</span><span class="p">:</span> <span class="k">true</span><span class="p">);</span> <span class="p">}</span> <span class="p">}</span> <span class="p">}</span> </code></pre></div></div> <p> I've added this right before the Mvc call: </p> <div class="language-c# highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">services</span><span class="p">.</span><span class="nf">AddMvc</span><span class="p">();</span> </code></pre></div></div> Links to Visual Studio Extensions 2016-06-04T00:00:00+00:00 <p>Some links to important Visual Studio extensions for later reference:</p> <ul> <li>Open Bin folder Visual Studio Extension: <a href="">Visual Studio Gallery</a></li> <li>Wakatime: <a href="">wakatime.com</a></li> <li>SlowCheetah <a href="">Visual Studio Gallery</a></li> <li>SonarLint <a href="">sonarlint.org</a></li> <li>T4MVC <a href="">github.io</a></li> </ul> Not geting new windows 10 preview builds after reverting to an older build? 
2016-03-03T00:00:00+00:00 <p>If.</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WindowsSelfHost\Applicability\RecoveredFrom </code></pre></div></div> <p><a href="">Source</a></p> Integrate SonarQube with TFS 2015 update 1 2016-01-23T00:00:00+00:00 <p>While migrating CI stuff from Jenkins into TFS 2015 SP1 I ran into <a href="">this</a> blog post from Microsoft explaining how to include <a href="">SonarQube</a>.</p> <p!</p> <p>In Jenkins, we used <a href="">OpenCover</a> to generate the coverage data, which integrates into the calls to the SonarQube runner. It’s open source, which is a big plus.</p> <h2 id="step-1">Step 1:</h2> <p>Create a new location on the server to place the coverage reports into. I’ve used <code class="highlighter-rouge">C:\OpenCover\</code> for this, and let OpenCover generate a file for each assembly we’re testing, using the name of the assembly as the filename for the xml report.</p> <h2 id="step-2">Step 2:</h2> <p>Add arguments to the SonarQube build step to tell Sonar where to find the coverage report.
This needs to be done in the start step:</p> <p>Add this argument to ‘Additional Settings’:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/d:sonar.cs.opencover.</p> <h2 id="step-3">Step 3:</h2> <p.</p> <p>Add these arguments:</p> <div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code> " </code></pre></div></div> <h2 id="step-4">Step 4:</h2> <p>Run the build and see some coverage results!</p> Windows 10: Enable 8.1 fly-out style WiFi / VPN menu 2016-01-09T00:00:00+00:00 <p>I :-(.</p> <p>Today, I’ve found out that there is a simple registry setting to revert the dialogs to the old Windows 8 style.</p> <p>Registry key:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Control Panel\Settings\Network\ReplaceVan </code></pre></div></div> <p>Values:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>0 - Default 1 - Network settings in settings app 2 - Windows 8 style </code></pre></div></div> <p>Note: you’ll need administrator rights to change the settings, or use a tool like <a href="">RegOwnershipEx</a>: take ownership of the folder and then open the registry editor to change the value.</p> <p>Thanks to <a href="">Nick Craver</a> for mentioning it was possible to change it.</p> Visual Studio 2015 update 1 - Service update 1 2016-01-03T00:00:00+00:00 <p>Just putting this out here for future reference: there is a service update for VS2015 update 1 to fix some issues. I needed this update to fix an error in VS with <strong>T4MVC</strong>.</p> <p>Knowledge base article: <a href="">Link</a></p> WakaTime 2015-05-19T00:00:00+00:00 <p><a href=""></a></p> <p>Plugin for Visual Studio (and other editors) to log hours spent in the editor.
Free account only retains the information for a couple of weeks and gives you an overview of time per project/solution and per language. Really neat to see those stats.</p> <p>Currently I have this extension installed on both my laptop, pc and in a VM designated for SharePoint development.</p> ASP.NET Demoproject started 2015-05-19T00:00:00+00:00 <p>Tonight,.</p> <p>Github: <a href=""></a></p>
Wire Signals
Yet Another Event Streams Library
Introduction
I’m from Poland but now I live in Berlin and work for Wire, an end-to-end encrypted messenger. I work there in the Android team even though I write my code in Scala. About two-thirds of Wire Android code is written in Scala, making it unique among Android apps — most of them are implemented in Java and/or Kotlin. Wire is a messenger and as such it must be very responsive: it has to quickly react to any events coming from the backend, as well as from the user, and from Android itself. So, during the last four years, the Android team developed its own implementation of event streams and so-called “signals” — a build-up on top of event streams.
This text is about them — about the theory, practice, and some corner cases you may encounter if you want to code your own implementation. But it’s also a bit about the wire-signals open-source library, which we developed, and which can do it for you. There are of course other, bigger solutions to the same problem, like Akka, Monix, RxJava, or maybe, in the case of Android, LiveData or Kotlin Flow. Personally, I like Akka very much. But I also think that it’s good to start small and then, if needed, you can switch to something bigger — or stay with the smaller solution if it’s enough for you.
Theory
Let’s start with a bit of theory. Basically, we are talking here about the Observer pattern. You can read all about it in the very old book “Design Patterns: Elements of Reusable Object-Oriented Software” by the so-called Gang of Four (Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides). The idea behind it is to solve two problems:
- How can a one-to-many dependency between objects be defined without making the objects tightly coupled?
- How can an object notify an open-ended number of other objects? [link]
“Open-ended” meaning that we don’t know how many of them are there and new ones can pop up at any moment.
The main object here is called a producer or a subject. It’s the source of events. The other objects are observers, also known as listeners or subscribers. The terminology is a bit liberal. Some people may even argue that there are differences between those terms, but I think the main reason behind it is that the pattern has been with us for a long time and has been implemented in various ways. Here I will call it simply an event stream and subscribers. We will talk about producers too, briefly, in a minute.
The event stream can produce — or better to say “broadcast” — an event at any moment. If there are no subscribers present the event will be wasted. We may say that the event stream does not have an internal state. It does not store the event for future use — it just broadcasts it and forgets about it, but that wouldn’t be entirely true. The event stream needs to store the set of references to the subscribers. The set is empty at first, but then any object in the app which fulfills the criteria of being a subscriber can subscribe to it. Criteria being that it can receive an event of the given type. The subscriber is added to the set and from now on every time the event stream broadcasts a new event, the subscriber will receive it.
There can be of course more than one subscriber. The event will then be sent to all of them. To finish receiving events the subscriber needs to unsubscribe. The fact that the subscriber has to unsubscribe explicitly is a bit of a drawback. In a complex application, the programmer may forget to call it when the subscriber is destroyed or simply stops being needed. In the JVM this may lead to a memory leak. The garbage collector will not be able to collect the subscriber because there will still be a now-defunct reference to it in the event stream. We can solve this problem with some sort of automatic unsubscription. We can for example model subscriptions as case classes holding references to both the subscriber and the event stream, and use them to unsubscribe in some sort of onStop methods. Or we can use event contexts — implicit objects which oversee subscriptions. You can look at the wire-signals documentation for more details.
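For illustration, a subscription handle of this kind could look like the following minimal sketch (not actual wire-signals code; the names Subscription and unsubscribe are placeholders, and EventStream.unsubscribe is assumed to simply remove the subscriber from the internal set):

```scala
// A handle returned when subscribing. Keeping it lets the owner cut the
// connection in an onStop method, so the event stream drops its reference
// to the subscriber and the garbage collector can reclaim it.
final class Subscription[E](stream: EventStream[E], subscriber: Subscriber[E]) {
  def unsubscribe(): Unit = stream.unsubscribe(subscriber)
}
```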
But where do the events come from? In the original Observer pattern, we assume the event stream is also the producer of events. It’s even there in the book that we solve a problem of one-to-many dependency. We have one producer that somehow creates the data and sends it away. But producing data and sending it are two operations, and we can decouple them. So the producer becomes whatever part of the app can produce the data and then give it to the event stream, and the event stream becomes the object responsible for handling subscribers and broadcasting the events to them. And since now the producer is separate from the event stream, nothing is preventing us from having more than one producer. The connection becomes many-to-many, with the event stream sitting in the middle. This way producers and subscribers don’t have to know about each other. They only have to know about the event stream that operates on data of the given type. But, in contrast to subscribers, producers don’t even have to register in the event stream.
Practice
Let’s say our producer here is an OkHttpClient. OkHttp is an HTTP client library very often used on Android. Let’s say we have an instance of it and we open a web socket — that is, a connection to the backend — and we wait for some data from it. So we don’t actually produce anything, it’s more like another link in a chain, but from the point of view of our Android app this is how our events are produced.
val stream = EventStream[String]()

okHttpClient.newWebSocket(request, new WebSocketListener {
  override def onMessage(webSocket: OkWebSocket, text: String): Unit = {
    debug(l"WebSocket received a text message.")
    stream ! text
  }
})
And the subscriber here can then be modelled as a trait and be implemented as such.
trait Subscriber[E] {
  def onEvent(event: E): Unit
}

class EventStream[E] {
  private var subscribers: Set[Subscriber[E]] = Set.empty

  def subscribe(subscriber: Subscriber[E]): Unit = subscribers += subscriber
  def publish(event: E): Unit = subscribers.foreach(_.onEvent(event))
}
But forcing every class to extend Subscriber to enable it to receive events is tedious. It’s verbose. We’re too lazy for that. Fortunately, we can turn this subscribe method above into a foreach that will take a function.
class EventStream[E] {
  private class EventSubscriber(f: E => Unit) extends Subscriber[E] {
    override def onEvent(event: E): Unit = f(event)
  }
  ...
  def foreach(f: E => Unit): Unit = subscribers += new EventSubscriber(f)
  ...
  @inline def !(event: E): Unit = publish(event)
}

stream.foreach { msg => ... }
Oh, and I took this opportunity to also introduce the exclamation mark operator as a shorthand for publish.
For example, we can now use this foreach to subscribe to the stream and display the text of the message in the message view.
val messageView: View = …
stream.foreach { str => messageView.setText(str) }
Okay. At this point you might have had a thought that this Subscriber trait here — it’s a one-method trait, so why don’t we just turn it into a function itself? Well, yes, I’m usually all for it. If our trait turns out to have only one method, I think it means we should really consider turning it into a function. But there’s a reason why I didn’t do it. And we’ll talk about it in a moment.
Transformations
But first, transformations. If broadcasting events was all an event stream could do, it wouldn’t really be that interesting. Often subscribers either want to transform the event somehow before using it — and many subscribers may want to transform the event in the same way — or they are interested only in a certain subset of events coming from the event stream. Or both. Or the logic can be even more complicated. For example, the event can be interesting to the subscriber only if a certain event from another event source was received before it.
To address those needs event stream implementations come with a long list of methods such as map, flatMap, filter, collect, zip, and so on. With them, you can move the logic which would otherwise have to be implemented in the subscriber to the event stream. In fact, each of them creates a new event stream that has the original one as its producer. The map creates a new event stream that publishes the transformed events, and the filter creates an event stream that publishes the event from the parent only if a certain condition is fulfilled.
class EventStream[E] {
def map[V](f: E => V): EventStream[V]
def flatMap[V](f: E => EventStream[V]): EventStream[V]
def filter(f: E => Boolean): EventStream[E]
def collect[V](pf: PartialFunction[E, V]): EventStream[V]
def zip(stream: EventStream[E]): EventStream[E]
...
}
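To make the “new event stream with the original one as its producer” idea concrete, here is a rough sketch of how map could be implemented (again, not the actual wire-signals implementation, just the shape of it):

```scala
def map[V](f: E => V): EventStream[V] = {
  val child = new EventStream[V]
  // the child subscribes to this (parent) stream: every incoming event
  // is transformed with f and republished on the child stream
  this.foreach { event => child ! f(event) }
  child
}
```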
You may notice how all those methods are basically the same as in the case of standard collections. We can treat an event stream as a collection — a special kind of collection, but still. You can compare it to the relationship between an Option and a Future: both hold one or zero elements, but a Future gives us access to its element only after some time. The same relationship exists between a standard sequence and an event stream: an event stream holds an undefined number of elements, and we don’t have access to them immediately but only after some time.
And since we can treat an event stream as a collection, and especially since we can use flatMap on it, we can use for-comprehensions:
val stream1 = EventStream[String]()
val stream2 = EventStream[Int]()

val resultStream = for {
  str <- stream1
  i   <- stream2
} yield s"$str: $i" // resultStream is of the type EventStream[String]

which is equivalent to:

val resultStream = stream1.flatMap { str => stream2.map(i => s"$str: $i") }
It might not look like it, because the example is simple, but the ability to use for-comprehensions for event streams is a major boost for the readability of our code. Imagine a whole long list of consecutive transformations of event streams, each based on some of the ones executed above, but also on other data. And you can just read it here, just like this: one line, one transformation after another.
Threads
A moment ago I skipped over one important issue. I said that a subscriber waits for an event coming from the event stream. But waiting means something different depending on whether we are working with only one thread, or with two or more. Consider this:
def main(): Unit = {
  val stream = EventStream[String]() // 1
  stream.foreach(println)            // 2
  stream ! "Hello, world!"           // 3
}
If this code works on only one thread, we will create a stream in the first line, subscribe to it in the second, and then just after publishing “Hello, world!” in the third line the control will come back to the event stream, which will call the println function; println will print out the string, and only then will the main method finish. But… that’s not how modern programs work.
In Android, for example, it’s customary to work with at least two threads belonging to separate execution contexts: UI and Background. The UI thread should be used only to display and refresh things on the screen. If our code does not touch the UI directly, it should work on the Background thread. So if we have a list of items we want to display, we do it on the UI thread, but to retrieve those items from the storage we should use the Background thread…
val storage: MyStorage[Item] = …

val adapter = new MyItemsAdapter[Item](
  this,
  android.R.layout.simple_list_item_1,
  storage.allItems
)

val listView: ListView = findViewById(R.id.listview)
listView.setAdapter(adapter)

storage.onChanged.foreach { newItems =>
  adapter.updateItems(newItems)
}

…
storage.updateItems(...)
You may already see the problem. If we don’t have a way to differentiate between UI and Background, then when an update happens to the storage, the foreach method of the onChanged event stream — and consequently updateItems on the adapter — will be called on the same thread. Fortunately, event streams are exactly the tool we need to jump from one thread to another with little ceremony. We use the foreach method to subscribe to the event stream. That works because the foreach method is a bit special. It’s different from map and flatMap in one important detail.
def foreach(f: Event => Unit)(implicit executionContext: ExecutionContext): Unit
In standard collections foreach is called immediately, so no execution context is needed. But in event streams, we can differentiate between the execution context of the source and the execution context of the subscriber. We can implement foreach so that it will take not only the function to execute but also a reference to the execution context in which the function should be executed. When a new event comes, the event stream goes through the collection of subscribers and for each calls the function f… but not immediately. Instead, it wraps the function f in a Future and runs it in the execution context of the subscriber.
trait Subscriber[E] {
  def onEvent(event: E): Unit
}

class EventSubscriber[E](f: E => Unit, ec: ExecutionContext) extends Subscriber[E] {
  override def onEvent(event: E): Unit = Future { f(event) }(ec)
}

class EventStream[E] {
  def foreach(f: E => Unit)(implicit ec: ExecutionContext): Unit =
    subscribers += new EventSubscriber(f, ec)
}

(again, not actual wire-signals code, but close enough)
Okay, let’s go through it step by step:

1. storage.updateItems makes some changes to the items in the storage on the Background thread,
2. then we have foreach on storage.onChanged on the UI thread, which subscribes to onChanged with the implicit execution context of the UI,
3. and then when the change happens on the Background thread — instead of executing adapter.updateItems on the same thread the update happened on — we wrap the call to adapter.updateItems in a Future and we call it in the execution context of the UI as soon as possible.
Of course there is a special case when the execution context of the publisher is the same as the subscriber’s execution context. That possibility can be implemented and used as well, but I’d suggest that the one above should be the default: we wrap the call in a Future and execute it at some point in the future. We shouldn’t care about when exactly, just as we don’t care about it when the execution contexts are different. So, in a way, this gives us better consistency in how the code behaves: we make the same small set of assumptions about it whether the execution contexts differ or not.
I’m aware that this does not really answer the question why the Subscriber trait cannot be a function instead. The trait is a bit more complicated than the trivial version I presented before, but anyway, I think it could be a function. If there’s one way I want wire-signals to stand out among other libraries like it, it’s that I want it to be minimalistic. And at this moment this part of the code seems to me to be too complicated. I want to work on it. And if you think you can help me in any way, please reach out.
Signals
And finally… signals! A signal is not a commonly agreed name like an event stream. It’s also not an implementation of a popular pattern, the way event streams are an implementation of the Observer pattern. But it is a pattern nonetheless, and it’s a pattern that came up out of necessity at my company and developed quite naturally. We implemented it, tested it, used it in several distinct places in our code — even though arguably all of them within the same Android application — and finally we documented it and moved the code to a separate open-source library. At more-or-less the same time Google came up with Android LiveData, which in many ways is a very similar concept. But it’s more tied to Android, while signals are a platform-independent implementation… as long as that platform understands Scala.
So, what is a signal? In short, a signal is an event stream with a cache. It’s a very simple, small distinction, but it’s also a very powerful one.
Whereas an event stream holds no internal state — except for the collection of subscribers — and just passes on events it receives, a signal keeps the last value it received. A new subscriber function registered in an event stream will be called only when a new event is published. A new subscriber function registered in a signal will be called immediately (or as soon as possible in the given execution context); and it will be called with the current value of the signal (unless the signal is not initialized yet), and then again it will be called when the value changes. A signal is also able to compare a new value published in it with the old one. The new value will be passed on only if it is different. Thus, a signal can help us with optimizing performance on both ends:
- as a cache for values which otherwise would require expensive computations to produce them every time we need them,
- and also as a way to ensure that subscriber functions are called only when the value actually changes, but not when the result of the intermediate computation is the same as before.
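A minimal sketch of that caching-and-deduplication behaviour might look like this (simplified, not the actual wire-signals code: the real implementation also deals with execution contexts, thread safety, and unsubscribing):

```scala
class Signal[V] {
  private var value: Option[V] = None
  private var subscribers: Set[V => Unit] = Set.empty

  def foreach(f: V => Unit): Unit = {
    subscribers += f
    value.foreach(f) // a new subscriber immediately receives the current value, if any
  }

  def publish(v: V): Unit =
    if (!value.contains(v)) { // pass the value on only if it differs from the cached one
      value = Some(v)
      subscribers.foreach(_(v))
    }

  @inline def !(v: V): Unit = publish(v)
}
```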
You can think of it as traffic lights. You’re the driver. You come to the crossroads and check the traffic lights. That means you subscribe to them, but also you immediately get the current value and can act on it. If it’s green, you go. If it’s red or yellow, you stop and wait for a change. So, one advantage is that in some cases you don’t have to wait. If it was an event stream, you would have to stop each time and wait until a new event told you that it’s safe to go. In the case of a signal, if you see a green light as the current value, you don’t have to stop. But there’s also another advantage: if — for any reason — the lights compute their new value and that value is the same as before, no new event comes. In that case, for you it’s completely transparent that anything was computed. You wait for a different value, not the same value computed again.
This second example is quite unrealistic if we talk about traffic lights, but consider this:
val signal = Signal[Int]()

signal.foreach {
  case n if n % 2 == 0 => complexComputations()
  case _ =>
}

signal ! 1
signal ! 2 // complexComputations executed
signal ! 2
signal ! 4 // complexComputations executed
signal ! 6 // complexComputations executed
signal ! 7
This is a signal of ints, and let’s say we want to perform some complex computations only if the value of the signal is even. And let’s assume, for the sake of argument, that we actually don’t have to perform complex computations every single time, but only if the value of the signal becomes even initially, or if it changes from odd to even later on. As we can see here, now the computations are also performed when the value changes from one even number to another even number. That’s not optimal. But let’s say it’s not invalid. Let’s say the result is the same as before and we only waste some CPU time this way.
But we can do better.
signal.map(_ % 2 == 0).foreach {
  case true => complexComputations()
  case _ =>
}

signal ! 1 // false
signal ! 2 // true, complexComputations executed
signal ! 2 // true, nothing changes
signal ! 4 // true, nothing changes
signal ! 6 // true, nothing changes
signal ! 7 // false
We can change the signal. We can map it and then call foreach only after the mapping. That new mapped signal holds a boolean value. The foreach of that signal will be called only when the boolean value changes, and the computations run only when it becomes true — that is, when the original number becomes even after being odd previously, or after the signal was empty and uninitialized. But if the number changes from one even number to another even number, the value of the mapped signal just stays the same, and so, because no change happened, the foreach body is not executed at all. And the complex computations are not executed.
Thank you
I think with this I will finish this text. It’s already quite long. If you want to know more:
- You can look at GitHub where the code of wire-signals is stored:
- You can read the documentation, together with Scala docs:
- Or you can just add it to your build.sbt and give it a try:
libraryDependencies += "com.wire" %% "wire-signals" % "0.3.2"
C# Hello World Example
C# is the newest enterprise programming language created by Microsoft to compete with Java. Let's start with the classic Hello World example.
C# Hello World code
At first glance, the C# Hello World example is very similar to Java, as the code below shows.
using System;

class Test
{
    static void Main()
    {
        Console.WriteLine("Hello World");
        Console.WriteLine(add(3, 4));
    }

    static int add(int x, int y)
    {
        return x + y;
    }
}
It has a class name and a void Main method; even the namespace concept looks similar: in Java it would be a package, and the keyword is import.

Console.WriteLine is similar to System.out.println.
Compile and run the example
We will compile and run the example from the Developer Command Prompt window that is installed with Visual Studio.

The prompt prepares all the settings for compiling. Execute csc with the file name as the parameter.

It will generate an executable file. Unlike Java, in which you get a class file that contains byte code and is executed in the JVM, the code C# generates can run directly on your machine.

But the exe generated by the C# compiler is different from the files generated by a C++ compiler. It's a CLR exe.

The CLR exe contains a CLR header and IL code, which is the byte code used in C#. At runtime, the IL code is converted to native code by the CLR.

A C# executable still needs the virtual machine, but the virtual machine is integrated into Windows, so the file behaves like a native executable.

The CLR exe also contains code that will show a dialog in case the CLR is not installed.
sub make_command
{
my @arr = @_;
for (@arr)
{
$_ = qq["$_"] if /\s/;
}
my $_command_to_execute = join(' ',@arr);
return ($_command_to_execute);
}
Of course, the multiple argument form of system isn't all that portable, either. On Windows, system(@ARGS) is system("@ARGS"), so you need to take proper care of quoting. On more unixish operating systems, you cannot quote the commands if you're going to use system(LIST), so some more care than in the OP is needed.
Of course it would be nice if Win32 system() knew when to add double quotes around strings...
On Windows, system(@ARGS) is system("@ARGS"), so you need to take proper care of quoting.
No it's not, so no you don't.
>perl -e"@cmd = ('perl', '-le print for @ARGV', 'foo', 'bar'); system @cmd;"
foo
bar

>perl -e"@cmd = ('perl', '-le print for @ARGV', 'foo bar'); system @cmd;"
foo bar
The Windows command line cannot handle some characters. You need to verify for their presence, but you won't be able to quote them.
ActivePerl does since 5.6.1. Can't verify Perl in general.
Just adding double quotes is not enough, not even on Windows. How do you pass a double quote as an argument to the invoked program (system('/usr/local/bin/foo','-bar','"','-baz','""'))? You need some kind of escaping, and that becomes more and more ugly as the distance to a classic unix /bin/sh grows. I've heard that cmd.exe uses the ^ for some escaping, while command.com just does not have any escaping at all.

Adding double quotes also is not safe on Unix-like systems, as it does not disable shell interpolation, so you probably want to use single quotes there, or better: use system(@ARGS).

I agree that system(@ARGS) on Windows should emulate system(@ARGS) on Unix-like systems, preferably by not messing with cmd.exe/command.com at all. Unfortunately, the Windows API does not offer a way to pass more than a single string to an invoked program, and splitting that string into C's argv is left to the application (or its runtime library). Avoiding command.com/cmd.exe is only the first step.
Alexander
|
31 December 2010 10:34 [Source: ICIS news]
By Caroline Murray
VALENCIA, Spain (ICIS)--European polyethylene terephthalate (PET) will remain tight and prices will stay firm through much of 2011 because of bullish feedstocks and limited availability, according to industry sources.
"Next year will be a sellers’ market," a PET manufacturer and buyer of upstream monoethylene glycol (MEG) and purified terephthalic acid (PTA) said.
Bottle-grade PET values in Europe have increased by over 26%, from lows of €1,050/tonne ($1,400/tonne) in August to above €1,300/tonne FD (free delivered)
Domestic PET supply was limited. This was because of earlier plant consolidation as well as uncompetitive imports. Asian customers substituting high-priced cotton for polyester PET resulted in less material flowing into
The lure of strong demand and high prices in Asia was diverting upstream MEG away from
Ethylene, the raw material for MEG, was at a two-year high in January, and MEG producers were seeking to claw this back from customers.
The first half of 2011 will be peppered with planned MEG turnarounds in Europe and the
"By the second half of the year I expect better availability but by quarter three [downstream] anti-freeze demand kicks in again and the prices will go up again," according to an MEG trader.
PTA supply does not flow freely and prices have been jumping up in line with its raw material, paraxylene (PX).
The €700s/tonne dominated PTA in 2010 but by November prices had surpassed €800/tonne and continued to show signs of continued increases.
"The [PTA] market was expected to be long in 2010 but demand is good and imports are not competitive," a buyer of PTA explained.
Despite new PTA capacity coming up in 2011, "the market will continue to be better for producers", the PTA buyer said.
An overhang of PX production problems in the fourth quarter will make for a snug start to 2011, according to a producer. PX plants will run in accordance with demand, so it only takes one hiccup for availability to tighten further.
PX prices jumped from €805/tonne to a surprising €1,030/tonne in the fourth quarter alone.
Price-wise,
A fibres and filaments producer summed it up by saying: "Europe is a victim of what is going to happen in
Poor harvests have led to a short supply of cotton. This remains an underlying factor for PET going into 2011, as consumers, particularly in
Exchange rates will carry on playing a crucial role in these global commodity markets.
"I expect a volatile year... There will be more price movement than in 2010 because of the exchange rate and because of what
All this, however, is set against a backdrop of a poor economic situation, supported in 2010 by considerable stimulus packages across the world.
"There is less money available...families, consumers could be worse off in 2011 than they were in 2010," a PET producer said.
The onset of better cotton harvests could result in customers refusing to pay the high prices. This though, was unlikely to occur before the second half of the year, sources said.
"How long can the polyester chain really stand the increases? Fibre producers will cut production," a PET producer said, echoing speculation from other buyers and sellers.
($1 = €0.75)
Python Boolean array in NumPy
In this post, I will be writing about how you can create boolean arrays in NumPy and use them in your code.
Overview
Boolean arrays in NumPy are simple NumPy arrays with array elements as either ‘True’ or ‘False’. Other than creating Boolean arrays by writing the elements one by one and converting them into a NumPy array, we can also convert an array into a ‘Boolean’ array in some easy ways, that we will look at here in this post.
In this process, all elements other than 0, None and False are considered True.
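As a quick illustrative check of this truthiness rule (not from the original article), converting a mixed list shows which values become False:

```python
import numpy as np

# 0, None and False become False; everything else becomes True
values = [0, 1, 2, None, False, True]
bool_arr = np.array(values, dtype=object).astype(bool)
print(bool_arr.tolist())  # [False, True, True, False, False, True]
```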
Boolean Array using dtype=’bool’ in NumPy – Python
Let’s take an example:
import numpy as np
import random

array = []
for _ in range(10):
    num = random.randint(0, 1)
    array.append(num)
print(f'Original Array={array}')  # prints the original array with 0's and 1's

nump_array = np.array(array, dtype='bool')
print(f'numpy boolean array:{nump_array}')  # prints the converted boolean array
Here the output will look somewhat like this:
output:
Boolean Array using comparison in NumPy
Example:
import numpy as np
import random

array = np.arange(10, 30)
print('1st array=', array, '\n')

array_bool = array > 15
print(f'First boolean array by comparing with an element:\n{array_bool}\n\n')

array_2 = [random.randint(10, 30) for i in range(20)]  # second array using list comprehension
print(f'Second array:\n{array_2}')

array2_bool = array_2 > array
print(f'second boolean array by comparing second array with 1st array:\n{array2_bool}')
In the above piece of code, the ‘array’ is created using the
numpy.arange() function, with elements from 10 up to (but not including) 30 (20 elements).

Now the boolean array (array_bool) is formed by comparing it with 15: elements greater than 15 are noted as True, the rest as False.

The second array is created using a simple ‘list comprehension’ technique, with the same length as ‘array’ and elements that are random in the range 10 to 30 (inclusive). Now the second boolean array is created by comparing the elements of the first array with the elements of the second array at the same index.
Output:
Note: This is known as ‘Boolean Indexing’ and can be used in many ways; one of them is feature extraction in machine learning. Or simply, one can think of extracting an array of odd/even numbers from an array of 100 numbers.
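The even-number extraction mentioned above can be sketched in a few lines (an illustrative example, not from the original article):

```python
import numpy as np

numbers = np.arange(1, 11)   # [1, 2, ..., 10]
mask = numbers % 2 == 0      # boolean array, True where the element is even
evens = numbers[mask]        # boolean indexing keeps only the True positions

print(evens)  # [ 2  4  6  8 10]
```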
Converting to numpy boolean array using .astype(bool)
For example, there is a feature array of some images, and you want to just store the bright pixels and eliminate the dark pixels(black=0). You can do this by converting the pixels array to boolean and use the Boolean array indexing to eliminate the black pixels!
Example:
import numpy
import random

random.seed(0)
arr_1 = [random.randint(0, 1) for _ in range(20)]
print(f'Original Binary array:\n{arr_1}\n')

arr_bool = numpy.array(arr_1).astype(bool)
print(f'Boolean Array:\n{arr_bool}')
Output:
WebReference.com - Excerpt from Inside XSLT, Chapter 2, Part 6 (5/5)
Inside XSLT
Using Internet Explorer to Transform XML Documents
There's one more topic to discuss during this overview of stylesheets, and that's how to use stylesheets in the Internet Explorer. As we saw in Chapter 1, you can use JavaScript to read in XML and XSL documents, and use the MSXML3 parser to perform the transformation. (For more information on this, see Chapter 10. You can also read about the Internet Explorer support at). [Editor's note: this link is no longer available; try the msdn XML section at.]
However, if you want to open an XML document directly in Internet Explorer by navigating to it (for example, by typing its URI into the Address box), you're relying on the browser to use the <?xml-stylesheet?> and <xsl:stylesheet> elements itself, which means you need to make a few changes if you're using IE 5.5 or earlier.
Internet Explorer 6.0 and Getting and Installing the MSXML Parser
Note: IE 6.0 is just out as this book goes to press, and it does support full XSLT syntax (except that you still must use the type "text/xsl" for stylesheets like this:
<?xml-stylesheet type="text/xsl" href="planets.xsl"?>instead of "text/xml"). If you're using IE 5.5 or earlier, you can also download and install the latest version of the MSXML parser directly from Microsoft, replacing the earlier one used by the Internet Explorer. When you do, you don't need to make the modifications listed in this section. For more information, see. The download is currently at. (Note, however, that Microsoft seems to reorganize its site every fifteen minutes or so.) If you're using IE 5.5 or earlier, I urge you to download MSXML so that you won't have to modify all your XSLT stylesheets to use them in IE, or upgrade to version 6.0 or later.
It's necessary to modify both planets.xml and planets.xsl for IE version 5.5 or earlier.
To use planets.xml with IE, you convert the type attribute in the
<?xml-stylesheet?>
processing instruction from "text/xml" to "text/xsl":
Listing 2.14: Internet Explorer Version of planets.xml
<>
You must also convert the stylesheet planets.xsl for use in IE version 5.5 or
earlier. A major difference between the W3C XSL recommendation and the XSL implementation in IE
is that version 5.5 or earlier does not implement any default XSL rules-see Chapter 3 (note that
IE version 6.0, just out as this book goes to press, does not have this problem). That means
for IE version 5.5 or earlier, I have to include an XSL rule for the root node of the document,
which you specify with "/". I also have to use a different XSL namespace in the stylesheet,
"", and omit the
version attribute in the
<xsl:stylesheet> element:
Listing 2.15. Internet Explorer Version of planets.xsl
<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/TR/WD-xsl">

    <xsl:template match="/">
        <HTML>
            <HEAD>
                <TITLE>
                    The Planets Table
                </TITLE>
            </HEAD>
            <BODY>
                <H1>
                    The Planets Table
                </H1>
                <TABLE BORDER="2">
                    <TR>
                        <TD>Name</TD>
                        <TD>Mass</TD>
                        <TD>Radius</TD>
                        <TD>Day</TD>
                    </TR>
                    <xsl:apply-templates/>
                </TABLE>
            </BODY>
        </HTML>
    </xsl:template>

    <xsl:template match="PLANETS">
        <xsl:apply-templates/>
    </xsl:template>

    <xsl:template match="PLANET">
        <TR>
            <TD><xsl:value-of select="NAME"/></TD>
            <TD><xsl:value-of select="MASS"/></TD>
            <TD><xsl:value-of select="RADIUS"/></TD>
            <TD><xsl:value-of select="DAY"/></TD>
        </TR>
    </xsl:template>

</xsl:stylesheet>
And that's it! Now we've successfully implemented planets.xml and planets.xsl for direct viewing in the Internet Explorer. Those are the changes you must make to use this browser when you navigate to XSL-styled XML documents directly.
That completes this overview of working with stylesheets in XSL. The next chapter looks at the heart of stylesheets, templates, in more detail.
Created: October 16, 2001
Revised: October 16, 2001
URL:
|
http://www.webreference.com/authoring/languages/xml/insidexslt/chap2/6/5.html
|
CC-MAIN-2017-26
|
refinedweb
| 664
| 67.86
|
Web development has seen a huge advent of Single Page Applications (SPAs) in the past couple of years. Early development was simple: reload a complete page to display a change or perform a user action. The problem with this was a huge round-trip time for the complete request to reach the web server and come back to the client.
Then came AJAX, which sent a request to the server, and could update parts of the page without reloading the current page. Moving in the same direction, we saw the emergence of the SPAs.
SPAs wrap up the heavy frontend content and deliver it to the client browser just once, while maintaining a small channel for communication with the server based on events; this is usually complemented by a thin API on the web server.
The growth in such apps has been complemented by JavaScript libraries and frameworks such as Ext JS, KnockoutJS, BackboneJS, AngularJS, EmberJS, and more recently, React and Polymer.
Let's take a look at how React fits in this ecosystem and get introduced to it in this chapter.
In this chapter, we will cover the following topics:
What is React and why do we use React?
Data flows in the component
Component displays the view based on state of the component
Component defines display of the view, irrespective of data contained, thus reducing the dependency and complexity of state for display
User interactions may change state of component from handlers
Components are reused and re-rendered
ReactJS tries to solve the problem from the View layer. It can very well be defined and used as the V in any of the MVC frameworks. It's not opinionated about how it should be used. It creates abstract representations of views. It breaks down parts of the view in the Components. These components encompass both the logic to handle the display of view and the view itself. It can contain data that it uses to render the state of the app.
To avoid complexity of interactions and subsequent render processing required, React does a full render of the application. It maintains a simple flow of work.
React is founded on the idea that DOM manipulation is an expensive operation and should be minimized. It also recognizes that optimizing DOM manipulation by hand will result in a lot of boilerplate code, which is error-prone, boring, and repetitive.
React solves this by giving the developer a virtual DOM to render to instead of the actual DOM. It finds the difference between the real DOM and the virtual DOM and performs the minimum number of DOM operations required to achieve the new state.
React is also declarative. When the data changes, React conceptually hits the refresh button and knows to only update the changed parts.
This simple flow of data, coupled with dead simple display logic, makes development with ReactJS straightforward and simple to understand.
Who uses React? If you've used any of the services such as Facebook, Instagram, Netflix, Alibaba, Yahoo, E-Bay, Khan-Academy, AirBnB, Sony, and Atlassian, you've already come across and used React on the Web.
In just under a year, React has seen adoption from major Internet companies in their core products.
In its first-ever conference, React also announced the development of React Native. React Native allows the development of mobile applications using React. It transpiles React code to the native application code, such as Objective-C for iOS applications.
At the time of writing this, Facebook already uses React Native in its Groups and Ads Manager app.
In this book, we will be following a conversation between two developers, Mike and Shawn. Mike is a senior developer at Adequate Consulting and Shawn has just joined the company. Mike will be mentoring Shawn and conducting pair programming with him.
It's a bright day at Adequate Consulting. It's also Shawn's first day at the company. Shawn had joined Adequate to work on its amazing products and also because it uses and develops exciting new technologies.
After onboarding the company, Shelly, the CTO, introduced Shawn to Mike. Mike, a senior developer at Adequate, is a jolly man, who loves exploring new things.
"So Shawn, here's Mike", said Shelly. "He'll be mentoring you as well as pairing with you on development. We follow pair programming, so expect a lot of it with him. He's an excellent help."
With that, Shelly took leave.
"Hey Shawn!" Mike began, "are you all set to begin?"
"Yeah, all set! So what are we working on?"
"Well we are about to start working on an app using. Open Library is a collection of the world's classic literature. It's an open, editable library catalog for all the books. It's an initiative under and lists free book titles. We need to build an app to display the most recent changes in the record by Open Library. You can call this the Activities page. Many people contribute to Open Library. We want to display the changes made by these users to the books, addition of new books, edits, and so on, as shown in the following screenshot:
"Oh nice! What are we using to build it?"
"Open Library provides us with a neat REST API that we can consume to fetch the data. We are just going to build a simple page that displays the fetched data and format it for display. I've been experimenting and using ReactJS for this. Have you used it before?"
"Nope. However, I have heard about it. Isn't it the one from Facebook and Instagram?"
"That's right. It's an amazing way to define our UI. As the app isn't going to have much of logic on the server or perform any display, it is an easy option to use it."
"As you've not used it before, let me provide you a quick introduction."
"Have you tried services such as JSBin and JSFiddle before?"
"No, but I have seen them."
"Cool. We'll be using one of these, therefore, we don't need anything set up on our machines to start with."
"Let's try on your machine", Mike instructed. "Fire up"
"You should see something similar to the tabs and panes to code on and their output in adjacent pane."
"Go ahead and make sure that the HTML, JavaScript, and Output tabs are clicked and you can see three frames for them so that we are able to edit HTML and JS and see the corresponding output."
"That's nice."
"Yeah, the good thing about this is that you don't need to perform any setup. Did you notice the Auto-run JS option? Make sure it's selected. This option causes JSBin to reload our code and show its output, so that we don't need to keep clicking Run with JS to execute and see the result."
"Ok."
"Alright then! Let's begin. Go ahead and change the title of the page, to say,
React JS Example. Next, we need to set up and include the React library in our file."
"React's homepage is located at. Here, we'll also locate the downloads available for us so that we can include them in our project. There are different ways to include and use the library.
We can make use of Bower or install via npm. We can also include it as an individual download, directly available from the fb.me domain. There is a development version, which is the full version of the library, as well as a production version, which is its minified counterpart. There is also an add-ons version. We'll take a look at this later though."
"Let's start by using the development version, which is the unminified version of the React source. Add the following to the file header:"
<script src=""></script>
"Done".
"Awesome, let's see how this looks."
<!DOCTYPE html>
<html>
<head>
<script src=""></script>
<meta charset="utf-8">
<title>React JS Example</title>
</head>
<body>
</body>
</html>
"So Shawn, we are all set to begin. Let's build our very first React App. Go ahead and add the following code to the JavaScript section of JSBin:"
var App = React.createClass({
  render: function(){
    return(React.createElement("div", null, "Welcome to Adequate, Mike!"));
  }
});

React.render(React.createElement(App), document.body);
"Here it is. You should see the output section of the page showing something similar to the following:"
Welcome to Adequate, Mike!
"Nice Mike. I see that we are making use of this React object to create classes?"
"That's right. We are creating, what are called as Components in React."
"The entry point to the ReactJS library is the React object. Once the
react.js library is included, it is made available to us in the global JavaScript namespace."
"
React.createClass creates a component with the given specification. The component must implement the render method that returns a single child element as follows:"
var App = React.createClass({
  render: function(){
    return(React.createElement("div", null, "Welcome to Adequate, Mike!"));
  }
});
React will take care of calling the render method of the component to generate the HTML."
Note
Even if the render method needs to return a single child, that single child can have an arbitrarily deep structure to contain full-fledged HTML page parts.
"Here, we are making use of
React.createElement to create our content. It's a singleton method that allows us to create a
div element with the "
Welcome to Adequate, Mike! contents.
React.createElement creates a
ReactElement, which is an internal representation of the DOM element used by React. We are passing null as the second argument. This is used to pass and specify attributes for the element. Right now, we are leaving it as blank to create a simple div."
"The type of
ReactElement can be either a valid HTML tag name like
span,
div,
h1 and so on or a component created by
React.createClass itself."
"Once we are done creating the component, it can be displayed using the
React.render method as follows:"
React.render(React.createElement(App), document.body);
"Here, a new
ReactElement is created for the
App component that we have created previously, and it is then rendered into the HTML element
document.body. This is called the
mountNode, or mount point for our component, and acts as the root node. Instead of passing
document.body directly as a container for the component, any other DOM element can also be passed."
"Mike, go ahead and change the text passed to the div as
Hello React World!. We should start seeing the change and it should look something similar to the following:"
Hello React World!
"Nice."
"Mike, while constructing the first component, we also got an overview of React's top-level API, that is, making use of
React.createClass,
React.createElement, and
React.render."
"Now, the component that we just built to display this hello message is pretty simple and straightforward. However, the syntax can get challenging and it keeps growing when building complex things. Here's where JSX comes in handy."
"JSX?"
"JSX is an XML-like syntax extension to ECMAScript without any defined semantics. It has a concise and familiar syntax with plain HTML and it's familiar for designers or non-programmers. It can also be used directly from our JavaScript file!"
"What? Isn't it bad?"
"Well, time to rethink the best practices. That's right, we will be bringing our view and its HTML in the JavaScript file!"
"Let's see how to start using it. Go ahead and change the contents of our JavaScript file as follows:"
var App = React.createClass({
  render: function(){
    return <div> Hello, from Shawn! </div>;
  }
});

React.render(React.createElement(App), document.body);
"As you can see, what we did here was that instead of using
createElement, we directly wrote the
div tag. This is very similar to writing HTML markup directly. It also works right out of the JavaScript file."
"Mike, the code is throwing some errors on JSBin."
"Oh, right. We need to make use of the JSX transformer library so that React and the browser can understand the syntax. In our case, we need to change the type of JavaScript used to interpret this code: switch from JavaScript to JSX (React) in the dropdown on the JavaScript frame header, as follows:"
"That should do it."
"Looks good, Mike. It's working."
"Now you will see something similar to the following:"
Hello, from Shawn!
"That's good to start, Shawn. Let's move back to the task of building our app using Open Library's Recent changes API now. We already have a basic prototype ready without using ReactJS."
"We will be slowly replacing parts of it using ReactJS."
"This is how the information is displayed right now, using server-side logic, as follows:"
"First task that we have is to display the information retrieved from the Open Library Recent Changes API in a table using ReactJS similar to how it's displayed right now using server-side."
"We will be fetching the data from the Open Library API similar to the following:"
var data = [{
  "when": "2 minutes ago",
  "who": "Jill Dupre",
  "description": "Created new account"
}, {
  "when": "1 hour ago",
  "who": "Lose White",
  "description": "Added fist chapter"
}, {
  "when": "2 hours ago",
  "who": "Jordan Whash",
  "description": "Created new account"
}];
"Let's use this to prototype our app for now. Before that, let's take a look at the simple HTML version of this app. In our
React.render method, we start returning a table element, as follows:"
var App = React.createClass({
  render: function(){
    return <table>
      <thead>
        <th>When</th>
        <th>Who</th>
        <th>Description</th>
      </thead>
      <tr>
        <td>2 minutes ago</td>
        <td>Jill Dupre</td>
        <td>Created new account</td>
      </tr>
      <tr>
        <td>1 hour ago</td>
        <td>Lose White</td>
        <td>Added fist chapter</td>
      </tr>
      <tr>
        <td>2 hours ago</td>
        <td>Jordan Whash</td>
        <td>Created new account</td>
      </tr>
    </table>
  }
});
This should start displaying our table with three rows. Now, go ahead and add a heading at the top of this table from the
React App, as follows:"
...
return <h1>Recent Changes</h1>
<table>
....
</table>
...
"There, something like that?" asked Shawn. "Oh, that didn't work."
"That's because React expects our render method to always return a single HTML element. In this case, after you added the
h1 heading, our app started returning two elements, which is wrong. There'll be many cases when you will come across this. To avoid this, just wrap the elements in a
div or
span tag. The main idea is that we just want to return a single element from the render method."
"Got it. Something like this?"
...
return <div>
  <h1>Recent Changes</h1>
  <table>
  ....
  </table>
</div>
...
"Awesome! Looks good. Now, let's change our table that is displaying static information, to start fetching and displaying this information in the rows from the JSON data that we had before."
"We'll define this data in the render method itself and see how we would be using it to create our table. We'll basically just be looping over the data and creating elements, that is, table rows in our case, for the individual data set of events. Something like this:"
...
var rows = data.map(function(row){
  return <tr>
    <td>{row.when}</td>
    <td>{row.who}</td>
    <td>{row.description}</td>
  </tr>
});
...
"Notice how we are using
{} here.
{} is used in JSX to embed dynamic information in our view template. We can use it to embed the JavaScript objects in our views, for example, the name of a person or heading of this table. As you can see, what we are doing here is using the
map function to loop over the dataset that we have. Then, we are returning a table row, constructed from the information available from the row object: the details about when the event was created, who created it, and the event description."
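Stripped of JSX, the mapping step described above is plain JavaScript: Array.prototype.map turns each change record into one row. The rowFor helper below is illustrative, not part of the book's code:

```javascript
var data = [
  { when: "2 minutes ago", who: "Jill Dupre", description: "Created new account" },
  { when: "1 hour ago", who: "Lose White", description: "Added fist chapter" }
];

// Each record becomes one "row": the same shape the JSX version
// renders as <tr>/<td> elements.
function rowFor(change) {
  return [change.when, change.who, change.description];
}

var rows = data.map(rowFor);
console.log(rows.length);   // 2
console.log(rows[1][0]);    // "1 hour ago"
```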
"We are using JSX syntax here to construct the rows of table. However, it is not used as the final return value from render function."
"That's correct, Shawn. React with JSX allows us to arbitrarily create elements to be used in our views, in our case, creating it dynamically from the dataset that we have. The rows variable now contains a part of view that we had used at a different place. We can also build another component of the view on top of it."
"That's the beauty of it. React allows us to dynamically create, use, and reuse the parts of views. This is helpful to build our views, part by part, in a systematic way."
"Now, after we are done with building our rows, we can use them in our final render call."
"So now, the return statement will look something similar to the following:"
...
return <table>
  <thead>
    <th>When</th>
    <th>Who</th>
    <th>Description</th>
  </thead>
  {rows}
</table>
...
"Here's how the complete render method now looks after building up rows with static data:"
render: function(){
  var rows = data.map(function(row){
    return <tr>
      <td>{row.when}</td>
      <td>{row.who}</td>
      <td>{row.description}</td>
    </tr>
  });
  return <table>
    <thead>
      <th>When</th>
      <th>Who</th>
      <th>Description</th>
    </thead>
    {rows}
  </table>
}
"That's starting to look like where we want to reach."
"Do we define our data and everything else in the render method?"
"I was just getting to that. Our component should not contain this information. The information should be passed as a parameter to it."
"React allows us to pass the JavaScript objects to components. These objects would be passed when we call the
React.render method and create an instance of the
<App> component. The following is how we can pass objects to it:"
React.render(<App title='Recent Changes'/>, document.body);
"Notice how we are using the
<App/> syntax here, instead of
createElement. As I mentioned previously, we can create elements from our components and represent them using JSX as done earlier."
React.render(React.createElement(App), document.body)
"The preceding code becomes the following:"
React.render(<App/>, document.body)
"That looks even cleaner", said Shawn.
"As you can see, we are passing the title for our table as the
title parameter, followed by the contents of the title. React makes the data passed to the component available as something called
props. The
props, short for properties, are a component's configuration options that are passed to the component when initializing it."
"These
props are just plain JavaScript objects. They are made accessible to us within our component via the
this.props method. Let's try accessing this from the
render method, as follows:"
...
render: function(){
  console.log(this.props.title);
}
...
"That should start logging the title that we passed to the component to the console."
"Now, let's try to abstract the headings as well as the JSON data out of the
render method and start passing them to the component, as follows:"
var data = [{
  "when": "2 minutes ago",
  "who": "Jill Dupre",
  "description": "Created new account"
},
....
}];

var headings = ['When', 'Who', 'Description']

<App headings = {headings} data = {data} />
"There. We pulled the data out of the
render method and are now passing it to our component."
"We defined the dynamic headers for our table that we will start using in the component."
"Here the curly braces, used to pass the parameters to our component, are used to specify the JavaScript expressions that will be evaluated and then used as attribute values."
"For example, the preceding JSX code will get translated into JavaScript by React, as follows:"
React.createElement(App, { headings: headings, data: data });
"We will revisit
props later. However, right now, let's move on to complete our component."
"Now, using the passed data and headings via
props, we need to generate the table structure in the app's
render method."
"Let's generate the headings first, as follows:"
var App = React.createClass({
  render: function(){
    var headings = this.props.headings.map(function(heading) {
      return(<th> {heading} </th>);
    });
  }
});
"Notice, how we are using
this.props.headings to access the passed information about headings. Now let's create rows of the table similar to what we were doing earlier:"
var App = React.createClass({
  render: function(){
    var headings = this.props.headings.map(function(heading) {
      return(<th> {heading} </th>);
    });
    var rows = this.props.data.map(function(change) {
      return(<tr>
        <td> { change.when } </td>
        <td> { change.who } </td>
        <td> { change.description } </td>
      </tr>);
    });
  }
});
"Finally, let's put the headings and rows together in our table."
var App = React.createClass({
  render: function(){
    var headings = this.props.headings.map(function(heading) {
      return(<th> {heading} </th>);
    });
    var rows = this.props.data.map(function(change) {
      return(<tr>
        <td> {change.when} </td>
        <td> {change.who} </td>
        <td> {change.description} </td>
      </tr>);
    });
    return(<table>
      {headings}
      {rows}
    </table>);
  }
});

React.render(<App headings = {headings} data = {data} />, document.body);
"The table is now displayed with the passed dynamic headers and JSON data."
"The headings can be changed to
["Last change at", "By Author", "Summary"] and the table in our view will get updated automatically."
"Alright, Shawn, go ahead and add a title to our table. Make sure to pass it from the props."
"Ok," said Shawn.
"Now, the render method will be changed to the following:"
...
return <div>
  <h1> {this.props.title} </h1>
  <table>
    <thead>
      {headings}
    </thead>
    {rows}
  </table>
</div>
...
"While the call to
React.render will change to the following:"
var title = 'Recent Changes'; React.render(<App headings={headings} data={data} title={title}/>, document.body);
"Awesome. You are starting to get the hang of it. Let's see how this looks in completion, shall we?"
var App = React.createClass({
  render: function(){
    var headings = this.props.headings.map(function(heading) {
      return(<th> {heading} </th>);
    });
    var rows = this.props.data.map(function(row){
      return <tr>
        <td>{row.when}</td>
        <td>{row.who}</td>
        <td>{row.description}</td>
      </tr>
    });
    return <div>
      <h1>{this.props.title}</h1>
      <table>
        <thead>
          {headings}
        </thead>
        {rows}
      </table>
    </div>
  }
});

var data = [{
  "when": "2 minutes ago",
  "who": "Jill Dupre",
  "description": "Created new account"
}, {
  "when": "1 hour ago",
  "who": "Lose White",
  "description": "Added fist chapter"
}, {
  "when": "2 hours ago",
  "who": "Jordan Whash",
  "description": "Created new account"
}];

var headings = ["Last updated at", "By Author", "Summary"]
var title = "Recent Changes";

React.render(<App headings={headings} data={data} title={title}/>, document.body);
"We should again start seeing something as follows:"
"Here we have it, Shawn. Our very first component using React!", said Mike.
"This looks amazing. I can't wait to try out more things in React!", exclaimed Shawn.
In this chapter, we started with React and built our first component. In the process, we studied the top-level API of React to construct components and elements. We used JSX to construct the components. We saw how to display static information using React and then gradually replaced all the static information with dynamic information using props. In the end, we were able to tie all ends together and display mock data in the format that is returned from Open Library's Recent Changes API using React.
In the next chapter, we will dive deep into JSX internals and continue building our application for Recent Changes API.
|
https://www.packtpub.com/product/reactjs-by-example-building-modern-web-applications-with-react/9781785289644
|
CC-MAIN-2020-40
|
refinedweb
| 3,849
| 66.13
|
GridDataFormats 0.2.3
Reading and writing of data on regular grids in Python
The gridDataFormats package provides classes to unify reading and writing n-dimensional datasets. One can read grid data from files, make them available as a Grid object, and write the data out again.
The Grid class
A Grid consists of a rectangular, regular, N-dimensional array of data. It contains
- The position of the array cell edges.
- The array data itself.
This is equivalent to knowing
- The origin of the coordinate system (i.e. which data cell corresponds to (0,0,…,0))
- The spacing of the grid in each dimension.
- The data on a grid.
Grid objects have some convenient properties:
- The data is represented as a numpy.array and thus shares all the advantages coming with this sophisticated and powerful library.
- They can be manipulated arithmetically, e.g. one can simply add or subtract two of them and get another one, or multiply by a constant. Note that all operations are defined point-wise (see the NumPy documentation for details) and that only grids defined on the same cell edges can be combined.
- A Grid object can also be created from within python code e.g. from the output of the numpy.histogramdd function.
- The representation of the data is abstracted from the format that the files are saved in. This makes it straightforward to add additional readers for new formats.
- The data can be written out again in formats that are understood by other programs such as VMD or PyMOL.
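The point-wise arithmetic described above can be sketched with plain NumPy arrays standing in for the data of two grids defined on the same cell edges:

```python
import numpy as np

# Data arrays of two grids sharing the same cell edges (same shape).
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.5, 0.5], [1.0, 1.0]])

C = B - A        # point-wise subtraction, as with Grid objects
D = 2.0 * A      # multiplication by a constant
print(C[0, 0])   # -0.5
print(D[1, 1])   # 8.0
```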
Examples
In most cases, only one class is important, the gridData.Grid, so we just load this right away:
from gridData import Grid
From a OpenDX file:
g = Grid("density.dx")
From a gOpenMol PLT file:
g = Grid("density.plt")
From the output of numpy.histogramdd:
import numpy r = numpy.random.randn(100,3) H, edges = numpy.histogramdd(r, bins = (5, 8, 4)) g = Grid(H, edges=edges)
For other ways to load data, see the docs for gridData.Grid
Subtracting two densities
Assuming one has two densities that were generated on the same grid positions, stored in files A.dx and B.dx, one first reads the data into two Grid objects:
A = Grid('A.dx') B = Grid('B.dx')
Subtract A from B:
C = B - A
and write out as a dx file:
C.export('C.dx')
The resulting file C.dx can be visualized with any OpenDX-capable viewer, or later read-in again.
Resampling
Load data:
A = Grid('A.dx')
Interpolate with a cubic spline to twice the sample density:
A2 = A.resample_factor(2)
Downsample to half of the bins in each dimension:
Ahalf = A.resample_factor(0.5)
Resample to the grid of another density, B:
B = Grid('B.dx') A_on_B = A.resample(B.edges)
or even simpler
A_on_B = A.resample(B)
Note
The cubic spline generates regions with values that did not occur in the original data; in particular, if the original data's lowest value was 0, then the spline interpolation will probably produce some values <0 near regions where the density changed abruptly.
- Author: Oliver Beckstein
- Bug Tracker:
- Download URL:
- Keywords: science array density
- License: GPLv3
- Categories
- Package Index Owner: Oliver.Beckstein
- DOAP record: GridDataFormats-0.2.3.xml
|
https://pypi.python.org/pypi/GridDataFormats/0.2.3
|
CC-MAIN-2017-09
|
refinedweb
| 545
| 59.4
|
We use the function
list_words() to get a list of unique words with more than three characters in lower case:
def list_words(text):
    words = []
    words_tmp = text.lower().split()
    for w in words_tmp:
        if w not in words and len(w) > 3:
            words.append(w)
    return words
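A quick usage sketch of list_words() with an illustrative sentence; short words and duplicates are dropped:

```python
def list_words(text):
    words = []
    words_tmp = text.lower().split()
    for w in words_tmp:
        if w not in words and len(w) > 3:
            words.append(w)
    return words

# "the", "cat", "mat", ... have <= 3 characters and are skipped;
# repeated words are kept only once.
print(list_words("The cat sat on the mat because the mat was warm"))
# ['because', 'warm']
```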
For a more advanced term-document matrix, we can use the Python textmining package from:
The
training() function creates variables to store the data needed for the classification. The
c_words variable is a dictionary with the unique words and its number of occurrences in the text (frequency) by category. The
c_categories variable stores a dictionary of each category and its number of texts. Finally,
c_text and
c_total_words store the ...
|
https://www.safaribooksonline.com/library/view/practical-data-analysis/9781785289712/ch04s05.html
|
CC-MAIN-2018-39
|
refinedweb
| 119
| 55.24
|
GNOME Bugzilla – Bug 635172
When calling into overrides for some reason we are converting str("2") to int(2)
Last modified: 2012-02-10 09:34:17 UTC
Created attachment 174764 [details]
Test case
I encountered this problem when trying to add data received via DBus to a ListStore. python-dbus uses subclasses of built-in types such as int and str, which you cannot add to the liststore:
Gtk-WARNING **: gtkliststore.c:660: Unable to convert from PyObject to gint
I remember someone else filing this but I can't find the bug. The issue is in PyGTK they have an override which can ask the store what type they are expecting. In PyGObject we do our best to guess by using the same routine that converts PyObject to regular types. The issue is the person might want to store a PyObject. I think I might need to break down and write the override even though it will slow down list insertion (and it already seems slow in the tests)
Created attachment 175738 [details] [review]
[gi] handle subtypes when inserting into tree models
* Often modules will give back basic types wrapped in a subtype.
This is the case with D-Bus where you may want to keep some of the
metadata around. More often than not, the developer is just looking
to use the basetype.
* This override checks the column type and handles basic types such as
gchararrays, ints, longs, floats and doubles, converting them to their
base types before sending them to the generic GI type marshaller.
* More types may need to be supported but these are the common cases where
apps break.
Finally got around to writing the patch. Let me know if this fixes your issues
Not sure if this issue is related to this bug, but when I subclass Gtk.ListStore like this:
class MyStore(Gtk.ListStore):
def __init__(self):
Gtk.ListStore.__init__(int, str)
I get:
Traceback (most recent call last):
  class MyStore(Gtk.ListStore):
  cls._setup_vfuncs(cls)
  base._setup_vfuncs(impl)
  vfunc_name()))
'str' object is not callable
This should be added to the test suite.
(In reply to comment #4)
> Not sure if this issue is related to this bug, but when I subclass
> Gtk.ListStore like this:
>
> class MyStore(Gtk.ListStore):
>
> def __init__(self):
- Gtk.ListStore.__init__(int, str)
+ Gtk.ListStore.__init__(self, int, str)
or
+ super(MyStore, self).__init__(int, str)
You are trying to call Gtk.ListStore.__init__ on an int instead of a ListStore. I guess we should do some type checking there.
BTW, can I commit this patch? Does it fix your issues?
Stupid mistake, still get the above error, though.
The patch doesn't apply on latest master; could you provide an updated patch, please?
looks like I already applied it. It still doesn't work? Can you post a more complete example?
I found the issue: the checks treat "2" as an int, not a string. I need to make the checks take that into account.
This works with pygobject 3.1.0 (and presumably 3.0 as well). I also verified that
print model[0][0], type(model[0][0])
(similar for the other entries) show the expected values.
Source: https://bugzilla.gnome.org/show_bug.cgi?id=635172
In python, a set is an unordered collection of items. Because the items are unordered, they cannot be accessed with indexes. Sets come in handy while performing mathematical operations in python, such as union, intersection, difference, and complement. In this article, we shall learn about sets and how we can combine two or more sets in python.
Characteristics of a set
- All the items in a set are unique. Sets do not store duplicate elements.
- The elements in a set are unordered. Therefore, the items cannot be accessed with indexes, unlike tuples and lists. Every time a set is displayed, its elements may appear in a different order.
- The set itself is mutable, but the elements stored in the set are immutable.
How to create a set?
A set in python is initialized by using curly braces ({ }). A set can store elements of different data types, but not mutable types such as lists and dictionaries.
my_set = { 1, 'ab', 100}
To create an empty set, we use the function set() without giving it any arguments. By default, if you use curly braces to create an empty set, python would consider it a dictionary.
my_set = set()
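A quick check confirms this difference:

```python
empty_braces = {}   # curly braces alone create a dictionary, not a set
empty_set = set()   # set() creates an empty set

print(type(empty_braces))  # <class 'dict'>
print(type(empty_set))     # <class 'set'>
```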
How to Combine Sets in Python?
Sets can be combined in python in several ways. There are methods such as union() and update() for joining two or more sets, and we can also use functools.reduce(), itertools.chain(), the union operator (|), or the unpacking operator (*). Let us look at each method separately.
Using union() method to combine sets in python
The union() method combines two or more sets in python and returns the combined set. We can perform a union of as many sets as we want. If two sets contain common items, their union will have only one copy of each such item, because all elements in a set are unique.
The syntax of union() method is:
set1.union(set2, set3, set4......)
Let us consider an example. Here, we have taken three sets: set1, set2, and set3. ‘set1’ has three strings – harry, hogwarts and hedwig. ‘set2’ has one float value – 9.75 and three strings – wand, hogwarts and london. ‘set3’ consists of one character value – h and one integer value – 100.
Here, only set1 and set2 have a common item – ‘hogwarts’; all the other items are unique.
To perform union of set1 and set2, we run the below code.
set1.union(set2)
The output of the union between set1 and set2 is:
{9.75, 'harry', 'hedwig', 'hogwarts', 'london', 'wand'}
As you can see, here the common element ‘hogwarts’ only occurred once in the union set.
To perform union between set1, set2 and set3, we use the code given below.
set1.union(set2,set3)
The combination of the three sets is:
{100, 9.75, 'h', 'harry', 'hedwig', 'hogwarts', 'london', 'wand'}
Here, changing the order of sets while using the union() method does not change the output.
set3.union(set2,set1)
The above line of code also gives us the same result.
{100, 9.75, 'h', 'harry', 'hedwig', 'hogwarts', 'london', 'wand'}
If we do not pass any argument to the union method, it shall return a shallow copy of the given set.
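For example, calling union() with no arguments returns a new set equal to the original:

```python
set1 = {'hogwarts', 'harry', 'hedwig'}

copy_of_set1 = set1.union()   # no arguments: returns a shallow copy

print(copy_of_set1 == set1)   # True: same elements
print(copy_of_set1 is set1)   # False: a distinct set object
```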
Using update() Method to Combine Sets in Python
Another method that can be used to join two or more sets in python is the update() method. Like union(), update() will store all the values from the given sets uniquely.
The syntax of update() method is:
set1.update(set2,set3,set4....)
The difference between union() and update() method is that union() returns the combined set but update() will not return any value. Instead, it will only update the value of the set calling the update() method (i.e., set1 from the above syntax) with the value of the set or sets given in the argument (i.e., set2, set3, set4…).
Let us take the same example of the above sets set1, set2 and set3.
set1 = {'hogwarts', 'harry', 'hedwig'} set2 = {'wand', 9.75, 'hogwarts', 'london'} set3 = {100, 'h'}
First we shall combine two sets – set1 and set2.
set1.update(set2)
The above code will not return any value. To view the update, we shall print set1.
print(set1)
The output is:
{'wand', 'hedwig', 9.75, 'london', 'harry', 'hogwarts'}
set2 remains unchanged whereas set1 gets updated with the combined values.
To combine all the three sets, we shall run the below code.
set1.update(set2,set3)
Now, we shall print the updated set1.
print(set1)
The output is:
{'wand', 'hedwig', 100, 9.75, 'london', 'harry', 'hogwarts', 'h'}
As we can see, all the set values have been combined. Note that we can also pass other iterables, such as lists and tuples, as arguments to the update() function.
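For instance, update() happily accepts a list and a tuple, not just sets:

```python
s = {'hogwarts'}
s.update(['harry', 'hedwig'], ('wand',))  # a list and a tuple as arguments

print(s == {'hogwarts', 'harry', 'hedwig', 'wand'})  # True
```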
Using reduce() Method to Combine Sets in Python
Reduce() is used to apply a function cumulatively to the items of an iterable, reducing it to a single value. We can also use reduce() to combine two given sets.
For that, we need the operator module, a standard library module in python that provides function equivalents of the mathematical, logical, and bitwise operators. For combining sets, we use operator.or_, which performs the union operation on sets.
The syntax of reduce() function is:
reduce(function, iterable)
First, we shall import functools and operator.
from functools import reduce import operator
set1 = {'hogwarts', 'harry', 'hedwig'} set2 = {'wand', 9.75, 'hogwarts', 'london'} set3 = {100, 'h'}
We shall use the operator.or_ as the function, and we shall pass a list of set1 and set2 as the iterable sequence.
reduce(operator.or_, [set1, set2])
The function reduce() shall return the bitwise or operation of set1 and set2. Operator.or_ shall work similarly to the union operation. The output will be:
{9.75, 'harry', 'hedwig', 'hogwarts', 'london', 'wand'}
We can also use reduce() for combining more than two sets. We can pass more than two sets by including them in the iterable list.
reduce(operator.or_, [set1, set2, set3])
The output will be a combined set.
{100, 9.75, 'h', 'harry', 'hedwig', 'hogwarts', 'london', 'wand'}
Using itertools.chain() Method to Combine Sets in Python
Itertools in python is a module that has many functions used for working with iterators. chain() is one such function available in the itertools module. Itertools.chain() is used to chain multiple iterables and returns a single sequence.
The syntax for itertools.chain() is:
itertools.chain(set1,set2,set3....)
Here, we use itertools.chain() to combine two or more sets together.
Taking set1, set2 and set3 as above.
set1 = {'hogwarts', 'harry', 'hedwig'} set2 = {'wand', 9.75, 'hogwarts', 'london'} set3 = {100, 'h'}
First, we shall import itertools.
import itertools
Now, we shall pass set1, set2, and set3 as arguments to itertools.chain(). The function returns an object which we explicitly convert into a type set. Then, we shall print that set.
print(set(itertools.chain(set1,set2,set3)))
The combined set is:
{'wand', 'hedwig', 100, 9.75, 'harry', 'london', 'hogwarts', 'h'}
Using ‘|’ operator
We can also combine two or more sets in python using the | operator. Although | is the bitwise-or operator for integers, for sets it performs the union operation.
Taking three sets set1, set2 and set3.
set1 = {'hogwarts', 'harry', 'hedwig'} set2 = {'wand', 9.75, 'hogwarts', 'london'} set3 = {100, 'h'}
Performing bitwise or operation:
print(set1 | set2 | set3)
The output is:
{'wand', 'hedwig', 100, 'hogwarts', 9.75, 'london', 'h', 'harry'}
The above operation is similar to the mathematical union operation.
'set1 | set2 | set3' is similar to 'set1 U set2 U set3'
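One practical difference between the | operator and union() is that | requires both operands to be sets, while union() accepts any iterable:

```python
set1 = {'hogwarts', 'harry'}

print(set1.union(['wand']))  # a list works with union()

try:
    set1 | ['wand']          # but not with the | operator
except TypeError:
    print('TypeError: | needs both sides to be sets')
```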
Using unpacking operator (*)
In python, the asterisk (*) is the unpacking operator. It unpacks values from iterable objects. We can use the unpacking operator for joining multiple sets.
Taking set1, set2 and set3.
set1 = {'hogwarts', 'harry', 'hedwig'} set2 = {'wand', 9.75, 'hogwarts', 'london'} set3 = {100, 'h'}
Now we shall unpack the three sets and assign the combined value to a new set ‘my_set’.
my_set = {*set1, *set2, *set3}
And now, we shall print the final set ‘my_set’.
print(my_set)
The combined set is:
{'wand', 'hedwig', 100, 'hogwarts', 9.75, 'london', 'h', 'harry'}
Do you have any doubts or any ideas to share? Let us know below in the comments.
Learning never exhausts the mind. Happy Learning!
Source: https://www.pythonpool.com/learn-how-to-combine-sets-in-python/
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
Show Existing Products of a customer in a tab
Hi
I'm very new in python and odoo so I need a help.
In project>task module I've added a selection field which for select customer. now I need when I select a customer then his products will show in a tab.
can anyone help me please.
xml
>
.py file
from openerp import api, fields, models
class project_task(models.Model):
_inherit = "project.task"
customer = fields.Many2one('res.partner', string="Search Customer Name")
products = fields.One2many('product.template', compute='_get_products', string="Products")
def _get_products(self):
    if self.customer:
        # search the sale order lines belonging to the selected customer
        order_line_list = self.env['sale.order.line'].search([('order_partner_id', '=', self.customer.id)])
        if order_line_list:
            ol_list = []
            for line in order_line_list:
                ol_list.append(line.product_id.product_tmpl_id.id)
            prod = self.env['product.template'].browse(ol_list)
            self.products = prod
I want: when a customer is selected from the selection field, the products he has bought should show in a tab.
Check it, but I don't understand what you need; post some code of what you are doing.
Source: https://www.odoo.com/forum/help-1/question/show-existing-products-of-a-customer-in-a-tab-98601
The following test asserts on mozilla-central revision f3f5d8a8a473 (options -m -n):
function MakeDay( year, month, date ) {
date = ToInteger(date );
var t = ( year < 1970 ) ? 1 : 0;
return ( (Math.floor(t/86400000)) + date - 1 );
}
function MakeDate( day, time ) {
if ( day == Number.POSITIVE_INFINITY || day == Number.NEGATIVE_INFINITY ) { }
}
function ToInteger( t ) {
var sign = ( t < 0 ) ? -1 : 1;
return ( sign * Math.floor( Math.abs( t ) ) );
}
var UTCDate = MyDateFromTime( Number("946684800000") );
function MyDate() {
this.date = 0;
}
function MyDateFromTime( t ) {
var d = new MyDate();
d.value = ToInteger( MakeDate( MakeDay( d.year, d.month, d.date ), d.time ) );
while (Uint32Array) if (0 == 100000) return;
}
Although this is the same assert as in Bug 684084, which is fixed in jaegermonkey but not on m-c, this seems to be another bug as I can reproduce on both branches.
Created attachment 561968 [details] [diff] [review]
patch
When deciding which calls to inline, we would allow inlining of functions which have not been analyzed. These functions were then analyzed in order to compile them, and such analysis could change types and break properties of the code which we checked while deciding to inline, and which the compiler later depended on (in this case, that inlined call sites have no type barriers).
Please could you use the "take this bug" checkbox when attaching patches, since it would save me needing to correct assignee each time on merging. Thanks :-)
Automatically extracted testcase for this bug was committed:
Source: https://bugzilla.mozilla.org/show_bug.cgi?id=687125
RefinedWiki Original Theme 3.1 for Atlassian’s Confluence is a massive release packed with many new features and improvements for both end users and admins. This blog post covers the highlights of the release.
Create Beautiful Blogs Your Customers will Love
Move over WordPress, there’s a new blogging platform in town. The combination of Confluence and RefinedWiki Original Theme 3.1 enables you to create beautiful blog posts designed to impress your customers. RefinedWiki‘s new blog mode transforms your Confluence space blog into a “real” external blog in terms of behavior and design. The look and feel makes sharing posts with Twitter, Facebook and Google just a click away. You can even access space blogs through the new namespace:.
Find previous blogs, fast
The latest release also improves blog post navigation with a new calendar as well as next and previous buttons. Learning more from your blog could not be any easier for your customers.
Stay updated about what’s newsworthy
The News macro displays the latest published blog posts, making it easy to stay up to date about what’s most newsworthy.
With the latest improvements to the News macro, users can view those blogs immediately thanks to a new preview popup. Staying up to date about the latest and greatest could not be any easier.
Improve Content Management with Categories
At the core of RefinedWiki Original Theme is Confluence’s Space Category. Similar to labels for pages and blogs, you can tag a Space with a category to better organize your Spaces. With RefinedWiki, your Space Categories make up your global top-navigation, so accessing your content is easy. There are three major category improvements in this release.
1. Category and Space logos in dropdown menus
Visualize your Spaces and categories with logos. It’s like giving your Space name a face.
2. More Categories, No Problem
Regardless of the number of categories you have, you can keep your top-navigation organized and manageable. An automatic horizontal scroll will appear when you have more categories than the width of your screen will allow.
3. Dynamic dropdown menus
The size of the drop down will adjust to the number of subcategories and spaces in each category.
Always Stay Updated from Any Page
The Confluence activity stream is now accessible from any page in Confluence. If you ever want to learn about the latest changes in Confluence, you can check the site’s activity stream without leaving the current page – just type the keyboard shortcut ‘shift + a’.
Improved admin UI
In the latest release, the admin UI is more solid and consistent.
All item sorting and Space Category management is as simple as drag and drop. Content management could not be any easier.
See it in Action!
New to RefinedWiki? See the video below to learn more:
Try it today!
Want to start using it today? Download a 30 day free trial
Starter License Holders can get the RefinedWiki Original Theme for just $10
The Original Theme is available in English, French, Dutch, German and Spanish. RefinedWiki has over 1,300 customers in more than 60 countries.
Note: Original Theme is a commercial plugin, also available free for non-profit organizations.
Atlassian Summit 2012
We are a proud sponsor of the Atlassian Summit 2012. Come and visit us in our booth and have a chat about how you can increase the usability of Confluence.
Source: https://www.atlassian.com/blog/archives/transform-confluence-into-a-blogging-platform-with-refinedwiki-original-theme-3-1
I did a very similar configuration a while back under the Arduino IDE, and everything was rock-solid.
My configuration:
Wemos D1 Mini
DS1631+ i2c temp sensor ( ... DS1731.pdf)
4.7k pull-ups and a .1uF cap near the sensor
a sample of the output:
Code: Select all
import time
import machine
from machine import Pin, I2C

i2c = I2C(scl=Pin(5), sda=Pin(4), freq=1000)  # pins D1 for scl and D2 for sda
time.sleep(2)
i2c.writeto(76, b'\x51')  # 0x51: Start Convert T (begin temperature conversions)

def read_advanced_temp():
    i2c.writeto(76, b'\xAA')  # 0xAA: Read Temperature
    temp = i2c.readfrom(76, 2)
    print(temp)

while True:
    time.sleep(3)
    read_advanced_temp()
Code: Select all
b'\x1a\xd0' b'\x1b\x80' b'\x1c0' b'\x1c\xb0' b'\x1c\xa0'
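For reference, those two bytes are the DS1631's temperature register: a big-endian two's-complement value where one LSB is 1/256 °C. A small helper (raw_to_celsius is a name made up for illustration) decodes the samples above:

```python
def raw_to_celsius(raw):
    # 16-bit big-endian two's complement; 1 LSB = 1/256 degC
    t = int.from_bytes(raw, 'big')
    if t & 0x8000:       # handle negative temperatures
        t -= 0x10000
    return t / 256

print(raw_to_celsius(b'\x1a\xd0'))  # 26.8125
print(raw_to_celsius(b'\x1b\x80'))  # 27.5
```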
Source: https://forum.micropython.org/viewtopic.php?p=21606
Hi, thanks in advance.
I know there is a way to connect to ssids and / or create a connection in C++.
But i need a Managed C++ way.
I have some NetworkInformation code, but I need to know how to connect to an SSID.
I have no network programming experience. All I need is to make a panel for showing the SSIDs and to connect to them.
I don't know how to implement WLAN in it and don't really know what it is.
I need documentation covering the actual connection process and SSID identification.
Thanks.
Originally Posted by FenixEden
I know there is a way to connect to ssids and / or create a connection in C++.
So what native API would you use for this? You know, it's relatively easy to use native APIs in C++/CLI. After all, that's what it's made for (exclusively, as at least most people think)...
In the .NET Framework library I'd have expected something related in the System::Net namespace, specifically System::Net::NetworkInformation, but I couldn't find anything like that in there.
Perhaps there'll come some expert in both networting and .NET along who simply knows; at least I don't.
I was thrown out of college for cheating on the metaphysics exam; I looked into the soul of the boy sitting next to me.
This is a snakeskin jacket! And for me it's a symbol of my individuality, and my belief... in personal freedom.
How do I implement native API?
I know I should have a code example somewhere in a certain path of wlan api.
But there is none.
The expected path is simply not there.
Else than that, I see no answer to my question.
Can you find a Net expert?
Originally Posted by FenixEden
How do I implement native API?
You wouldn't implement it. It is (or would be) there already.
Once you found out which native API to use, we can go on here, discussing how to use it from C++/CLI.
I know I should have a code example somewhere in a certain path of wlan api.
But there is none.
The expected path is simply not there.
I can't remember ever having heard of anything named WLAN API, but I may have been wrong, so I simply hacked "WLAN API" into the MSDN Library search. It came up with a few thousand hits, quite some of which had the word "sample" in their subject lines. Perhaps some of them may be of interest to you.
Can you find a Net expert?
CodeGuru has a dedicated Network Programming section, so if some are aroud, they probably can be found there. I don't frequent that section myself, though.
I'll try there later...
thanks eric
Forum Rules
Source: http://forums.codeguru.com/showthread.php?527975-Working-multiple-forms-using-Visual-C-2008&goto=nextnewest
Summary
Create a field group for a feature class or table. Field groups are used when creating contingent values.
Learn more about contingent values
Usage
Fields used to create a field group cannot be system-maintained fields such as ObjectID or Shape or the subtype field.
If your data is stored in an enterprise geodatabase, you must be connected as the data owner to use this tool.
Field groups are compatible with ArcGIS Pro 2.3 and later geodatabases. If your geodatabase is an earlier version, you must upgrade your geodatabase to 2.3 or later.
Note:
Once a field group is added to a dataset, the dataset version is set to ArcGIS Pro 2.3. This means that the dataset can no longer be used in ArcMap.
Syntax
CreateFieldGroup_management(target_table, name, fields)
Derived Output
Code sample
Create a new field group.
import arcpy
arcpy.CreateFieldGroup_management("C:\\MyProject\\myConn.sde\\mygdb.USER1.myFC", "MyFieldGroup", ["Field1", "Field2", "Field3"])
Environments
Licensing information
- Basic: Yes
- Standard: Yes
- Advanced: Yes
Source: https://pro.arcgis.com/en/pro-app/tool-reference/data-management/create-field-group.htm
04 August 2011 18:28 [Source: ICIS news]
HOUSTON (ICIS)--
Also dimming growth prospects in
The Chinese government measures to stem inflation by tightening access to capital have made it more difficult for smaller Huntsman downstream customers to finance working capital, Esplin said.
“In the near-term, we expect [
However, longer-term
CEO Peter Huntsman said in some of Huntsman’s end markets in
“But I don’t see that as really a long-term trend,” he added.
“I don’t see a slowdown, where all of a sudden the Chinese government has put the brakes on infrastructure projects and so forth", that would unusually affect Huntsman business, he said.
Peter Huntsman also said the company is still waiting to receive government approval for its planned methyl di-p-phenylene isocyanate (MDI) expansion at
The Chinese permitting process seemed to proceed at “snail’s pace”, Huntsman said.
“We are still working out over a year just to get the environmental permitting,” he added.
But he said there was nothing unique about Huntsman’s experience with obtaining permitting. BASF's MDI project in
“I believe that a lot of these expansions announced in Asia, particularly in China, are going to be coming on substantially later than some in the markets have been publishing and have been speculating,” he added.
Huntsman’s share price fell by more than 20% in Thursday morning trading in
The shares traded at $14.12, down $3.87 or 21.5%, at 12:15 hours New York time (16:15 GMT), on the New York Stock Exchange.
($1 = €0.70)
For more on Huntsman,
Source: http://www.icis.com/Articles/2011/08/04/9482699/huntsman-expects-more-modest-growth-in-china-execs.html
k6 Loki extension load testing
Grafana k6 is a modern load-testing tool. Its clean and approachable scripting API works locally or in the cloud. Its configuration makes it flexible.
The xk6-loki extension permits pushing logs to and querying logs from a Loki instance. It acts as a Loki client, simulating real-world load to test the scalability, reliability, and performance of your Loki installation.
Before you begin
k6 is written in Golang. Download and install a Go environment.
Installation
xk6-loki is an extension to the k6 binary.
Build a custom k6 binary that includes the xk6-loki extension.
Install the xk6 extension bundler:
go install go.k6.io/xk6/cmd/xk6@latest
Clone the grafana/xk6-loki repository:
git clone
cd xk6-loki
Build k6 with the extension:
make k6
Usage
Use the custom-built k6 binary in the same way as a non-custom k6 binary:
./k6 run test.js
test.js is a Javascript load test.
Refer to the k6 documentation to get started.
Scripting API
The custom-built k6 binary provides a Javascript loki module.
Your Javascript load test imports the module:
import loki from 'k6/x/loki';
Classes of this module are:
Config and Client must be called in the k6 init context (see Test life cycle) outside of the default function, so the client is only configured once and shared between all VU iterations.
The Client class exposes the following instance methods:
Javascript load test example:
import loki from 'k6/x/loki'; const timeout = 5000; // ms const conf = loki.Config("", timeout); const client = loki.Client(conf); export default () => { client.pushParameterized(2, 512*1024, 1024*1024); };
Refer to grafana/xk6-loki for the complete k6/x/loki module API reference.
Source: https://grafana.com/docs/enterprise-logs/latest/loki/clients/k6/
Hi Rui, I am experimenting with the Heltec LoRa 32 v2 board, and am having problems getting the OLED display to work with MicroPython.
The board worked correctly when I received it, and I could see the Heltec logo and the LoRa sender APP information on the OLED display.
After flashing the latest version of MicroPython, I am trying to display some sample text on the display, but I am not able to make it work.
I am using the Adafruit ssd1306 library and the following code to turn the whole display on/off. There is no error reported on the REPL and the code runs correctly until I press Ctrl-C:
from machine import Pin, I2C
import ssd1306
from time import sleep

i2c = I2C(-1, scl=Pin(15), sda=Pin(4))
oled_width = 128
oled_height = 64
oled = ssd1306.SSD1306_I2C(oled_width, oled_height, i2c)
oled.show()
sleep(0.5)
oled.fill(1)
oled.show()
sleep(0.5)
while True:
    print("Works !!")

The specific questions are:
- What am I doing wrong, or missing?
Update:
I have found the probable cause of the issue, which might be specific to this kind of integrated OLED display on the Heltec LoRa 32 I am using.
A special initialization of the display has to be done through the display's RST pin. This is shown in the tutorial TTGO LoRa32 SX1276 OLED Board: Getting Started with Arduino IDE in this “Arduino code”:
pinMode(OLED_RST, OUTPUT); digitalWrite(OLED_RST, LOW); delay(20); digitalWrite(OLED_RST, HIGH);
- How can I go back and reset the board to the factory settings, where I would expect to see the original OLED demo working on the display? I have not been able to find instructions on how to re-flash the board to its factory setting.
Solution:
After some more searching and consideration for a “factory image”, it became clear that to put the board back to factory setting you only have to load a sample application sketch through the “Arduino IDE” onto the board. I decided to load the “OLED_Lora_sender” from the “Examples | Heltec ESP32 Dev-Boards | LoRa” section in the Arduino IDE. Once the flashing process finishes the board is back to “Factory Setting”.
Thank you in advance…
Hi Manfred.
When you press CTRL+C you interrupt the execution of the code. That’s why it stops working.
I think you can go back to the factory settings if you find the original firmware.
Regards,
Sara
Thank you Sara, I am aware that Ctrl+C stops execution. What I meant to say is that the display does not work while the code is executing, which is why I press Ctrl+C.
I have not been successful in finding the “factory image” to flash the board with. I had hoped that you could point me to a location where to find it. In your “MicroPython Programming for the ESP32” or in one of your tutorials, I remember having read that you can return to the “Arduino environment”, which I understand would mean flashing an Arduino or factory image.
Thank you
Hi Manfred.
Yes, you can upload an Arduino code and it will overwrite the micropython firmware.
That board should be similar with the TTGO LoRa32 OLED. We wrote a tutorial about that a few days ago:
Were you able to make the OLED work using micropython?
Regards,
Sara
The following micropython code attempts to implement the “initialization” sequence for the display. Unfortunately it does not do anything and the display does not show anything.
d_rst = Pin(16, Pin.OUT, value=0)
d_rst.value(1)
sleep_ms(20)
d_rst.value(0)
sleep_ms(20)
d_rst.value(1)
sleep_ms(20)
d_rst.value(0)
Thank you in advance for your suggestions
Hi.
I think the OLED RST pin must be HIGH to work properly.
So, you should have:
d_rst = Pin(16, Pin.OUT)
d_rst.value(0)
sleep_ms(20)
d_rst.value(1)
Can you try doing that?
I’ve tried controlling the OLED of my TTGO LORA board, but it didn’t work. I’m still trying to figuring that out.
REgards,
Sara
UPDATE. The following code shows how to init the OLED with the TTGO LoRa board and it worked for me.
Please use the following code and see if it works for you:
from machine import Pin, I2C
import ssd1306
from time import sleep, sleep_ms

# ESP32 Pin assignment
i2c = I2C(-1, scl=Pin(15), sda=Pin(4), freq=400000)
# ESP8266 Pin assignment
#i2c = I2C(-1, scl=Pin(5), sda=Pin(4))

oled_width = 128
oled_height = 64

d_rst = Pin(16, Pin.OUT)
d_rst.value(0)
sleep_ms(20)
d_rst.value(1)

oled = ssd1306.SSD1306_I2C(oled_width, oled_height, i2c, addr=0x3c)
oled.text('Hello, World 1!', 0, 0)
oled.text('Hello, World 2!', 0, 10)
oled.text('Hello, World 3!', 0, 20)
oled.show()
Hi Sara, thank you for your response and suggestion.
I tested the code on my Heltec LoRa 32 board and unfortunately it does not work.
The specific comment I have found on the Heltec Arduino code which explains the “initialization process”:
This is a simple example show the Heltec.LoRa sended data in OLED. The onboard OLED display is SSD1306 driver and I2C interface. In order to make the OLED correctly operation, you should output a high-low-high(1-0-1) signal by software to OLED's reset pin, the low-level signal at least 5ms.
Further research showed that the new Adafruit ssd1306 library for their implementation of “CircuitPython” includes the critical initialization code for the OLED display, plus an optional parameter to trigger it during object instantiation. The following code extract is taken from the class “_SSD1306”:
def poweron(self):
    "Reset device and turn on the display."
    if self.reset_pin:
        self.reset_pin.value = 1
        time.sleep(0.001)
        self.reset_pin.value = 0
        time.sleep(0.010)
        self.reset_pin.value = 1
        time.sleep(0.010)
    self.write_cmd(SET_DISP | 0x01)
    self._power = True
However, although I tried this same exact code, I did not succeed. Needless to say, my display keeps refusing to show anything.
I know it is working from a Hardware perspective, because the original Arduino code works fine.
Hi again.
What library are you using exactly?
In the code I’ve shown you, I was using the deprecated library and it worked well:
However, I’m using a TTGO board, I don’t have an Heltec to experiment with. I don’t know if there is any other trick to make it work.
At this moment, I’m out of ideas to debug this issue :/
If you can give me any other information, maybe I can suggest other alternatives.
Regards,
Sara
Hi Sara,
I am using the same library that you indicate above. However on my board the OLED display does not show anything. For the moment I have given up, and am working on other components on the board. As soon as I get the TTGO LoRa board which I have on order, I will try to find a solution. Once I get it working, I will update this post with any findings.
Thank you very much. Best Regards, Manfred
Hi Sara,
I am happy to report that after a lot of reading and searching on google, the solution turned out to be quite simple.
Basically the I2C bus pins “scl” and “sda” need to be initialized with the internal pull_up resistors enabled. For reference the specific code that now works reliable is as follows:
# Heltec LoRa 32 with OLED Display
from machine import Pin, I2C
import ssd1306
from time import sleep

oled_width = 128
oled_height = 64

# OLED reset pin
i2c_rst = Pin(16, Pin.OUT)

# Initialize the OLED display
i2c_rst.value(0)
sleep(0.010)
i2c_rst.value(1)  # must be held high after initialization

# Setup the I2C lines with internal pull-ups enabled
i2c_scl = Pin(15, Pin.OUT, Pin.PULL_UP)
i2c_sda = Pin(4, Pin.OUT, Pin.PULL_UP)

# Create the bus object
i2c = I2C(scl=i2c_scl, sda=i2c_sda)

# Create the display object
oled = ssd1306.SSD1306_I2C(oled_width, oled_height, i2c)

oled.fill(0)
oled.text('Hello, World 1!', 0, 0)
oled.text('Hello, World 2!', 0, 10)
oled.text('Hello, World 3!', 0, 20)
oled.text('Hello, World 4!', 0, 30)
oled.show()
I hope this might help someone else to save a lot of time.
You can go ahead and close this issue as solved / answered. Thank you for your assistance.
Source: https://rntlab.com/question/heltec-lora-32-micropython-oled-not-working/
yyield v1.0.0
Y-yield
Y-yield is a way to build asynchronous JavaScript applications using generators. As an alternative to await/async, it works with Node versions from 4.9.1 (and all modern browsers except IE11), and you still get reliable exceptions and asynchronous code that is easy to follow. So it's based on generators and yielding, but how does it work? Why yield?
The basic idea
ECMAScript 6 introduces something called generators. They look like this:
function* myGenerator() {
  yield "SomeValue";
  yield "SomeOtherValue";
}

var a = myGenerator();
var value = a.next();      // runs to the first yield; returns { value: "SomeValue", done: false }
var otherValue = a.next(); // runs to the second yield; returns { value: "SomeOtherValue", done: false }
That looks pretty much like Python, Scala, or C#. What does that bring in terms of asynchronous code? Well, the idea is that if we can write our code as a generator generating different asynchronous pieces of code, we can use the built-in wrapping/unwrapping of function bodies and try/catch statements to make our life easier. We could write something like
function* fetchUrl(url) { ... }

function* getTodos() {
  try {
    var todosTask = fetchUrl("/todos");
    var emailTask = fetchUrl("/emails");
    // Wait for the two parallel tasks to finish
    var todosAndEmail = yield [todosTask, emailTask];
    console.log("All fetched", todosAndEmail[0], todosAndEmail[1]);
  } catch (e) {
    console.error("Oops...something went wrong", e);
  }
}
Note that the generators return objects that we have to "yield" to get the result of. If you get that, you get what it's about.
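To see why this works, here is a minimal sketch of the driving loop such a library needs. This is an illustration only, not yyield's actual implementation: a hypothetical `run` helper repeatedly resumes the generator, handing each yielded node-style thunk a callback that feeds the result (or error) back in.

```javascript
// Minimal generator-driving loop (illustrative only, not yyield's code).
function run(genFn, done) {
  var it = genFn();
  function step(err, value) {
    var r;
    try {
      // Feed results (or errors) back into the generator body.
      r = err ? it.throw(err) : it.next(value);
    } catch (e) { return done(e); }
    if (r.done) return done(null, r.value);
    // Each yielded value is expected to be a function(cb) in node style.
    r.value(step);
  }
  step(null);
}

// A fake async operation that calls back immediately.
function double(n) {
  return function (cb) { cb(null, n * 2); };
}

var result;
run(function* () {
  var a = yield double(2);
  var b = yield double(3);
  return a + b;
}, function (err, value) { result = value; });
// result === 10 here because our fake operations call back synchronously
```

Errors thrown inside a thunk surface via `it.throw`, which is what makes try/catch inside the generator body work.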
Prerequisites
For node
- Install Node.js, minimum version 4.9.1 (but works with the `--harmony` flag since 0.11.4).
- In your scripts, use `var Y = require("yyield")` (or `import * as Y from 'yyield'`)
Usage
(function*() {
  // Here we run our async program
}).run();
Ways of doing async - "yieldable things"
You can "yield" all sort of stuff to make life easier, e.g.:
A normal async node function
function* sleep(timeout) {
  yield function(cb) {
    setTimeout(function() { cb(null); }, timeout);
  };
}
Another generator
function* sleep(timeout) {
  yield function(cb) {
    setTimeout(function() { cb(null); }, timeout);
  };
}

(function*() {
  yield sleep(1000);
}).run();
You can always write "return" instead of yield as the last statement
The above example then becomes
function* sleep(timeout) {
  return function(cb) {
    setTimeout(function() { cb(null); }, timeout);
  };
}

(function*() {
  return sleep(1000);
}).run();
A converted object or function
You can convert objects or functions by using the exported "gen" function. This assumes that all functions have the format `function(..., cb)`, where cb is a callback of the form callback(error, resultArguments).
Some examples:
var Y = require("yyield");
var lib = require("somelib");

var genlib = Y.gen(lib);
var genObj = Y.gen(new lib.SomeClass());
var genFunc = Y.gen(lib.someFunction);

(function*() {
  var a = yield genlib.someFunction(...);
  var b = yield genObj.someInstanceFunction(...);
  var c = yield genFunc(...);
}).run();
Note that when converting an object, the `this` scope is preserved. It is not preserved when you convert a single function. Also, conversions are shallow (just one level of functions) and return values are not converted. Thus, if you require a library which exports a class that you construct, by using
var myClassInstance = new lib.MyClass();
...then you also have to convert the myClassInstance to use generators, by using
var genMyClassinstance = require("yyield").gen(myClassInstance);
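Conceptually, converting a single node-style function into a yieldable can be sketched like this. This is a simplified illustration, not the library's actual `gen` implementation (which also handles whole objects):

```javascript
// Simplified sketch of a "gen" conversion for a single function
// (illustrative only; the real Y.gen also converts objects).
function gen(fn) {
  return function () {
    var args = Array.prototype.slice.call(arguments);
    // Produce a yieldable: a function expecting a node-style callback.
    return function (cb) { fn.apply(null, args.concat([cb])); };
  };
}

// A plain node-style function with a trailing callback(error, result)...
function add(a, b, cb) { cb(null, a + b); }

// ...becomes a function returning a yieldable thunk:
var genAdd = gen(add);
var out;
genAdd(2, 3)(function (err, result) { out = result; });
// out === 5
```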
The Y-yield generators work fine with promises such as Q.defer and jQuery.Deferred
By using promise-based asynchronous flows, you can chain calls with multiple calls to .then() in e.g. Q or jQuery. You can mix this with calls to done/fail to build asynchronous data flows.
Read more about Q and jQuery deferreds in their respective documentation.
Here's an example using jQuery Deferred (namely the quite common return object from $.ajax/getJSON):
(function*() {
  try {
    var newTodos = yield $.getJSON("/todos/new");
    alert(newTodos.length + " new todos found.");
  } catch(e) {
    console.error(e.stack);
  }
}).run();
Y also integrates with these by returning promises from the run method. Note that promises are only returned if you're running in node or requirejs (by using Q), or if you're running in a browser where jQuery exists. Y-yield does not require that Q or jQuery be installed and will work fine without them; in that case run simply returns nothing. Here's an example where we use Y-yield to chain on a then function:
// See fetchUrl in example above
(function* () {
  return fetchUrl("/todos");
}).run()
  .then(function(result) {
    console.log("Here are the todos", result);
  });
An array of something that is yieldable according to above
This then gets executed in parallel. Example:
// requires "npm install request-json"
var JsonClient = require('request-json').JsonClient;

function* fetchUrl(url) {
  return function(cb) { new JsonClient("").get(url, cb); };
}

// Our generator async program
(function*() {
  // This gets executed in parallel
  var todosAndEmails = yield [fetchUrl("/todos"), fetchUrl("/email")];
}).run();
Yield on something multiple times (does memoization) to accomplish e.g. laziness
// See fetchUrl in example above
var lazyTodos = fetchUrl("/todos");

function* getTodos() {
  // Will be fetched the first time getTodos is called, but only the first time
  return lazyTodos;
}
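The memoization behavior can be pictured with a small sketch (a hypothetical helper, not the library's internals): the underlying operation fires once, and every later yield receives the cached result.

```javascript
// Hypothetical sketch of memoizing a yieldable (not yyield's internals):
// the wrapped operation runs once; later calls get the cached outcome.
function memoize(thunk) {
  var called = false, savedErr, savedVal;
  var waiting = [];
  return function (cb) {
    if (called) return cb(savedErr, savedVal);
    waiting.push(cb);
    if (waiting.length > 1) return; // already in flight
    thunk(function (err, val) {
      called = true; savedErr = err; savedVal = val;
      waiting.forEach(function (w) { w(err, val); });
    });
  };
}

var hits = 0;
function fetchTodos(cb) { hits++; cb(null, ["todo-1"]); }

var lazyTodos = memoize(fetchTodos);
var a, b;
lazyTodos(function (err, v) { a = v; });
lazyTodos(function (err, v) { b = v; });
// hits === 1; both a and b hold the same cached result
```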
Start something without waiting directly for it
// See fetchUrl in example above
function* getTodos() {
  // By calling "run" on the iterator, we fire it off directly. Here we fetch both todos and emails
  var todos = fetchUrl("/todos").run();
  var emails = fetchUrl("/emails").run();

  // Finally wait for todos and e-mails. If we hadn't called run above, these calls would "kick it all off"
  var todosResult = yield todos;
  var emailsResult = yield emails;

  // Do something with todos and emails here...
}
Using lo-dash (underscore) functional paradigms
Y-yield overrides a couple of underscore/lodash functions to make them generator aware so that you can use them with generators. Currently - the following functions are supported:
- each/forEach - runs sequentially lodash docs/underscore docs
- map - runs in parallel lodash docs/underscore docs
- filter/select - lodash docs/underscore docs
- reject - runs in parallel - lodash docs/underscore docs
// See fetchUrl in example above
function* getTodos() {
  // By calling "run" on the iterator, we fire it off directly. Here we fetch both todos and emails
  var todos = fetchUrl("/todos").run();   // Calling built-in ".next()" would work just fine too
  var emails = fetchUrl("/emails").run(); // Calling built-in ".next()" would work just fine too

  // Set up a handler "in the future". This will be called once todos has arrived
  var todosWithExtra = _(todos).map(function*(todo) {
    var extra = yield fetchUrl("/todos/" + todo.id + "/extra");
    return _(todo).extend(extra);
  });

  // Set up a handler "in the future". This will be called once emails has arrived
  var emailsWithExtra = _(emails).map(function*(email) {
    var extra = yield fetchUrl("/emails/" + email.id + "/extra");
    return _(email).extend(extra);
  });

  // Finally wait for todos and e-mails. If we hadn't called run above, the yield calls below would "kick it all off"
  var todosResult = yield todosWithExtra;
  var emailsResult = yield emailsWithExtra;

  // Do something with todos and emails here...
}
So why use this? Why yield?
- Ever used an async library and just gotten lost? Where did that call go? No reply, no error, no nothing. Rescue is under way.
- We can use try/catch again. You've read the posts about avoiding those pesky keywords in asynchronous code (i.e. all code) since you can't rely on them being called. But do you remember that convenient idea of wrapping a bunch of calls in try/catch and handling a lot of different errors in a grouped way for a piece of code? Perhaps being able to send an error back or output to some log. Sure, there are solutions in old callback land such as node domains and load-balancing workers, and it may be a good idea to die rather than to do stupid things. But, being pragmatic, it's pretty nifty to be able to actually catch all errors within a block of code and decide for yourself.
- Ever felt a little bad about cluttering your objects, parameters and classes with callbacks here and there? Get ready for cleaner code.
- Ever written some asynchronous code and made a mistake in the error handler? Maybe you forgot to add one? Maybe your colleague did? Maybe you typed it incorrectly and now your application just hasn't returned from a call in quite some time. Console is just blank. :(
- There are all kinds of libraries to make asynchronous coding easier. Among the most popular are async in node, jQuery Deferred and Q. But there are oh so many different ways libraries handle this. jQuery uses its own deferreds and node uses the passing of a function with one callback function. Sequelize, a MySQL ORM, uses a notion of chaining success and error callbacks. And still other libraries use an options parameter with a success/error callback. For anyone coding JavaScript, especially in node, it's evident that these conversions take time, are error prone and leave an uneasy feeling of possibly missing something. And even if we don't consider errors, it's often pretty darn hard to follow what's happening, especially if there are a few conditional extra asynchronous calls.
- Asynchronous stack traces? It is pretty saddening to just see that EventEmitter in your stack trace, right? With that said, there are node packages that make this easier, such as trycatch.
- ECMAScript is in a way catching up with this. Async handling has been among the major recent language features in languages such as C#, F#, and Scala.
So let's try it out! If you want to look at more examples, please have a look at the tests.
Details
- Type:
Improvement
- Status:
Closed
- Priority:
Major
- Resolution: Duplicate
- Affects Version/s: 1.7.5
- Fix Version/s: None
- Component/s: groovy-jdk
- Labels: None
Description
The method 'with' is very useful for writing clean code, but it would be even more useful if it returned the object on which it is called, making it very nice for creating DSLs.
I was writing an automated test with selenium and I wanted to do something like:
class MyTestsSuperClass extends SeleneseTestCase {
def fill(Form form) { // implementation details }
}
class MyTest extends MyTestsSuperClass {
@Test
def testSomething() {
// ...
fill form.with { date = "2010/09/27" }
// ...
}
}
I can't do it right now because form.with {} will return me a String (the result of the last statement executed inside the closure).
I worked around it by overriding 'with' in my Form class as:
def with(Closure closure) {
    super.with(closure)
    return this
}
But I think it would make more sense if this was the default behavior of the 'with' method.
Sorry if this suggestion has come up already... it is not so easy to search for 'with', though. I hope you understand.
Thanks a lot!
Issue Links
- duplicates
GROOVY-3976 Improve "with" by ending with implied "return delegate"
Duplicate of issue GROOVY-3976. Please track that one for this requirement.
Stephen wrote:
> Can you help me out here. I need to know how a container
> recognizes that an attribute must be recognized, and how
> a container establishes validity of a attribute that must
> be recognized, and how a container establishes that an
> attribute has in fact been recognized.
OK, problem definition: We have a class that may or may not be an Avalon
component, that may or may not have attributes which in turn may or may
not express a contract that the container must support in order to run the
component. What should the container do?
First, we determine if the class is indeed an Avalon component. This is
easy - all components have a special attribute (AvalonComponent or
something). We test for it and find that indeed this is an Avalon
component. If it isn't, then the processing stops here.
Then we look at the set of attributes the class declares. Some of them may
not have anything to do with Avalon, but part of the problem is figuring
out which ones do and which ones don't. So let's split the what happens
next into three cases:
1. An attribute which is recognized by the container is declared by the
component:
Easy - just do it.
2. An attribute which is required by the container isn't declared by the
component.
Easy - test for their existence. Failure means you can stop.
3. An attribute is declared by the component, but the container doesn't
recognize it at all.
This is the hard one - the question is: can the container ignore it?
From now on I'll focus on case 3 - I believe that's the one you're asking
about.
The solution is simple - look at the attributes of the attribute. Create
an attribute that marks attributes as "Required by Avalon 4":
public class Avalon4Requirement {}
And mark those attributes which must be supported by container with this
attribute:
/**
* All A4 containers must support dependencies.
*
* @@Avalon4Requirement
*/
public class Dependency { ... }
So the algorithm is this:
1. For each unknown attribute:
a. Check if it has an attribute of type Avalon4Requirement.
If so, stop - you can't run this component. If not, ignore it
and proceed with the next attribute.
Regarding the Extension mechanism, suppose we did it this way:
1. Define an extension attribute:
public class Extension { ... }
2. Don't mark it as Avalon4Required yet.
3. When we feel like it, mark it as such.
Does this answer your question?
/LS
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@avalon.apache.org
For additional commands, e-mail: dev-help@avalon.apache.org
This sample is the evolution of my previous article How to integrate a BizTalk Server 2010 application with Service Bus Queues and Topics. This solution shows how to integrate a BizTalk Server 2013 application with the Windows Azure Service Bus to exchange messages with external systems in a reliable, flexible, and scalable manner. In particular, this solution demonstrates how to use the new SB-Messaging adapter to integrate a BizTalk application with Windows Azure Service Bus Queues, Topics and Subscriptions, and how to use the WCF-NetTcpRelay and WCF-BasicHttpRelay adapters to let a BizTalk Server 2013 application expose a receive location in the cloud as a Service Bus Relay Service.
In this demo you will learn how to use the classes in the Service Bus .NET API contained in the Microsoft.ServiceBus library and the new Service Bus adapters provided by BizTalk Server 2013 to perform the following operations:
In order to send messages to the BizTalk application, you can use the Service Bus Explorer tool or you can leverage the client application included in the solution. This client application makes use of the Microsoft.ServiceBus.dll to exchange messages with the BizTalk application via the Service Bus Brokered and Relayed Messaging. In particular, the client application allows to choose between 5 different synchronous and asynchronous ways to send messages to a queue or a topic.
In this demo you will also learn how to translate the explicit and user-defined properties of a BrokeredMessage object into the context properties of a BizTalk message and vice versa using the functionality supplied by the SB-Messaging adapter.
The following picture shows the first of the three scenarios implemented by the sample. In this context, the Windows Forms client application simulates a line-of-business system running on-premises or in the cloud that exchanges messages with a BizTalk Server application by using queues, topics and subscriptions provided by the Service Bus messaging infrastructure. In this sample, the BizTalk Server 2013 application uses always the same queue or topic to send the response back to the caller.
The following picture shows the second of the three scenarios implemented by the sample. The Windows Forms client application still simulates a line-of-business system running on-premises or in the cloud that exchanges messages with a BizTalk Server application by using queues, topics and subscriptions provided by the Service Bus messaging infrastructure. However, in this case the client application specifies the URL of the response queue or topic in the ReplyTo property.
The following picture shows the third scenario implemented by this sample. In this case, the Windows Forms client application uses a WCF proxy with the NetTcpRelayBinding client endpoint to invoke a WCF-NetTcpRelay Two-Way Request-Response Receive Location exposed by the BizTalk Server 2013 application.
The following list contains the software required for building and running the sample:
Inside the zip file you can find a Readme file with the instructions on how to install the demo. In particular, the Setup folder contains the binding file to create the receive locations and send ports required by the solution.
Note 1: to create the Service Bus queues, topics and subscriptions required by the sample you can use the Provisioning console application contained in the solution.
Note 2: before importing the binding file in your BizTalk Server 2013 application, make sure to perform the following operations:
Note 3: make sure to replace the [ISSUER_SECRET] and [SERVICE_BUS_NAMESPACE] placeholders in the following solution files:
Note 4: if you use a session-aware request queue and subscription, you need to specify this explicitly in the configuration of both SB-Messaging receive locations, as shown in the picture below:
Running the Sample
Assuming that you have already deployed and properly configured the solution, you can proceed as follows to test it.
The figure below shows how to use the client application to invoke the most interesting combination:
Note 1: you can select one of 5 different methods to send a BrokeredMessage to the BizTalk Server 2013 application.
Note 2: you can use DebugView to monitor the trace messages produced by the orchestrations.
I’ll soon publish an article on MSDN to describe the solution in detail. So come back later for the link! The demo code can be found on MSDN Code Gallery.
Hi Paolo,
Did you get time to publish the detailed description of the above solution.If so, please provide me with the link.
Thanks
Hi Swaran
Unfortunately this is one of the few solutions, probably the only one, for which I didn't provide a detailed description. Sorry about that. :(
Ciao
Paolo
Is it possible to expose an orchestration as a web service and consume it using Windows Azure BizTalk Services?
In our project we have a requirement to use orchestrations, but orchestrations are not supported in WABS. Is there any other way we can achieve the same?
Please help me in this regard.
Thanks.
Absolutely, you can expose the orchestration using the Service Bus Relayed Messaging and call the endpoint from a bridge running In Windows Azure BizTalk Services. Just create a receive location based on one of the following adapters:
- WCF-NetTcpRelay
- WCF-BasicHttpRelay
- WCF-Custom + Service Bus binding (BasicHttpRelayBinding, NetTcpRelayBinding)
See the bridge contained in my sample code.msdn.microsoft.com/.../How-to-integrate-Mobile-6718aaf2 to understand how to invoke the WCF receive location from a bridge.
Thanks alot Paolo...
Will try to simulate the same and let you know for any issues... :)
Thanking you again for the quick reply... :)
Could you please help me with a small demo on how to expose the orchestration using the Service Bus Relayed Messaging and calling the endpoint from a bridge running In Windows Azure BizTalk Services.
Hi mate
I'm extremely busy in this period, I don't have time to build a demo, sorry about that. See how to expose a relay service using a WCF adapter in my sample code.msdn.microsoft.com/.../How-to-integrate-Mobiles-77b25d12. I will have time just starting in mid-March, unless something new arrives on my plate before that date. :)
Not a prob..Will try to work it out.....Thanks alot for all your help.. :)
I have exposed the orchestration as a WCF service, but the service endpoints (.svc and .svc_mex) are not getting listed out under the service bus in WABS portal.
What could be the reason for the same?
Please contact me by email. This blog is not meant to receive suggestions for a PoC. :) That said, I don't understand where you are searching for the service relay endpoint on the WABS portal. You should rather search for it in the Relays tab under your Service Bus namespace on the WA management portal. Make sure to use the serviceRegistrySettings endpoint behavior to make your WCF receive location discoverable. :)
Sure will do that.And sorry for the same.Could you please provide me with your email ID.
Thanks Paolo,
I'm really enjoying your insights -- and absolutely love the SB Explorer (nice work).
In this entry though, I'm missing the value added by using SB Queues/Topics to transport the request/response through to the BizTalk Orchestration. We could just as easily and more simply use an exposed WCF Service BizTalk endpoint. Am I missing something?
We're thinking of adding Windows Server SB to our integration platform but might limit it to pub/sub solutions at least until the relay service is available (wish, wish) -- does that sound typical?
Thanks again for all you do
JimF
Hi Jim
thanks for the feedback! :) To answer your question, I would say, it depends on what binding you use on your WCF-Custom receive location:
- Service Bus Topics and Queues provide a pub/sub, asynchronous, pull model while typical WCF bindings (e.g. NetTcpBinding or its Service Bus Relay counterpart, NetTcpRelayBinding) provide a synchronous, push model.
- When using Topics and Queues, sender and receiver don't necessarily need to be up at the same time, while this is strictly necessary when using WCF with any binding providing a synchronous message exchange pattern.
- Service Bus Topics and Queues provide many functions out of the box: duplicate detection, scheduled messages, sessions, message time to live, etc.
I can guarantee that there are a ton of customers using BizTalk along with Service Bus... I can't make names here, but also big companies. ;)
That makes a lot of sense. And I totally appreciate your enthusiastic endorsement.
You've led me to recognize that I'm not placing a high enough value on the durable messaging feature of the Service Bus. We tend to think in terms of the Request-Response pattern even when it's not required or even appropriate.
THanks for your help -- I feel a little more confident thinking through the addition of Service Bus features to the integration platform.
Hope that all is going well.
Jim
Hey Paolo,
Quick question: I need to generate the WSDL file for bts2013/nettcp/calculatorservice. I haven't been able to browse to the WSDL file in Azure. Are you aware of a way to create the WSDL file for a Service Bus relay? Should we do it from BizTalk, or what is the procedure for doing that?
This document contains the following sections:

1.0 Sax parser introduction
This document has been revised since its first release. A patch made available around June 7, 2004, updates the sax module to conform to the description in this document.
This utility provides a validating parser for XML 1.0 and XML 1.1. The interface to the parser is based on the SAX (Simple API for XML) specification.
A SAX parser reads the input file and checks it for correctness. While it is parsing the file, it is making callbacks to user code that note what the parser is seeing. If the parser finds an error in the input file it signals an error.
There are two levels of correctness for an xml file: well formed, described in Section 1.4 Well-formed XML documents and valid, described in Section 1.5 Valid XML documents.
When the sax parser is invoked it creates an instance of the class sax-parser (or a subclass of sax-parser). This instance holds the data accumulated during the parse, and this instance is also used to discriminate on the method to call when callbacks are done. A user of the parser will usually subclass sax-parser and then write methods on this subclass for those callbacks that he wishes to handle. The sax-parser class has a set of methods for the callbacks, so the user need only write those methods whose behavior he wishes to change.
All symbols in this module are exported from the net.xml.sax package. The module is named sax. You load the sax module with a form like:

(require :sax)
See also dom.htm, which describes Document Object Model support in Allegro CL.
XML (the Extensible Markup Language) is a language for writing structured documents. An XML document contains characters and elements. An element looks like
<name att1="value1" att2="value2"> body content here </name>
or
<name att1="value1" att2="value2"/>
The elements are used to assign a meaning to the text between the start and end tags of the elements. For example
<name> <lastname>Smith</lastname> <firstname>John</firstname> </name>
The designers of XML intended to write a clear concise specification of a structured document language. While they did not achieve that very ambitious goal, XML has nevertheless become very popular, in large part because of the popularity of the world wide web and the HTML language on which it is written. XML is very similar to HTML.
There are two versions of XML (1.0 and 1.1) and they differ in the characters they permit inside documents. In XML 1.0 the XML designers decided what characters were permitted and declared that all other characters were forbidden. In XML 1.1 the XML designers decided which characters were forbidden and any characters not forbidden were permitted. Far more characters are permitted in XML 1.1 than XML 1.0. All XML 1.0 documents are XML 1.1 documents but not the other way around.
The way that the two versions of XML documents are distinguished is by what appears at the beginning of the document. An XML 1.1 document always begins
<?xml version="1.1"?>
and this form may also include encoding and standalone attributes.
An XML 1.0 document begins with

<?xml version="1.0"?>

or begins with no <?xml..?> form at all.
There are two popular models for parsing an XML document, DOM and SAX:
The parser reads the whole XML document and returns a object which represents the whole XML document. A program can query this object and the objects this object points to in order to find out what is in the XML document.
The advantage of DOM parsing is the XML document is now in a form that's easily studied and manipulated by a program. It has the disadvantage that there is a limit to the size of the XML document you can parse this way since the whole document represented by objects must fit in the address space of the program.
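As a short illustration of the DOM style in Python (not the Allegro CL API described here), the whole document is parsed into an object tree that can be queried afterwards:

```python
from xml.dom.minidom import parseString

# DOM parsing: the entire document becomes an object tree.
doc = parseString(
    "<name><lastname>Smith</lastname><firstname>John</firstname></name>"
)

# Query the tree after parsing is complete.
last = doc.getElementsByTagName("lastname")[0].firstChild.data
first = doc.getElementsByTagName("firstname")[0].firstChild.data
print(last, first)  # Smith John
```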
While the parser is reading the XML document it is calling back to user code to tell it what it's encountering. The callbacks occur immediately, before the parser has even determined that the XML document is completely error free.
The advantage of SAX parsing is the user code can ignore what it does not care about and only keep the data it considers important. Thus it can handle huge XML documents. But the disadvantage is the callbacks occur before it's even known if this document is correct XML. If the goal is to analyze the document then the sax user code will often end up writing ad-hoc DOM structure.
An XML document must be well-formed or technically it is not an XML document. There is no simple definition of well-formed (and readers are invited to read the XML specification for all the details). Basically though, a well-formed document follows these rules: there is a single top-level element, every start tag has a matching end tag (or uses the empty-element form), and elements are properly nested. Here are some well-formed documents:
document 1: <foo/>
document 2: <foo></foo>
document 3: <foo> hello <bar> </bar> hello <baz/> hello </foo>
and some not well-formed ones:
document 4: <foo/> <bar/>
document 5: <foo/> hello
document 6: no elements here.
For example, this document's start tag has a matching end tag:

<foo> </foo>

and this one isn't well-formed:

<foo>

This document's elements are properly nested:

<foo> hello <bar/> <baz> hello </baz> </foo>

and this one isn't:

<foo> hello <bar> hello </foo> hello </bar>
The ACL Sax parser will signal an error if it detects that the document is not well-formed.
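Any conforming SAX parser behaves the same way; here is a small illustration using Python's xml.sax (not the ACL parser) on documents like the ones above:

```python
import io
import xml.sax

def well_formed(doc: bytes) -> bool:
    """Return True if doc parses without a well-formedness error."""
    try:
        xml.sax.parse(io.BytesIO(doc), xml.sax.ContentHandler())
        return True
    except xml.sax.SAXParseException:
        return False

print(well_formed(b"<foo> hello <bar> </bar> hello <baz/> hello </foo>"))  # True
print(well_formed(b"<foo/> <bar/>"))  # False: two top-level elements
print(well_formed(b"<foo> hello <bar> hello </foo> hello </bar>"))  # False: bad nesting
```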
A well-formed document can also be valid. A valid document contains or references a DTD (document type description) and obeys that DTD.
A DTD contains two things:
The ACL Sax parser will test a document for validity only if the :validate argument is given as true (the :validate argument defaults to false). The Sax parser takes longer to parse if it must validate as well. Even if the parser is not validating it may detect problems in the document for which it would have signaled an error if it were validating. In this case the parser will issue a warning. You can suppress those warnings by passing nil to the :warn argument (the default for :warn is true).
The parser collects all the DTD information about the document and stores it in the parser object that's passed to all the callback functions. You can use the accessors shown below to retrieve information about the DTD.
There are two predefined classes that you can pass to the :class argument of the sax-parse functions.
The class sax-parser defines callback functions that do nothing, except for compute-external-address and compute-external-format, which do the work necessary to ensure that the parse will be able to handle external references.
In the example code shown below we'll assume that we've created our own subclass of sax-parser called my-sax-parser. We do this by evaluating:

(defclass my-sax-parser (sax-parser)
  ((private :initform nil :accessor private)))
The class test-sax-parser defines the callback methods to print the values of their arguments. This allows you to see how the sax parser would treat an xml document. See Section 3.0 Testing the sax parser: the test-sax-parser class for more information on this class.
Arguments: (parser sax-parser)
Users should define their own method on their subclass of sax-parser. start-document is called just before the parser begins parsing its input. This function can be used to initialize values in the instance of the parser object. The default method returns nil.
This callback is a good place to do initialization of the data structures you will be using during the parse, as we do with the following method (we assume private and make-private-object are elsewhere defined):
(defmethod start-document ((parser my-sax-parser))
  (setf (private parser) (make-private-object)))
Arguments: (parser sax-parser)
Users should define their own method on their subclass of sax-parser. end-document is called after the parse is complete. end-document will only be called if the document is well formed, and in the case where the parser was called with :validate t, end-document will only be called if the document is also valid. The default method returns nil.
(defmethod end-document ((parser my-sax-parser))
  (finalize-parse (private parser)))
Arguments: (parser sax-parser) iri localname qname attrs
This method is called when a start element (like <foo> or <foo/>) is seen. If you sax-parse with :namespace t, then iri is the iri that denotes the namespace of the start element tag (this is specified by a namespace binding); localname is the part after the colon in the element tag; qname is what was actually seen as the tag (e.g. "rdf:foo"); and attrs is a list of ("attrname" . "value") conses where attrname can contain colons (i.e. namespace processing has not been done on attribute names).

If, on the other hand, you sax-parse with :namespace nil, then iri is nil; localname is the actual element tag (e.g. "rdf:foo"); qname is the same as localname; and attrs is a list of ("attrname" . "value") conses where attrname can contain colons (i.e. namespace processing has not been done).
Given this xml source:
<foo xmlns="urn:defnamespace" xmlns:pack="urn:packnamespace"> <bar/> <pack:baz/> </foo>

If the parser is called with :namespace t then during the parse three calls to start-element are made, with the arguments to the calls being:

iri="urn:defnamespace", localname="foo", qname="foo"
iri="urn:defnamespace", localname="bar", qname="bar"
iri="urn:packnamespace", localname="baz", qname="pack:baz"
If the parser is called with :namespace nil then again three calls are made to start-element, but this time the arguments are:

iri=nil, localname="foo", qname="foo"
iri=nil, localname="bar", qname="bar"
iri=nil, localname="pack:baz", qname="pack:baz"
The default method does nothing and returns nil.
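The same iri/localname distinction can be observed with any namespace-aware SAX parser; here is an illustration using Python's xml.sax (not the ACL parser) on an equivalent document:

```python
import io
import xml.sax
from xml.sax.handler import feature_namespaces

class Collector(xml.sax.ContentHandler):
    def __init__(self):
        self.seen = []
    def startElementNS(self, name, qname, attrs):
        # With namespace processing on, name is a (uri, localname) pair.
        self.seen.append(name)

doc = (b'<foo xmlns="urn:defnamespace" xmlns:pack="urn:packnamespace">'
       b'<bar/><pack:baz/></foo>')

handler = Collector()
parser = xml.sax.make_parser()
parser.setFeature(feature_namespaces, True)
parser.setContentHandler(handler)
parser.parse(io.BytesIO(doc))

print(handler.seen)
# [('urn:defnamespace', 'foo'), ('urn:defnamespace', 'bar'),
#  ('urn:packnamespace', 'baz')]
```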
Arguments: (parser sax-parser) iri localname qname
This method is called when an end element (</foo> or <foo/>) is seen. As with start-element, the values of the iri, localname, and qname arguments depend on whether sax-parse is called with :namespace t or :namespace nil. See start-element for details.

The default method does nothing and returns nil.
start-prefix-mapping
Arguments: (parser sax-parser) prefix iri
This method is called when the parser enters a context where the namespace prefix is mapped to the given iri. A prefix of "" means the default mapping. start-prefix-mapping is called before the start-element call for the element that defines the prefix mapping.
The default method does nothing and returns nil.
end-prefix-mapping
Arguments: (parser sax-parser) prefix
This method is called when the parser leaves a context where the prefix mapping applies.
The default method does nothing and returns nil.
processing-instruction
Arguments: (parser sax-parser) target data
This method is called when <?name data> is seen, with target being "name" and data being "data". If <?name data values> is seen then target is "name" and data is "data values".
The default method does nothing and returns nil.
content
Arguments: (parser sax-parser) content start end ignorable
This method is called when text is seen between elements. Note: for a given string of characters between elements, one or more calls to content or content-character may be made. For example, given the XML fragment <tag>abcdefghijkl</tag>, the content method may be called once with a string argument of "abcdefghijkl", or twice with string arguments "abcd" and then "efghijkl", and so on for all the other permutations.

If an application requires access to the entire content string as a single string, the application program must collect the fragments into a contiguous string. The parse-to-lxml function and the DOM module implement normalize options that ensure contiguous string content appears as a single Lisp string.

This is the most common error people make with this sax parser: assuming that all content between the start and end element tags will be passed in exactly one call to content or content-character. As we said, the content may be provided in more than one call to content and content-character.

content is a character array. start is the index of the first character with content. end is one past the index of the last character with content.

content-character
Arguments: (parser sax-parser) character ignorable
This method is called when a single character of text is seen between elements. character is that character.

comment
Arguments: (parser sax-parser) string
This method is called when an XML comment (i.e. <!-- ..... -->) is seen.
The default method does nothing and returns nil.
compute-external-address
Arguments: (parser sax-parser) system public current-filename
This method is called when the parser has to locate another file in order to continue parsing. It should return a filename to open next. It can return nil if it cannot compute a name.

system is nil or a string holding the value after SYSTEM in the xml source. public is nil or a string holding the value after PUBLIC in the xml source. current-filename is the filename of the file being parsed.

The default method does not handle non-file identifiers such as those beginning with "http:". It merges the pathname of system with the pathname of current-filename, if current-filename is non-nil, and otherwise returns the value of system. The default method signals an error if system is nil. Thus, the body of the default method looks like this:

(if* (null system)
   then (error "Can't compute external address with no system address"))
(if* current-filename
   then (namestring (merge-pathnames (pathname system)
                                     (pathname current-filename)))
   else system)
compute-external-format
Arguments: (parser sax-parser) encoding ef
Given an encoding, this method should return an external format or the name of an external format. The default method does the following:

(find-external-format
 (if* (equalp encoding "shift_jis")
    then :shiftjis
  elseif (equalp encoding "euc-jp")
    then :euc
  elseif (equalp encoding "utf-16")
    then ef ; already must have the correct ef
    else encoding))
sax-parse-file
Arguments: filename &key (namespace t) (external t) (validate nil) (class (quote sax-parser)) (warn t) comments show-xmlns
This function parses the file specified by filename. The keyword arguments include:

validate: if nil then some validation will still be done, but problems will be reported as warnings and not errors. Even if validate is nil the parser will signal an error if the xml is not well formed.

class: the class of parser object to use; pass test-sax-parser if you just want to experiment and see the parser in action. The value can be a symbol naming a class, a class object or an instance of the class sax-parser or a subclass of sax-parser. If an instance is passed then it must be a freshly created one that has never been passed to a sax parser function.

comments: if true (the default is nil) then call (comment parser string) for comments seen in the xml.

If namespace is nil then the xmlns attributes are always included with the list of attributes (since in this case there is nothing special about them).
sax-parse-stream
Arguments: stream &key (namespace t) (external t) (validate nil) (class (quote sax-parser)) (warn t) comments show-xmlns
This function is like sax-parse-file but parses the data from stream, which must be an open stream. stream is closed when sax-parse-stream returns.
sax-parse-string
Arguments: string &key (namespace t) (external t) (validate nil) (class (quote sax-parser)) (warn t) comments show-xmlns
This function is like sax-parse-file but parses the data from the string argument, which should be a string.

Note that the xml form parsed should not contain an encoding declaration, as a string-input stream does not have an associated external format (the contents of the string are characters already).
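As an illustration, sax-parse-string can be pointed at the built-in test-sax-parser class described later in this document. This is a sketch only: it assumes the :sax module has been loaded, and the callback output printed by test-sax-parser is implementation-dependent, so none is shown.

```lisp
(require :sax)
(use-package :net.xml.sax)

;; Parse a small document; each SAX event is printed by the
;; test-sax-parser callback methods.
(sax-parse-string "<greeting lang=\"en\">Hello</greeting>"
                  :class 'test-sax-parser)
```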
sax-parser-flag
Arguments: parser flag-name
The parser flags are initially set from the values supplied (or defaulted) to the sax-parse-xxxx functions (sax-parse-file, sax-parse-stream, and sax-parse-string). You can use sax-parser-flag to read the current value of a flag and (setf sax-parser-flag) to set certain flags. Some flags should not be modified after the parse has begun.

parser is an instance of the sax-parser class (or a subclass of sax-parser). flag-name is one of the values from the table below. sax-parser-flag returns t or nil as the flag is or is not set. When you use setf with this function to modify a flag, specify a non-nil value to set the flag and nil to unset it.

The table below lists the flags; writeable means that user code can change the value during the parse. Setting a flag denoted as not writeable will result in undefined behavior.
The parser first parses the DTD and then the content of the file. The information found in the DTD is stored in the parser object where it is referenced by the parser during the parse.
An xml document need not have a DTD. However if you tell the parser to validate a document then the document must have a DTD.
When the first start-element callback is made the whole DTD has been parsed and the information is stored in the parser object.
After the parse completes the DTD information is still stored in the parser object.
The following accessors retrieve DTD information from the parser object.
Arguments: parser
Returns a string naming the root element. Every xml file contains exactly one element at 'top level' and may contain other elements inside that root element.
Arguments: parser
Returns a hash table where the key is the general entity name and the value is an entity object.
Arguments: parser
Returns a hash table where the key is the parameter entity name and the value is an entity object.
Arguments: parser
Returns a hash table where the key is the notation name and the value is a notation object.
Arguments: entity
Returns a string naming the entity.
Arguments: entity
Returns nil or a string holding the replacement text for the entity. If the entity is internal then this field will be a string.

Arguments: entity
Returns nil for internal entities. For external entities this is a string describing the location of the entity's value (the string is often a location on the filesystem relative to the file that references it).

Arguments: entity
Returns nil or a string. For certain external entities that have public identifiers, this is that public identifier.

Arguments: entity
Returns nil or a string. If this is an external unparsed entity then this is the name of a notation that describes its format.
Arguments: entity
Returns true if this entity was defined in the 'external subset' which is a term referring to files other than the main file being parsed.
Arguments: notation
Returns a string naming the notation.
Arguments: notation
Returns nil or a string naming the public identifier for this notation.

Arguments: notation
Returns a string naming the location of a description of the notation.
Arguments: attribute
Returns a string naming the attribute.
Arguments: attribute
Returns the type of the attribute, which is one of:
:cdata
:id
:idref
:idrefs
:entity
:entities
:nmtoken
:nmtokens
(:notation "name" ...)
(:enum "name" ....)
Arguments: attribute
The value returned is one of: :required, :implied, (:fixed value), or (:value value).

Arguments: attribute
Returns true if the attribute was declared in the external subset.

Arguments: element
Returns a string naming the element.
Arguments: element
Returns a list of attribute objects describing the attributes of this element.
Arguments: element
A description of the specification of the body of the element. The format is:
spec     := :empty | :any | cp | (:mixed ["name" ...])
cp       := (:cp cho/seq modifier)
cho/seq  := (:choice cp [cp ...]) | (:sequence cp [cp ...]) | "name"
modifier := nil | "*" | "?" | "+"
Arguments: element
Returns true if the element was defined in the external subset.
If you wish to test the sax-parser, we have defined several example classes. The class test-sax-parser and its associated methods are already defined in the system (after the sax module is loaded). The class sax-count-parser, defined below in this section, is not defined in the sax module but the definition code can be copied from this document.
The examples in this section assume that the SAX module has been loaded and the relevant package (net.xml.sax) has been used. If you do not want to use the package, package-qualify the relevant symbols. The following forms load the module and use the package:

(require :sax)
(use-package :net.xml.sax)
Here are the definitions of the class test-sax-parser and the associated methods (again, these definitions are included in the sax module so they need not be defined again). The methods on test-sax-parser print the arguments to the callbacks.
This is the definition of this class:

(defclass test-sax-parser (sax-parser) ())

(defmethod start-document ((parser test-sax-parser))
  (format t "sax callback: Start Document~%"))

(defmethod end-document ((parser test-sax-parser))
  (format t "sax callback: End Document~%"))

(defmethod start-element ((parser test-sax-parser) iri localname qname attrs)
  (format t "sax callback: start element ~s (iri: ~s) (qname: ~s) attrs: ~s~%"
          localname iri qname attrs)
  nil)

(defmethod end-element ((parser test-sax-parser) iri localname qname)
  (format t "sax callback: end element ~s (iri: ~s) (qname: ~s)~%"
          localname iri qname)
  nil)

(defmethod start-prefix-mapping ((parser test-sax-parser) prefix iri)
  (format t "sax callback: start-prefix-mapping ~s -> ~s~%" prefix iri)
  nil)

(defmethod end-prefix-mapping ((parser test-sax-parser) prefix)
  (format t "sax callback: end-prefix-mapping ~s~%" prefix))

(defmethod processing-instruction ((parser test-sax-parser) target data)
  (format t "sax callback: processing-instruction target: ~s, data: ~s~%"
          target data)
  nil)

(defmethod content ((parser test-sax-parser) content start end ignorable)
  (format t "sax callback: ~:[~;ignorable~] content(~s,~s) ~s~%"
          ignorable start end (subseq content start end))
  nil)

(defmethod content-character ((parser test-sax-parser) character ignorable)
  (format t "sax callback: ~:[~;ignorable~] content-char ~s~%"
          ignorable character)
  nil)

(defmethod compute-external-format ((parser test-sax-parser) encoding ef)
  (let ((ans (call-next-method)))
    (format t "sax callback: compute-external-format of ~s is ~s (current is ~s)~%"
            encoding ans ef)
    ans))

(defmethod comment ((parser test-sax-parser) string)
  ;; called when <!-- ..... --> is seen
  (format t "sax callback: comment: ~s~%" string)
  nil)
This is an example of another useful sax-parser subclass. The sax-count-parser class maintains a count of the elements, attributes and characters in an xml file. This class is not defined when the sax parser is loaded but you can just copy the definition below and load it into Lisp if you wish to try it.
;; definition of a sax parser to count items
(defstruct counter
  (elements 0)
  (attributes 0)
  (characters 0))

(defclass sax-count-parser (sax-parser)
  ((counts :initform (make-counter) :reader counts)))

(defmethod start-element ((parser sax-count-parser) iri localname qname attrs)
  (declare (ignore iri localname qname))
  (let ((counter (counts parser)))
    (incf (counter-elements counter))
    (let ((attlen (length attrs)))
      (if* (> attlen 0)
         then (incf (counter-attributes counter) attlen)))))

(defmethod content ((parser sax-count-parser) content start end ignorable)
  (declare (ignore content ignorable))
  (let ((counter (counts parser)))
    (incf (counter-characters counter) (- end start))))

(defmethod content-character ((parser sax-count-parser) char ignorable)
  (declare (ignore char ignorable))
  (let ((counter (counts parser)))
    (incf (counter-characters counter))))
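A hypothetical session using this class might look like the following sketch. It assumes the definitions above have been loaded, and it passes a fresh instance through the class argument, as sax-parse-string permits; since the exact return values are not specified here, no output is claimed.

```lisp
;; Count elements, attributes and characters in a small document.
(let ((parser (make-instance 'sax-count-parser)))
  (sax-parse-string "<doc lang=\"en\"><word>hi</word></doc>"
                    :class parser)
  (counts parser))
```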
LXML is a list representation of an XML parse tree. The notation was introduced initially with the PXML module (see pxml.htm, but note that the PXML module is deprecated and may be removed in a release later than 9.0), and is supported for compatibility with existing applications. It is also a convenient representation for moderately sized XML documents.
The representation is made up of lists of LXML tags containing LXML nodes. An LXML node is either a string or a list of an LXML tag followed by an LXML node. An LXML tag is either a symbol or a list of a symbol followed by attribute/value pairs, where the attribute is a symbol and the value is a string. In brief:
LXML-node -> string | (LXML-tag [LXML-node] ... )
LXML-tag  -> symbol | (symbol [attr-name attr-value] ... )
And more formally:
- An LXML node may be a string representing textual element content.
- An LXML node may be a list representing a named XML element.
  - The first element in the list represents the element tag.
    - If no attributes were present in the element tag, then the element tag is represented by a Lisp symbol; the symbol-name of the Lisp symbol is the local name of the tag; the XML namespace of the tag is represented by the Lisp home package of the symbol.
    - If attributes were present in the element tag, then the element tag is represented by a list where the first element is the tag (as above) and the remainder of the list is a Lisp property list where the property keys are Lisp symbols that represent the attribute names and the property values are strings that represent the attribute values.
  - The remainder of the list is a list of LXML nodes that represent the content of the XML tag.
- An LXML node may be a list of the form (:comment text-string) to represent a comment in the XML document.
- An LXML node may be a list of the form (:pi target data) to represent a processing instruction in the XML document.
Each distinct XML namespace is mapped to a Lisp package. An application may specify the namespace-to-package mapping in full, in part, or not at all. If there is no pre-specified Lisp package for some XML namespace, then the parser creates a new package with a name "pppnn" where "ppp" is a prefix specified by the user and "nn" is an integer that guarantees uniqueness. The default prefix is the symbol-name of :net.xml.namespace. (ending with a period).
The :sax module implements the lxml-parser sub-class of sax-parser. The methods on this class use the SAX parser to build an LXML data structure from the parsed XML input. (In earlier releases, it was possible to require a module named :sax-lxml, which would not be included by default in the :sax module. Now that module is always loaded when the :sax module is loaded and cannot be required separately.)
A subclass of sax-parser. Slots include normalize, default-package, package-prefix, and skip-ignorable. The add-parser-package method is defined. The initial value of the package slot is :keyword. The initial value of the normalize slot is nil.
Arguments: lxml-parser
Returns the value of the normalize slot of its argument, which must be an instance of lxml-parser.

If the normalize slot is nil, string element content may appear as a list of strings. The length of each fragment is determined by the implementation and may vary from one parse to the next.

If the normalize slot is non-nil, then if an element contains only string content, this content will appear as one contiguous string. This option will naturally require the parser to do more consing during the parse.
Arguments: parser iri package &rest prefixes
The default method, defined on (lxml-parser t t), adds a new iri-to-package mapping to the parser or adds a prefix to an existing mapping.

The iri argument may be a string or a net.uri:uri instance (see uri.htm). The package argument may be a package or the name of a package. The prefixes may be symbols or strings. When the iri argument is a uri instance, it is converted to its string form for use during the parse.

Note that the Allegro CL implementation of uri instances may map many different uri instances to the same string. To avoid possible ambiguities, it is best to specify the iri argument as a string that will be used without any interpretation or change.

To pre-specify namespace-to-package mappings in a program, the application program must call add-parser-package in a start-document method for an application-specific sub-class of lxml-parser.
Arguments: lxml-parser
Returns the default package of the lxml-parser instance.

Arguments: lxml-parser
Returns the prefix string used to generate package names for packages that represent namespaces that were not specified with add-parser-package. The default value is :net.xml.namespace. (with a trailing period).

Arguments: lxml-parser
Returns whether ignorable text will be skipped for the lxml-parser instance. The default value is nil.
Arguments: lxml-parser
When a parse is complete, this accessor returns the resulting lxml data structure.
A subclass of lxml-parser. The initial value of the package slot is the value of *package*. The initial value of the normalize slot is t.
Arguments: string-or-stream &key external-callback content-only general-entities parameter-entities uri-to-package package class normalize comments warn
This function was updated with a patch in December, 2012.
The arguments to this function are like the arguments to net.xml.parser:parse-xml (see pxml.htm). The class and methods are included for compatibility with pxml.
The content-only, external-callback, general-entities, and parameter-entities arguments are ignored (silently in the case of content-only, with a warning for the others).
The package keyword argument specifies the Lisp package of XML names without a namespace qualifier. If the argument is omitted or nil, the initial value in the class is used.

The class keyword argument specifies the class of the parser. The choice of class can affect the default package and normalize behavior, and many other behaviors. The default is lxml-parser.
The class argument may be the name of a class, a class object, or an instance of a suitable class. If an instance is passed, it must be one that has never been used by the SAX or LXML parser.
The normalize keyword argument specifies the value of the normalize slot in the parser. Values other than nil or t must be specified in the call. It can be one of the following values:

nil: do not combine strings, do not delete anything.
t: only combine adjacent string content into a single string.
:trim-simple: applies only to elements where the only content is strings. Combine adjacent string content into a single string; delete leading and trailing whitespace in the combined string.
:trim-complex: applies only to elements that contain other named XML elements. Combine and delete as in :trim-simple. If the resulting string is the empty string then delete it entirely.
:trim-all: apply both :trim-simple and :trim-complex.
:trim: same as :trim-all.
Whitespace characters are defined by the parser-char-table in the parser instance. The various trim behaviors are not specified in the XML standard but are often useful when parsing to LXML.
The uri-to-package argument is a list of conses of the form (iri . package) where iri may be a string or a uri instance and package may be a package name or a package instance.
The :comments argument may be nil or non-nil. When nil (the default), XML comments are discarded during the parse. When non-nil, XML comments are included in the LXML output as expressions of the form (:comment text-string).
The :warn argument is propagated to the sax-parse-* function called by parse-to-lxml.
This form is more general than that allowed by the parse-xml function.
The lxml-parser instance created in the most recent call to parse-to-lxml.
The :pxml-sax module implements a partial pxml API to the SAX parser. This module replaces the :pxml module. It requires the modules :sax and :sax-lxml. Symbols naming operators, variables, etc. in the module are in the :net.xml.parser package. Load this module with

(require :pxml-sax)

The operators in this module are described below.
The :pxml-dual module allows an application to switch at run time between the base implementation of pxml and the partial SAX implementation. It requires the modules :pxml, :sax, and :sax-lxml. Symbols naming operators, variables, etc. in the module are in the :net.xml.parser package. Load this module with

(require :pxml-dual)

When the module is loaded, the initial setting is to use the SAX parser implementation.

We provide this module to allow mission-critical applications to test both parsers in the same run-time environment. You can switch between the base and the SAX parsers with pxml-version, passing :base to select the base parser or :sax to select the SAX parser.
In this section, we list the operators and variables associated with the various PXML modules. In many cases, the operators behave differently depending on what module is loaded.
The PXML parser default behavior was to silently ignore external DTDs unless a function was specified for the external-callback argument. The SAX parser default behavior is to signal an error if an external DTD cannot be located. The built-in default function can only locate files in the local file system.
Existing applications that depend on the default external DTD behavior of the PXML parser may break when using the SAX parser through the PXML compatibility package. These applications will need to use the SAX parser more explicitly and specify a suitable compute-external-address method.
Arguments:
In the :pxml-sax module, this function works as described in pxml.htm: called with no arguments, this function returns a string naming the PXML version.
Arguments: &optional parser-type
Called with no arguments, this function returns a string naming the PXML version. If parser-type is specified, it should be one of :sax, :base, or :query. When parser-type is :sax, the SAX version of parse-xml is enabled. When parser-type is :base, the original version of parse-xml is enabled. When parser-type is :query, this function returns :base or :sax depending on which version of parse-xml is enabled.
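Putting these modes together, a session in the :pxml-dual module might look like the following sketch (the version string returned by the no-argument call depends on the installed release, so no value is shown):

```lisp
(pxml-version)        ; a string naming the PXML version
(pxml-version :query) ; :sax or :base, whichever is currently enabled
(pxml-version :base)  ; enable the base implementation of parse-xml
(pxml-version :sax)   ; switch back to the SAX implementation
```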
Arguments: input-source &key external-callback content-only general-entities parameter-entities uri-to-package
The arguments and behavior are fully described in pxml.htm. The difference among modules is whether the keyword arguments content-only, external-callback, general-entities, and parameter-entities have effect or are ignored. In the :pxml-sax module and (thus) in the :pxml-dual module when in :sax mode, those arguments are ignored (silently in the case of content-only, with a warning for the others). The implementation of parse-xml in SAX mode cannot at this time support the use of those arguments, but is much faster than in base mode. All arguments are considered when regular PXML is loaded or the :pxml-dual module is loaded and is in :base mode.

When the SAX implementation of parse-xml is used, the uri-to-package argument may be a list of conses of the form (iri . package) where iri may be a string or a uri instance and package may be a package name or a package instance.
This form is more general than the form accepted by the base implementation of parse-xml. An application using the more general form will not be back-compatible with the base implementation of parse-xml.
Arguments: &body body
Defined in the :pxml-dual module only (see Section 4.4 The PXML-DUAL Module). Within the body of this macro the implementation of parse-xml is dynamically bound to the base implementation. See also with-sax-pxml.
Arguments: &body body
Defined in the :pxml-dual module only (see Section 4.4 The PXML-DUAL Module). Within the body of this macro the implementation of parse-xml is dynamically bound to the SAX implementation. See also with-base-pxml.
lxml-parser
*lxml-parser*
pxml-parser
Copyright (c) 1998-2012, Franz Inc. Oakland, CA., USA. All rights reserved.
Documentation for Allegro CL version 8.2. This page was not revised from the 8.1 page.
Created 2010.1.21.
http://franz.com/support/documentation/8.2/doc/sax.htm
SYNOPSIS
#include <string.h>
char *stpncpy(char *dest, const char *src, size_t n);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
stpncpy():
- Since glibc 2.10:
- _POSIX_C_SOURCE >= 200809L
- Before glibc 2.10:
- _GNU_SOURCE
DESCRIPTION
The stpncpy() function copies at most n characters from the string pointed to by src, including the terminating null byte ('\0'), to the array pointed to by dest. Exactly n characters are written at dest. If strlen(src) is smaller than n, the remaining characters in the array pointed to by dest are filled with null ('\0') bytes. If strlen(src) is greater than or equal to n, the string pointed to by dest will not be null-terminated.
RETURN VALUE
stpncpy() returns a pointer to the terminating null byte in dest, or, if dest is not null-terminated, dest+n.

ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
This function was added to POSIX.1-2008. Before that, it was a GNU extension. It first appeared in version 1.07 of the GNU C library in 1993.
COLOPHON
This page is part of release 4.06 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at
https://manpages.org/stpncpy/3
One element of a subcommand dispatch table.
More...
#include <svn_opt.h>
One element of a subcommand dispatch table.
Definition at line 82 of file svn_opt.h.
A list of alias names for this command (e.g., 'up' for 'update').
Definition at line 91 of file svn_opt.h.
The function this command invokes.
Definition at line 88 of file svn_opt.h.
A brief string describing this command, for usage messages.
Definition at line 94 of file svn_opt.h.
The full name of this command.
Definition at line 85 of file svn_opt.h.
A list of options accepted by this command.
Each value in the array is a unique enum (the 2nd field in apr_getopt_option_t)
Definition at line 99 of file svn_opt.h.
http://subversion.apache.org/docs/api/1.7/structsvn__opt__subcommand__desc2__t.html
Recently, when I was working on a SQL Server 2005 project, I created a job and scheduled it as recurring using the GUI interface. Then I tried to reverse engineer the script (Create Script > Change the Script parameters manually).
It showed the begin date as the current date but, to my surprise, the last date was 99991231, i.e. 31st December, 9999. (What a workaround!! Why not assign the end date to a predefined macro or value, i.e. something similar to infinity?)
So, the SQL Server guys predict that the world will end on 31st December, 9999 (Armageddon; there are people who think otherwise, refer to Wikipedia).
Come on, I always knew there was something spiritual about Microsoft products. They defy all the rules on Computer Science. :-)
When a tester enters my room wearing a big smile, most common replies are "It was working on my machine" or probably "It was working that day" :-) .
Well I say, "You know, since it uses Microsoft products, you can start it only on an auspicious day and that too by chanting OM (a holy word that Hindus use)."
Ok, jokes apart, I never supported Microsoft products until I was hired by a Microsoft-based company and was placed on .NET and SQL Server projects.
That's when I realized that Microsoft somehow allows you to create most things, and easily too, but as a computer science engineer I still find that things are not in place.
For example, I was experimenting with SQL Server Integration Services using C# code. I wanted to use the Microsoft.SqlServer.Dts.Runtime namespace, but it is not referenced by default for a Console application. So I tried to find the dll that I should reference (I presumed it would have been Microsoft.SqlServer.Dts.dll or something with a similar name).
I visited MSDN for help (see MSDN). I didn't find a thing on which DLL to reference. By the way, if Microsoft claims that they have improved their Visual Studio standards by n%, they have deteriorated their MSDN standards by the same n%. It is one of the most pathetic documentation sets for a good product like .NET.
I personally think every MSDN documenter should read the Java Doc before even starting to write a line. Any time I am coding in Java, most of my problems are solved using the Java Doc. This is not the case with C#.
Anyway, I did find the dll after half an hour of trial-and-error searching. It was Microsoft.SqlServer.ManagedDts.dll.
Come on guys, we are not robots, to find such references among a huge list of dlls. (Why "Managed"? I understand the difference between managed and unmanaged code. Even if it is important to use the Managed substring, please reflect that in your documentation.)
Disclaimer: This blog is simply meant for humour and is not meant to offend anyone. It does not reflect any views of my company or my colleagues. It is purely my personal opinion.
http://niketanblog.blogspot.com/2006/11/metaphysical-coding.html
In this C++ tutorial, let us discuss multithreading concepts and creation or termination of threads with an example program.
Introduction of Multithreading
Multithreading is a specialized form of multitasking that allows your computer to run two or more programs concurrently. There are two types of multitasking:
- process-based multitasking
- thread-based multitasking
Process-based multitasking handles the concurrent execution of programs, whereas thread-based multitasking deals with the concurrent execution of parts of the same program.
In general, C++ (before C++11) does not contain any built-in support for multithreaded applications. Instead, it relies entirely upon the operating system to provide this feature.
Creation of Thread
The routine given below is used to create a POSIX thread.
#include <pthread.h> pthread_create (thread, attr, start_routine, arg)
Here, pthread_create creates a new thread and makes it executable. This routine can be called any number of times from anywhere within your code.
Description of parameters in the above-specified routine
- thread
An opaque, unique identifier for the new thread returned by the subroutine.
- attr
An opaque attribute object that may be used to set thread attributes. You can specify a thread attributes object or NULL for the default values.
- start_routine
The C++ routine that the thread will execute once it is created.
- arg
A single argument that may be passed to start_routine. It must be passed by reference as a pointer cast of type void. NULL may be used if no argument is to be passed.
Termination of Threads
The routine given below is used to terminate a POSIX thread
#include <pthread.h> pthread_exit (status)
Here pthread_exit is used to explicitly exit a thread. Typically, the pthread_exit() routine is called after a thread has completed its work and is no longer required to exist.
C++ Program for creation and termination of threads
This following example code creates 5 threads with the pthread_create() routine and then terminates it using pthread_exit().
#include <iostream>
#include <cstdlib>
#include <pthread.h>

using namespace std;

#define NUM_THREADS 5

void *PrintHello(void *threadid)
{
    long tid = (long)threadid;
    cout << "Hello World! Thread ID, " << tid << endl;
    pthread_exit(NULL);
}

int main()
{
    pthread_t threads[NUM_THREADS];
    int rc;

    for (long i = 0; i < NUM_THREADS; i++) {
        cout << "main() : creating thread, " << i << endl;
        rc = pthread_create(&threads[i], NULL, PrintHello, (void *)i);
        if (rc) {
            cout << "Error:unable to create thread," << rc << endl;
            exit(-1);
        }
    }
    pthread_exit(NULL);
}
Output
main() : creating thread, 0
main() : creating thread, 1
main() : creating thread, 2
main() : creating thread, 3
main() : creating thread, 4
Hello World! Thread ID, 0
Hello World! Thread ID, 1
Hello World! Thread ID, 2
Hello World! Thread ID, 3
Hello World! Thread ID, 4
https://www.codeatglance.com/cpp-multithreading/
The micro:bit can output sound to a pair of headphones or a speaker by sending a signal on Pin0.
Common ways to do this are by using a set of analogue headphones with a jack plug on the end, or using an inexpensive piezo speaker.
Connection
Programming
MakeCode
Place a start melody block from the Music menu underneath the on start block to play a melody when the micro:bit is powered on or reset.
Python
Music
Import the music module and then use music.play() to play a melody when the micro:bit is powered on or reset.
import music

music.play(music.ENTERTAINER)
Audio
It is also possible to play audio from sound files available to the micro:bit. Add the files attached to this article to the Python Editor filesystem, copy this program into the editor, then flash it to the micro:bit.
import audio

# read the files from the micro:bit filesystem
def read_frame(f_list, frame):
    for file in f_list:
        ln = file.readinto(frame)
        while ln:
            yield frame
            ln = file.readinto(frame)

# create a function to open and play each file in turn
def play_files(names):
    frame = audio.AudioFrame()
    files = [open(name, "rb") for name in names]
    audio.play(read_frame(files, frame))

# file names here are placeholders for the files you added to the filesystem
play_files(["sound1.raw", "sound2.raw"])
Thanks to fizban for the example.
Speech
MicroPython also contains a speech synthesiser to enable the micro:bit to talk and sing.
This program uses speech.say() and speech.pronounce() with some parameters as an example of what we can do with the speech module. Copy this program into the editor and then Download/Flash this program to the micro:bit:
from microbit import *
import speech

while True:
    speech.say('I am a BBC microbit')
    sleep(100)
    speech.pronounce('BIHDDIY BIHDDIY BIHDDIY BIHDDIY', speed=60, pitch=255)
    sleep(100)
You might want to use a powered speaker for this as the speech module is quiet.
How the micro:bit produces sound via PWM
The micro:bit uses Pulse Width Modulation (PWM) as a way to simulate an analogue output on a digital pin. It sends a series of high speed on/off electronic pulses to a speaker which can convert this to physical vibrations to create sound waves.
The variation in the length of the on pulse (the duty cycle) creates an average voltage output. A 50% duty cycle (often called a square wave) sets an equal time for pulse on and off.
To convey the frequency of sound (Hz) that the digital signal is trying to reproduce, we vary the time between the signal voltage being high or low.

To do this, the period (ms) of the note needs to be found, which is the amount of time it takes for the wave to cycle once. This is done by taking the inverse of the frequency of the note:

Period (ms) = 1000 / Frequency (Hz)

For example, the note A4 maps to 440 Hz, so using our calculation:

1000 / 440 ≈ 2.27272 ms

Once we have the value of the period, to represent the pitch in PWM, we just hold the signal high for half the length of the period (1.13636 ms), and low for the other half (1.13636 ms).
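The arithmetic above can be checked with a few lines of plain Python (pwm_timings_ms is just an illustrative helper, not part of the micro:bit API):

```python
def pwm_timings_ms(freq_hz):
    """Return (period, half_period) in milliseconds for a tone at freq_hz."""
    period_ms = 1000.0 / freq_hz   # Period (ms) = 1000 / Frequency (Hz)
    return period_ms, period_ms / 2.0

# A4 = 440 Hz: hold the pin high for ~1.136 ms and low for ~1.136 ms per cycle
period, half = pwm_timings_ms(440)
print(round(period, 5), round(half, 5))
```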
The Analogue Pitch block in MakeCode lets us set this frequency for PWM on the desired pin.
https://support.microbit.org/support/solutions/articles/19000101901-connecting-a-speaker-to-the-micro-bit
Increase the memory available to your tests
I love having test projects included in my solutions. Software is alive: I'm constantly making improvements/changes/fixes. When I have customers asking for various features in my code, or for code improvements, being agile and able to publish a changed build with utmost confidence relies largely on a great set of tests that can be run very quickly and exercise my code. These tests also make it easy to run my code, allowing Test Driven Development techniques and much greater productivity.
I wrote a tool called MemSpect which consists of:
1. A portion that’s injected into a target process (very early) that intercepts all memory use, such as
a. Heap operations like creation and allocation
b. Virtual memory allocations like VirtualAlloc
c. Managed object creation
2. For each intercepted allocation, it creates an associated tag which contains
a. The Sequence number (1,2,3…)
b. The ThreadId
c. The Size of the allocation
d. The callstack (who allocated it and why)
3. If the object is freed (or garbage collected) the memory is freed and the tag is discarded.
4. A UI executable shows UI to examine all memory allocations
5. It can take a “snapshot” of the entire process for later offline analysis even after the target process has exited
We discovered and fixed hundreds of memory issues which affect performance. (Reading from disk is 1000 times slower than reading from memory, like the difference between 1 second and 17 minutes! So inefficient memory use leads to more slow disk accesses.)
MemSpect is further described here:
I rely heavily on the test infrastructure I’ve created for MemSpect. However, sometimes a test or two throws an Out of Memory exception, especially those that load a full offline snapshot of Visual Studio memory.
I know that the production code, when run on a 64 bit OS, can use more memory, but the test infrastructure in VS 2010 does not take advantage of more memory.
So I modified the test execution engine to allow much larger memory to be used, if available. ( see Out of memory? Easy ways to increase the memory available to your program)
Without the fix, the code below used 164Meg before it failed. With the fix, 655 Meg! That’s 4 times more memory!
On a 64 bit OS with at least 8 Gigs memory (that’s the same amount of memory on my Windows Phone!),
Start Visual Studio 2010
File->New->Project->C#->Test
Call it csTest.
Paste in the code sample below.
You might have to choose Project->Properties->Build->Allow Unsafe Code.
Hit Ctrl+R,T to run the test (Don’t hit Ctrl+R, Ctrl+T, which will debug the test!)
3:15:52 PM Size = 40000
3:15:52 PM 0 Priv= 55,214,080 Mgd: 0
3:15:52 PM 1000 Priv= 94,519,296 Mgd: 42,323,344
3:15:53 PM 2000 Priv= 135,544,832 Mgd: 83,416,952
3:15:53 PM 3000 Priv= 219,201,536 Mgd: 164,495,376
3:15:53 PM 4000 Priv= 219,459,584 Mgd: 164,495,376
3:15:53 PM Exception SizeofBig = 40,000 Inst #=4,096 TotSize = 163,840
Now make the change: Start a Visual Studio Command Prompt
Make a backup copy of QTAgent32.exe before you start. Also Exit VS so the file is not in use.
C:\>editbin "Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\QTAgent32.exe" /LargeAddressAware
To revert the change:
C:\>editbin "Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\QTAgent32.exe" /LargeAddressAware:no
To verify the change:
link /dump /headers "Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\QTAgent32.exe" | more
122 characteristics
Executable
Application can handle large (>2GB) addresses
32 bit word machine
3:14:45 PM Size = 40000
3:14:45 PM 0 Priv= 55,070,720 Mgd: 0
3:14:46 PM 1000 Priv= 115,257,344 Mgd: 21,889,544
3:14:46 PM 2000 Priv= 197,410,816 Mgd: 21,889,544
3:14:46 PM 3000 Priv= 219,459,584 Mgd: 164,997,440
3:14:46 PM 4000 Priv= 219,844,608 Mgd: 164,997,440
3:14:46 PM 5000 Priv= 386,777,088 Mgd: 328,591,840
3:14:46 PM 6000 Priv= 387,035,136 Mgd: 328,591,840
3:14:46 PM 7000 Priv= 387,424,256 Mgd: 328,591,840
3:14:46 PM 8000 Priv= 387,813,376 Mgd: 328,591,840
3:14:47 PM 9000 Priv= 715,374,592 Mgd: 655,855,360
3:14:47 PM 10000 Priv= 715,632,640 Mgd: 655,855,360
3:14:47 PM 11000 Priv= 715,632,640 Mgd: 655,855,360
3:14:47 PM 12000 Priv= 716,021,760 Mgd: 655,855,360
3:14:47 PM 13000 Priv= 716,410,880 Mgd: 655,855,360
3:14:47 PM 14000 Priv= 716,800,000 Mgd: 655,855,360
3:14:47 PM 15000 Priv= 717,189,120 Mgd: 655,855,360
3:14:47 PM 16000 Priv= 717,189,120 Mgd: 655,855,360
3:14:47 PM Exception SizeofBig = 40,000 Inst #=16,384 TotSize = 655,360
Additional notes:
· The code uses a log file with a constant name. Normally, a test just passes or fails, but with a log file the test can output status. I open the file in VS and choose Tools->Options->Environment->Documents->Auto-load changes, if saved, and the log refreshes automatically as the test progresses. To refresh the log faster, use File.AppendAllText (as in the code comments)
· Using a log file as test output, one can compare the log with a baseline log and assert any differences.
· Performance counters are used to show memory use. Note how the managed memory grows in big chunks (doubling)
see also
Out of memory? Easy ways to increase the memory available to your program
Create your own Test Host using XAML to run your unit tests
<code>
using System;
using System.Text;
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using EnvDTE;
using System.IO;
using System.Diagnostics;
using System.Reflection;

namespace csTest
{
    /// <summary>
    /// Summary description for UnitTest1
    /// </summary>
    [TestClass]
    public class MyTests
    {
        private TestContext testContextInstance;

        /// <summary>
        /// Gets or sets the test context which provides
        /// information about and functionality for the current test run.
        /// </summary>
        public TestContext TestContext
        {
            get { return testContextInstance; }
            set { testContextInstance = value; }
        }

        StreamWriter _logStream;
        public const string LogFileName = @"d:\t.txt";
        private int _nLines;

        public void LogString(string logMsg, params object[] parms)
        {
            try
            {
                if (parms != null)
                {
                    logMsg = string.Format(logMsg, parms);
                }
                /*
                // just add a slash at the beginning of this line to switch
                File.AppendAllText(LogFileName, logMsg); // slower, but updates every line
                /*/
                _logStream.WriteLine(
                    string.Format("{0:T} {1}", DateTime.Now, logMsg));
                if (_nLines++ == 100) // flush every 100 lines
                {
                    _logStream.Flush();
                }
                //*/
            }
            catch (Exception ex)
            {
                LogString(logMsg);
            }
        }

        [TestInitialize]
        public void Init()
        {
            File.WriteAllText(LogFileName, ""); // create/clear log file
            _logStream = new StreamWriter(LogFileName, append: true);
        }

        [TestCleanup]
        public void cleanup()
        {
            _logStream.Close();
        }

        public struct BigClass
        {
            public unsafe fixed int _Array[10000]; // = new int[1000000];
        }

        [TestMethod]
        public void MyTest()
        {
            int iInstance = 0;
            var sizeofBig = System.Runtime.InteropServices.Marshal.SizeOf(typeof(BigClass));
            try
            {
                //var thisAsm = System.IO.Path.GetFileNameWithoutExtension(Assembly.GetEntryAssembly().Location);
                var procName = "QTAgent32";
                PerformanceCounter pcPrivBytes = new PerformanceCounter(
                    "Process", "Private Bytes", procName);
                PerformanceCounter pcManagedMem = new PerformanceCounter(
                    ".NET CLR Memory", "# Bytes in all Heaps", procName);
                var nInstances = 100000;
                LogString("Size = {0}", sizeofBig);
                var lstBig = new List<BigClass>();
                for (iInstance = 0; iInstance < nInstances; iInstance++)
                {
                    var newBig = new BigClass();
                    lstBig.Add(newBig);
                    if (iInstance % 1000 == 0)
                    {
                        LogString("{0,6} Priv={1,16:n0} Mgd:{2,16:n0}",
                            iInstance,
                            (int)pcPrivBytes.NextValue(),
                            (int)pcManagedMem.NextValue());
                    }
                }
                LogString("Created {0} ", nInstances);
                System.Threading.Thread.Sleep(5000);
            }
            catch (Exception ex)
            {
                LogString("Exception SizeofBig = {0:n0} Inst #={1:n0} TotSize = {2:n0}\r\n{3}",
                    sizeofBig, iInstance, iInstance * sizeofBig, ex);
                throw;
            }
        }
    }
}
</code>
https://docs.microsoft.com/en-us/archive/blogs/calvin_hsia/increase-the-memory-available-to-your-tests
Getting routes from Google Maps for JavaScript
Location Based Services are among the most used services in mobile applications today. Sometimes, routes are needed to determine how to get to a location more easily. This tutorial teaches how to get routes from Google Maps.
Preconditions
This example needs to import some modules which can be found in File:Rout libs.zip. It also needs an internet connection.
Source file
In this example we will get routes from Campina Grande to Joao Pessoa, both cities in Brazil. To get routes for other locations, just alter the START_LOCATION and END_LOCATION string constants.
import urllib
from html_parser import *

START_LOCATION = "campina grande"
END_LOCATION = "joao pessoa"

def parser_html(start, end):
    rout_link = " \
mobileproducts&daddr=" + start + \
        "&output=mobile&site=local&saddr=" + end + \
        "&btnG=Como+chegar"
    GHTML = urllib.urlopen(rout_link)
    parser = Rout()
    parser.feed(GHTML.read())
    return parser

def get_rout(start, end):
    end = end.replace(" ", "+")
    start = start.replace(" ", "+")
    start = start.encode('utf-8')
    end = end.encode('utf-8')
    parser = parser_html(start, end)
    routlist = parser.return_list()
    parser.close()
    return routlist

routlist = get_rout(START_LOCATION, END_LOCATION)
try:
    for element in routlist:
        print unicode(element, 'latin-1')
except:
    print "Destination does not exist"
Postconditions
If all run ok we will get this result:
Head south on Av. Santa Catarina toward Av. Mato Grosso - 30 m
Turn left at Av. Mato Grosso - 0.2 km
Turn left at Av. Amazonas - 0.2 km
Turn right at Av. Espírito Santo - 1.1 km
Continue on R. Prof. Joaquim F V Galvão - 0.8 km
Turn right at BR-230 - 0.3 km
Slight right to stay on BR-230 - 10.8 km
Continue on BR-101 - 6.9 km
Exit onto BR-230 - 108 km
Continue on Av. Pref. Severino Bezerra Cabral - 2.9 km
Slight left at Av. Canal Go through 1 roundabout - 1.1 km
Make a U-turn at R. Tomaz Soares de Souza - 55 m
http://developer.nokia.com/community/wiki/Archived:Getting_routes_from_Google_Maps_for_JavaScript
Checking if a user owns a domain
The technique we’re using is the one used by Google, Microsoft and others to verify that you've got some authority over a domain. So while it's not foolproof, at least we're in good company!
The code in this article is TypeScript, but the same method would work in most languages.
Overview
All of the verification methods I've seen for these rely on the user being able to modify the site in some way - which makes sense, since you're checking if they have control over the site they're trying to use.
Most of them seem to have settled on using some form of DNS entry - a special record which they can check actually exists.
Quick DNS intro
This is very brief; for a (slightly) fuller introduction to the DNS, see my other post.
The Domain Name System consists of records giving information to computers accessing the internet. There are quite a few different types of record. The most basic one is called an A record, A for address. It essentially says "this text - foobar.example.com - points to this IP address".
There are a number of reserved addresses which have particular meanings. One useful address is 127.0.0.1 - that always means "this computer". The symbolic name for it is localhost.
The plan
We want to check the user can modify the DNS entries for that domain, but not with anything particularly disruptive or complicated - the more complicated we make it the more likely it is that user error will creep in.
The simplest way: generate a random subdomain and have them create an A record pointing to 127.0.0.1.
Generating an alias
There are many different ways to do this. I chose to use the Node uuid module and take the first 8 characters. 8 was chosen because it was random enough for our purposes, and because it was the first 'lump' in the v4 UUID.
siteDetails["alias"] = uuid().substr(0, 8);
Checking the alias
Using the Node dns module we can resolve the alias we created; we append the domain after it, making the alias a subdomain.

The plain dns methods are callback based; the module also supplies a dnsPromises set of APIs which are Promise based. We’ll use that resolve method for convenience.
import dns from "dns"; const dnsPromises = dns.promises; type Site = { alias: string; // Alias we'll be verifying domain: string; // Domain the user gave us verified: boolean; // Is it verified yet } async function verifySite(site: Site) { try { const res = await dnsPromises.resolve(site.alias + "." + site.domain); const valid = ((res.length == 1) && (res[0] == "127.0.0.1")); site.verified = valid; } catch (err) { console.error(`Error ${err} doing site ${site.id} verification`); } }
We’re expecting the result of the lookup to be a single entry, 127.0.0.1 - if it is, we call the site verified. Lastly, we make sure the data reflects what we just found.
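The acceptance rule inside verifySite can also be expressed as a pure function, which makes it easy to unit-test without any DNS traffic (isVerifiedResult is an illustrative name, not from the original post):

```typescript
// A DNS answer verifies the domain only when it contains exactly one
// record and that record is the loopback address we asked the user to add.
function isVerifiedResult(addresses: string[]): boolean {
    return addresses.length === 1 && addresses[0] === "127.0.0.1";
}

console.log(isVerifiedResult(["127.0.0.1"]));             // prints true
console.log(isVerifiedResult(["127.0.0.1", "10.0.0.1"])); // prints false
```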
Running checks in the background
We now have a function which we can use to verify domains. The last stage is to have it run periodically in the background, rather than on-demand.
The implementation I used is below. I haven’t included the utility functions (like getAllSites), but the code should still be understandable without those.

startBackground uses DOMAIN_VERIFY_PERIOD_SECONDS from the environment if it’s defined; if it isn’t, it defaults to 300 seconds (5 minutes). It then uses setInterval to schedule verifySites. setInterval takes milliseconds as an argument, so we convert it first.

verifySites simply gets the current list of sites and runs verifySite on all of them.

Lastly, stopBackground will cancel the interval function if it’s been scheduled to run.
import { getAllSites } from "./queries";

let domainCheckId: NodeJS.Timeout | null = null;

export async function verifySites() {
    const sites: Site[] = await getAllSites();
    sites.forEach(site => verifySite(site));
}

export function startBackground(): void {
    const SECOND = 1000;
    const period: number = parseInt(process.env.DOMAIN_VERIFY_PERIOD_SECONDS || "300");
    console.log(`Starting domainCheck, period ${period} seconds`);
    domainCheckId = setInterval(verifySites, SECOND * period);
}

export function stopBackground(): void {
    if (domainCheckId) {
        clearInterval(domainCheckId);
        domainCheckId = null;
    }
}
And that’s it - those functions are enough to start verifying domains in the background. Let me know if you use it!
https://www.solarwinter.net/checking-if-a-user-owns-a-domain/
How to get the automatic AP back on lopy ?
Hi, I'm using a Lopy4. I managed to connect with a computer to the automatic AP named "lopy-wlan-xxx". Then I went on and everything worked fine.
I tried to change the AP name with the following command :
network.WLAN(mode=network.WLAN.AP, ssid="lopy_test", auth=(network.WLAN.WPA2,'lopy123456'))
It worked (I could see the new network on the computer) but I couldn't connect to it. I tried to change the mode (AP, STA, STA_AP) but I can't get to the ftp link anymore.
Does anyone know how to get back to the way it was (default configuration with the automatic access point) or get the FTP to work with a custom network ?
Thanks
@robert-hh Ok, thanks !
@melkoutch If you just have a 1-to-1 connection, then you can do anything. If you have several devices already in a network with a router, then it may be more convenient to have the Lopy4 also connected to the router.
And no, it is not faster.
@robert-hh Yes I saw that but when you say "more convenient" what do you mean ? Is it like faster ? I really just need one computer to get access to the ftp server so is it worth it to take a router raher than making the board an AP ?
@melkoutch You could set up the device to connect to you home router, along the instructions at
That is usually much more convenient.
@melkoutch Have you tried restarting the ftp server after changing the AP name?
import network
server = network.Server()
server.deinit()  # disable the server
# enable the server again with new settings
server.init(login=('user', 'password'), timeout=600)
(from pycom documentation)
Warning: if you use telnet to access the board you will lose the connection after server.deinit().
Use a serial cable to do this.
@Eric73 Hi, yes I did try. I did this exact line and then tried to connect again (tell me if I'm wrong) but it didn't work.
Hi, have you tried to set your IP with something like this:
network.WLAN.ifconfig(config=('192.168.0.4', '255.255.255.0', '192.168.0.1', '8.8.8.8'))
Documentation found here:
https://forum.pycom.io/topic/4939/how-to-get-the-automatic-ap-back-on-lopy
You have seen the demos showing you how to access your camera and take pictures from a PhoneGap application. But these demos often end there, and a number of people have asked me for an end-to-end example showing how to take pictures and upload them to a server.
I created a simple app that I called Picture Feed. It shows you the last pictures uploaded by users, and lets you take pictures that are automatically uploaded to a server for other people to see.
In this sample app, I upload the pictures to a Node.js server, and I keep track of the list of pictures and associated information (if any) in a MongoDB collection. But the same approach would work with other server stacks.
Here is a short video:
Source Code
The client-side and server-side (Node.js) code is available in this GitHub repository.
In my next post I’ll modify this example to demonstrate how to upload pictures to Amazon S3.
This application was created with PhoneGap 3. Don’t forget to add the plugins required by the application:
phonegap local plugin add phonegap local plugin add phonegap local plugin add phonegap local plugin add phonegap local plugin add
To test the application on iOS, make sure you modify the “widget id” value in config.xml and specify the namespace that matches your application provisioning profile.
http://coenraets.org/blog/2013/09/how-to-upload-pictures-from-a-phonegap-application-to-node-js-and-other-servers-2/?utm_source=rss&utm_campaign=how-to-upload-pictures-from-a-phonegap-application-to-node-js-and-other-servers-2&utm_content=buffer90db7&utm_medium=facebook
Today’s Programming Praxis problem is an easy one. In 2005, Steve Yegge posted an article about interviewing programmers that listed 7 simple programming exercises for phone screening. These assignments are a perfect example of ‘Bonsai code’, since they require so little code. The Scheme solution clocks in at 22 lines, or just over 3 lines per function on average. Let’s see how Haskell does.
Our imports (the latter one is only to ensure type safety in exercise 7):
import Text.Printf import Data.Word
1. Write a function to reverse a string
Normally you would just use the reverse function from the Prelude (like the Scheme solution does), but I consider that cheating, since the assignment says to write our own. Fortunately, the full solution is not much longer:
reverse' :: String -> String reverse' = foldl (flip (:)) []
2. Write a function to compute the Nth fibonacci number
The function to produce an infinite list of Fibonacci numbers is a well-known example of Haskell’s brevity. All we need to fulfill the assignment is to get the correct one:
fib :: (Num a) => Int -> a fib n = fibs !! n where fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
3. Print out the grade-school multiplication table up to 12 x 12
The only non-one-liner of the bunch, but still very trivial.
timesTable :: IO () timesTable = mapM_ putStrLn [concat [printf "%4d" (a * b) | a <- [1..12]] | b <- [1..12 :: Int]]
4. Write a function that sums up integers from a text file, one per line.
Aside from the map read bit it’s almost plain English.
sumFile :: FilePath -> IO () sumFile path = print . sum . map read . lines =<< readFile path
5. Write a function to print the odd numbers from 1 to 99.
And another one that almost anyone should be able to understand.
printOdds :: IO () printOdds = print $ filter odd [1..99]
6. Find the largest int value in an int array.
As with the reverse assignment, we would normally just use the built-in maximum function here. But since that’s cheating, we use another simple fold:
largest :: [Int] -> Int largest = foldl max minBound
7. Format an RGB value (three 1-byte numbers) as a 6-digit hexadecimal string.
Printf to the rescue. As mentioned in the imports, we could just use Ints and get rid of one of our imports, but this way we guarantee that only 1-byte numbers can be passed in.
toHex :: Word8 -> Word8 -> Word8 -> String toHex = printf "%02x%02x%02x"
And that brings us to 10 lines of code in total, which is less than half the Scheme solution size. I just love one-liners :)
July 1, 2009 at 2:58 am |
printOdds = print $ [1,3..99]
— but this is getting golfy :)
July 1, 2009 at 1:12 pm |
> reverse’ = foldr (:) []
Did you mean
reverse’ = foldl (flip (:)) []
?
July 1, 2009 at 1:34 pm |
The foldl version is the one used in the Prelude if USE_REPORT_PRELUDE is true, yes. The foldr version will produce the same output though, and since it’s a bit shorter that’s the one I decided to use. After reading your comment, I figured I’d do a little benchmark to see which one is faster. It turns out that on a string with 5 million characters the foldr version is 30% faster than foldl when interpreted, and about 4 times faster when compiled.
I guess there must be some reason why they’re using the foldl version in the Prelude, but to be honest I don’t see it. If anyone happens to know the answer I’d love to hear it.
July 1, 2009 at 2:20 pm |
Your reverse’ function does not seem work, that’s why I commented here. It’s acting like id function for String.
Prelude> let reverse’ = foldr (:) “”
Prelude> reverse’ “foobar”
“foobar”
I guess you didn’t test the function enough or have some mistakes.
July 1, 2009 at 2:33 pm |
D’oh. Apparently I forgot to test again after replacing the foldl with the foldr version. The lesson here: just because it typechecks doesn’t automatically mean that it’s correct :) Thanks, fixed.
http://bonsaicode.wordpress.com/2009/06/30/programming-praxis-steve-yegge%E2%80%99s-phone-screen-coding-exercises/
web.py Sessions
Sessions are a way to store information between requests, thereby making http stateful. They work by sending the user a cookie, which maps to a session storage object on the server. When the user requests a page, the client sends the cookie back with the request, web.py loads the session based on the key, and code can request and store information in it.
Sessions are convenient because they allow a programmer to store user state in native Python objects.
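The cookie-to-session mapping can be sketched framework-independently in a few lines; the names below (new_session, get_session) are illustrative, not web.py's API:

```python
import uuid

# Server-side store: opaque session ids map to native Python objects.
# The client only ever holds the id, carried back and forth as a cookie.
_store = {}

def new_session():
    session_id = uuid.uuid4().hex   # the value sent to the client as a cookie
    _store[session_id] = {}         # state lives server-side, as a plain dict
    return session_id

def get_session(session_id):
    return _store.get(session_id)

sid = new_session()
get_session(sid)['logged_in'] = True
print(get_session(sid))
```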
Storage types
web.py sessions allow for multiple ways to store the session data. These methods include:
- DiskStore. Session data is pickled in a designated directory. When instantiating, the first and only argument is the folder where the session information should be stored on disk.
- DBStore. Session data is pickled and stored in a database. This can be useful if you want to store session data on a separate system. When creating, the DBStore takes 2 arguments: a web.py database instance, and the table name (string). The table which stores the session must have the following schema:
session_id CHAR(128) UNIQUE NOT NULL,
atime DATETIME NOT NULL default current_timestamp,
data TEXT
- ShelfStore. Data is stored using the Python shelve module. When creating, the ShelfStore takes the filename that should be used to store the data.
The storage methods have various performance and setup tradeoffs, so the options allow you to choose what's best for your application.
Example
The following code shows how to use a basic DiskStore session.
import web

urls = (
    '/', 'Index',
    '/login', 'Login',
    '/logout', 'Logout',
)

web.config.debug = False
app = web.application(urls, locals())
session = web.session.Session(app, web.session.DiskStore('sessions'))

class Index:
    def GET(self):
        if session.get('logged_in', False):
            return '<h1>You are logged in</h1><a href="/logout">Logout</a>'
        return '<h1>You are not logged in.</h1><a href="/login">Login now</a>'

class Login:
    def GET(self):
        session.logged_in = True
        raise web.seeother('/')

class Logout:
    def GET(self):
        session.logged_in = False
        raise web.seeother('/')

if __name__ == '__main__':
    app.run()
Sessions and Reloading/Debug Mode
Is your session data disappearing for seemingly no reason? This can happen when using the web.py app reloader (local debug mode), which will not persist the session object between reloads. Here's a nifty hack to get around this.
# Hack to make session play nice with the reloader (in debug mode)
if web.config.get('_session') is None:
    session = web.session.Session(app, db.SessionDBStore())
    web.config._session = session
else:
    session = web.config._session
http://webpy.org/docs/0.3/sessions
So I have a massive (I mean massive) array. It has over 500000 lines. Each line starts with some bs that I don't need. What I need is EVERYTHING after the 64th symbol (the 65th symbol is needed). It's the same for every line, but after the 64th symbol each line's length is different. How do I get symbols from 65 and on...? It's OK by me if everything else is deleted, as long as those symbols after the 64th character stay in the array.
for (int i = 0; i < stringArrAC.length; i++) { if (i<=64) { stringArrAC[i] = null; break; } }
Something like this? But it's not working... Thanks for the help. :)
EDIT: I NEED MORE HELP. So I'm back...

I got my 500000-plus lines the way I want them. Now I need to find the second-to-last word in each line (the words are divided by spaces), find the 3 most popular of them, and find how many of each there are... Can you help me in any way?
data example:
abcd i
asd ffdds abcd ddd ?
abcd ffdds asd ddd i
ddd abcd i
a f g w e a asdfasdasdas fdd i
answer that i need:
abcd 2
ddd 2
fdd 1

or

2 abcd
2 ddd
1 fdd
This is my code
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.List;

public class asdf {
    public static void main(String args[]) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader("input.txt"));
        String str;
        List<String> list = new ArrayList<String>();
        while ((str = in.readLine()) != null) {
            if (str.startsWith(" ") && (str.endsWith("i") || str.endsWith("?"))) {
                list.add(str);
            }
        }
        String[] stringArr = list.toArray(new String[0]); // for backup
        String[] stringArrAC = list.toArray(new String[0]);
        for (int i = 0; i < stringArrAC.length; i++) {
            stringArrAC[i] = stringArrAC[i].substring(63);
        }
        for (int i = 0; i < stringArrAC.length; i++) {
            // THIS IS WHERE I DONT KNOW WHAT TO DO
        }
        try {
            PrintWriter pr = new PrintWriter("output.txt");
            for (int i = 0; i < stringArrAC.length; i++) {
                pr.println(stringArrAC[i]);
            }
            pr.close();
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("No such file exists.");
        }
    }
}
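For the second question, a sketch of the missing loop (class and method names here are illustrative): take the second-to-last word of each line, count occurrences, and report the three most frequent. On the sample data above this produces abcd 2, ddd 2, fdd 1 (ties may print in either order):

```java
import java.util.*;

public class WordCount {
    static List<Map.Entry<String, Integer>> topThree(String[] lines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {
            String[] words = line.trim().split("\\s+");
            if (words.length < 2) continue;          // no second-to-last word
            counts.merge(words[words.length - 2], 1, Integer::sum);
        }
        List<Map.Entry<String, Integer>> entries = new ArrayList<>(counts.entrySet());
        entries.sort((a, b) -> b.getValue() - a.getValue()); // most frequent first
        return entries.subList(0, Math.min(3, entries.size()));
    }

    public static void main(String[] args) {
        String[] data = {
            "abcd i", "asd ffdds abcd ddd ?", "abcd ffdds asd ddd i",
            "ddd abcd i", "a f g w e a asdfasdasdas fdd i"
        };
        for (Map.Entry<String, Integer> e : topThree(data)) {
            System.out.println(e.getKey() + " " + e.getValue());
        }
    }
}
```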
http://www.howtobuildsoftware.com/index.php/how-do/bxm/java-arrays-string-break-java-cut-off-array-line-before-charactern-nr-x
Accessing Hidden VB Interfaces
The Microsoft KnowledgeBase article Q193018 in MSDN lists the problem that Visual C++ programs cannot access hidden Visual Basic interfaces that are defined with an underscore ( _ ) as the first letter in the name, such as the Visual Basic Collection interface declared in MSVBVM60.DLL.
This article will depict the simple solution for this task of using VB hidden object interface in VC project. I will show how to import VB Collection object from VB Control to VC application.
There are several steps to make things work:
The first step is to make ActiveX control. This example shows a simple ActiveX control which has a collection of Cars. the cars have a CarId which identifies them. The control keeps a VB Collection Object to store all the Cars Id's. We would like to get this Collection from VC application.
The control has two interface functions
AddCar(car as Integer) which adds a car to the collection
and Cars(ByRef cars as Variant), which reterns a reference to the cars collection.
1) Build the project in order to produce CarControlProject.ocx, and register the control on your computer using the RegSvr32.exe utility.
2) Open a new dialog-based VC application and put this control on the dialog. Add a member variable of type CUserControl to your dialog.
3) Now, we would like to get the collection of cars that this control holds. This is tricky and requires several steps:
1) Find out the IID of the requested object. In our example we would like to receive a pointer to the _Collection object. Open the OLE-COM viewer and display the information of all available interfaces, then look for the interface _Collection.
2) There are probably several _Collection interfaces; look for the one which is declared as a Visual Basic For Applications component.
3) Generate an .idl file for the object and run MIDL to compile this interface.
Here there are no shortcuts: we have to write the interface. However, we do not have to do it from scratch; we can use information retrieved from the OLE-COM viewer.
Here is the .idl file that I wrote for the _Collection object, using the information from the OLE-COM viewer. Save this file as Collection.idl.
NOTE: The interface ID (IID) must be the same as the one you saw in the OLE-COM viewer. Cut and paste the IID from the OLE-COM viewer tool to avoid mistakes.
```
import "oaidl.idl";
import "ocidl.idl";

[
    object,
    // The uuid was taken from OLE-COM viewer.
    uuid(A4C46780-499F-101B-BB78-00AA00383CBB),
    helpcontext(0x000f7886),
    dual,
    pointer_default(unique)
]
interface _Collection : IDispatch
{
    [id(1), helpcontext(0x000f7903)]
    HRESULT Item([in] VARIANT* Index, [out][retval] VARIANT* Item);

    [id(2), helpcontext(0x000f7901)]
    HRESULT Add([in] VARIANT* Item,
                [in, optional] VARIANT* Key,
                [in, optional] VARIANT* Before,
                [in, optional] VARIANT* After);

    [id(3), helpcontext(0x000f7902)]
    HRESULT Count([retval][out] long* count);

    [id(4), helpcontext(0x000f7904)]
    HRESULT Remove([in] VARIANT* Index);
};
```
Compile this file using the MIDL compiler. The compiler will generate the files that are needed in order to use this interface in our VC application: first Collection_i.c, which contains the definition of the interface's IID (add it to the project so that the linker can find the IID symbol), and Collection.h, the header file for the _Collection interface.
4) Now, we wish to invoke the Cars member of our control. This method will give us a reference to the Collection object.
Pass the method a pointer to a Variant object:
```cpp
#include "Collection.h"

COleVariant vrCars;
HRESULT hr;
interface _Collection* ICol;

m_myCar.Cars(&vrCars);
vrCars.ChangeType(VT_UNKNOWN);
```
5) Use QueryInterface to get the requested interface pointer; in our case, a pointer to the _Collection interface:
hr = vrCars.punkVal->QueryInterface(IID__Collection, (void **) &ICol);
6) Use this pointer to access the Collection.
```cpp
long l;
// Call the Collection Count method.
ICol->Count(&l);
for (long i = 0; i < l; i++)
{
    // Get each Item of the Collection.
    COleVariant vrIndex;
    COleVariant vrValue;
    vrIndex.ChangeType(VT_I4);
    vrIndex.lVal = i + 1;           // VB Collections are 1-based
    ICol->Item(&vrIndex, &vrValue); // fetch the item at this index
    vrValue.ChangeType(VT_BSTR);
    _bstr_t bstr(vrValue.bstrVal, true);
    m_listCars.AddString(bstr);
}
```
How to return Variant data as array from activex dll/exe to VC client
Posted by Legacy on 01/09/2002 12:00am
Originally posted by: ashutosh
problem with _recordset like
Posted by Legacy on 01/02/2002 12:00am
Originally posted by: Antonio Ramirez
Problem with Add !!!
Posted by Legacy on 05/18/2001 12:00am
Originally posted by: Daren
Your walkthrough has been very useful, but alas I still have a problem that you may be able to help me with ....
I am currently unable to Add items into the collection from VC++ !
I am creating a COleVariant vrItem and giving it an int, defining 3 COleVariants as VT_EMPTY, and passing all 3 into the collection's ->Add method ... but alas the item is not being appended to the collection .... and no error surfaces either ...
Please could someone help me ...
MTIA Daren 'Well and Truly Stuck'
http://www.codeguru.com/cpp/com-tech/activex/controls/article.php/c2565/Accessing-Hidden-VB-Interfaces.htm
//**************************************
// Name: GetResource
// Description: Reads embedded files from your executable programs (such as XML, images, etc.). Allows you to distribute one program, the exe, but still access the data and images the program needs.
// By: Lewis E. Moten III (from psc cd)
//
// Assumes:This only allows you to read the data, not update/delete/add to it, except during design time.
//**************************************
Public Function GetResource(ByVal filename As String) As System.IO.Stream
' Demo Instructions:
'
' Add an XML document to your project called "test.xml"
'
' Click the file and under properties, set BuildAction to "Embedded Resource"
'
' paste this routine in one of your classes or forms.
'
' Add the import to the top of your class
' Imports System.Xml.XmlDocument
'
' Dim the XML Document object
' Dim xmldoc As XmlDocument = New XmlDocument()
'
' Load the XML document from the resource
' xmldoc.Load(GetResource("test.xml"))
'
' Congratulations. You have learned how to embed files within
' your program and read them at run-time
'
Dim oAssembly As Reflection.Assembly = MyClass.GetType.Assembly
Dim sFiles() As String
Dim sFile As String
' prefix filename with namespace
filename = oAssembly.GetName.Name & "." & filename
' get a list of all embeded files
sFiles = oAssembly.GetManifestResourceNames()
' loop through each file
For Each sFile In sFiles
' found the file? return the stream
If sFile = filename Then
Return oAssembly.GetManifestResourceStream(sFile)
End If
Next
End Function
https://www.planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=954&lngWId=10
Hi everyone,
I am working on a project (just started) in Python and I want to create a keylogger for two languages, English and Greek. For this I have created the code below (it is under "construction").
```python
from pynput.keyboard import Key, Listener
from langdetect import detect
from pynput import keyboard

string = ""

def on_press(key):
    global string
    if key == keyboard.Key.esc:
        # if the escape key is pressed, close the program
        listener.stop()
    elif key == keyboard.Key.space:
        print(string)
        string = ""
    else:
        string = ''.join([string, str(key).replace("'", "")])

controller = keyboard.Controller()

# Collect events until released
listener = keyboard.Listener(on_press=on_press)
listener.start()
```
I run it and all is good until I change the language (please see the screenshots and explanations).
In the first picture I started typing in English and the printed result was in English, but when I changed to Greek the printed result remained English.
In the second picture I started in Greek and the printed result was in Greek, but when I changed to English the result remained Greek.
How can I solve this problem, in order to get the result in the same language after an input-language change?
Any help or advice would be great!
Thanks in advance.
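Not an answer to the layout question itself, but one thing that helps while debugging it: separate the word-buffering logic from the pynput callback, so that part can be tested without a keyboard at all (the names below are made up for illustration):

```python
def make_word_buffer():
    """Accumulate characters; flush and return the word when a space arrives."""
    buf = []

    def feed(ch):
        if ch == " ":
            word = "".join(buf)
            buf.clear()
            return word
        buf.append(ch)
        return None

    return feed

feed = make_word_buffer()
results = [feed(ch) for ch in "γεια "]  # Greek characters buffer just fine
print(results[-1])
```

The pynput handler would then only do `feed(key.char)`, so any remaining wrong-language output has to come from the listener itself, not from the buffering.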
https://www.daniweb.com/programming/software-development/threads/521791/solve-pynput-problem-when-changing-language
```csharp
private void LoadUserAdmin_Click(object sender, EventArgs e)
{
    UserAdmin ua = new UserAdmin();
}
```
Goran
1) import this namespace with the using keyword at the beginning of the file
using MyNameSpace;
or
2) write a fully qualified name for the UserAdmin form. Example
```csharp
private void LoadUserAdmin_Click(object sender, EventArgs e)
{
    othernamespace.UserAdmin ua = new othernamespace.UserAdmin();
}
```
That sounds like a double-check answer!
You have to modify 2 files:
yourform.cs
yourform.designer.cs
there you will find:
using someNS;
using otherNS;
namespace theFormNS
{
public [partial] class YourFormClass
{
// etcera
Then you have to change in both files: theFormNS to your desired namespace
https://www.experts-exchange.com/questions/23802506/Not-able-to-open-sub-form-from-main-form.html
Circular Programming in Clojure
A couple years ago, I decided to learn Haskell. This led me to develop an interest in lambda calculus, interpreters, and laziness. (And monads, in general as well as in Clojure, but that is less relevant for today's topic.) While researching those topics, I came across a paper meshing all those topics together by explaining a nice technique to implement embedded functional languages in Haskell. On the face of it, the technique presented seems to rely so heavily on Haskell language features — primarily whole-language laziness — that it looks like it would be hard to implement in any other language.
This, of course, looked like a challenge to me, and I immediately started wondering how hard it would be to implement the same technique in Clojure. Then I got distracted and left that problem alone for a few months. That is, until I got a bit of an epiphany while playing with prime numbers two weeks ago.
But first, let's take a look at the paper itself.
The Problem
The paper is entitled "Using Circular Programs for Higher-Order Syntax"; as may be expected, the problem it tries to solve is representing "higher-order syntax", and the solution the authors came up with relies on "circular programs". Those terms may not be immediately familiar, so let's break that down.
At the core of the problem is lambda calculus. Without going into all of the details, the idea here is to simplify the notion of a programming language to the extreme, and specifically to the point where the language is composed of only three forms: variables, single-argument function definitions, and function applications.
It turns out these three forms are enough to form a complete model of computation, but that's not quite what I want to focus on here. For our purposes today, it is enough that:
- We agree any reasonable "functional language" we may want to implement as an embedded language will have functions.
- Higher-order syntax is a nice API to define programs in that language (more on that later).
- It turns out there is a bit of a thorny issue to solve there, and this minimal language is enough to illustrate it.
As we'll want to write some sort of interpreter for our embedded language, we should define an abstract syntax for it. Let's go for:
```clojure
;; a variable, identified by a number
[:var 1]

;; a function definition (identity)
[:abs 2 [:var 2]]

;; a function application applying identity to a free variable
[:app [:abs 1 [:var 1]] [:var 2]]
```
Note that, semantically, that last line has the same meaning as:

```clojure
[:app [:abs 2 [:var 2]] [:var 2]]
```

because of lexical scope, but now, as a human reader, we have to realize that there are two different variables named `2` at play. Also note that we could have chosen to use strings for variable names instead of numbers, and probably would in a real context, but for our purposes here all we need is equality checks and numbers are the simplest way to get that.

If we want to write more complex expressions, it can become a bit tedious. For example, here is the `+` operator in Church encoding:

```clojure
[:abs 0
 [:abs 1
  [:abs 2
   [:abs 3
    [:app [:app [:var 0] [:var 2]]
          [:app [:app [:var 1] [:var 2]] [:var 3]]]]]]]
```
Now, part of the tediousness comes from the language itself: by defining such a small core language, we inevitably get some verbosity in defining useful values.
But part of the tediousness also comes from the use of a data representation of what should naturally be functions. Moreover, there is an inherent opportunity for human error in writing down those expressions, as it is easy to mistype a variable name.
To address those issues, we can use "higher-order syntax", which is a fancy way of saying let's use our host language functions to represent our embedded language functions. This way, we get our host language semantics for free when it comes to checking our argument names (and possibly types) and dealing with shadowing.
This means defining two functions, `abs` and `app`, such that the above can be written:

```clojure
(abs (fn [m]
  (abs (fn [n]
    (abs (fn [f]
      (abs (fn [x]
        (app (app m f)
             (app (app n f) x))))))))))
```
which is still as complicated in terms of the logic of this function (because Church encoding is complicated), but is syntactically much nicer, and arguably less error-prone even in a language with no static type-checking.
And so the problem the paper sets out to solve is: can we define `abs` and `app` with the presented API and reasonable performance?

Specifically, the paper uses Haskell, so, for clarity, these are the types in play:

```haskell
data Exp = Var Int      -- ex: [:var 1]
         | Abs Int Exp  -- ex: [:abs 2 [:var 2]]
         | App Exp Exp  -- ex: [:app [:abs 2 [:var 2]] [:var 2]]

app :: Exp -> Exp -> Exp
abs :: (Exp -> Exp) -> Exp
```
If you're not familiar with Haskell, those last two lines mean that `app` is a function that takes two `Exp` arguments and returns another `Exp`, while `abs` is a function that takes a single argument which is itself a function from `Exp` to `Exp`, and returns an `Exp`.

At this point, it's important to realize that we're not trying to calculate anything: `app` and `abs` are just meant as syntactic sugar to construct `Exp` values. In other words, we could expect `app` to just be `(fn [e1 e2] [:app e1 e2])`.

The implementation of `abs` is less obvious, as it needs to introduce names for the `Abs` arguments, and that's the problem the paper tackles.
Why `abs` is non-trivial

The paper remains a bit more abstract in the representation of variable names, and keeps them as "any sufficiently large, totally ordered set"; in this blog, I'll just represent them as integers. Any other countable (or smaller), ordered type can be trivially mapped to integers as a separate step that is of no interest to this discussion. The operations we use on that type are `zero`, a value representing the "minimal" name (in our case, `0`), `succ`, an operation that takes a name and returns the "next" one (`inc`), and "⊔", an operation that takes two names and returns the bigger one (`max`).

If one were to sketch an implementation for `abs`, one would likely start with something like:

```clojure
(defn abs [f]
  (let [n (generate-name f)]
    [:abs n (f [:var n])]))
```
Maybe it's not obvious that we need to pass the function `f` to `generate-name`, and maybe we actually don't. At this point, I'm just sketching, and `f` is the only bit of information I do have access to, so I might as well assume I'll need to use it somehow.

What are the constraints on choosing a value for `n`? From the semantics of lexical binding, we can deduce that it needs to:

- Not capture free variables in the body of the function, `(f [:var n])`.
- Not be captured by other bindings in the body.
What does that mean? First, let's start with an example of 1. Say the body of the function is:

```clojure
[:app [:var ???] [:var 2]]
```

where `???` is where our generated `n` should come into play. So we're going to generate an abstraction of the form:

```clojure
[:abs n [:app [:var n] [:var 2]]]
```

Hopefully, it is pretty obvious that the meaning of this abstraction will be very different if we set `n` to `2` than if we set `n` to any other number: if we set it to `2`, we capture the free variable, and end up with a function that applies its argument to itself, rather than a function that applies its argument to an externally-scoped value.
Now, where could that free `[:var 2]` come from? If we construct expressions only using `app` and `abs`, at first glance it looks like we cannot generate a free variable. And that is true, at the level of a complete expression. But when constructing an expression, subexpressions can (and often will) have free variables.

For example, on our `+` operator above, when considering the abstraction that introduces the binding `3`, all three other bindings look like free variables to the body of that abstraction.
The second error case we want to avoid is to generate a binding that then gets captured by a function "above" in the call stack. The paper goes into that in more details, but the gist of it is that if we construct our name generation to look "down" and only generate "bigger" names, and this is the only API we ever use to create expressions, then that works out.
Having to look "down" poses an obvious problem: because what we give `abs` is a function, the only way it has of looking at the expression of its body is to actually apply the function and look at the result. Continuing with our example, we're basically in a situation where we're trying to solve for:

```clojure
(abs (fn [n] [:app [:var n] [:var 2]]))
```

and it seems like the only way for `abs` to even see the expression is to pick a number at random and execute the function. If we picked something other than `2`, that's great, but if we picked `2`, we're screwed. Maybe we can run it a bunch of times and see what values are stable?
Threading state
The paper briefly considers the option of threading an infinite list of names through (for which a monadic implementation may be useful), but discards it as a non-starter, because that would require changing the base type of expressions, i.e. the arguments and return types of the `app` and `abs` functions would now need to be wrapped in some way.
Speculative naming
The first real solution to the problem that the paper presents is a more disciplined version of our "let's try random values and see what happens" approach. The core idea is to decide that, in the final expression we produce, the name `0` will never be used.

Under that assumption, the implementation is fairly straightforward:

```clojure
(defn app [e1 e2]
  [:app e1 e2])

(defn exp-max [exp]
  (match exp ;; see core.match
    [:var n] n
    [:app e1 e2] (max (exp-max e1) (exp-max e2))
    [:abs _ e] (exp-max e)))

(defn abs [f]
  (let [n (inc (exp-max (f [:var 0])))]
    [:abs n (f [:var n])]))
```

In other words, we first generate the expression while pretending we're going to name the current variable `0`, then look at the expression, walking through the entire thing in order to find out what the highest variable is, and then generate our expression again using a higher variable than that.

This works, but we call `f` twice at each level, meaning it's exponential in the nesting level of our function definitions. That's obviously not ideal.
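To make the blow-up concrete, here is a quick Python sketch (not from the paper; the names are mine) of this speculate-then-rebuild `abs`, instrumented to count how many times the user-supplied functions get evaluated:

```python
# Expressions are plain tuples mirroring the Clojure vectors; names are ints.

def app(e1, e2):
    return ("app", e1, e2)

def exp_max(e):
    tag = e[0]
    if tag == "var":
        return e[1]
    if tag == "app":
        return max(exp_max(e[1]), exp_max(e[2]))
    return exp_max(e[2])  # abs: walk the body

calls = [0]  # how many times user-supplied functions get evaluated

def abs_naive(f):
    def g(v):
        calls[0] += 1
        return f(v)
    n = 1 + exp_max(g(("var", 0)))    # pass 1: pretend the name is 0
    return ("abs", n, g(("var", n)))  # pass 2: rebuild with the real name

# Three nested abstractions: the bodies get evaluated 2 + 4 + 8 = 14 times,
# instead of once each.
abs_naive(lambda a: abs_naive(lambda b: abs_naive(lambda c: app(app(a, b), c))))
print(calls[0])
```

Each extra level of nesting doubles the count, i.e. O(2^depth) evaluations for what should be a depth-sized amount of work.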
Circular speculation (Haskell)
The solution presented in the paper is to implement the above using what the authors call circular programming:
```haskell
exp_max :: Exp -> Int
exp_max (Var _)   = 0  -- not looking at the variable anymore
exp_max (App f a) = max (exp_max f) (exp_max a)
exp_max (Abs n a) = max n (exp_max a)  -- we now look at n here

app :: Exp -> Exp -> Exp
app = App

abs :: (Exp -> Exp) -> Exp
abs f = Abs n body
  where body = f (Var n)          -- Note that body depends on n ...
        n    = 1 + (exp_max body) -- ... and n depends on body
```
The "circular" part refers to the fact that the definition of `body` depends on `n`, while at the same time the definition of `n` depends on `body`. It is possible to write this in Haskell because the language is lazy. The computation works because there does exist an order of evaluation that ends up defining both `body` and `n` without triggering an infinite loop.

The main "trick" here is to rewrite `exp_max` to look at the bindings of abstractions, but, crucially, not look at variables. Trying to write the same `abs` with our previous definition of `exp-max` would result in an infinite loop.

The paper goes on to prove that this method works (i.e. it produces "safe" bindings) and that one can improve the complexity class from quadratic to linear by tweaking `exp_max`, but in this post I am not interested in that. What I am interested in is whether this can be made to work in Clojure.
Circular programming in Clojure
A word-for-word translation would not work, because Clojure's `let` bindings are not recursive:

```clojure
;; DOES NOT COMPILE
(defn abs [f]
  (let [body (f [:var n])
        n (inc (exp-max body))]
    [:abs n body]))
```

will, if we're lucky, fail to compile with a message along the lines of:

```
Unable to resolve symbol: n in this context
```

If we're not lucky, `n` may get resolved to some global name if we happen to have a var named `n` in the current namespace (or, more likely, if we decided to name this variable differently).
I mentioned in the introduction that this post was inspired by my rediscovery of promises last week, so it should come as no surprise that the solution I'm going to present here relies on promises.
In a way, a Clojure promise for an integer is very similar to a Haskell binding for an integer: in both cases, it does not really matter exactly when the value is computed, so long as it is present when we actually need it.1
Unfortunately, we can't quite get all the way to the same level as Haskell: because Haskell has pervasive laziness, there is no difference between an integer and a promise for an integer. In Clojure, though, those are very different things, and promises do need to be explicitly dereferenced.
This means that we have no choice but to change our underlying type representation to account for promises, and that any code using the (implicit) "exp" type has to know about them. (Or does it? More on that later.) Assuming that's an acceptable cost, the implementation looks something like:
```clojure
(defn app [e1 e2]
  [:app e1 e2])

(defn exp-max [e]
  (match e ;; see core.match
    [:var _] 0
    [:app e1 e2] (max (exp-max e1) (exp-max e2))
    [:abs n _] @n)) ;; this is the quadratic -> linear change

(defn abs [f]
  (let [n (promise)
        body (f [:var n])]
    (deliver n (inc (exp-max body)))
    [:abs n body]))
```
The evaluation of `abs` works because `exp-max` never looks at the value of a `:var`; specifically, because the promise in a var never gets dereferenced.

One could wonder what `promise` gives us here that we wouldn't get from using `atom`, `var` or `volatile`. While those could work just as well, `promise` implies that the value will only ever be set once, and that's a useful thing to know for people reading such code.
This is especially important here because the promises do end up being part of the "api" (i.e. the data representation) of the language.
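The same trick can be imitated outside Clojure with any write-once mutable cell. As a cross-check of the logic (not part of the original post; all Python names are mine), here is a rough analogue where a tiny `Cell` class stands in for a Clojure promise:

```python
# Expressions are tuples mirroring the Clojure vectors:
# ("var", cell), ("abs", cell, body), ("app", e1, e2).

class Cell:
    """Write-once cell: deliver() at most once, deref() only after delivery."""
    _UNSET = object()

    def __init__(self):
        self._value = Cell._UNSET

    def deliver(self, value):
        assert self._value is Cell._UNSET, "promise already delivered"
        self._value = value

    def deref(self):
        assert self._value is not Cell._UNSET, "deref before delivery"
        return self._value

def app(e1, e2):
    return ("app", e1, e2)

def exp_max(e):
    tag = e[0]
    if tag == "var":
        return 0  # crucially, never dereference the cell inside a var
    if tag == "app":
        return max(exp_max(e[1]), exp_max(e[2]))
    return e[1].deref()  # abs: its cell was delivered when it was built

def abs_(f):  # trailing underscore avoids shadowing Python's built-in abs
    n = Cell()
    body = f(("var", n))          # build the body around an empty cell...
    n.deliver(1 + exp_max(body))  # ...then fill it in, looking only "down"
    return ("abs", n, body)

def printable(e):
    """Replace cells by their delivered numbers, for printing/comparison."""
    tag = e[0]
    if tag == "var":
        return ("var", e[1].deref())
    if tag == "app":
        return ("app", printable(e[1]), printable(e[2]))
    return ("abs", e[1].deref(), printable(e[2]))

# Church-encoded +, as in the post:
plus = abs_(lambda m:
       abs_(lambda n:
       abs_(lambda f:
       abs_(lambda x:
            app(app(m, f), app(app(n, f), x))))))

print(printable(plus))
```

Running the Church-encoded `+` through `printable` yields the same `[:abs 4 [:abs 3 [:abs 2 [:abs 1 ...]]]]` numbering as the Clojure version, because `exp_max` only ever dereferences the cells stored in `abs` nodes, which are already delivered by the time they are inspected.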
At this point, one might try to look for ways to get rid of the promises in the final representation. I do not think this is possible. We could easily remove them from `:abs` forms, by just changing the return expression of the `abs` function to:

```clojure
[:abs @n body]
```

and that would work, but there is no way to get rid of the promises in `:var` forms without calling `f` again, and that leads us back to an exponential complexity. And if we're going to have them in `:var` forms, I believe it's more consistent to keep them in `:abs` forms too.

We could also just post-process the complete forms to get rid of the promises:

```clojure
(defn printable-exp [exp]
  (match exp
    [:var p] [:var @p]
    [:app e1 e2] [:app (printable-exp e1) (printable-exp e2)]
    [:abs p b] [:abs @p (printable-exp b)]))
```

```clojure
t.core=> (printable-exp (abs (fn [m]
    #_=>   (abs (fn [n]
    #_=>     (abs (fn [f]
    #_=>       (abs (fn [x]
    #_=>         (app (app m f)
    #_=>              (app (app n f)
    #_=>                   x)))))))))))
[:abs 4 [:abs 3 [:abs 2 [:abs 1 [:app [:app [:var 4] [:var 2]] [:app [:app [:var 3] [:var 2]] [:var 1]]]]]]]
t.core=>
```
but I would recommend doing that only for printing, and actually restoring the promises on reading, as otherwise one might end up with mixed forms.
Conclusion
Haskell obviously has an edge here, as it allows us to work with circular values without any extra ceremony. In Clojure, where strict evaluation is the default, we have to be a bit more explicit about introducing, and resolving, lazy values (circular or otherwise).
It's a bit annoying, in principle, that this implementation choice leaks to the value domain and may have to be taken into account by other code manipulating expressions. One could observe, though, that we started this with the desire to assign a unique name for each variable, and that the only operation we really need to support on variable names is `=`.

It turns out that Clojure promises (like other Clojure mutable cells) compare for equality with `identical?`, meaning that, by construction, we do get a new, unique value for each binding using this technique, and "client" code could simply use equality checks and not care about how the variable names print. This means that we have switched our domain of names from integers to, essentially, Java memory pointers, but apart from that it should all work out.
Pushing this idea further, one could actually define `abs` along the lines of:

```clojure
(defn abs [f]
  (let [n (promise)]
    [:abs n (f [:var n])]))
```

and any code that only relies on equality of variable names would work with that directly. The mapping of promises to integers (or other name domain) could then be done at printing time. Not that I'd recommend it, but it does result in a faster, arguably more lazy, `abs` implementation than the Haskell one.
https://cuddly-octo-palm-tree.com/posts/2021-11-21-circular-clojure/
This is a very common request recently: How to import a CSV file into SQL Server? How to load a CSV file into a SQL Server database table? How to load a comma-delimited file into SQL Server? Let us see the solution in quick steps.
CSV stands for Comma Separated Values, sometimes also called Comma Delimited Values.
Create TestTable
USE TestData
GO
CREATE TABLE CSVTest
(ID INT,
FirstName VARCHAR(40),
LastName VARCHAR(40),
BirthDate SMALLDATETIME)
GO
Create a CSV file in drive C: with the name csvtest.txt and the following content. The location of the file is C:\csvtest.txt.
1,James,Smith,19750101
2,Meggie,Smith,19790122
3,Robert,Smith,20071101
4,Alex,Smith,20040202
Now run the following script to load all the data from the CSV into the database table. If there is an error in any row, that row will not be inserted, but the other rows will be.
BULK
INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
Check the content of the table.
SELECT *
FROM CSVTest
GO
Drop the table to clean up the database.
DROP TABLE CSVTest
GO
Reference: Pinal Dave
Good one.
It solves my problem.
Keep it up. Pinal.
Thanks for your great share.
How about CSV files that have double quotes? Seems like a format file will be needed when using BULK INSERT. Would like to see if you have another approach.
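Regarding the double-quote question: BULK INSERT with a plain comma FIELDTERMINATOR treats the quotes as data, which is why quoted fields containing commas break. One workaround, outside SQL Server, is to pre-process the file with a CSV-aware parser first; for example, Python's csv module understands the quoting convention (sketch for illustration only):

```python
import csv
import io

# A field containing commas, protected by double quotes.
data = 'ID,FirstName,LastName\n1,"Smith, Jr., James",Smith\n'

rows = list(csv.reader(io.StringIO(data)))
print(rows[1])  # the quoted commas stay inside one field
```

The rows could then be rewritten with an unambiguous delimiter (such as a tab or a pipe) before running BULK INSERT, or a format file can be used as the comment suggests.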
Hi all,
I am getting following error in the bulk insert of the .csv file.
The bulk load failed. The column is too long in the data file for row 1, column 25. Verify that the field terminator and row terminator are specified correctly.
Please help me.
Dnyanesh
What would be the process to import a CSV file into existing MS SQL database tables? I have a CSV file that contains data which is comma delimited that I need to import into a SQL production database which is being used by another application. The CSV file contains a list of defects that basically needs to be imported into the Defects table in our SQL database.
Thanks for all of your help in advanced.. This is a really good website to obtain SQL information.
I had some problems with bulk inserts too. The data set to insert was somewhat large, around a few hundred megabytes, resulted in some two million rows. Not all data was required, so massaging was needed.
So I wrote a smallish C#/.NET program that reads the input file in chunks of 25,000 rows, picked appropriate rows with regular expression matching and sent results to DB.
String manipulation is so much more easy in about any general-purpose programming language.
Key components: DataTable for data-to-be-inserted and SqlBulkCopy to do the actual copying.
@Dnyanesh: Your input data is likely to not have correct column/row separator characters.
I'd guess you have a line-break as row separator. If that is the case, the problem is that Windows often uses CR LF (0x0d 0x0a), while other systems use only CR or LF. Check your separator characters. Use a hex editor to be sure.
@Branden: Just use sed to change quote characters :-)
@Dragan: Use Import/Export wizard or write a program to do inserting like I did.
More on CSV files can be found here
Thanks for all in this forum,
I have used the BULK INSERT command to load a CSV file into an existing SQL Server table. It is executing successfully without giving any error, but my only concern is that it is inserting only every alternate record.
BULK INSERT EDFE_Trg.dbo.[tblCCodeMast]
FROM 'c:\temp\Catalogue.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
this is my code. Please help me on this.
With advance thanks…
kumar
thank you very much for this,
it really helped me.
regards
Hi,
I need to import a significant number of Latitude and Longitude data into one of my SQL tables… (running SQL2005)
Tried the above, but got you do not have permission to use Bulk Load statement.
Logged in as administrator… any thoughts?
Martin
OK… solved that issue… logged in using Windows Authentication, and gave my SQL admin the rights for BULK INSERT
SQL code now runs with no errors… and if I do a select * from the table, it shows the field headings, but no data … 0 rows affected
Any thoughts? This one has me stumped… no error message leaves me nowhere to look
Cheers
Hi Pinal
Please help me
Iam using sql server 2005,.,
@Siva
SQL Sever 2005 must have access rights to the file (“c:\Test.txt”). Make sure the file is accessible for the database account
How to import data from a Excel Sheet using “Bulk Insert” command?
Will this method work for SQL Server Compact Edition?
Pinal, excellent job, congrats…
Great stuff, the code is perfect, helped me a lot.
hai dave,
i want to iterate the record from collection of records without using cursors. how can i achieve this
Hi,
What if my table has 4 columns and the CSV file has 3 columns, and I want the last column of the table to be a variable?
THANK YOU!
Hey there, thanks for the great article.
I have one question, is it possible to read a CSV file line-by-line without using bulk import?
It’s just more convenient in my situation so please let me know if its a possible option,
Thank you…..
i want to get data from CSv file into sql database.suppose the CSV file changes that is updations will be done on CSV file for every 8 hrs or 4 hrs like that.Then i have to get the updations reflected in my sql datbase also. Then how it is possible ????
You never answered Scott's question above. Thoughts?
His question:
I am using SQL Express Edition to Run the code..
But I am Getting the error help me to resolve the error…
Thanx in advance…
What happens to the rows that fail? I would like to save those off to allow the user to fix them….
to different scott
I have fields with commas as well. Convert the csv (using Excel) to a tab-delimited text file and then use the '\t' delimiter instead.
to Abdul
I have used Excel to break longer fixed-length columns into smaller columns. I believe this will help you. Open the file in Excel, add two columns after the first column, and use the Data -> Text to Columns option. You can break the data into additional columns and then save the file back out to a csv.
I have over 1,200 separate csv files to import and have two issues where I could use some help.
First, all of the files contain date fields but they are in the standard MM/DD/YYYY format. Is there a way to import them and then change them to an acceptable format for SQL Server?
Next, each file name is unique and I have created a temporary table that has the directory and file name. Is it possible to nest a query something like this:
BULK INSERT MyTable
FROM (Select FileName from Files;)
WITH
(
FIELDTERMINATOR = ‘,’,
ROWTERMINATOR = ‘\n’
)
GO
As an addendum, I discovered that some columns are in scientific notation, so I need to change all numeric columns in the CSVs to be numeric with 6 decimal places.
Did it. I used a combination of tools:
1. Lots of brute force – over 20 hours formatting, saving and converting the data. There has to be a more elegant solution but I couldn’t find it.
2. Excel 2007 because it can open a csv file that has over 200,000 rows. I loaded 25 files at a time into a worksheet (max of computer resources), reformatted the columns in bulk and used a nifty VBA script to write each worksheet out to a separate csv with the name of the worksheet.
3. Found a great procedure that uses XP_CMDSHELL and the BCP utility in SQL 2005 that loads all files located in a specified directory. Loaded over 1,300 files in less than 30 minutes.
Now daily maintenance is loading one file a day.
D.
I'm getting the following error, can you help?
Bulk Insert fails. Column is too long in the data file for row 1, column 1. Make sure the field terminator and row terminator are specified correctly.
Hi,
Does BULK INSERTing into an existing table that has an index on it take longer than inserting into an existing table without an index?
Cheers
Hi Pinal,
I was wondering how I would insert thousands of rows from an Excel sheet.
I converted the file to CSV (comma-separated file) and then, using your bulk import statement, I created all of my rows in my table.
This was really wonderful
Thanks,
Keep it up
Happy Coding!!!
Puneet
I’m doing exactly what is said above but when I try the
BULK
INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
I’m recieving the following error??? I was wondering could someone help me with this or is it just some tiny error that I can’t see, help would be really appreciated!!.
Hi,
I was using BULK insert to import a csv file to a database table(following the original example by Pinaldave in the beginning of this email stream), but got errors. I have granted “Everyone” full control to this file.
Thanks so much,
Sheldon
============================================
BULK INSERT dbo.CSVTest
FROM 'c:\csvtest.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
Msg 4860, Level 16, State 1, Line 1
Cannot bulk load. The file “c:\csvtest.csv” does not exist.
I have a text file that is not comma or tab delimited. The data needs to be separated using fixed-width counts, i.e. characters 1-3 go into column 1, characters 4-12 go into column 2, and so on. Can this method be adapted to handle positional statements?
Hi,
I just want to say thanks for this code; it really helped me a lot.
I also want to know how to import a .csv file through the import command.
I ran the code in SQL 2000 and it worked fine. During cut and paste it had a problem with ' '; I had to re-enter the quotes from my keyboard. Also, a suggestion for those for whom it doesn't run: please name the extension .csv rather than .txt for SQL 2000.
Thanks it works!
Also, just wanted to point out that the query you posted uses " ` " instead of " ' "; shouldn't you change it?
Hi Pinal
Please help me
I am using SQL Server 2000.
What is the max size that SQL Server can read and insert into a database table? Please help me with this: I am inserting a 4 MB file into a table using .NET code and it is not inserting records into the table. Files up to 3 MB insert fine.
Thanks
Hi,
I have to upload data from a fixed-width text file to SQL Server 2000.
Data in the file looks like rahulankilprogrammer.
We have to create a table of three columns (fname, lname and job); the first 5 letters go in the first column, the next 5 letters in the second, and the next 10 letters in the third column.
Please help me do this, as it is not a CSV file.
This file comes to us on a daily basis; daily we have to upload it to SQL Server.
Please help me out ASAP.
ThanX in advance
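For what it's worth, a common workaround for fixed-width files like this (a sketch only; the staging table and file names below are illustrative, not from the comment) is to bulk-load each whole line into a one-column staging table and then split it with SUBSTRING:

```sql
-- Staging table: one wide column that receives each raw line
CREATE TABLE FixedWidthStage (RawLine VARCHAR(255));

-- Load whole lines; as long as the file contains no tab characters,
-- each line lands in the single column (rows end at the newline)
BULK INSERT FixedWidthStage
FROM 'c:\dailyfile.txt'
WITH (ROWTERMINATOR = '\n');

-- Split by position: chars 1-5 -> fname, 6-10 -> lname, 11-20 -> job
INSERT INTO EmployeeTable (fname, lname, job)
SELECT SUBSTRING(RawLine, 1, 5),
       SUBSTRING(RawLine, 6, 5),
       SUBSTRING(RawLine, 11, 10)
FROM FixedWidthStage;
```

The same staging-and-SUBSTRING idea applies to any fixed-width layout; only the offsets change.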
Hello,
I have a problem with BULK INSERT in stored procedures.
I want to do a BULK INSERT from a file (.csv) whose name changes every day, so I have a variable "@FileName" in order to change the name each day.
BULK INSERT dbo.TableName
FROM @FileName
WITH ( FIELDTERMINATOR=';', ROWTERMINATOR='\\n')
But it gives me an error telling me that there's a syntax error after FROM. I have tried all kinds of punctuation (', ", `, ´, +).
It would be gratefull if some one could answer to my question.
Thanks in advance
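In case it helps: BULK INSERT does not accept a variable after FROM, so the usual workaround is to build the statement as a string and execute it dynamically (a sketch; the table and file names are illustrative):

```sql
DECLARE @FileName VARCHAR(255);
SET @FileName = 'c:\data\file_20080101.csv';  -- changes every day

DECLARE @sql NVARCHAR(1000);
SET @sql = 'BULK INSERT dbo.TableName FROM ''' + @FileName + '''
            WITH (FIELDTERMINATOR = '';'', ROWTERMINATOR = ''\n'')';

-- Dynamic SQL sidesteps the "syntax error after FROM"
EXEC (@sql);
```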
Hi,
I tried that code, but my file is saved on the D drive and it shows an error: Incorrect syntax near ' ' '.
Please give me the solution.
Thanks,
Vijay Ananth
hi
it solved my problem
Can you please tell me how I can schedule a stored procedure in SQL Server 2005?
thanks
rajeh
Hello,
I’m using bulk insert and everything works fine but I cannot get latin characters to import properly. For example, my flat data file contains Sertãozinho but the imported data displays Sert+úozinho. The “a” character changes to something with plus sign. I tried using nvarchar datatype but it did not help.
Here is my code:
DROP TABLE [DumpData]
GO
CREATE TABLE [dbo].[DumpData](
[DataField1] [varchar](255) NULL
) ON [PRIMARY]
--import data using bulk insert
DECLARE @bulk_cmd varchar(1000)
SET @bulk_cmd = 'BULK INSERT [NalcoSAPDump]
FROM ''c:\DataImport\import.dat''
WITH (FIELDTERMINATOR = '';'',ROWTERMINATOR = '''+CHAR(10)+''')'
EXEC(@bulk_cmd)
Thanks!!
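A hedged suggestion for the accented-character problem above: the file is probably being decoded with the wrong code page, and an nvarchar column does not change how BULK INSERT reads the file. The CODEPAGE option (or DATAFILETYPE = 'widechar' for UTF-16 files) controls that; which value is right depends on how the file was saved:

```sql
BULK INSERT [NalcoSAPDump]
FROM 'c:\DataImport\import.dat'
WITH (
    FIELDTERMINATOR = ';',
    ROWTERMINATOR   = '\n',
    CODEPAGE        = 'ACP'  -- interpret the file using the server's ANSI code page
);
```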
Hello,
thanks for your answer, Vijay Ananth. :D
Does anybody know something about incremental bulk? It’s just the “normal” bulk insert or there is another thing that I have to add to the bulk sentence?
I want to do a BULK from a csv file. Firstly I do a Bulk of the whole file.
But that csv file changes every time, and in order to have those new rows in my DB I would like to know if there is another way to bulk just the new rows. With that incremental bulk or something else.
At the moment I’m doing it with bulk every second. So I don’t know if somebody that is writting the file can get an error while the bulk is working.
Any suggestions will be wellcome.
Thanks
insert Excel file record in a table + SQL SERVER + C#.net
Thanks, helpful post.
Pinal one more time you saved my life, althought it is the first time that I submit you a commend! Mate you are the best! I hope to reach your level one day!
Panagiotis Lepeniotis
MCDBA,
MSc Database Professional Student!
As for the dataMate who asked about the quotations, I will suggest to edit the csv file, at least thats what I did and worked!
Cheers
Hi Pinal,
I want to batch upload from Excel to SQL Server, where I need to insert into multiple tables since there are dependencies between the tables. How do I do it? Is it possible in C#?
Thank you in advance. I would be very thankful for a quick answer, as I badly need it.
Dave B
Regarding your question:
Does BULK INSERTing into an existing table that has an index on it take longer than inserting into an existing table without an index?
The answer is no. We tried it out on a large Table (10 mil rows) into which we are loading even more rows. WITHOUT the index, the application timed out. With the index, it loaded in fine time.
thanks a lot!
exactly what i was looking for.
BULK INSERT CSVTest
FROM 'c:\CSVTest.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
I got the following error
Msg 102, Level 15, State 1, Line 2
Incorrect syntax near ‘‘’.
Msg 319, Level 15, State 1, Line 3
Incorrect syntax near the keyword ‘with’. If this statement is a common table expression or an xmlnamespaces clause, the previous statement must be terminated with a semicolon.
Can you please let me know what needs to be done in this case?
Can we do conversions while doing bulk inserts (like string to decimal, string to date, string to time, etc.)?
Any suggestions is highly appreciated! :)
I also get the error: Msg 4860, Level 16, State 1, Line 1
Cannot bulk load. The file “c:\csvtest.txt” does not exist.
How do I grant SQL access to the file?
I am running SQL 2005 on an instance I created for myself.
I tried to change the file to have access approved for ‘everyone’, but still this didn’t make any difference. I still get the file doesn’t exist error.
Thank you,
Anke
I was able to resolve.
Thank you!
I have the same problem as Kumar. Runs without errors but only every second (alternative) record gets inserted. I have tried different rowterminators, but the problem remains.
bulk insert Gen_ExcelImports from 'C:\Software\invoices\April2008\csvfile.txt'
with ( FIELDTERMINATOR = ',', FIRSTROW=2, ROWTERMINATOR = '\n')
Any help would be greatly apprciated. Thanks in advance!
I have imported a flat (csv) file into SQL Server 2005 using the wizard. I have a field called Product Code which has 3 numbers, e.g. 003. I imported it as text (default). When I open the imported table, I have a plus sign next to the Product Codes, e.g. +003. Why is it there and how do I get rid of that plus sign? I just want it to show 003.
It is particularly annoying and I want to concatenate the Product Codes with other codes to create an ID.
Please Help
Hola, your example is good.
How do I connect to a remote computer to read a file on that remote computer?
for example
BULK
INSERT dbo.tblPsoArchivosCalificacion
FROM '\\10.63.200.28\Paso\LibroDatos.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
The question is: do I need 'net use' with a userid and password to access the remote computer?
Sorry, my English is bad.
Hi All,
Appreciate help on below
I want to do conditioal sum in an SQL table as follows
Column1 column2 column3 column 4 column5
a b zzz jjj 4
b a yyy rrr 7
a a fff hhh 3
a b ccc kkk 2
b a kkk ttt 7
b a ggg lll 4
a b yyy kkk 3
what i want to do is,
For rows where column1 = 'a', add (sum) column5 grouped by column3. For rows where column2 = 'b', add (sum) column5 grouped by column4.
please help!
Krish
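If I read Krish's question right, each of the two sums is just a WHERE filter plus a GROUP BY (a sketch; the table name is assumed, and the column names are taken from the sample data):

```sql
-- Sum column5 over rows where column1 = 'a', grouped by column3
SELECT column3, SUM(column5) AS total
FROM MyTable
WHERE column1 = 'a'
GROUP BY column3;

-- Sum column5 over rows where column2 = 'b', grouped by column4
SELECT column4, SUM(column5) AS total
FROM MyTable
WHERE column2 = 'b'
GROUP BY column4;
```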
Replace ` with ‘ (single quotes)
When I run this particular stored proc all of my data comes out in the table with double quotes around it. How would I get rid of these double quotes?
Thanks
Simple and Suberb information.
Thanx.
Hi
Can you tell me if it is possible to execute this code with sqlcmd in VB.NET?
I am using express editions of VS 2008 and SQL server 2005
I get the error: Msg 4860, Level 16, State 1, Line 1
Cannot bulk load. The file “c:\csvtest.txt” does not exist.
OR
Cannot bulk load. The file “c:\csvtest.csv” does not exist.
Thanks, it was a very simple solution to meet my requirement.
I’m using the bulk insert statement
BULK INSERT Received
FROM '\\Admfs1\users\CPurnell\Responses\C0009348.iff'
WITH
(
FIELDTERMINATOR ='|',
ROWTERMINATOR =' {CR}{LF}\n'
)
This works great for one file. But what I really need to do is bulk insert all .iff files from the Responses folder.
Any suggestions?
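One way to load every .iff file in the folder (a sketch; it assumes xp_cmdshell is enabled on the server, which has security implications, and reuses the path and table from the comment above):

```sql
-- Grab the bare file names in the folder
CREATE TABLE #Files (FileName VARCHAR(255));
INSERT INTO #Files (FileName)
EXEC master..xp_cmdshell 'dir /b \\Admfs1\users\CPurnell\Responses\*.iff';

DECLARE @f VARCHAR(255), @sql NVARCHAR(1000);
DECLARE FileCursor CURSOR FOR
    SELECT FileName FROM #Files WHERE FileName LIKE '%.iff';
OPEN FileCursor;
FETCH NEXT FROM FileCursor INTO @f;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build and run one BULK INSERT per file
    SET @sql = 'BULK INSERT Received '
             + 'FROM ''\\Admfs1\users\CPurnell\Responses\' + @f + ''' '
             + 'WITH (FIELDTERMINATOR = ''|'', ROWTERMINATOR = ''\n'')';
    EXEC (@sql);
    FETCH NEXT FROM FileCursor INTO @f;
END
CLOSE FileCursor;
DEALLOCATE FileCursor;
DROP TABLE #Files;
```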
Hi
I tried to insert the data in the table using INSERT Bulk Query.
I used the Query like this:
BULK INSERT insertdatdata FROM 'D:\mani_new\standard.Dat'
WITH (
DATAFILETYPE = 'char',
FIELDTERMINATOR = ' ',
ROWTERMINATOR = '\n'
)
GO
The file is there in correct path.
But i got the following error.
Cannot bulk load because the file “D:\mani_new\standard.Dat” could not be opened. Operating system error code 3(The system cannot find the path specified.).
please give me some solution.
Its urgent.
Thanks,
Sathya.
I got the error:
Msg 4860, Level 16, State 1, Line 1
Cannot bulk load. The file “D:\test.txt” does not exist.
Plz give the solution.
Thanks
Sathya.
How do we create a table in SQL server through VB.NET code by importing the schema from a TEXT file?
g8
its a true solution
hi,
sathya
I think you are using the SQL Server client version on your PC,
so you have to put the text file on your database server at the
"D:\mani_new\standard.Dat" path.
Just do it.
hi pinale,
While working on it I am getting the error that the file doesn't exist. Can you suggest anything to be done?
Thanks
Hi,
Thank you for sharing your code, it really works well ;-)
I tried to open a text file in web but it did not work. Do you have any idea to work it around?
BULK
INSERT bod.temp
FROM ''
WITH
(
FIELDTERMINATOR = '|',
ROWTERMINATOR = '\n'
)
GO
Thanks!
hi,
Can you tell me how to load particular fields from the source file into the database?
I am using simple Notepad files which contain 46 columns,
and I want to read the input file and insert only three columns (column 1, column 15, column 46) into the tables.
Are there any commands for doing this type of file handling?
I am using SQL Server 7.
Please help me.
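Without resorting to a bcp format file, the simplest route is a staging table that takes all 46 columns, followed by an INSERT...SELECT of just the three you want (a sketch; every name here is made up, and most of the 46 column definitions are elided):

```sql
-- Staging table must match the file: all 46 columns (3 through 45 omitted here)
CREATE TABLE Staging46 (
    col01 VARCHAR(255),
    col02 VARCHAR(255),
    -- ... col03 through col45 ...
    col46 VARCHAR(255)
);

BULK INSERT Staging46
FROM 'c:\input.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- Copy only columns 1, 15 and 46 into the real table
INSERT INTO TargetTable (a, b, c)
SELECT col01, col15, col46
FROM Staging46;
```

The other standard option is a bcp/BULK INSERT format file that maps the unwanted fields to no table column, but the staging approach is easier to debug.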
I have to import a csv file into an existing table that has different column names from the originating one. How do I match differently named fields ?
Thank you
Hi, I want to upload a CSV file from an ASP.NET page, and the file should automatically be extracted into a SQL database table. The table is created; please help out.
I just wanted to say that this solution is so quick and easy. I found a ton of other way too complicated examples. This just cuts right to the quick and gets the job done.
Thanks for this excellent code snippet!
Mike
Hi Pinal,
I am attempting to use the bcp utility (via command prompt) in order to create a comma-separated text file named Inside Sales Coordinator. Here is the new table created in the Northwind database:
CREATE TABLE [dbo].[NewEmployees](
[EmployeeID] [int] NOT NULL CONSTRAINT [PK_NewEmployees] PRIMARY KEY,
,
[Country] [nvarchar](15) NULL,
[HomePhone] [nvarchar](24) NULL,
[Extension] [nvarchar](4) NULL,
[Notes] [ntext] NULL,
[ReportsTo] [int] NULL,
[PhotoPath] [nvarchar](255) NULL
) The new comma-separated text file should contains the following columns from the NewEmployees table: EmployeeID, LastName, FirstName, HireDate (date part only; no time), and Extension. Only those employees with a Title of “Inside Sales Coordinator” should be returned.
Here is what I came up with so far:
bcp "select EmployeeID, LastName, FirstName, HireDate, Extension from Northwind.dbo.NewEmployees" out C:\InsideSalesCoordinators.csv -c -t , -T -S SDGLTQATEAM\SQLExpress. Can you give me a little insight on how to populate the .csv file? Thanks
This Procedure can help in ASP.NET
CREATE PROCEDURE ImportFile
@File VARCHAR(100)
AS
EXECUTE ('BULK INSERT TableName FROM ''' + @File + '''
WITH
(
FIELDTERMINATOR = '';''
, ROWTERMINATOR = ''\n''
, CODEPAGE = ''RAW''
)' )
/*
EXEC dbo.ImportFile ‘c:\csv\text.csv’
*/
C# Code
SqlConnection cnn = new SqlConnection(
string.Format(@"Data Source=.\SQLEXPRESS;Initial Catalog=DB_Name;Persist Security Info=True;User ID=;Password=" ));
SqlCommand cmd = new SqlCommand("ImportFile", cnn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("@File", SqlDbType.VarChar).Value = txtFile.text;
cnn.Open();
cmd.ExecuteNonQuery();
cnn.Close();
Hi,
Most of the time, I search this site for answers of my SQL questions. And here I have one more….
I am using SQL Server 2005 Enterprise Edition on Windows Server 2003.
I wanted to load Double quote enclosed, comma delimited text file into SQL Server 2005. There was no header record in the file.
I used the SQL Server Import Export Wizard to perform this task. I chose Data Source = Flat File Source, selected the .csv file using the Browse button, and specified the text qualifier as a double quote ("). There are 17 fields in total in the upload file, and under the Advanced section they were named "Column 0" to "Column 16". The default length of each column was 50. I specified the length of "Column 5" as 200 because I know the data in that column is longer than 50. I checked the rest of the columns and made sure that a length of 50 was enough for the other fields.
I clicked on Finish to start uploading and I got following error:
————————————-
Error 0xc02020c5: Data Flow Task: Data conversion failed while converting column "Column 4" (26) to column "Column 4" (136). The conversion returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
(SQL Server Import and Export Wizard)
Error 0xc0209029: Data Flow Task: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "output column "Column 4" (136)" failed because error code 0xC020907F occurred, and the error row disposition on "output column "Column 4" (136)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
(SQL Server Import and Export Wizard)
Error 0xc0047022: Data Flow Task: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Data Conversion 1" (112) failed with error code 0xC0209029.)
Error 0xc0047021: Data Flow Task: SSIS Error Code DTS_E_THREADFAILED. Thread "WorkThread0" has exited with error code 0xC0209029. There may be error messages posted before this with more information on why the thread has exited.
(SQL Server Import and Export Wizard)
Error 0xc02020c4: Data Flow Task: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
(SQL Server Import and Export Wizard)
Error 0xc0047038: Data Flow Task: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Source - Giftlink_2008_csv" )
Error 0xc0047021: Data Flow Task: SSIS Error Code DTS_E_THREADFAILED. Thread "SourceThread0" has exited with error code 0xC0047038. There may be error messages posted before this with more information on why the thread has exited.
(SQL Server Import and Export Wizard)
(SQL Server Import and Export Wizard)
————————————-
I have checked the length of "Column 4" and it was 50, which was more than enough for the data in that column.
Is anyone has any idea on this?
Any help would be appreciated.
Thank you in advance.
Please help me
I am using SQL Server 2000 and 7.0. I want to do a similar operation.
Hi there
I’m facing an issue in Bulk insertion. Whenever i query the command –
BULK INSERT vijay FROM 'c:\vijay.csv' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');
sql throwing an error “ORA-00900: invalid SQL statement”.
Could anyone help me out in this issue?
Thanks in advance.
i am facing an issue that is i have text file like below format
AAAAAAAAAAAAAAAAAAAAA BBBBBBBBBBBBB CCCCCCCCCCCC
DDDDDDDDDDDDDDDDDDD EEEEEEEEEEE
AAA BBB CCC DDD EEE FFF
121 john Albert 1-1-08 ok No
121 john Albert 1-1-08 ok No
121 john Albert 1-1-08 ok No
In this format AAA, BBB, CCC ... FFF are the column headers. Now
I want to insert the text data under each header into a separate column of a SQL Server database table, meaning the AAA data should be saved in the AAA column of the SQL Server database.
Hi, my text file is quite complicated; the fields can only be separated by length. Please refer to the text I copied below as an example:
VENDORID INVNO INVDATE PONO PODATE
S0016 TB080767924/07/08BD-01/07/08
The field terminator surely cannot be the space bar. Can I use the length to separate the fields? If yes, please guide me on how.
thanks and regards
Pinal, nice and simple code, but I cannot solve the problem
——————————————————————————
Meldung 4860, Ebene 16, Status 1, Zeile 2
Massenladen ist nicht möglich. Die Datei “c:\csvtest.txt” ist nicht vorhanden.
(0 Zeile(n) betroffen)
FILE DOES NOT EXIST
——————————————————————————
There are many users with the same issue, can you advice what I need to do
many thanks
Hi,
How do I import from CSV into a new table? I tried with OPENROWSET, but it shows an error if there are any special characters in the file name (say, for example, '-').
I was getting the same problem as other here with the error reading
The file “c:\Test.txt” does not exist.
Looks like the the SQL commands will only work when the file itself is on the actual SQL server box. I was getting this error using SQL Admin on another box and as soon as I moved the file to the server itself, I could run the command from any machine but would still reference the file on the actual SQL box. Enjoy!
I’m trying to do an import from a text file using the BULK INSERT. The text file is separated by fixed width (no delimiters).
Can anybody give me an example query for this?
Hi Pinal,
How to create a CSV or Text file from a Stored Procedure?
Thanks
Sultan
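One common approach (a hedged sketch; xp_cmdshell must be enabled, and every name below is illustrative): shell out to the bcp utility with queryout, which can export the result set of an EXEC to a file:

```sql
EXEC master..xp_cmdshell
    'bcp "EXEC MyDatabase.dbo.MyProcedure" queryout "C:\output.csv" -c -t, -T -S MYSERVER';
```

The -c flag writes character data, -t, sets the field terminator to a comma, and -T uses a trusted connection.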
I found how to do this by inserting all the data into one column and then using the substring function to split the columns. It worked…
hi pinal
I have the same problem as Charles: my text file has no specific delimiter; the columns can only be separated in terms of length. How can such a text file be loaded?
thanks
I'm wanting to upload a csv file into phpMyAdmin, but you lost me after you said, "February 6, 2008"!
Can you give me any help on a 5th grade level?
Ron
Firstly, thanks for this article, it is great to see the whole code posted rather than individual bits all around.
Please could you assist me with this scenario:
I have 6 columns created in the CSVTemp table, the last two columns could have null values so I have specified this using LastName VARCHAR(40)NULL
When I execute the query, the next record is placed in the first of the NULL columns instead of the next line e.g.:
File contains information: 1,2,3,4,5
20,30,40,50,60,70
CSVTemp table Col1 Col2 Col3 Col4 Col5 Col6
1 2 3 4 5 20
I am using your example with ROWTERMINATOR = '\n', but SQL still reflects the record as above. How do I tell SQL to insert that 20 into column 1 of the next row, rather than continuing?
Okay here is my dilemma…
I have multiple csv files with data in each file. They are connected via foreign keys (with each table having its own primary key). I want to load these tables into an Oracle database using SQL. If I know the order in which the tables should be loaded, how do I go about loading the tables into the Oracle db?
E.g.
// CSV 1
Fields = ID_NUMBER | Name | Date
Data = 12345 (PK)| Hafiz | 12-12-2008 |
// CSV2
Field = ID_PRODUCT_ NUMBER | ID_NUMBER | Colour | Type
Data = 54321 (PK) | 12345 (FK) | Black| Car |
The Oracle DB has the tables setup with the same field name. I want to load CSV 1 first into Oracle Table 1 and then CSV 2 into Oracle Table 2 (must be in this order or will run into referential integrity issues.
The next step is to make this applicable to csv files with 100 lines of data each (probably using some sort of iterative process). Can you help??????????
@MD
The data you imported is stored and sorted based on the collation setting you choose when sql server was installed or when the database was created.
Read more about collation settings, then you will be able to troubleshoot the problem.
Hi,
This article was very helpful to me. However I do have one question, how would you modify this script if you have a text file that doesn’t have set number of columns. E.g.:
row1: col1,col2,col3
row2: col1,col2
row3: col1,col2,col3,col4
row4: col1,col2,col,3,col4,col5
I tried to load this file using the above script, i was able to load the data but the data couldn’t be loaded correctly.
Help please.
Thanks
Hi ,
Those who are facing the problem that the text file does not exist:
copy your file to the server where your SQL Server runs.
nice article
Hi,
Great forum! I am encountering an issue that I have always been able to overcome, but this time it has me stuck. I have a .txt file that is comma delimited. There are some data issues in the file. Normally, I would import all records into one column, then parse from there. I am using {LF} as both the column delimiter and record delimiter, so that all data will go into my table that has 1 large (nvarchar (4000)) field. When the import hits line 62007, I get an error message “Column Delimiter Not Found.” I can look at the text file in jujuEdit and see the {LF} at the end of each column.
Has anyone encountered anything like this before?
Thanks!
The command is run from the sql server, you can put the file anywhere on the network and you should no longer get the error that the file does not exist.
Did anyone ever get a solution to the text qualifiers (double quotes in the text)?
Hi Melissa
I tried to insert text which has double quotes in that text, its executed successfully with out any error.
my text file data is like this
12 I need “my” car brakes to be changed
14 ertetr etert ert etert
15 i need a land line phone in my home
if its not the case can you describe problem details.
Hi all!
my problem is that during bulk insertion, it is showing that
“You do not have permission to use the bulk load statement.”
so can anyone tell me how to give permission for bulk insertion.
Thanks .it’s realy helpfull for me
Now I do it from the server side; I want to do it from the front end. I use VB.NET 2005 and SQL Server 2005. From a form, I want to do the work when I click a button. Where do I write the code?
Pinalbhai!
It has helped a lot. Especially since Microsoft has not provided SSIS in SQL Server 2005 express edition.
Hi All,
I want to read a csv file and insert it into a table using SQL Server 2005. I have the code below, which does the insert using the OPENROWSET function. My problem is that when I give the .csv file without a header, it throws an exception like 'Duplicate column names are not allowed in result sets obtained through OPENQUERY and OPENROWSET. The column name "NoName" is a duplicate.' Can anyone solve this problem?
Dim strSQL As String
strSQL = "Select * " & _
" INTO " & DATABASE & ".dbo.[List_staging] " & _
"FROM OPENROWSET('MSDASQL','Driver={Microsoft Text Driver (*.txt; *.csv)}; DEFAULTDIR=C:\VoicenetSQL\project\tampa\Politic\" & projectfile & "; Extensions=CSV; HDR=No;','SELECT * FROM at1008.csv') "
Thanks in advance..
G.V
This may be the best tech blog I have ever seen. Keep up the good work!
Hi Pinal,
Thank’s a lot. It has solved my problem.
Regards,
Nadeem.
Hi,
I need to restore data from SQL Server 2005 to 2000. My data contains many languages (a mixture of all characters).
Are datatypes similar in SQL Server 2000 and 2005? I am running into a datatype problem.
Can anyone suggest how I restore it?
Regards,
Minchala
excellent!!!!!!
Just wanted to say thanks. This code worked perfectly for a simple program i wrote to run at a scheduled time. It saved me so much work from having to manually parse the CSV!
Hi, thanks for the post, I noted that the size of the table is too big you can reduce it?
grateful,
Edson
Hi Dave,
You are amazing, thanks for sharing your knowledge.
I need a stored proc to import data from a .txt file to a table in sql server 2000 but I need to do it using BCP command and the stored proc should also perform error handling, if possible could you please post the stored proc which performs this job.
Thanks! I’m glad your blog always comes up in one of the first google hits. You help a lot :)
It is an excellent example indeed .Thanks.
I am trying to load a CSV file from a VB.NET 2.0 app. Since the CSV file is on the user's local drive, I cannot use the BULK INSERT method, can I?
The BULK INSERT method requires the CSV file to be present on the SQL Server's local drive/folder, right?
I will appreciate if you can comment on my assumptions. Thanks.
Regards,
Mehdi Anis
Dude. You just saved my life.
Hi
I'm having a problem similar to what you have explained, but the only difference is that my data in the csv file is not separated by commas; it is all chunked together. I still want my output to be like yours. Any idea how I may be able to do that?
Thanks in advance. =)
Hi,
Excellent Code to import data from text file into Database
Thanks.
HI dave,
I need to store file content in the database; it could be a text file or an image file. How can I do that?
HI,
The above code works great for me, but I have several problems.
1] I have the CSV file with 1155585 records the format of data is
"Bhavin","Mumbai","10/2/2004",1,2,3,4,5
"Bhavin12","Mumbai12","10/2/2005",1,2,3,4,5
So what I did was replace all the quote characters with nothing, i.e. removed them.
Now the data looks like
Bhavin,Mumbai,10/2/2004,1,2,3,4,5
Bhavin12,Mumbai12,10/2/2005,1,2,3,4,5
Then I tried using the above code with a comma as the delimiter, but I can only see 1155195 records.
Please help me figure out how to proceed.
First of all, thank you for your example with bulk insert; it solved my problem. But I still have a little problem: I have special characters like (şţăîâ) in my .csv file and I'd like some help on this matter, please!
Thank you!
Excellent article!
I am importing a csv file and several fields have a very long field length (200 characters). I want to truncate it (25 characters) and add a special character (for example a tilde to advise me the data is truncated) after importing. Any suggestion would be appreciated.
Thank you.
I get this error msg on simple 6 row table(same format and file location as described above): help for this?
hello,
Looking to import a txt file that at present errors when the DTS package encounters negative values. Any suggestions?
Negative values are associated with Money.
0000003.34 will pass
00000-3.34 will fail
Thanks,
Cheif
I used the script to import a CSV into a new table in Database Explorer in Visual Web Developer. It works! Thanks for your help.
I am using your query to import CSV file to Sql Server.
But I am getting the following error: "Cannot bulk load. The file "C:\CSVTest.txt" does not exist."
What should I do now?
Any suggestions are welcome.
While runnning BULK INSERT statement, i am getting error
BULK INSERT #temptb FROM ‘/home/msrivast/temptable.txt’
go
Msg 102, Level 15, State 0
ASA Error -131: Syntax error near ‘BULK’ on line 1
Please help
Hi Pinal,
How do we load a CSV file that has embedded newlines (in varchar columns) into MS SQL Server?
For instance, how do we load the CSV file below. Note that the second record has a “newline”/”line break” in Column 2
id, reason, season
==, =====, =====
10,'cold','winter'
12,'cold
flu','autumn'
i couldn’t find anyone with an answer for dealing with CSV’s that have quotes in the text with commas within them.
The key issue being importing this:
Col1, Col2, Col3
"Joe","Smith","Sr Architect"
"Nicole","Dawson","Manager"
"Jon","Stephens, PHD","PM"
I was able to work out this solution using XML format files for SQL Server: basically, describe the delimiter (field terminator) as ","
and notice I had to escape the " character as &quot; in the format file for each field (I only listed one as the example).
Thanks
-B
I am loading latitude/longitude information into the database using the load file command. It loaded successfully, but after checking with "select * from <table name>" it shows all zeros. Any help would be appreciated. Thanks in advance.
What if I want to load hundreds of files, saved in one folder, with one script? The above script is just a toy. Please show something practical through which many people can find the solution to their problems.
Please help.
I get a large amount of information in an Excel file which I need to store in a MS SQL database. I used to import it manually, but I need to make an ASP interface that picks up the information from Excel or CSV and inserts it into the database.
is it possible?
please help me. I got following error when i run insertion script below-
USE WATCHDOIT
BULK
INSERT CSVTest1
FROM 'C:\Inetpub\wwwroot\WatchDoit\BusinessList1.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
Really Superb Pinal Sir..
I am enjoying your SQL tips daily.
Continue with your Favours…
Bye…
Hi,
insert Only one row
Plz help me
as early as possible
Hi,
I'm using bulk insert and everything works fine, but I cannot get '£' sign characters to import properly. They always change into a '?' sign. I tried using the nvarchar datatype but it did not help.
Please Help me.
Thanks
Hi,
I want to use bulk insert for a text file with the field terminator as a space. How do I use it?
Regards,
Vaibhav
Sir,
That is already done. Can you suggest a way to define the table on its own from the CSV file? I.e., my first row contains the column names; now how do I create a table using those variable column names?
tank in advance.
Mukul.
Hi Pinal,
I have used the BULK command, but I want to get the file path of the CSV file using T-SQL.
Please tell me how to get the file path by providing only the file name.
Thanks in advance….
Hi Pinal,
Your query is marvellous, but I want the same query to access a text file on an online server while the text file is placed on the client machine.
Can anyone suggest something?
Thanks in advance
Hi!!
i am facing a problem…..
i am trying to bulk insert data into a table.
the stored procedure goes like this…
====================================
CREATE PROCEDURE [dbo].[insbulk]
@circle varchar(20),@path varchar(100)
AS
bulk insert @circle from @path with (fieldterminator=',')
GO
======================================
The thing is that this is the wrong syntax.
My frontend application goes like this…
which takes input the filename ie the path,the tablename
===================================
SqlCommand cmd=new SqlCommand("insbulk",conn);
cmd.CommandType=CommandType.StoredProcedure;
cmd.Parameters.Add("@circle",this.cmbcircle.SelectedItem.ToString());
cmd.Parameters.Add("@path",this.fldg.FileName);
cmd.Parameters.Add("@insertiondate",insertiondate);
cmd.ExecuteScalar();
}
catch
{
MessageBox.Show("Error!!! Connection terminating with database");
conn.Close();
}
@samrat
First try executing that stored procedure in SQL Server Client tools.
Check if it is working fine ?
Regards
IM
Thanks
Hi
How do I transfer data from a CSV file into multiple tables, and is it possible to transfer the data into tables with constraints?
I am trying to import data from multiple text files into sql server 2005. After the data has been imported in the sq server, i want to move the files to another folder. Does anyone have a solution to this?
Thanks in advance.
Hi, this article is very helpful.
I was trying to run the same query against a SQL Server Compact Edition database,
but failed. Can you please help me with this?
Hi Abhisek,
To move file after processing, you need to use SSIS for that.
In that you can process file and later you can move/Delete that file too.
Tejas
I had the same problem. Copying the file to the DBserver and calling the file with full name worked.
…..
bulk insert dict from '\\dbserver\mydirectory\myfile.txt'
……
Hi Pinal
Hi Pinal & all-
Help please
I have a csv file that contains data like this:
"1","james","smith","2323"
How do I import this into a table without the double quotes? I want to avoid an intermediate conversion into an Excel file, since I'd like to schedule this as a job.
How would the script look if the columns are wrapped in double quotation marks? For example:
"1","James","Smith","19750101"
"2","Meggie","Smith","19790122"
"3","Robert","Smith","20071101"
"4","Alex","Smith","20040202"
How should I modify the following script?
BULK
INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
Hi,
I regularly looking at this knowledge sharing site.
Nice and very informative
* How do I import an Excel file to a remote SQL Server without using ODBC, i.e., with only a stored procedure?
Hi,
How to import data like this:
“qwe,asd,zxc”,123,sometext
The result i whant to have:
qwe,asd,zxc 123 sometext
but what i have is:
“qwe asd zxc”,123,sometext
Any thoughts?
Thanks
Sorry, the result I want to have is:
"qwe,asd,zxc" 123 sometext
:)
@Satrebla
If the ( " ) character appears at a fixed position for every value that goes into that column, then you can use the SUBSTRING function to select only the characters you want, ignoring the rest, and concatenate with other values if needed.
Regards,
IM.
@Imran Mohammed
Thanks for the reply, but the length is variable. For example:
"abc,qwe,zxc",123,txt
"qwe,asd,fgh,jkl",456,qqq
"12233,456789",rty,159
..?
Replace commas that aren't embedded between quotes with a character that won't otherwise be used, then change FIELDTERMINATOR to that new character.
You can probably also use a format (fmt) file, but I've never used one, so I'm not positive.
Hi,
I would like to know how to import a remote text file to another server in SQL Server 2005.
How or where do you write the script you are publishing? I am new to importing, but not to ASP.NET. How do you import into a table that you have already constructed with the GUI in ASP.NET?
Thanks a lot..
It is working fine
How can we insert data into SQL Server from an Excel sheet? Here I don't have ',' field or line terminators.
from
OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=c:\.xls', [$])
) AS x
Hi Praveen,
from
OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=c:\ExcelFileName.xls', [YourSheetName$])
) AS x
Thanks,
Thanks Tejas
Hi All,
How do I join two different tables on different servers?
Hi Praveen,
Is the SQL port open on the other server?
If yes, then you can use:
FROM table A
INNER JOIN
(
OPENROWSET('SQLOLEDB','ServerAddress';'User';'Password',
'select * from
table
')
) B
ON A.id = b.Id
Thanks,
Tejas
Hi,
I implemented this "Bulk Insert" in SQL 2005 and I am getting:
Msg 102, Level 15, State 1, Line 7
Incorrect syntax near ‘‘’.
my Statement ..
BULK
INSERT ccprocessorstandardpayee
FROM ‘c:\csvtest.txt’
WITH
(
FIELDTERMINATOR = ‘,’,
ROWTERMINATOR = ‘\n’
)
GO
please help me..
thnk u..
Just a note that somehow – doesn’t matter to me – I was stymied at first by “fancy” quotes or apostrophes. That is, I was inclined to copy/paste your stuff, and SQL returned syntax errors having to do with the use of NON- straight (up & down) single quotes. I happened to figure out the problem but someone else might get frustrated prematurely. Since there’s some problem having to do with Full Text searching that prevented me from importing those big (and no doubt beautiful) sample DB’s that MS makes available, this post (YOURS) is/was EXTREMELY HELPFUL. Thanks!!
Thank you very much! This was very helpful.
Hi,
I’m trying to upload a file to my database and it wont work at all, I get this message:
#1064 – You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ‘BULK INSERT tblPostcodes FROM ‘c:\postcodes.txt’ WITH ( FIELDTERMINATOR’ at line 1
when I run:
BULK
INSERT tblPostcodes
FROM ‘c:\postcodes.txt’
WITH
(
FIELDTERMINATOR = ‘,’,
ROWTERMINATOR = ‘\n’
)
GO
Any ideas? Please tell me why this is happening and what the solution is.
Your bulk insert works in the following way (on the LAN only):
Bulk Insert cv_Test from '\\Rahool\SharedDocs\Book2.txt' with
(
FIELDTERMINATOR= ',',
ROWTERMINATOR= '\n'
)
Go
Thanks this is great.
You are a true guru at sql. This has solved a massive problem.
I was trying to insert a CSV into SQL Express 2005,
which is without the import functions; this code solved that problem.
Thanks a lots!!!
Hi,
I’m new to SQL, I’m a VoIP engineer. I am trying to create a call billing database. I have a simple table like this
My VOIP server exports out call records as text (comma separated) files everday.
phone,name,callerid,dialednumber
3929,joel,3929,3454
3454,anita,3454,3929
I need to be able to upload data from text files onto the same table. Is there a way to do this and set it to automatically import the text files?
Thanks in advance,
joel
hi guys, (newbie / VS2008 Pro)
I have a table with 5 columns and a CSV with 5 columns which I want to import into the table.
I used your code above to import the CSV into the table and I get the error message:
There was an error parsing the query. [Token line number = 1, token line offset = 1, Token in error = BULK]
What does this mean?
I'm not sure if I am in the right area, but I am in the Server Explorer window; I right-clicked on the database name and then selected
'New Query' (is this right?)
Also, I can't find DTS in the \bin folder of VS2008 / SSMS.
Thanks in advance
viv (frustrated)
Hi,
Please look at this:
Let me know if it helps you.
Thanks,
Tejas
Hi Tejas,
I actually need help with the error below whenever I try to run the query:
"There was an error parsing the query. [Token line number = 1, token line offset = 1, Token in error = BULK]"
please help
(VS2008 Pro)
thanks
viv
Hi,
Could you give me some sample data?
Then I can try it on my own and let you know.
Thanks,
Tejas
Is there a way using DTS or SSIS to easily pick up and import in reoccurring delimited files.
They are very simple files with about 10 pipe delimited fields that very easily import in manually. They are placed in a directory by another business process and contain a unique DateTimeStamp filename. All of the current current BCP or Import tasks I see in either 2000 or 2005 force you to select a specific filename rather than a wild card. I understand that I will have to deal with moving the files also as they are processed which there appears to be a file operations task I could use. I thought for sure that this would be a commonly needed slam dunk task to perform in DTS or SSIS, but right now feel like just writing a small custom app. to do it. Any insight you have to offer would be greatly appreciated. Thanks!
Hi there,
Sorry, I'm new to SQL Server.
I was wondering about loading an ASCII file into a SQL Server table with this code:
BULK
INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
Does 'c:\csvtest.txt' refer to a file on the same filesystem as the SQL Server, or can they be on different boxes?
Thanks
Dear Pinal,
I still have a problem: I want to use the same bulk insert query, passing the filename as a parameter, since I don't want to hard-code it in the query. Is this possible?
declare @filename varchar(100)
set @filename = '''' + 'C:\test.txt' + '''';
bulk insert vishu_test
from @filename
with
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
@ Vishwanath,
Yes, of course it is possible. What you need to do is store the whole script in another variable and execute that variable, something like this:
declare @SQLCMD varchar(1000)
declare @filename varchar(100)
set @filename = 'C:\test.txt'
set @SQLCMD = 'bulk insert vishu_test
from ''' + @filename + '''
with
(
FIELDTERMINATOR = '','',
ROWTERMINATOR = ''\n''
)'
Print @SQLCMD
Exec (@SQLCMD)
The same can be done with sp_executesql:
declare @SQLCMD nvarchar(1000)
declare @filename nvarchar(100)
set @filename = 'C:\test.txt'
set @SQLCMD = 'bulk insert vishu_test
from ''' + @filename + '''
with
(
FIELDTERMINATOR = '','',
ROWTERMINATOR = ''\n''
)'
Print @SQLCMD
exec sp_executesql @SQLCMD
This method is often recommended, as sp_executesql helps guard against SQL injection.
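The replies above build the statement as a string because BULK INSERT will not accept a variable in its FROM clause. As a rough sketch of the same string-building idea (the table name and path are just the examples from the thread, and the escaping shown is an assumption for illustration), in Python:

```python
def build_bulk_insert(table, path):
    """Compose a BULK INSERT statement with the file path embedded
    in single quotes, doubling any quote that appears in the path."""
    safe_path = path.replace("'", "''")  # escape embedded single quotes
    return (
        f"BULK INSERT {table} "
        f"FROM '{safe_path}' "
        "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n')"
    )

print(build_bulk_insert("vishu_test", r"C:\test.txt"))
```

The resulting string would then be sent to the server exactly as the PRINT/EXEC pair above does in T-SQL.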
Hi,
I wonder if anyone can help. I'm trying to do a bulk data import into MS SQL, but I want the file name included in one of the columns. Also, I'm trying to get the importer to import any .txt file in the import folder. Is this possible? Here is my script:
BULK
INSERT orders
FROM 'c:\imp\test3.txt'
WITH
(
firstrow=2,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
Is there a wildcard for the filename? Something like FROM 'c:\imp\*.txt'?
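BULK INSERT has no wildcard support, so a common workaround is to expand the pattern on the client and issue one statement per file. A hedged Python sketch of that loop; the folder, pattern, and table name here are illustrative only:

```python
import glob
import os


def bulk_insert_statements(folder, pattern, table):
    """Expand a filename pattern and return one BULK INSERT
    statement per matching file, sorted for a repeatable order."""
    stmts = []
    for path in sorted(glob.glob(os.path.join(folder, pattern))):
        stmts.append(
            f"BULK INSERT {table} FROM '{path}' "
            "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n')"
        )
    return stmts
```

Each statement would then be executed in turn; the same idea also answers the earlier question about loading csvtest1.txt, csvtest2.txt, csvtest3.txt into one table.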
Hello,
Using this I can insert the contents of a CSV file into SQL, but the last row is not added to the table:
BULK INSERT portf FROM 'E:\portfolio\WebSite2\grouped\2007\1\EQ020107.CSV'
WITH (FORMATFILE='C:\Documents and Settings\user\portfol.fmt', FIRSTROW=2)
Is there any solution for this problem?
This is really a very simple a good article. Informative indeed.
Thanks for the help and keep up the good work!
Good Luck!
ComputerVideos.110mb.com/
Pinal Hi,
Based on your example how do you insert multiple txt files
Like
C:\csvtest1.txt
C:\csvtest2.txt
C:\csvtest3.txt
to the csvtest table in sql
Thank you very much.
Oded Dror
Thanks! This worked splendidly.
This is a great example to start with. I am facing only one problem:
How can I skip the header while inserting CSV or tab-separated values? My TXT files contain header information as well.
That would be a great help. Thanks.
Hi Pinal
I followed your bulk insert and it worked perfectly. But I tried a couple of other things that did not work for me. Basically I want to auto-increment the primary key by 1, instead of storing 1, 2, 3… in the CSV file.
Eg: my csv file looks like this
James,Smith
Meggie,Smith
Robert,Smith
Alex,Smith
and my table looks like this
csvinsert(id int identity(1,1), fname varchar(20), lname varchar(20), primary key(id))
Now when I follow your commands it gives me a data-conversion error, as it tries to insert a string into the id column. What can I do to make this work?
If I have a file:
"1","James","Smith","19750101",,
"2","Meggie,Smith","19790122","A",
"3","Robert,Smith","20071101","B"
"4","Alex","Smith","20040202",,
How can I import this into SQL tables?
Thanks,
Thank you very much, pinaldave your site is very appricated the fresh candidates also
Hi,
Try like this:
BULK INSERT TmpStList FROM 'c:\TxtFile1.txt' WITH (FIELDTERMINATOR = '","')
Ref :
Thanks,
24×7
You need to use Format File to import CSV file
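Besides a format file, another option for quote-wrapped CSV is to rewrite the file without the quotes before loading, since Python's csv module understands quoted fields (including commas inside quotes). A sketch under the assumption that tab never occurs in the data, so tab can serve as the new field terminator:

```python
import csv


def dequote_csv(src, dst):
    """Rewrite a quoted CSV as tab-delimited so a plain
    FIELDTERMINATOR = '\\t' BULK INSERT can load it."""
    with open(src, newline="", encoding="utf-8") as fin, \
         open(dst, "w", newline="", encoding="utf-8") as fout:
        writer = csv.writer(fout, delimiter="\t",
                            quoting=csv.QUOTE_NONE, escapechar="\\")
        for row in csv.reader(fin):  # csv.reader strips the quotes
            writer.writerow(row)
```

Commas inside quoted fields survive intact because they are no longer the field terminator.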
Awesome article , exactly what I wanted.
hi pinal,
Please help me insert data (multiple rows) into a CSV using VBA.
Thanks in advance.
Please help me, yaar.
Hi
You do not need to do anything. Just design a table, right-click, click Import Data, select the CSV file, and that's it. No need for programming and such if it's one time only.
hi there,
Can you tell me how to update the database from a text file? The text file I have is a buffer (log file) from a fingerprint machine. I don't want to save a text file every time I collect data from the device. Should I make a trigger, or is there another way?
Another thing: how do I insert the data into the DB directly from the device?
Thank u a lot, my friend……..
Explained in a simple manner. Good one
Please help, urgent:
While running a BULK INSERT statement, I am getting the error:
Incorrect syntax near the keyword 'with'. If this statement is a common table expression or an xmlnamespaces clause, the previous statement must be terminated with a semicolon.
Why can we not do a bulk insert into temporary or variable tables?
Hi,
I have to load a table with 600,000 records into Excel. How do I do it?
Can anyone help me load, through bulk insert, a file that has a date and time appended to its name, with the first part of the name remaining constant and the date-time part varying every day?
something like this:
bulk insert dbo.tablename
from 'filename*.txt'
etc.
here the filename is the first part
and
datetime part denoted by * is the variable part
@DNJ
Definitely I would go for DTS / SSIS. That is way faster than any other tool, faster than Database Engine.
~ IM.
@Hasan
You need to write dynamic SQL.
Using dynamic SQL, first prepare the bulk insert script with the proper file name.
Consider the script below as a sample for your requirement:
Declare @sqlcmd1 nvarchar(1000)
set @sqlcmd1 = 'bulk insert dbo.tablename
from ''filename' + convert(varchar, datename(yyyy, getdate())) + convert(varchar, datepart(mm, getdate())) + convert(varchar, datename(d, getdate())) + '.txt'''
Execute sp_executesql @sqlcmd1
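The reply above concatenates the date parts in T-SQL; the same file name can equally be computed in any client language before the BULK INSERT string is built. A small Python sketch; the prefix, the .txt extension, and the zero-padded yyyymmdd format are assumptions about the actual file naming:

```python
from datetime import date


def daily_filename(prefix, d=None):
    """Return a date-stamped file name such as filename20240131.txt,
    using a zero-padded yyyymmdd stamp; defaults to today's date."""
    d = d or date.today()
    return f"{prefix}{d:%Y%m%d}.txt"
```

The returned name would then be spliced into the BULK INSERT string exactly as in the dynamic-SQL samples earlier in the thread.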
~ IM.
Hi, thanks.
Your code really helps.
Thanks
Hi,
Thanks for the post. It was very helpful.
I tried it and it is working, but I need to do it for only the desired columns (not all of them).
Can anybody advise me?
Hi Prashanti,
You have to explore using FORMATFILE = 'format_file_path' with BULK INSERT to load only the desired columns.
From the documentation:
A format file should be used if:
1. The data file contains greater or fewer columns than the table or view.
2. The columns are in a different order.
3. The column delimiters vary.
4. There are other changes in the data format.
Format files are typically created by using the bcp utility and modified with a text editor as needed. For more information, see the bcp utility documentation.
Regards,
Saswata
Can someone please help me? It keeps telling me "incorrect syntax", whereas I'm using the exact command:
bulk insert dbo.Orders
from ‘C:\Data\orders.txt’
with
(
FIELDTERMINATOR = ‘,’,
ROWTERMINATOR = ‘\n’
)
Incorrect syntax near ‘‘’.
Don't know what's wrong.
Can someone please help me with the following? I have a csv file that I'm trying to load into SQL.
The 1st line in the file contains IDs, the 2nd line user accounts, and the remaining lines contain the change info.
Example:
1234,2586
dom\hope,dom\newberry,dom\ksayr,dom\farley
11111,1,23
11111,2,187
11111,3,9687
I broke it down into three separate bulk inserts. However, I'm having problems with the user-accounts insert.
CREATE TABLE #USER
(
#1 varchar (100)
)
BULK INSERT #USER
FROM 'C:\Documents and Settings\Nakisha\FIRST\1064PBF01.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
FIRSTROW = 2,
LASTROW = 2
)
With the script above, "dom\hope,dom\newberry,dom\ksayr,dom\farley" gets displayed in the table.
Is there a way to have each user account display in a different row in the table?
Thanks for your help!
@imran;
Delete all the single quotes and type them again. They should not be the curly "smart" quote characters
‘ and ’
but the plain straight quote character
'
Hi,
I want to know why the following script runs in SQL but not in T-SQL:
——————————————————-
DECLARE @tblName varchar(30)
SET @tblName = CONVERT(VARCHAR(20), GETDATE(), 112) + 'Table'
DECLARE @sql nvarchar(4000)
SELECT @sql =
'CREATE TABLE "' + @tblName + '"
(
ID VARCHAR(15),
Name VARCHAR(15)
)'
EXEC(@sql)
go
——————————————————-
It gives the error:
Msg 170, Sev 15: Line 1: Incorrect syntax near '20090714Table'. [SQLSTATE 42000]
http://blog.sqlauthority.com/2008/02/06/sql-server-import-csv-file-into-sql-server-using-bulk-insert-load-comma-delimited-file-into-sql-server/
A CTaskContext is the main entry point of a distributed component and maps to a RTT::TaskContext. More...
import "rtt/transports/corba/TaskContext.idl";
A CTaskContext is the main entry point of a distributed component and maps to a RTT::TaskContext.
Definition at line 33 of file TaskContext.idl.
Activate this component.
Add a one-way peer connection.
Cleanup this component.
Configure this component.
Create a two-way peer connection.
Connect all compatible and equally named data ports with another CTaskContext's data ports.
Connect all compatible and equally named services with another CTaskContext's services.
Destroy a two-way peer connection.
Get a peer this task is connected to.
Get a list of all the peers this task is connected to.
Get a service.
Use 'this' as the name to get the task context's own service provider
Get a required service.
Has this task a peer with given name ?
Is this component in a Fatal error state ?
Is this component in a RunTime error state ?
Is this component's ExecutionEngine active ?
Is this component configured ?
Is this component running ?
Access to the Data Flow ports.
Remove a one-way peer connection.
Asks the component to transition from an Exception state to the Stopped state.
Start this component.
Stop this component.
http://www.orocos.org/stable/documentation/rtt/v2.x/api/html/interfaceRTT_1_1corba_1_1CTaskContext.html
The Storage Team Blog about file services and storage features in Windows and Windows Server.
Several newsgroup customers have recently reported Explorer refresh issues when connecting to shared folders using a DFS path. For example, if you connect to a shared folder directly (without using the namespace) and create a file or rename a file in a shared folder, the change is evident immediately. However, when you use the shared folder’s DFS path to make the same type of change, you have to refresh the window to see the change. Turns out this is an Explorer issue fixed in Windows XP SP2. A change to the registry is also required. Check out for details.
--Jill
http://blogs.technet.com/b/filecab/archive/2006/03/22/422547.aspx
WordPress Cron is one of the most useful features that you’ll want to learn and understand if, like me, you spend a great deal of time working with WordPress.
Being able to run certain functions on a tight schedule is essential for any CMS and WordPress has a set of functions which help make this process very simple and almost effortless.
In this article I will cover the following WordPress Cron features:
You are probably familiar with the term ‘Cron’ as it relates to the time-based scheduler in Unix systems and although WordPress’ Cron is different; the main idea behind it is the same.
Some examples of how WordPress uses its Cron system internally are checking for theme and plugin updates, and even checking whether there are posts ready to be published.
If you are familiar with Unix’s Cron, you probably think that WordPress’ Cron is always on the lookout for new tasks and running them as they come. This is far from the truth and I’ll explain why shortly.
WordPress’ Cron runs when a page is loaded, whether it’s a front-end or back-end page. In other words, when a page is loaded on your website, WordPress will check if there are any tasks or events that need to run and execute them. If you are thinking this is not ideal, you are absolutely right.
If you happen to have a website that doesn’t get too much traffic and you have a task that needs to be executed at a precise time, WordPress will not know the task is due until someone visits your website. Even if it happens to be a search engine bot crawling your website.
There are two flavors of Cron events that you can schedule with a few lines of code:
Scheduling a recurring event requires that you create a custom ‘Action’ which must also be registered with Cron. Once the Cron runs, the function attached to the custom ‘Action’ you created earlier is executed.
Let’s take a look at the following example where we are going to be deleting post revisions on a daily basis.
First we create our custom ‘Action’ which will have attached to it the function we want to run when the hook is called by Cron.
<?php
// delete_post_revisions will be called when the Cron is executed
add_action( 'delete_post_revisions', 'delete_all_post_revisions' );

// This function will run once 'delete_post_revisions' is called
function delete_all_post_revisions() {
    $args = array(
        'post_type'      => 'post',
        'posts_per_page' => -1,
        // We don't need anything else other than the Post IDs
        'fields'         => 'ids',
        'cache_results'  => false,
        'no_found_rows'  => true
    );
    $posts = new WP_Query( $args );

    // Cycle through each Post ID
    foreach( (array)$posts->posts as $post_id ) {
        // Check for possible revisions
        $revisions = wp_get_post_revisions( $post_id, array( 'fields' => 'ids' ) );

        // If we got some revisions back from wp_get_post_revisions
        if( is_array( $revisions ) && count( $revisions ) >= 1 ) {
            foreach( $revisions as $revision_id ) {
                // Do a final check on the revisions
                if( wp_is_post_revision( $revision_id ) ) {
                    // Delete the actual post revision
                    wp_delete_post_revision( $revision_id );
                }
            }
        }
    }
}
For scheduling the recurring event we make use of the wp_schedule_event( $timestamp, $recurrence, $hook, $args ) function, which takes 4 arguments: the timestamp of the first run, the recurrence interval, the hook to fire, and an optional array of arguments passed to the hook. The built-in recurrence values are 'hourly', 'twicedaily' and 'daily'. We'll see how to create our own time intervals later.
First we make sure the event has not been scheduled before and if it hasn’t, we go ahead and schedule it.
<?php
// Make sure this event hasn't been scheduled
if( !wp_next_scheduled( 'delete_post_revisions' ) ) {
    // Schedule the event
    wp_schedule_event( time(), 'daily', 'delete_post_revisions' );
}
Note that you can also tie this snippet of code to an action. If you are a plugin writer, you could set up the scheduled event to run the first time the plugin options page is visited. For a much simpler example, we are going to tie it to WordPress' init action.
<?php
// Add function to register the event on WordPress init
add_action( 'init', 'register_daily_revision_delete_event' );

// Function which will register the event
function register_daily_revision_delete_event() {
    // Make sure this event hasn't been scheduled
    if( !wp_next_scheduled( 'delete_post_revisions' ) ) {
        // Schedule the event
        wp_schedule_event( time(), 'daily', 'delete_post_revisions' );
    }
}
Now that you know how to schedule recurring events, let’s take a look at creating a single event which will never run again until it is rescheduled.
Just as its name suggests, a single event is one that runs once and then it stops. This single event can still be rescheduled again if needed.
The concept behind it is the same as the recurring events. First you register a custom hook which is called by Cron when it runs on the server. Once Cron calls the hook, its function is executed and that’s basically how you get things done.
As an example, we are going to set an expiration date for posts. Posts will expire 30 days after being published. We are going to hook into publish_post so that we can schedule our single event as soon as the post is published and the countdown begins.
Setting up the function which will delete the post after 30 days.
<?php
// delete_post_after_expiration will be called by Cron.
// We are going to be passing the Post ID, so we need to
// specify that one argument is passed to the function.
add_action( 'delete_post_after_expiration', 'delete_post_after_expiration', 10, 1 );

// This function will run once 'delete_post_after_expiration' is called
function delete_post_after_expiration( $post_id ) {
    // Takes care of deleting the specified Post
    wp_delete_post( $post_id, true );
}
Pretty simple, right? Now we need to schedule the event once the post is actually published. To accomplish this we use the wp_schedule_single_event( $timestamp, $hook, $args ) function, which takes 3 arguments: the timestamp at which to run, the hook to fire, and an optional array of arguments.
Here is a quick look at how all these actions and hooks are put together.
<?php
// schedule_post_expiration_event runs when a Post is published
add_action( 'publish_post', 'schedule_post_expiration_event' );

function schedule_post_expiration_event( $post_id ) {
    // Schedule the actual event 30 days from now; the timestamp
    // must be absolute, hence time() + the offset
    wp_schedule_single_event( time() + 30 * DAY_IN_SECONDS, 'delete_post_after_expiration', array( $post_id ) );
}
We are using some time constants that WordPress has in place to make our lives easier. For more information on these constants, you can go to “Using Time Constants“, but here is a quick overview:
MINUTE_IN_SECONDS = 60 (seconds)
HOUR_IN_SECONDS = 60 * MINUTE_IN_SECONDS
DAY_IN_SECONDS = 24 * HOUR_IN_SECONDS
WEEK_IN_SECONDS = 7 * DAY_IN_SECONDS
YEAR_IN_SECONDS = 365 * DAY_IN_SECONDS
Now that you know how to schedule recurring and single events, it’s also going to be useful to know how to un-schedule these events.
You might be wondering, why would you want to un-schedule events? There is a good reason, particularly if you include some sort schedule events in your plugins.
Crons are stored in the wp_options table, so if you simply deactivate and delete your plugin, WordPress will still try to run your events even though the plugin is no longer available. Having said that, please make sure you un-schedule events properly within your plugin or custom implementation.
Un-scheduling Cron events is relatively easy; all you need to know is the name of the hook and the next scheduled time that particular Cron is supposed to run. We are going to use wp_next_scheduled() to find when the next occurrence will take place, and only then can we un-schedule it using wp_unschedule_event().
Considering our first example, we would un-schedule the event the following way.
<?php
// Get the timestamp of the next scheduled run
$timestamp = wp_next_scheduled( 'delete_post_revisions' );

// Un-schedule the event
wp_unschedule_event( $timestamp, 'delete_post_revisions' );
It is possible to set custom Cron intervals which you can use when scheduling events. To do so, we just need to hook into the cron_schedules filter and add our own. Let's take a look at adding a custom interval set to run every 10 minutes.
<?php
// Add custom cron interval
add_filter( 'cron_schedules', 'add_custom_cron_intervals', 10, 1 );

function add_custom_cron_intervals( $schedules ) {
    // $schedules stores all recurrence schedules within WordPress
    $schedules['ten_minutes'] = array(
        'interval' => 600, // Number of seconds; 600 is 10 minutes
        'display'  => 'Once Every 10 Minutes'
    );

    // Return our newly added schedule to be merged into the others
    return (array)$schedules;
}
Using WordPress’ Cron couldn’t be any easier and it is a very nice and interesting tool which is sure to help you make your plugin more robust. Learning all these functions and putting them into practice with real world applications is the best way to master WordPress’ Cron for scheduling events.
https://www.sitepoint.com/mastering-wordpress-cron/
Considerations (ISSN: 1066-4920) is an independent magazine written for all who are sensitive to the seasonal rhythms of the Earth and the apparent movement of the planets about her. It provides a forum in which there is an opportunity for new and old ideas to be presented, questioned and refined. The magazine, MasterCard, American Express) Circulation : Wendy Robinson Editorial offices: Goldens Bridge, New York
©2004 COPYRIGHT: CONSIDERATIONS, INC. All rights reserved. Reproduction in any form is prohibited except with prior written permission of the Publisher. Printed in the United States by Allen Press, Lawrence, Kansas
CONSIDERATIONS Volume XIX Number 1 February – April 2004
CONTENTS Locality Astrology Charles A. Jayne
3
Neptune & the South Sea Bubble John W. Gross Jr
16
The Arrest of Saddam Hussein Nicole Girard
50
Will I be Able to Join the Society? Ruth Baker
55
More Time Twins Ken Gillman
57
Uranus Treading on Your Toes Prier Wintle
68
Fate & Fixed Stars Barbara Koval
76
Random Notes on Uranus & Neptune Michael Zizis
81
April in Paris & Elsewhere Ken Paone
87
Astrology & Quantum Physics Martin Piechota
90
These Considerations 2
Let’s Consider 75
Who? 97
These Considerations
JANUARY now, February when this gets to you. We’ve begun another new cycle: the q has swung north again and i is in n. These are new beginnings but then Life itself is Change and change really has no need for the calendar, for star gazing or for numerical calculation. It has been bitterly cold these past few days here in America’s north east. According to the newspapers, we’re in an icebox. Awoken at night, snug under piled blankets and thankful for the wonders of modern central heating, we listen to the howling January wind and hear it batter against our insulated windows. u’s great primal forces are still very real, today’s modern conveniences notwithstanding. His hungry fangs of ice and freezing cold are a seasonal reminder that there’s something more, far more, to this innascible planetary god than we tend to see in most beginners’ texts. We have several gems in this issue of Considerations. There’s a never previously published article by Charles Jayne, which was lost for fully eighteen years. Charles handed it to your editor for publication in this magazine the last time they met, a day or so before Charles went into the hospital where he died on 31st December 1985. It should then have been placed in the “For Next Issue” folder but was inadvertently filed elsewhere. Its discovery now was purely serendipitous, the result of going through old files to answer a query from Larry Ely. Despite this unseemly delay there’s nothing out-of-date in the article’s content, to the very end our good friend’s intellect remained at a high level. We are delighted to include a carefully considered analysis of the astrology associated with the South Sea Bubble by John Gross. As he points out, the reek of o permeates that financial upheaval throughout, and there are clear ties between then and the recent bursting of the internet bubble. Ken Lay and his cohorts would have been very much at home in 18th century London.
We are also thrilled to include writings by Nicole Girard, Ruth Baker, Prier Wintle, Barbara Koval, Michael Zizis, Ken Paone and Martin Piechota, all of whom provide superb offerings. Although this is just the start of our 19th issue, it is the commencement of the 21st year since Considerations first appeared (the Premier issue came out on 8th December 1983) and we continue to be excited and thankful for our longevity, your support and the opportunity we have to provide you with the work of some of the very best writers on matters astrological. —Enjoy
Locality Astrology CHARLES A. JAYNE
THE CORRECT JOHNDRO CHART
What can one do if one’s client has no time of birth? One can, of course, cast a Solar Chart which is what is often done. Nor is a Solar Chart without merit. Fifty-six years ago L. Edward Johndro published his book, The Earth in the Heavens (1929). In that book he brought the zodiac to earth by picking a baseline that he considered to correspond to 0º a. He pointed out that if one measures the slow precession of the equinoxes along the Equator instead of along the Ecliptic, this slower “terrestrial precession”, as I call it, would cause the baseline to move westward. At any given time each place on earth would have a "Locality Midheaven" and a "Locality Ascendant". According to his baseline, as of 1930.0, the MC of the Greenwich Meridian was 1º19’s (corresponding to a RAMC of 29º10’). Using the Locality MC one could also cast a Locality Horoscope that had nothing to do with the time of day of birth! This horoscope has come to be known as the Johndro Chart. In their book Mundane Astrology the English astrologers Michael Baigent, Nicholas Campion and Charles Harvey describe many other such efforts by other astrologers in Europe and in America. Thus the subject is a complex one. A number of these other astrologers (David Williams, A. Parsons, and K. O. Spencer) as well as Johndro, gave the location of the Great Pyramid of Giza, 31º08’ East, as especially significant. This was due to the allegation of David Davidson in his well-known book The Great Pyramid that that pyramid is at the center of the land mass of the earth. However his assertion was erroneous. If we divide the earth into two hemispheres such that the most land mass is in one, the principal hemisphere, it has been determined that the "pole" (center) of that principal hemisphere is actually virtually on the 0º (Greenwich) meridian and a little over 47º north near Nantes, France.
Therefore the Greenwich meridian is not an arbitrary one, and it provides us with a real basis for a "Zodiac on Earth". Oddly enough a moving terrestrial zodiac, as at first put forward by Johndro, corresponds to a fixed or Sidereal Zodiac in the heavens. On the other hand a fixed Terrestrial Zodiac, as proposed by most astrologers, corresponds to the moving Tropical Zodiac in the Heavens!

Dr. Walter Gornold, better known as the English astrologer Sepharial, was the earliest advocate of the first point of such a fixed tropical and terrestrial zodiac corresponding to the Greenwich meridian. But he made one serious mistake in using celestial longitude instead of right ascension for measuring distances east and west on earth. Since he had real knowledge of astronomy this was a surprising error. For terrestrial longitude, measured along the equator on earth, is the same as right ascension, also measured along the equator but in space. This mistake was not made by Johndro.

In June 1950, in American astrologer Margaret Morrell's superb magazine Modern Astrology, Johndro revised his baseline radically to a fixed one, which commenced at the Greenwich meridian as 0ºa (Tropical). Distances East and West from that baseline were measured along the Equator and then converted into the equivalent Ecliptic longitudes. It took much courage to come out and admit that after twenty years of additional testing his first baseline had been wrong. Unfortunately a number of astrologers had adopted that first baseline uncritically. Some of them were quite unwilling to abandon it. One astrologer went so far as to allege that Johndro went back to his original baseline! Such a claim is really amazing. Johndro died in November 1951 (within half an hour of his wife), so it would have been most extraordinary had he reverted to his earlier baseline so soon after rejecting it! Furthermore his partner of fifteen years, W. Kenneth Brown, was my best friend, and he certainly would have told me had Johndro displayed such a mental aberration. My own work on his later baseline confirms its correctness to my satisfaction. The title of Johndro's article of June 1950 was Where is Greenwich?
The relevant excerpt from it has been published in my book An Introduction to Locality Astrology on pages 34 to 36. In Johndro's article, and in others that followed in that magazine, he maintained that a major reason for not having detected his mistake sooner was his discovery of the Electrical Ascendant. He linked the Midheaven "angle" with gravitation, the conventional Ascendant with the earth's magnetic field (thus this was the M Ascendant), and the "new" Ascendant with the earth's electric field. He found that no other Baseline accounted for all events, since the E (electrical) Ascendant was missing!

I had discovered this "Third Angle" at about the same time he did, naming it the Vertex. But since he published on it before I did, the credit for the discovery belongs to him. Before his death I corresponded with him about all of this, pointing out that in directing the "angles" only the other end of this new "angle" actually worked. In his reply to me he said that, on the basis of its Oblique Ascension, I was right. The anti-Vertex, his E-ascendant, is always due east, which the regular M-ascendant so seldom is. The Vertex is due west, where the q every day crosses the east-west and perpendicular local Prime Vertical plane. Space has three dimensions, which locally are defined by the north-south meridian and east-west Prime Vertical, both of them being perpendicular to the horizon. My present article is not about the Vertex angle, the most karmic of the three angles, which has so much to do with what you "bring in", for weal or woe, from previous lives.

As Johndro did, we may correlate u with the MC; o, the planet of magnetism, with the usual Ascendant; and i, the planet of electricity, with the Vertex. This was his GEM or gravity-electricity-magnetism. Certainly we all should be studying the impersonal and largely unconscious Vertex in all of the charts we do.

How does one cast a correct Johndro Chart? I will first take my own case, assuming that I know only my date of birth on 9th October 1911 in Jenkintown, Pennsylvania at 75º07.5' West and 39º54.2' North (Geocentric). I shall assume that the time was unknown. Not only is the celestial longitude of the Greenwich meridian just 0º00'a, but its right ascension (RA) is also 0º00', which, for purposes of subtraction from it, can also be written as 359º60'. Since I was born west of Greenwich we must subtract that 75º07.5' from 359º60', which gives 284º52.5' as the RAMC of Jenkintown. This is equivalent to a celestial longitude of 13º41.5'¦, rounded to 13º42'¦, which will be the "fixed" MC of all towns and localities on that north-south meridian.

To find the longitude that corresponds to a given right ascension of the MC (RAMC), look in any good Table of Houses. Any such table will give the sidereal time (the RA of the mean q), the RAMC, and the longitude of the MC which is equivalent to that RAMC. As the time of birth is assumed to be unknown we use True Local Noon, when the q is just at the MC, so that the maximum error either way does not exceed twelve hours. True Local Noon occurred at 11:48 AM EST with the q at 15º16'z.
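The locality-MC arithmetic just described (RAMC of the birth meridian from the Greenwich 0° Aries baseline, conversion of that RAMC to its ecliptic equivalent, then adding the Sun's arc from 0° Aries) can be sketched in Python. This is an illustrative sketch only: the conversion function uses the standard zero-latitude spherical-astronomy formula in place of Jayne's Table of Houses lookup, and the obliquity constant is an assumed approximation.

```python
import math

OBLIQUITY = 23.44  # mean obliquity of the ecliptic in degrees (assumed approximation)

def ra_to_ecliptic_longitude(ra_deg):
    """Ecliptic longitude equivalent to a right ascension, at zero celestial latitude."""
    ra = math.radians(ra_deg % 360.0)
    eps = math.radians(OBLIQUITY)
    lon = math.atan2(math.sin(ra) / math.cos(eps), math.cos(ra))
    return math.degrees(lon) % 360.0

def johndro_mc(west_longitude_deg, sun_longitude_deg):
    """Johndro MC for an unknown birth time: locality MC of the birth meridian
    (Greenwich = 0 Aries baseline) plus the Sun's ecliptic arc from 0 Aries."""
    ramc = (360.0 - west_longitude_deg) % 360.0   # RAMC of the birth meridian
    locality_mc = ra_to_ecliptic_longitude(ramc)  # its ecliptic equivalent
    return (locality_mc + sun_longitude_deg) % 360.0

# Jayne's example: Jenkintown, 75 deg 07.5' West; noon Sun at 15 deg 16' Libra (195.267 deg)
print(ra_to_ecliptic_longitude(360.0 - 75.125))  # ~283.7 deg, close to 13 deg 42' Capricorn
print(johndro_mc(75.125, 195.2667))              # ~119.0 deg, close to 28 deg 58' Cancer
```

The small residual differences from Jayne's rounded values come from table rounding and the assumed obliquity.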
Counting its distance from 0ºa, as we must, this is 7 signs plus 15 degrees and 16 minutes. We always add this arc distance of the q from 0ºa to the longitude of the Locality MC. This is 195º16' + 13º42'¦, which equals 28º58'f as the MC of the Johndro Chart. This in turn gives an Ascendant at 24º48'z and a Vertex at 29º05's. The w, whose position is quite unreliable, is at 5º02's—my natal w is actually at 11º26's.

Many would find the elevated o A the MC from the 9th House side quite significant for a professional astrologer who has had books and many articles published. That o also squares the Ascendant. Equally striking is the opposition of i to the MC from the end of the 3rd House, which also squares the Ascendant. Therefore that i S o dominates those two "angles". i, a singleton zodiacally in a "Bucket" pattern, is also trine the Vertex. The 11th House r in h rules the z Ascendant, is widely semisquare it, and is closely semisquare the MC—friendships have been especially valued by me. In my actual chart the w is precisely triseptile to my q at 15º43'z, such that their midpoint at 28º35'f is not only 1' from a semisquare to r (r = q/w), but it is right on the MC of this Johndro chart!

Charles Jayne Jr. Natal chart 10:39:30 PM EST 9th October 1911 Jenkintown PA 39N54 75W07'30"
Johndro chart Equivalent of 6:56:51 AM EST 9th October 1911

My wife, who has had 36 years in astrology, says this Johndro chart fits me well. The MC of this Johndro chart trined my i by direct solar arc at the beginning of February 1968, and set off their natal opposition. This was a little over four months after a long unsuccessful trip with my wife and the sudden loss of my job in Wall Street a month later. By converse solar arc i moved back to the trine of the MC (again setting off their opposition) at the middle of October 1969, two and a half months after a sudden loss of another and more important position in Wall Street as a Technical Analyst.

Since my natal q is actually 27' later than 15º16'z, the correct Johndro MC must also be 27' later, at 29º25'f. This fits better for the 1967 journey and job loss. u, coming from the 7th House and ruling the 3rd House, reached the correct Johndro MC by direct solar arc in October 1981, at the time of the failure of a proposed business partnership with a man that involved joint writing for him by my wife and myself. The q's trine to the correct MC by solar arc energized the natal square in June 1955, a month after the nullification of a research project by an important man. In late October 1953 the direct MC trined the (correct) w in the 7th House; this took place three weeks after I moved and six weeks before I met my wife! Some of these directions are not close fits, but they are good enough; they would aid an astrologer in the rectification of an unknown time—by rectifying the MC of the Johndro Chart one can find the exact longitude of the natal q. All of the above directions are by direct or converse solar arc.

I decided to check Johndro's baseline and chart by a test of what the Johndro charts of Marilyn Monroe and Jacqueline Onassis signified in their lives. After all, they have been two of the most publicized women of this century. Monroe was born on 1st June 1926 in Los Angeles, California at a given time of 9:30 AM, PST.
That outstanding astrologer, Ronald Davison of England, rectified that time to 8:48:24 AM, which gives an Ascendant at 4º30'g and an MC of 25º07'a, which is conjunct her r at 28º43'a. One meaning of r A the natal MC is that the individual makes a career of their good looks, or as a hairdresser or cosmetician improves the beauty of others. But this is not enough to account for her great fame. Fame is shown by the w or y in aspect to the MC. One can be notorious under the w, but not under y, which gives honor.

Los Angeles is 118º15' West, which when subtracted from 359º60' gives the RAMC of that city, 241º45', equivalent to a Los Angeles MC of 3º45'33"c (rounded to 3º46'). Her natal q at 10º25'd is 70º25' from 0º00'a. When added to the Los Angeles MC her Johndro MC becomes 14º11'b, and thus conjunct (and parallel) to her w at 18º43'b. y at 26ºb is in the 10th House, where it adds to her reputation. The 3º37'd Ascendant is conjunct her e at 6º43'd and (widely) her q at 10º25'd.
Jayne: Locality Astrology
Marilyn Monroe Natal chart 8:48:24 AM PST 1st June 1926 Los Angeles, CA 34N07 118W15
Johndro chart Equivalent of 4:22:41 AM PST 1st June 1926
Considerations XIX: 1
Jacqueline Onassis was born on 28th July 1929 at 2:30 PM, EDST (unrectified) in Southampton, New York, which is 72º23' West and 40º53.3' North. Her natal MC of 28º55'g is trine her Moon at 25º35'a, thus accounting in part for her fame. 359º60' less 72º23' gives the RAMC of her Southampton MC as 287º37', which is equivalent to 16º15'¦. Her natal Sun at 5º19'g is 125º10' from 0ºa. Add this to the locality MC of Southampton and her Johndro MC is 21º25's, with a Declination of 18º07' North. This is parallel to her q's Declination of 18º59' North, which shows her public status as being enhanced by relationships with men of authority. This also places both y at 9º35'd and r at 21º47'd elevated in her 10th House.

But Johndro used the (true) solar arc not only in longitude but also in right ascension. To do so he added the distance of the natal q from 0ºa in RA to the RAMC of the locality MC of the place. Her q is 127º32' of RA from 0ºa, which we add to the 287º37' RAMC of the place. This gives the RAMC of her own Johndro MC as 55º09'. It is to that value that we would apply the (true) RA solar arcs. It is equivalent on the Ecliptic to 27º18's, and has a Declination of 19º34' North. This places it parallel to her r, which at 20º25' North is parallel to 2º56'd (in longitude terms this is equivalent to an orb of 5º38').

Therefore, taking both Marilyn Monroe and Jacqueline Onassis, and both charts for each one: they have favorable aspects of both the w and r to their MCs, and in both Johndro charts y is in the 10th House! When we shift the Onassis Johndro Chart to Greece we find that its angles are virtually the same as those of her natal chart! Her marriage to Onassis of Greece can be regarded as a part of her life pattern.

Now I will discuss how one shifts a chart for a change of locality, which is a relatively neglected but important branch of Astrology. The limited method of Astro-Cartography can only be used if a chart has been rectified.
The simple method developed by Johndro can be employed when a birth time is only approximately accurate. In addition the aspects of the Lights (q and w) are included, something that is missing from Astro-Cartography. See also my book An Introduction to Locality Astrology.
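The degree-and-minute bookkeeping in the Onassis example above is simple modular arithmetic. A minimal sketch (the helper names are mine, not Johndro's):

```python
def dm(degrees, minutes):
    """Degrees plus arcminutes expressed as decimal degrees."""
    return degrees + minutes / 60.0

# RAMC of the Southampton meridian: 360 deg less 72 deg 23' West
ramc_place = 360.0 - dm(72, 23)          # 287 deg 37'
sun_arc_in_ra = dm(127, 32)              # Sun's RA distance from 0 Aries
ramc_johndro = (ramc_place + sun_arc_in_ra) % 360.0
print(ramc_johndro)                      # 55.15, i.e. 55 deg 09', as in the text
```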
Jacqueline Kennedy-Onassis Natal chart 2:20 PM EDT 28th July 1929 Southampton, NY 40N53 72W23
Johndro chart Equivalent of 7:42:52 AM 28th July 1929
DOES ASTRO-CARTOGRAPHY DO THE JOB?

A widely used and valid method for shifting your Horoscope to another locality is to shift the positions of the planets in the Houses and the MC and Ascendant in the signs, but not the planets in the signs. Let us say that Mrs. X was born in California at 4 AM, PST. The natal Chart is set up for that time at, say, Los Angeles. But Mrs. X is thinking of spending the summer in London, England. How do we shift her chart there? Since Pacific Standard Time is 8 hours earlier than the Greenwich Time of London we add those 8 hours to her birth time. For whenever a move is made eastward the time is later, so we must add; if we are dealing with a westward shift we must subtract the time difference between the two localities. In the case of Mrs. X her chart in London must be set up for noon (4 AM + 8 hours), using the latitude of London in place of the Los Angeles latitude. I am sure that this simple and useful method is known to all. However, aside from the changes of the angles in the Signs, what it tells us is limited.

A method developed by Johndro is immensely useful, does not require an exact time of birth, and is not too difficult to do! One finds the positions of all natal bodies on the equator in right ascension (RA). If one is moving from, say, Los Angeles (118º15' West) to New York (73º57' West) one finds the difference in terrestrial longitude between the two places, i.e. 118º15' - 73º57' = +44º18'. Since the shift is eastward the value of the shift is a plus. Therefore we add the 44º18' to the RAs of all of the natal bodies and to the RAMC in order to shift the Lights, planets and MC along the Equator eastward. RA and terrestrial longitude are equivalent since both are measured along the plane of the Earth's Equator. Furthermore, as for every degree and minute of RA there is a corresponding degree and minute of celestial longitude in the zodiac, one then converts all of these shifted RAs back to longitudes.
The resultant positions give your locality chart. From it one takes aspects to the natal chart in determining what the locality will mean for you. This chart does not replace the one you were born with any more than a progressed chart does. Indeed such a locality chart is one progressed in space instead of in time. There are RA-Longitude Tables that enable you to convert from the longitude and latitude (found in any Ephemeris) to RA, such as in my book An Introduction to Locality Astrology, and elsewhere. Given the (celestial) longitude and (celestial) latitude of a body, we find the equivalent RA, via the double interpolation that any astrologer knows how to do to find an Ascendant. In converting back from RA to longitude the latitude used is 0º so that this is only a single interpolation. A good Table of Houses gives three values for each MC: the Sidereal Time, the RAMC (actually the same as the ST) and the longitude of the MC. One can use such values for the second conversion from RA to longitude. So really it is not too difficult to do this.
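A minimal sketch of this shift-and-convert procedure, assuming the standard zero-latitude RA-to-longitude conversion in place of the printed RA-Longitude Tables. The body names and natal RA values below are placeholders for illustration, not data from the article:

```python
import math

OBLIQUITY = 23.44  # mean obliquity of the ecliptic in degrees (assumed approximation)

def ra_to_longitude(ra_deg):
    """Ecliptic longitude equivalent to a right ascension (celestial latitude 0)."""
    ra = math.radians(ra_deg % 360.0)
    eps = math.radians(OBLIQUITY)
    lon = math.atan2(math.sin(ra) / math.cos(eps), math.cos(ra))
    return math.degrees(lon) % 360.0

def johndro_shift(natal_ra, from_west_long, to_west_long):
    """Add the terrestrial longitude difference to every natal RA, then convert
    the shifted RAs back to ecliptic longitudes (eastward move = positive shift)."""
    delta = from_west_long - to_west_long  # LA 118d15'W -> NY 73d57'W gives +44d18'
    return {body: ra_to_longitude(ra + delta) for body, ra in natal_ra.items()}

# hypothetical natal RA values, purely for illustration
natal = {"Sun": 194.48, "Moon": 38.70}
shifted = johndro_shift(natal, from_west_long=118.25, to_west_long=73.95)
print(shifted)
```

From the shifted longitudes one then reads off aspects to the natal chart, as the text describes.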
Let us say that we confine ourselves to the eight known planets, the SO, LU, MC and Ascendant (omitting the NN, CH and the asteroids, the VX, etc.). Then we have just twelve factors. There are eight Ptolemaic aspects (conjunction, opposition, two trines, two squares and two sextiles). Let us add the two semisquares and the two sesquiquadrates, so that we have twelve important aspects. How many conjunctions are possible of N bodies taken 2 at a time? The formula is: [N x (N - 1)] ÷ 2. If N = 12, this gives us (12 x 11) ÷ 2 = 66. But this is only one aspect. For 12 aspects this would be 12 x 66 = 792 aspects. In Astro-Cartography each of the ten bodies is caused to rise, set or be in an upper or lower Culmination, which gives 40 combinations, or only about 5% of the 792.

It can be rightly argued that the MC and Ascendant are more important than all of the other factors except for the q and w. The eight planets make twelve aspects each to the two Lights, i.e. 8 x 12 x 2 = 192 combinations. Therefore the Johndro Shift gives almost five times as many combinations as is given by Astro-Cartography. Even if we arbitrarily limit ourselves only to conjunctions and oppositions to the Lights there would be as many combinations as those of the MC and Ascendant. Therefore one may legitimately question whether at its best the method of Astro-Cartography does a complete job!

As far as I have been able to determine the Astro-Cartographic method was first developed by Cyril Fagan, who like Johndro was a great technical astrologer. It was brought to the United States many years ago by the English astrologer Kenneth Gillman, editor of this excellent magazine, Considerations. It was also independently developed by Gary Duncan, who was given the Johndro Award for 1964. Provided the time of birth is really accurate it is quite valid, although limited. But the question is just how accurate are times of birth?
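The aspect-counting argument above is easy to verify; a quick sketch:

```python
from math import comb

factors = 12                   # 8 planets + Sun, Moon, MC and Ascendant
aspects = 12                   # 8 Ptolemaic aspects + 2 semisquares + 2 sesquiquadrates
pairs = comb(factors, 2)       # N(N-1)/2 pairs for any single aspect: 66
total = pairs * aspects        # all aspect combinations: 792
astro_carto = 10 * 4           # 10 bodies x (rise, set, upper/lower culmination): 40
lights = 8 * aspects * 2       # 8 planets making 12 aspects to each of the 2 Lights: 192
print(pairs, total, astro_carto, lights)   # 66 792 40 192
```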
Let us say that we omit all times given to the nearest minute and confine ourselves to those given to the nearest 5, 10 or 15 minutes, etc. Let there be 1200 such times, a sufficiently large sample so as to reduce statistical fluctuations. Then if the distribution of times is truly random there will be: 100 to the nearest hour, 100 to the nearest half hour, 200 to the nearest quarter hour, 400 to the nearest 10 minutes and 400 to the nearest 5 minutes. In general there ought to be four times as many to the nearest 5 or 10 minutes as to the nearest half hour or nearest hour. Several years ago I took several hundred times from birth data of the Gauquelins that had come from official birth records. The distribution of times was quite different from the one I have just given. Times given to the nearest hour and half hour were quite frequent.

Several years ago I was giving a Workshop on Rectification in Chicago for NCGR. At that time Dr. Wharton, who was in the audience, was the Chairman of the NCGR Board of Directors. I asked him as an experienced physician about the accuracy of hospital birth records. He replied that while some hospitals were careful others were not, and that there was really no way to tell the difference! Therefore the accuracy and reliability of even official records is dubious.

Then what is the average probable error of birth times? I have been rectifying times of birth for 36 years. I have spent hours, days, months, years and decades in this endeavor, so that I do have some qualifications for speaking with authority on the subject. A critic may ask how I can be sure that I had the right answer for all of my efforts and experience. This is a good question. The acid test is, of course, prediction! If after a number of "test" aspects have occurred none of them fit, the time must be re-rectified. I find I am not always right, although I do get the correct value more often than not. I took a small sample (29 cases) of such verified rectifications in order to measure the average error between given and rectified times. The discrepancy was 32' or 8º on the MC! On average two to three planets are in the wrong Houses no matter what House system is used! What I am talking about is the most important single problem that astrologers face, since most of them work with erroneous charts!

Assuming the correctness of what has been given thus far, Astro-Cartography usually does not do the job, since its lines would be "off" by about 400 miles in the average case! On the other hand, in the case of the Johndro Shift, which need not use the Angles, only the w has any appreciable error. One sixtieth of a day is 24 minutes of time, so that if the w moves 15º a day it must move 15' in 24 minutes, or if 12º a day, 12' in 24 minutes. In 32 minutes, therefore, its error would be from 15' to 20'. Since its apparent radius is 15', this offsets most of this error, so that even the w may be used in the Johndro Shift.

Two of our greatest astrologers have been L. Edward Johndro and Marc Edmund Jones. Dr.
Jones made a number of original contributions to horoscope interpretation. In each case he did this in such a way that one could use his methods even though the chart had not been rectified. The Johndro Chart, described earlier, can be used even if the time of birth is unknown. The Johndro Shift can be used to real advantage even if the time of birth has not been corrected. Like Jones, Johndro made major contributions to the common treasury of Astrology.

I have a computer program, written by Michael Erlewine, which shifts the planets in two additional ways to any other locality, i.e. along the Prime Vertical in Zenith Distance and along the Horizon in Azimuth, before returning the shifted positions to the Ecliptic. These other two methods, also based on the work of Johndro, are only accurate if the time of birth has been rectified.

Those of you who try the Johndro Shift should employ really narrow orbs. Each minute of arc is equivalent to about one mile. If your shifted q is 15' from a trine to your natal y then by moving 15 miles further East or West, as the case may be, the aspect becomes exact. Since the q's apparent radius is 16', one may allow larger orbs of aspect for it.
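Jayne's Moon-error arithmetic and his minute-of-arc-per-mile rule of thumb can be sketched as follows (the function names are mine):

```python
def moon_error_arcmin(daily_motion_deg, clock_error_min):
    """Arcminutes the Moon moves during a given clock-time error.
    A day is 1440 minutes; daily motion in degrees x 60 gives arcminutes per day."""
    return daily_motion_deg * 60.0 * clock_error_min / 1440.0

def orb_to_miles(orb_arcmin):
    """Jayne's rule of thumb: one arcminute of shifted longitude is about one mile."""
    return orb_arcmin * 1.0

print(moon_error_arcmin(12.0, 32))   # 16.0' for a slow Moon and a 32-minute time error
print(moon_error_arcmin(15.0, 32))   # 20.0' for a fast Moon
print(orb_to_miles(15))              # a 15' orb: move about 15 miles to make it exact
```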
The maximum orb of the w to q in the most potent aspects should never exceed, say, 30', that is, thirty miles, and most aspects should be less than 15' from exactness if one is to discriminate between localities as close together as, say, fifty miles. Timing factors are more powerful than locality ones, but even though they do not last, the locality factors are of great importance.

Someone may plaintively ask how, if many horoscopes are so inaccurate, astrologers can do their job well at all. In the first place this assumes that most of those who call themselves "astrologers" actually do a good job! In the second place there is the psychic factor. A really good psychic on a very good day could give a good reading of a wholly erroneous horoscope! However, even the best of psychics has only a few "very good days". In addition only a minority of psychics excel.

Astrology is only partly technical, since it is also symbolic. This is because it bridges the limited and measurable objective world of saturnine "time and space" and the subtle, interpenetrating and limitless neptunian domain of the psyche. The psychological astrologers deal only with the latter, as is also true of the "psychic" astrologers. This is what is called mantic, and applies to the Tarot, the I Ching, Kabbala, etc. It may be all very fine but it is not Astrology. At the other extreme are the technical "astrologers" who are exact technically but often have no ability to interpret what they have laboriously "timed". They hardly deserve to call themselves "astrologers" either! On the technical side the intelligent astrologer seeks to reduce error and to include as many valid factors as possible. Clearly this, while essential, is hardly enough, since the even more difficult rigor in Interpretation is just as necessary.
My criticism of Astro-Cartography is therefore twofold: firstly, even if the time of birth is exactly right it is incomplete, in that it deals with only a minority of the aspects; and secondly, if the time has not been corrected and verified its lines are nearly certain to be worthless. At least in the case of cigarettes the buyer is warned that they are bad for his or her health, whereas in the case of Astro-Cartography the buyer is not even given the rattlesnake's rattle!

In conclusion I shall give some illustrative examples of the Johndro Shift from my own chart; see the table below. I use my own chart since it was rectified by Ms. Hesseltine in the fall of 1949 and the validity of that rectification has been verified many times since then. I was born on 9th October 1911 at 10:53 P.M., E.S.T. in Jenkintown, Pennsylvania, which is 75W08 and 40N05. This is geographic latitude. I prefer geocentric latitude, which is 11' less, so that I have used 39N54. Although my time was given in my mother's baby book as 10:53 it was rectified to 10:39½ P.M., an error of 13½ minutes despite the time having been given to the nearest minute!

My q is only 35' from the longitude of the midpoint of the two benefics, r and y, whose midpoint axis runs from 15º08' z to 15º08' a. In terms of right ascension that midpoint is at 193:43, which is 46' short of the q's RA. Therefore if I moved west about 46 miles—a backward and westward shift—I would have that exact planetary picture of q = r/y. If we subtract the q's RA of 194:29 from the RA of y at 224:26 the difference is 29:57 (29º and 57' of arc). Therefore, if I moved westward by that same arc, that is, from my natal 75W08 to 105W05, the locality y would be conjunct my natal q. This would take me almost exactly onto the Mountain Standard Time Meridian of 105W00. Therefore Denver in Colorado, which is at 104W59 (and 39N44 geographic), ought to make a good home, as my q is natally in the 4th House.

         Birthplace           Denver
      Long.      RA       RA       Long.      Aspects
q   15º43' z   194:29   104:20   13º19' h   A r 17 E
w   11º26' s    38:42     8:51    9º38' a
e    4º43' z   185:58   156:07    4º14' h
r   13º36' h   163:00   133:09   10º42' g   G t 11 W
t   10º29' d    69:05    39:14   11º36' s   A w 10 W
y   16º40' x   224:26   194:35   15º50' z   A q 7 W, D j 24 E
u   19º02' s    47:18    17:27   18º54' a
i   25º25' ¦   297:29   267:38   27º49' c
o   23º42' f   105:28    75:37   15º59' d
“   28º59' d    88:57    59:06    1º13' d   Z j 0
k   27º09' n   357:23   327:32   25º14' b
j*  16º14' f   107:37    57:32   19º00' d
J*   2º36' c   240:32   137:32   10º52' x

* The Ascendant and Vertex at Denver are those associated with the Denver MC.
At 104W32 locality y would square my Ascendant, but as they are only 26' from a natal trine this would be quite favorable. There was not room enough in the above Table for y D j 23 E in Denver. While this is a bit wide it is "pulled in" by locality y A natal q. The only problems there would be social-emotional (r), and would not be "new" as they are already part of the natal chart. My natal r is contraparallel (like an opposition) to my q. “ Z j gains weight from being exact and would refer to difficult private "psychological ties" with one or more women. All of the aspects should be worked out to ensure that one or more important ones are not overlooked.

This one-dimensional shift is such that what has been said of Denver would apply to all places north or south of it with the same terrestrial longitude. With a rectified chart the other two shifts ("dimensions") could also be done, thus specifying Denver's uniqueness for me as a locality.
Neptune & the South Sea Bubble JOHN W. GROSS, JR.
MY SHORT LIST of favorite all-time books has to include Extraordinary Popular Delusions and the Madness of Crowds by Charles Mackay.1 Witch manias, investment manias, religious manias, crusades, and swindles are lumped together with odd follies, dueling, thieving, poisonings and other delusional behavior like prophecy and… astrology! This classic history of mass-hysteria and rapture was published in 1841, five years shy of o's discovery. This no doubt explains why Mackay's table of contents reads like a rulership listing for o.

Certainly the book's 19th-century writing style is part of its charm. Quaint, dated and sometimes vaguely tongue-in-cheek, it may leave the reader with the impression that human folly is a feature of our dusty past. This feeling jibes with our western cultural bias that equates the passage of time with progress. Sophistication, we feel, resides in the present and future. The rational has gradually circumscribed the irrational, or so we hope. Smug within his modernity at the end of the 20th century, a reader might have been forgiven for believing that sheer passage of time counted as protection from similar outbreaks of delusion and mania. Had not history itself ended with the collapse of our foes in 1989? And hadn't the economists and technocrats fine-tuned economic progress to perfection?

Such hubris is a product of linear thinking, a mistake to which those sensitive to nature's cycles are less prone. For all our study of history, we do indeed seem condemned to repeat it. In the past three years alone we have witnessed yet another investment mania and the start of a new crusade cycle between Islam and the West! The recent bursting of our own high tech/dot-com bubble is but the latest addition to the gallery of famous money manias. Our own just-popped bubble is a distant cousin to history's first modern-style investment bubble, detailed at length in Extraordinary Popular Delusions. The astrology of history's first bubble is profound.
Unsurprisingly, o looms large in this story. The first modern-style bubble and crash that was international in scope swept through Europe early in the 18th century. The French chapter of this financial crisis, known as the Mississippi Bubble, began in 1719. From there it jumped the Channel to help spark an English version of the bubble in 1720. Finished with England, the mania returned to the continental financial centers of Amsterdam, Hamburg and others.2

[Charts: South Sea Company, 1720; NASDAQ, 1996-2003]

England's South Sea Bubble, whose astrology I shall detail, is named after its focal point of speculation and bust, the South Sea Company. If the East India Company and the Bank of England were that era's blue chips, then the South Sea Company in 1720 amounted to the lion's share of the early 18th-century "NASDAQ".

The recent high-tech internet boom and bust, like many business cycles before it, was associated with a real advance in technology and industry. The 1720 South Sea Bubble was no different, except that it accompanied innovative advances within finance itself. Economic historian Larry Neal fits the story of the South Sea Bubble within an era he has dubbed the "Big Bang of financial capitalism."3 A modern financial system was struggling to form out of the dynamic convergence of nation-state finance and commercial interests. Accelerating trade and government expansion had outgrown a coin-based monetary system. This drove experimentation with new forms of credit, bank currencies and tradable shares in the proliferating joint stock companies, building blocks of today's financial system.4

1 Charles Mackay, LL.D. Extraordinary Popular Delusions and the Madness of Crowds. New York: Harmony Books, 1980.
2 Data for the South Sea Company is from stock prices reported by John Castaing, The Course of the Exchange, from January 1698 to December 1753. Web site and file name are: Data for the NASDAQ Exchange was obtained from:
3 "An American Abroad: Unearthing a Cautionary Tale" in Commerce InSight, Winter 1998, p. 3.
4 Larry Neal. How It All Began: The Monetary and Financial Architecture of Europe during the First Global Capital Markets: 1648-1815.
Gross: Neptune & the South Sea Bubble
The only iron-clad definition of a mania, as opposed to a general rise in prices, is one made in hindsight: a relatively rapid advance that is entirely retraced by a relatively rapid decline, "relative" being measured by some deviation from the long-term rate of change. Manias in progress demand rationalization from believers. The rationalization of choice often latches onto the associated innovation within technology, industry or trade just described. Then as now, believers in the mania justify inflated prices by insisting that a new era has dawned, some fantasized break with the past with a new set of rules. The internet was supposed to usher in a "new economy" in our own time. In 1719-1720 it was "new world" projections of easy wealth from North and South American trade that were offered as justification. Typically, the associated advance either proves non-existent or its promise, though real enough, falls short or is realized much later in time than the bubble optimists figure. When manias go bust their cheerleaders' emotional convictions reverse course, with the usual results and rationalizations.

Literature on the South Sea Bubble mimics present-day commentary on our own high-tech bubble. Some call it a monumental fraud, a scandal designed to enrich a handful of greedy individuals and corrupt government officials.5 Meanwhile, writers like Charles Mackay focus more on the folly and psychology of the victimized masses. This class of explanation often affects a decidedly preachy tone that recasts the story as an instructive morality tale, a warning to the greedy and gullible to beware of following the madness of crowds. A third, more academic, perspective of recent vintage by economists argues for the rationality of markets and even bubbles. Economists' models require a theoretical identity between information and price as mediated by rational choices. Psychology and emotions are not thought to affect price swings overall. There are no manias.
The South Sea disaster from this perspective was less a function of folly and more the understandable adjustment to the financial innovations of the day. Furthermore, academics allow for the formation of “rational bubbles” by investors and speculators who may rationally conclude that a durable preexisting trend should have further to go on that basis alone. This line of thinking was termed “momentum investing” during our recent “rational” bubble. Critics dub this argument the “greater fool” theory, since it is premised on there being an inexhaustible supply of bigger fools than oneself to sell to! Given this characterization, our “rational bubble” in fact depends on the irrationality of fools. This brings us back full circle. At one time the Behavioral school of psychology denied the existence of consciousness itself, as having no significance! Today’s economic theorists make the same mistake by denying the obvious. Economic man may rationally choose for advantage, but he is also subject to fear and greed and is exquisitely sensitive to social cues and herd mentality.

In any event, the astrology of the South Sea Bubble appears to support all of these perspectives in a kind of hierarchy of explanation, one nesting in the other. The chart of the South Sea Company identifies the fraudulent aspects of the scheme itself quite nicely. Meanwhile, the same chart, put together with the chart of the United Kingdom and the transits in effect in 1720, leaves little doubt that public psychology that year provided unusually fertile ground for fraudulent schemes. Finally, outer planet cycles implicated in the South Sea story place this event squarely within a long-term developmental process of rational economic innovation as described by economic historians.

5. For instance see: Malcolm Balen. The Secret History of the South Sea Bubble, The World’s First Great Financial Scandal. London & New York: Fourth Estate, HarperCollins, 2002.
THE SOUTH SEA BUBBLE story is a feisty brew of politics and commerce, necessitating an understanding of both. A good starting point is to consider the astrological portents of the nearest y A u. Charts of the y-u conjunctions provide a twenty-year geo-political forecast. This makes sense, since this planetary pair more than any other comes closest to defining the core essence of a nation or state as ultimate legal (y) authority (u).6 The chart of the 1702 conjunction varies little whether set for Paris or London. They share a MC at 14º 23’ z, but Paris has 15º 49’ c rising versus London’s ASC at 13º 22’ c. Either way we find a veritable hornets’ nest of a energy in the 3rd house that includes the y A u. Then the eye gravitates to the t S i across the 1st to 7th houses, but near the next house cusps. This opposition squares the MC and the hornets’ nest near the IC, focused especially on r. t rules this 5-planet stellium. This decidedly provocative, perhaps hostile cardinal cross is echoed by a square between 7th house ruler e and “. There is clearly trouble in the 3rd house neighborhood; war and international power struggle likely upset the neighborhood, given e’s rulership of both the 7th and 9th houses. w is at the same degree as the y A u; together they square the empowering q/“ midpoint. w = q/“: “Attempts to project power and persuasion; feelings possibly trampled in relationship.”7 y = q/“: “Expansion of power base; success; international opportunities.” u = q/“: “Potential loss in relationship through power struggle; ruthlessness; separation.”
6. For an excellent elucidation of this point see: Noel Tyl. “Jupiter-Saturn Synthesis” in The Mountain Astrologer, July 1995, pp. 39-48, particularly p. 40.
7. Unless otherwise specified, all Solar Arc midpoint interpretations cited are from: Noel Tyl. Solar Arcs. St. Paul: Llewellyn, 2001, pp. 375-454.
Chart 1: y A u
8:57 PM LMT, 21st May 1702 NS
London: 51N30, 0W10
w = y = u = q/“

As the 18th century dawned, King Charles II of Spain was dying and, thanks to vigorous French diplomacy, he chose as his successor the grandson of Louis XIV. Louis capitalized on his chance to join France and Spain into a powerful union, thus threatening the existing European balance of power. This is the power-grab potential of w = q/“ and y = q/“. This triggered the War of the Spanish Succession, which did not begin in earnest until 1702, the year of the conjunction, and would rage for eleven years. The war is the destructive outcome promised by u = q/“. This “world war” would eventually involve twelve European nations; in the Americas it was known as Queen Anne’s War. Prosecution of this war helped drive both England and France into debt, though France became the worse off of the two. One reason France was the weaker was that, unlike England, which now had a limited monarchy, the Sun King had no such constraints on court extravagance and corruption. Louis pursued numerous wars, equally extravagant and contrary to the economic interests of France, even before this latest power grab. Financing these wars became a major issue for both England and France and eventually set the stage for the bubble of 1719-1720.

The MC in z, ruled by highly stressed r, suggests the diplomatic breakdown that disturbed the peace in Europe. The war’s fiscal strain is seen in u’s and i’s rulership of the 2nd. Furthermore, stressed r is the natural ruler of money. Its rulership of the 6th points to the heavy cost of armies, while its rulership of the 5th foreshadows the speculative mania that would play out near the end of this 20-year cycle of y-u. Indeed, following the “hot war,” France and England would continue their fierce competition in economics, one reason for the spread of the financial bubble.
Inner wheel: United Kingdom, 0:00 AM LMT, 12th May 1707 NS, Westminster: 51N30, 0W07
Outer wheel: y A u, 8:57 PM LMT, 21st May 1702 NS, London: 51N30, 0W10
The q, which is unaspected in the 6th house of armies, suggests that Europe’s royalty felt free to “binge” on war-related expenditures through the taking on of heavy debt loads (q’s rulership of the 8th, and q’s ruler e’s square to “ in the 8th). When the y A u chart is placed around the chart for the UK (May 12, 1707 NS, 0:00 am, Westminster, England), the exact positioning of the y A u on UK’s 2nd house r is a dramatic signal that this 20-year cycle is “all about the money”! The t S i hugs UK’s ASC-DSC-t strongly, emphasizing once again conflict and war. Meanwhile, the financial hit is indicated by that opposition’s square to r in the 2nd, as r conjoins UK’s o, ruler of UK’s 2nd. Furthermore, the conjunction chart’s e, ruler of UK’s 5th of speculation, exactly conjuncts UK’s e in s. Their mutual square to MC-ruler “ anticipates what we learn in the aftermath of the bubble: it nearly brings down the government and Crown.
THE MISSISSIPPI BUBBLE

France sought relief from its deepening deficit by embracing the ideas of a Scottish financial genius by the name of John Law. Law had fled England following his conviction for murder in a duel over a woman. As he traveled around the continent he flourished as a professional gambler, thanks to his grasp of the new theories of probability. In time, growing tired of womanizing and having already made a fortune through gambling, he sought the opportunity to apply his economic theories for the good of some deserving nation. He finally got a chance to put his ideas to work in a monetary reform for France.

John Law’s economic insight was that credit was identical to money. Paper currency, bonds, credit notes and company shares were just as much part of the money supply as coin. The amount of money in circulation, he realized, should pace the demand of the productive economy, not the haphazard supply of precious metal. He noticed how well the Bank of Amsterdam oiled the wheels of commerce by replacing impractical coinage with its own scrip, a concept he began to implement on April 16, 1716 when he underwrote the Banque Generale. With the blessings of the French regent, the Duke of Orleans, he began to issue a national currency. Later he would absorb the major trading companies in France to form the Mississippi Company, which had a monopoly on trade with France’s North American colony, Louisiana. To the virtues of Dutch banking, Law added the English idea of exchanging government debt obligations for shares in a joint stock company, only he would do so on a gigantic scale. He reasoned that the monetary stimulus of paper currency and stock in a productive enterprise would stimulate the economy, pay down the debt and lift all ships. The plan demanded rising share prices of Mississippi Company stock, which he encouraged by means of easy payment incentives and loans on stock purchases.
He understood the proper ratios to maintain between paper and assets, but his was not the only hand on the printing presses; other politicians shared control. As he overplayed his hand near the end of 1719 he triggered a speculative bubble and general inflation in France.8 John Law meant well, but the collapse of his system in France left that nation bitter about economic innovation for a long, long time.

All this was foreshadowed in the y A u chart. Referring back to that chart, note how the symbolism explicitly suggests John Law’s attempt to substitute paper currency (e in s) for gold coinage (“ in g), the square between them. More generally, the same square nicely symbolizes the general efforts in Europe to substitute paper credit for coin. o in cocksure a and e in “monetary” s arguably describe Scotsman John Law himself.

8. John Carswell. The South Sea Bubble. Gloucestershire, England: Sutton Publishing, 2001, p. 69.
THE SOUTH SEA COMPANY

By contrast, England’s “John Law” most certainly did not mean well by any means. John Blunt, a greedy, unscrupulous scrivener, was a secretary in a company which received a royal charter to produce swords in 1691. Together with a clique of aggressive money men, Blunt transformed the Sword Blade Company into a rival to the Bank of England, first by trafficking in government debt, which earned the gratitude of the government.

In 1710 John Blunt and financier George Caswall made a proposal to Robert Harley, who had recently assumed control of the government’s nearly bankrupt Exchequer. Harley needed a radical solution to the government’s debt burden and an end to the draining War of the Spanish Succession. The plan was to found a new trading company that would double as a financial institution, like the East India Company before it. It would compete with the Whig Bank of England, which had been created in 1694. The new company would be granted a monopoly to trade with the South Seas Spanish territories of Mexico and South America once the war was brought to an end on terms favorable to England. It would take over portions of the national debt on favorable terms for the government by enticing holders of that debt to swap their annuities for shares in the new company. This would entitle investors to interest payments by the government to the company, as well as profits anticipated from its lucrative South Seas trade. Harley would become the new company’s first president, but control would gradually slip to John Blunt’s syndicate from the Sword Blade Bank. Blunt intended the South Sea Company to front for his financial manipulations, much as did the Sword Blade Bank.
The proposal was formally offered as a Bill to Parliament with the title: An Act for making good Deficiencies and satisfying the public Debt and for erecting a Corporation to carry on the Trade to the South Seas and for the encouragement of the Fishery and for Liberty to trade in unwrought iron with the subjects of Spain and to repeal the Acts for registering seamen.9
9. Viscount Erleigh. The South Sea Bubble. Manchester: Peter Davies Limited, 1933, p. 28.
English astrologers Graham Bates and Jane Chrzanowska Bowles10 note that a modern UK company has legal existence for the whole day of its incorporation, beginning at midnight at the start of the day. This does not necessarily apply to royal chartered companies of the early 18th century, but a midnight time on South Sea’s date of incorporation was found to be a fruitful assumption for astrological purposes.

The Ship of Fools

Chart 3: South Sea Company Incorporation
0:00 AM LMT, 10th September 1711 OS
London: 51N30, 0W10
q/w = 22º 36’ x

o: Sea; Maritime Trade; Slavery; Credit, belief, faith; The intangible; Bubble; Dream; Fraud; Scandal

Our South-Sea Ships have golden Shrouds,
They bring us Wealth, 'tis granted,
But lodge their Treasure in the Clouds,
To hide it 'till its wanted.
O Britain! bless thy present State,
Thou only happy Nation,
So oddly Rich, so madly Great,
Since Bubbles came in Fashion:11

10. Graham Bates and Jane Chrzanowska Bowles. Money and the Markets, An Astrological Guide. London: 1994, p. 150.
11. From: Edward Ward. A South-Sea Ballad: or Merry Remarks upon Exchange-Alley Bubbles. Stanford University Libraries English Poetry Database.
The company’s chart literally provides its name!12 Alone in the upper hemisphere (south) of the chart of incorporation we find o (sea) as a singleton planet. o is square the l in b in the 7th, suggestive of associations, joining together as a group, or perhaps a joint stock company! Additionally, o rules the word bubble, to identify its place in history.13 One might label it the “ship pattern,” with o as mast and sail standing above the ship proper in the northern hemisphere. Or perhaps, in view of the outcome, we might better dub it the “ship of fools” pattern!
o SINGLETON: something other than it seems

The South Sea Company, unlike its cousin, the East India Company, would prove to be “something other than it seems,” Noel Tyl’s characterization of a prominent o.14 Other key words for o include dream, delusion, fraud, scandal, and cover-up, all of which feature in the South Sea story. o closely opposes the midpoint of e and r (o = e/r): “Fantasy; seeing through a special dimension; spiritual thoughts; deluding oneself about reality.” The planet embodies the limitless inflation of mania as well as the sense of betrayal and disillusionment when the hangover hits. Charles Harvey says of o’s movement through the signs that it represents the reigning myth of the day.15 Early in s we should expect the launch of a new era whose mythology will have to do with money and material progress, among other things. o also rules credit.16 This rulership follows when one considers that credit rests on, indeed is little more than, faith or belief, qualities ruled by o. “Credit” comes to us from the Latin credere, creditum, to believe. South Sea Company was destined to become one enormous credit-generating machine designed to enrich a few at the expense of a nation. o as the sole indicator above the horizon is very apt. The varied meanings of o are about all the public ever really sees or wants to see: the big dream of overnight riches and relief from personal debt, a motive as strong as greed at this time in England. What the public does not see until too late is what is under the hood, or horizon, of this wealth-creating vehicle. Noel Tyl would note carefully the retrograde status of this o. There is always a second agenda to a retrograde planet. Additionally, when conspicuous retrogradation dominates one hemisphere, as it does in this case, it directs significance to the opposite hemisphere, in this case north, below the horizon.

12. Unless otherwise indicated by “NS,” all dates appear as old style (“OS”), just as they appear in the literature concerning the South Sea Bubble in use by England at this time. If your software cannot accept and adjust OS dates, simply add eleven days to the calendar dates appearing here. The charter creating South Sea Company was sealed September 11, 1711 (Carswell, 48).
13. Rex E. Bills. The Rulership Book. Tempe, AZ: American Federation of Astrologers, 1971, p. 18.
14. Noel Tyl. Synthesis & Counseling in Astrology. St. Paul: Llewellyn Publications, 1994, p. 186.
15. Charles Harvey. Anima Mundi: The Astrology of the Individual and the Collective. London: Center for Psychological Astrology Press, 2002, p. 149.
16. Rex E. Bills, p. 31.
NORTHERN hemisphere emphasis: the root of the problem

A northern hemisphere emphasis warns of “unfinished business”17 rooted in any entity’s origin or beginnings that may hold back or preoccupy the present, perhaps to undermine the future. From its inception South Sea Company was a product of dubious hopes on the part of government and outright fraud on the part of John Blunt’s Sword Blade Bank syndicate. Its business plan would in time prove to be as empty as many internet IPOs (initial public offerings) were in our own time. The touted riches of a maritime trading monopoly never prove out. They instead provide a smoke screen for the company’s secondary agendas: government debt relief and, more sinisterly, the fraudulent machinations of its directors. Just as the collapse of the bubble in our telecommunications and energy corporations eventually revealed headline-filling tales of corruption, bribery and scandal, South Sea’s bust would reveal an enormous web of public and private manipulation and bribery that reached to the very top of government circles. Until it was too late, the truth hid safely from view below the horizon of South Sea’s chart.

At first the company’s maritime potential seemed credible. In 1713 Harley’s hope for an end to the war to South Sea’s benefit was realized. The Treaty of Utrecht, April 11, 1713 (NS), ending the war, came with transiting y opposing South Sea’s i. At first it seemed the hoped-for breakthrough and opportunity predicated on peace had arrived! Alas, true to its nature, o “watered down” Harley’s South Sea hopes. Spain was not about to relinquish the lucrative trading rights to its own territories. Instead, South Sea won only a thirty-year contract, the infamous Asiento provision, to transport slaves to the South Seas, and a license to send a single merchant ship once a year to a single port. Slavery is also ruled by o, the ruler of South Sea’s Midheaven.
South Sea’s slaving ships would eventually sail for about two years, until renewed war in 1718 cut off even that token commercial activity. No profit would ever be gained through trade whatsoever. The chimera of South Sea trade would, however, prove useful to Blunt as propaganda to support its stock price.

17. Noel Tyl. Synthesis & Counseling in Astrology. St. Paul: Llewellyn Publications, 1994, p. 166.
AMBITION IN THE DETAILS

Of a h q with a ¦ w Noel Tyl says: “Two Earth signs promise practical success. Mental energies are taken into behaviors to administrate progress and make things happen. The power of persuasion is fluent and strong. Confidence supports unusual extroversion. Organized activity is a real comfort zone.”18

The scrivener was a profession that rode the wave of paper proliferation and fine print of 17th and 18th century business expansion. He was an all-round can-do agent and solicitor with expertise in land, leases and documentation. He could introduce lenders to borrowers and follow money’s paper trail for businesses. Astrologically, a scrivener was a kind of super-h that kept the gears of business in place and oiled. In a corporate chart the q has something to say about the founders. South Sea’s founder is described by its h q. Writing of scrivener John Blunt, South Sea Cashier Robert Knight and the rest of the co-conspirators, Carswell puts it this way: “They themselves were financial innovators in the sea of small print and publicity.”19 The h emphasis of i-“ and q describes perfectly the leading figures of the South Sea scheme. On one level the i-“ pair identifies the two prime proponents of the South Sea scheme: John Blunt and Robert Knight. Blunt provided cunning and bluster, as well as the technical draftsmanship honed during his apprenticeship as a scrivener. Robert Knight, in his capacity as Cashier of the South Sea Company, proved to be John Law’s equal in providing the power of innovation and charm required by the scheme. South Sea’s hollow business plan meant that share price appreciation would rest solely upon masterful persuasion (3rd house i-“) to project an attractive public image (trine to o, MC ruler).
The South Sea Company’s midpoint picture r = y/k promises success and reward, while the midpoint picture “ = t/y signals its wide-ranging ambition: “Extreme creative self-application; enterprise; demanding success from the environment; the really big picture.” u in this chart has many levels of meaning: company directors, government, and debt itself. One can see government’s controlling hand and enmeshment with company affairs in u’s square to the r A t. u = r/t: “The sense of being controlled in love matters, of losing freedom; inhibitions; possible relationship with someone of conspicuous age difference; renunciation.” The h q’s square to y in its own sign of c enables a grand scheme to be built upon carefully thought-out details. On the other hand, it warns of the dangers of over-reach, hubris and inflation in every sense of the word. The company’s r A t sits on the midpoint of q/y and, as we just saw above, u squares that point with the heavy hand of control. This yields the additional midpoint u = q/y: “A ‘collision’ between what one wants and what is demanded; a quandary of law and order; a curtailment; negativism; loss.” In other words, the hubris of ambition on the part of company directors and government ministers embodied in this midpoint picture warns of legal trouble and loss, despite the promise of r = y/k. It is interesting to note that this signature of over-reach and inflation, q D y, is equally present in the chart of John Law, born April 16, 1671 OS, Edinburgh, Scotland. Law actually incorporates the o factor strongly as well, since his q T-squares a y S o.

18. Noel Tyl, 1994, p. 89.
19. John Carswell. The South Sea Bubble. Gloucestershire, England: Sutton Publishing, 2001, p. 242.
l ASPECTS: strong maternal influence

Companies do not ordinarily have mothers. But chart synastry between Sword Blade Bank, South Sea Co. and the UK shows deep familial ties, and specifically a mother relationship between Sword Blade Bank and South Sea Company. For Noel Tyl, close contacts to the l suggest the presence of a strong maternal influence in terms of the aspecting planet.20 The aspecting planet in this case is o. o D l indicates that South Sea Company owes its fraudulent ways to the “mother’s” influence.
Chart 4: The Sword Blade Company (Bank)
0:00 AM LMT, 13th October 1691 OS
London: 51N30, 0W10

20. Noel Tyl, 1994, p. 62.
The Sword Blade Company received its charter to manufacture sword blades on October 13, 1691;21 see Chart 4. Swords are ruled by t. t forms an apparent Grand Trine with q and l: a closed circuit of social or intellectual self-sufficiency, intellectually a law unto itself. q D “ suggests a suppressed will to power, as well as a decided capacity to reinvent itself when necessary. When the sword business dropped off, the company’s remaining asset was its possession of a royal charter to be in business. Blunt acted to reinvent the company as a bank to front for his scheming, a potential shown by elevated y in the 10th and the s MC. In this way he tapped into the aggressive scrivener potential of Sword Blade’s t in d. d in turn is ruled by e, retrograde in x, suggesting the underhanded manipulation potential of the bank.
Inner wheel: South Sea Company
Outer wheel: Sword Blade Company
Certainly Sword Blade Bank was metaphorical mother to South Sea Company and its fraudulent ways. In fact, throughout 1720 the umbilical cord was never cut. Blunt’s syndicate members moved from one to the other and back again. Sword Blade Bank’s y opposes South Sea’s r, putting its banking expertise to work as South Sea’s banker. In the same way, Sword Blade’s d t D South Sea’s MC offers its aggressive scrivener talents. Sword Blade Bank would be the credit-creating agency behind South Sea’s lending, one key to the tremendous leverage that would build the bubble. Remembering that o signifies “credit,” we see this role clearly in Sword Blade’s MC (purpose) conjunction with South Sea’s o. Both are opposite Sword Blade’s q, and the entire opposition is square to South Sea’s l of “maternal” contact! Additionally, Sword Blade Company’s metamorphosis into a bank with a q-“ will to power put it on a collision course with its rival, the Bank of England (July 27, 1694).22 Sword Blade’s t receives an opposition and a square respectively from Bank of England’s u and t, a “crossing of swords” indeed. Bank of England’s q and MC square Sword Blade’s y, aiming to frustrate the interloper’s banking aspirations.

21. John Carswell. The South Sea Bubble. Gloucestershire, England: Sutton Publishing, 2001, p. 26.
UK & South Sea Company synastry: A marriage of convenience

Inner wheel: United Kingdom
Outer wheel: South Sea Company
South Sea’s q/w = 22º 36’ x A UK’s k

The South Sea Bubble has reference to several companies as well as to the English nation. Like many other countries, England has had many incarnations and charts, stretching from William the Conqueror’s coronation in 1066 to the favored chart today, that of the Union of Great Britain and Ireland in 1801. Most proximate to the events of the bubble year is the chart of the UK, the constitutional union of Britain and Scotland in 1707, formed only four years prior to South Sea Company. Astrological measurements to this chart track the bubble well. Not unexpectedly, if there is any validity to astrology at all, most of the degree areas of significance to the UK chart in 1720 are shared by points and angles in England’s other charts as well. For example, as will be seen, the positions of the 1707 chart’s w, q and MC are in focus in 1720. Since UK’s w (28º 18’ h) opposes the 1066 w (29º 07’ n), while UK’s q-k fall on 1066’s o, both charts “work.”

Astrological researchers, including Carl Jung, have long noted the likelihood of q-w synastric ties between married couples. Four years before South Sea Co.’s incorporation, the United Kingdom of Great Britain was created, uniting England and Scotland. When one places the chart of South Sea around that of the UK (May 1, 1707 OS (May 12 NS); 0:00 am LMT, Westminster, England), one is immediately struck by their profound alignment of purpose in a series of reciprocal aspects between them. South Sea’s q closely conjuncts UK’s 8th house w, which also receives the square from the company’s y. Meanwhile, South Sea’s w is conjunct UK’s ASC. This clearly expresses South Sea’s purpose as an answer to UK’s financial need for debt reduction. South Sea’s q/w midpoint (22º 36’ x) falls on UK’s MC, while UK’s q/w midpoint (24º 26’ f) conjuncts South Sea’s ASC: a profound marriage of interests, and a fated enmeshment. UK’s y A “ foreshadows the long-term rise of Great Britain over all its rivals as an era of British dominance, financial and military, unfolds throughout the 18th and 19th centuries. This conjunction falls in South Sea’s 2nd house, as if to signal the company’s intention to get its cut! South Sea’s u conjuncts UK’s i while UK’s u squares the company’s i. Certainly this guarantees an innovative relationship even as it poses a mutual threat to each other’s structural integrity and security.

22. James Trager. The People’s Chronology. New York: Henry Holt & Co., 1994, p. 268.
A key consideration in Mundane Astrology is that of shared degree areas between charts. The larger mundane significance and predictive power provided by Solar Arc directions or transits to South Sea Company rests on the close synastry demonstrated here. Key transits to South Sea’s q-k and q/w midpoint simultaneously hit UK’s w and q-k respectively. In other words, South Sea’s objectives (q, k) align with UK’s public mood (w) and the quality of South Sea’s key relationship (q/w) to the government depends on the state of UK’s leadership and its sense of direction (q, k).
SIGNATURE OF REVOLUTION: i A “

The South Sea Company came to be in the year 1711, during a i A “ still in orb but separating from its exact point, August 27, 1710 OS. This gives a clue to its historical context. i-“ is the signature for revolution: “overturning the status quo.”23 Alan Meece has described how a i A “ can be expected to seed a revolutionary impetus that typically peaks at the full w phase of the opposition.24 And what is revolutionary at the opposition becomes the norm by the next conjunction and seeding time. Durations between successive conjunctions alternate between approximately 111 years and approximately 143 years, due to “’s eccentric orbit. The conjunction was exact at 28º 51’ g. This cycle’s g emphasis manifested within relationships to royalty and the rising consciousness of individualism. At the seed conjunction we see unfolding the early career of Voltaire, whose ideas would eventually usher in Enlightenment ideals that elevated the individual over the King, a threat to monarchies everywhere. Alan Meece explains how the French Revolution took place at the “fruition” point of the cycle, the opposition (along with the earlier American Revolution at the X-V phase), 80 years after the conjunction. i-“ also signifies the power of innovation. In this way the Industrial Revolution, based on technological innovation, has also been associated with the 1710-11 seed conjunction. Less often mentioned is the Financial Revolution that accompanied the same cycle. The South Sea Company was to be a major player in this revolution. Pluto is power, but also banking and finance, “financial power,” while Uranus points to innovation. South Sea Company was incorporated in 1711 just as the pressures of a peaking Commercial Revolution necessitated a revolution in finance.
The Commercial & Financial Revolutions

23. Noel Tyl. Solar Arcs. St. Paul: Llewellyn Publications, 2001, p. 441.
24. E. Alan Meece. Horoscope For The New Millennium. St. Paul: Llewellyn Publications, 1997, pp. 69-71.
The commercial/financial revolution actually came to fruition across two full i-“ cycles. The first cycle, beginning with the conjunction of 1597-8, inaugurated the Dutch phase of dominance and commercial innovation. As this cycle neared its end, the English began to incorporate Dutch innovations. The second cycle, beginning with the conjunction of 1710-11, kicked off the British phase of dominance and financial innovation that would put into place much of the modern financial system we take for granted today.
A o CONNECTION: Working with the Intangible25

Both revolution cycles had something in common: their seed conjunctions were both trine to o, all in Fire signs full of creative spark, initiative and idealism. Moreover, the 1597-8 conjunction in a is F o in g, while the 1710-11 conjunction in g is F o in a, a reversal of signs! The o F injects idealism into both cycles. More importantly, the trine grants ease in dealing with things abstract or intangible, also associated with o. Ease with the abstract and intangible translated into facility with the abstraction and mathematics underlying the Scientific Revolution that began to bloom with the first of our two cycles, beginning in 1597-8. The second cycle, beginning in 1710-11, found increasing application of this science during the Industrial Revolution. In terms of the Commercial and Financial Revolutions, the trine to o has significance in two respects. First, the o F describes the enormous outreach of maritime trade and colonial expansion by the European powers. Both the Dutch and English East India companies, commercial powerhouses without parallel today, were launched at the beginning of the first cycle. Second, commercial growth eventually demanded financial innovation. There is nothing quite as intangible and abstract as money! The trine to o facilitated experimentation and innovation with financial institutions and with complex forms of credit (o) and paper currencies.
Gross: Neptune & the South Sea Bubble

A SIGNIFICANT shift of sign

o's rulership of South Sea Company's Midheaven and 9th house cusp reiterates the stated objective of foreign, maritime trade just discussed, as well as its role in the financial revolution. But note that the trio of planets has moved out of Fire and into Earth signs since the exact conjunction. Midheaven ruler o now occupies s, signifying its concern with material, commercial gain, the shift of myth focus discussed earlier. Meanwhile, i and “ have moved to h and straddle the 3rd house cusp. Now our innovative power finds its outlet through h's fine print and also i-“ style communications, suggestive of influence and propaganda, tools that will later be deployed to manipulate share prices. The trine to o that we cited above as easing 18th-century economists' understanding of monetary abstractions also becomes, in the context of the South Sea scheme itself, the ease with which the fraud was put over on exuberant, gullible investors.

25. "Working with the Intangible" in Noel Tyl. Synthesis & Counseling in Astrology. St. Paul: Llewellyn Publications, 1994, p. 76.
DUTCH FINANCE to Change Alley

The years leading up to the conjunction of 1710-11 were prosperous and optimistic in England. The Glorious Revolution of 1688 brought to the throne the Dutch William of Orange, who was followed in turn by Dutch and Huguenot immigrants who brought with them the capital and knowledge of "Dutch finance". England gradually incorporated these innovations to bankroll the war with France. These innovations included: growth of a stock market in liquid, transferable shares of joint-stock companies during the 1690s; government loans guaranteed by Parliament (1693), the first "National Debt"; chartering of the Bank of England with power to circulate a paper currency (1694); Exchequer bills (1696); and the Promissory Notes Act of 1704, which made all debts transferable.26

In London stands a famous Pile,
And near that Pile an Alley
Where merry Crowds for Riches toil,
And Wisdom stoops to Folly;27
Prosperity set the stage for a bull market led by promoters of new companies that ran from about 1687 until 1696. When in 1687 William Phipps returned to England with 32 tons of silver and jewels salvaged from a sunken Spanish ship off the island of Hispaniola, his luck triggered a bull run. "Diving" ventures and new patents on devices to help locate and salvage wrecks became all the rage. As interest ebbed in such ventures, promoters set in motion new excitement over ideas and schemes to replace lost imports from France following the war that began in 1689. Stock prices began to appear in John Houghton's twice-weekly periodical.28 With the end of the War of the Spanish Succession in 1713 by the Treaty of Utrecht, a stock boom slowly got underway. It would endure until 1720, peaking with the South Sea bubble. During this time the public heard more and more about "Stockjobbing"—the touting, brokering and speculating of stocks—that had grown up in London's Exchange Alley, a labyrinth of lanes between Lombard Street and Cornhill. Financial business and stock trading were conducted within the Alley's numerous coffee houses, the most famous of which were Jonathan's and Garraway's. In such establishments gambling, stockjobbing and a nascent insurance industry drove an interest in probability theory. In a play called The Volunteers, Thomas Shadwell described the reputation the stock market had garnered as a world of sharpers and cheats, where men "bubbled" each other for profit (at this date, "to bubble" meant to perpetrate a fraud).29 By the time of the South Sea story "bubble" had come to mean what we now call an IPO, an initial public offering.

26. Edward Chancellor. Devil Take the Hindmost, A History of Financial Speculation. New York: Farrar, Straus, Giroux, 1999, p. 32.
27. From: Edward Ward. A South-Sea Ballad: or, Merry Remarks upon Exchange-Alley Bubbles. Stanford University Libraries English Poetry Database:
28. Edward Chancellor, 1999, pp. 34-44.
THE SCHEME IN A NUTSHELL

In principle the debt conversion plan as envisioned by Robert Harley was neither unsound nor fanciful. The government had already financed itself through similar means by inviting holders of high-interest government debt to swap their debt claims for the capital gain potential of trade (East India Company) or banking (Bank of England). The government's interest payments would become more manageable, and the overall effect was similar to the economic boost we get when the Federal Reserve lowers rates today. The problem with South Sea Company was two-fold. First, their trading monopoly "business plan" as mentioned above would prove to be hollow. Second, the founders of South Sea, unlike the founders of the East India Company and the Bank of England, were unscrupulous, didn't have the slightest expertise or interest in overseas trade, and went into the venture with the sole aim of enriching themselves through what they knew best: stockjobbing and fraudulent stock manipulation. Blunt, Knight and the others quietly cultivated a relationship with government, all the while waiting for the opportunity to put the grandest scheme into play. This they did in the autumn of 1719, once political pressure for debt relief had grown sufficiently acute. During negotiations for this ambitious plan Parliament should have insisted, as indeed Parliamentarian Robert Walpole did, on fixed terms of conversion beforehand—a set number of government-authorized South Sea shares for so much debt. Instead, the Company used its political influence to be left free to set the terms at which the authorized number of shares would be converted. This was the key to the whole scam. By propelling the price of South Sea stock upwards through stockjobbing and assorted means, they would be free to offer fewer shares for a given amount of debt, leaving surplus shares to be sold at pure profit to Blunt and his cronies, just as our own 1990s accounting frauds in telecommunications and energy companies aimed to produce phony earnings numbers to drive up share prices to benefit CEOs and holders of stock options. The South Sea scheme depended utterly on a rising market, and the company did whatever it took to sustain one, manipulating rumor and offering free shares to government figures.

South Sea Company's financial purpose of debt conversion via speculation is implied within the relationships tying 5th and 8th house rulerships by i A “. “ rules the potent 5th house r A t in x: the power of money; speculation. That conjunction squares u in g. u rules debt; g points to the Crown, or government. Again, the symbolism suggests government debt conversion via speculation. Furthermore, u's rulership of the 7th and w describes the holders of the government's debt and eventually South Sea shares, the public. After each subscription Blunt would take the cash in one door only to hand it out another door in the form of loans to subscribers for more stock. Carswell describes it this way: "Like Law, he [Blunt] had constructed a financial pump, each spurt of stock being accompanied by a draught of cash to suck it up again, leaving the level higher than before."30 Carswell's pump metaphor rides on the same symbolism just discussed. Firstly, r A t in x suggests output (t) through loans of the money taken in, input (r). The i A “ suggests much the same through rulership reference: other people's money (8th) loaned out again to spur share purchase speculation (5th). All this goes on below Neptune's smokescreen. Our scrivener manipulators wear a pleasing and attractive mask (e in z in the 4th), but they will ultimately let the public down (e D w), even as their hubris threatens to be the undoing of the whole show even for themselves (e's rulership of the 12th and 4th).

29. Thomas Shadwell. Works. London, 1720, IV, p. 435, cited in Edward Chancellor, 1999, p. 48.
THE ASTROLOGY OF 1720

The South Sea bubble achieved in 12 months what the NASDAQ bubble did in seven years! South Sea's glorious six-month bull market in shares rose 10-fold into mid-summer before crashing back to where it started by the end of the year. Financial astrologer Bill Meridian has discovered that years that have a greater than average number of eclipses experience a disruption in the prevailing economic trend, reversing the trend in force.31 Like the year of the NASDAQ bubble top in 2000, 1720 featured six eclipses. In both cases a mania in force collapsed.

Bill Meridian has also determined that the cycle of y through the signs has the most powerful correlation of all planetary cycles to economic production. Historical panics and crises tend to occur near that cycle's low, between g and x. The midpoint of that span is late h, the degree area of y at midyear 1720 when mania gave way to panic. In South Sea Company's ninth year u-o measurements by Solar Arc and transit came due. u-o would dominate the year 1720, a year in which reality faces off against delusion and ambition struggles against a loss of focus. The negative resolution of this tension is held in check so long as y measurements support the ebullience and optimism behind the bubble in share prices. Once the y measurements end at midsummer, the negative potential of u-o stress quickly resolves itself into an autumn crash. The mass psychology of the public can be characterized as happy and anesthetized throughout the bubble's expansion as transiting y conjuncts UK's w and o conjuncts its q. When illustrating the synastry between the UK and South Sea Company I pointed out the shared degree areas between England's oldest chart (1066) and the 1707 chart. I said that 1707's q and IC were at the same degree as 1066's o. So when in the bubble year o transited over UK's q and opposed its MC, it was simultaneously the o return to the 1066 chart—the fourth o return, in fact, since 1066!

30. John Carswell, 2001, p. 111.
31. Bill Meridian. Planetary Economic Forecasting. New York: Cycle Research Publications, 2002, p. 111.
MATERIALISM & FAITH: u-o

The South Sea story is bound up with faith and credit, ruled by o. u rules the other half of the relationship, or debt. u is also the great tester of integrity and material ambition. The u-o dyad therefore describes the ebb and flow of the faith or the credit we invest in material progress. u-o will be discussed in terms of its cycle and then in terms of their separate transits to South Sea Company and the UK. The South Sea Company was formed a few months past an opening u D o. u-o's 36-year cycle completes a quarter turn in about nine years, so its opposition phase fell due in 1719-1720. Bill Meridian sees u-o as a cycle of deflation.32 All six angles have witnessed examples of economic peaks where price inflation was halted by this planetary pair's deflationary touch. The opposition became exact three times. The old style dates are: November 18, 1719, May 17, 1720 and September 21, 1720. The first direct hit correlated with the peaking phase of the Mississippi bubble and currency inflation in France. The second (retrograde) and final oppositions perfectly bracket the South Sea top, the former kicking off the final phase of extreme price appreciation while the latter marked its complete erasure.

In relationship to charts of the company and the UK during this period, u and o individually and at exact opposition fell three times on UK's MC-IC axis and q. Simultaneously, the opposition also hit South Sea's q/w midpoint, which is conjunct UK's MC. Each successive hit fell more exactly in orb, the final hit the most exact. u would conjunct UK's MC and oppose its q, which received the conjunction from o. In essence the challenges the country faced, particularly the economic problem of the national debt, are captured by u's opposition to UK's q. o A UK's q greatly compromised its ability to meet that challenge. It signifies in a sense too much faith or credit. It undermined the leadership's judgment in the face of South Sea's scheme, and explains why government ministers, MPs and even the royal family itself were so open to corruption by way of the schemers' bribes. In a more general sense, and keeping in mind that o rules UK's 2nd house, o on UK's q put the whole country under South Sea's anesthesia as South Sea directors picked the nation's pockets. Meanwhile the same u S o fell on South Sea Company's q/w midpoint, conjunct UK's MC. I read u's conjunction to South Sea's q/w midpoint as the schemers' ambition via their relationship with the government. u's hits to the midpoint come earlier than o's, beginning in January 1720, while o's first exact opposition to the midpoint holds off until September as the scheme collapses with the bubble. o's hit to the midpoint in the fall therefore signifies the dissolution of the company's relationship with the government in the face of deceit and scandal. As mentioned, South Sea Company was formed a few months after the u D o, leaving it out of orb by nine degrees, the same number in terms of years for a quarter cycle of u-o's transiting cycle. This means that Solar Arc o would complete a square to u in nine degrees, or years. Its exact completion within days of the midsummer high stopped the advance as public credulity turned to disillusionment.

32. Bill Meridian. 2002, p. 80.
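For readers who want to check the timing arithmetic above, the two claims (a quarter turn of the roughly 36-year u-o cycle takes about nine years, and a Solar Arc direction at roughly one degree per year matures nine degrees in nine years) reduce to simple division. The Python sketch below is purely illustrative; the function names are ours and belong to no astrology library:

```python
# Illustrative arithmetic for the timing claims in the text.
# Assumption: the Solar Arc convention advances every point
# roughly one degree per year of life (degree-for-a-year).

def quarter_cycle_years(full_cycle_years: float) -> float:
    """Years for one quarter turn of a planetary cycle."""
    return full_cycle_years / 4.0

def solar_arc_due_year(start_year: int, separation_degrees: float) -> int:
    """Year a Solar Arc direction completes, at ~1 degree per year."""
    return start_year + round(separation_degrees)

# The u-o cycle of ~36 years completes a quarter turn in about 9 years,
# carrying the opening square of 1711 to the opposition of 1719-1720.
print(quarter_cycle_years(36))          # 9.0

# South Sea Company (1711) formed with SA o nine degrees shy of the
# square to u: the square completes nine years later, in 1720.
print(solar_arc_due_year(1711, 9))      # 1720
```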
Rewinding astrological analysis to late 1719: France looked like it was about to become the dominant world power, not through war but through finance, thanks to the apparent success of Law's scheme. England was beside itself with envy and at its wits' end about what to do, hamstrung as it was with an enormous government debt accrued since 1711. The political pressure to do something really, really big drove negotiations between the company and the government in late 1719. By the time the King returned from Hanover in mid-November a proposal to reduce government debt had been hammered out and would be included in the King's opening address to Parliament on November 23rd.
The dominant Solar Arc for the UK in late 1719 was i = k: "Reform as a national goal; a new legislative platform calling for correction of old policies and people."33 On the exact day of the King's speech to Parliament which introduced the debt reduction scheme hammered out between the company and the government, UK had the Solar Arc o = y/j: "Living with hope and speculation; dreaming…", the perfect kick-off to UK's o on q experience. Meanwhile, South Sea Company's major Solar Arcs for late 1719 include:

• “ = e/j "Becoming prominent and influential; putting one's personal stamp on something."
• y = i/o "Good luck coming out of nowhere; rejuvenation of the spirit."
• w = j "Focus on personal needs and relationships to fulfill them."

33. Michael Munkasey. Midpoints: Unleashing the Power of the Planets. San Diego: ACS Publications, 1991, p. 328.
THE RELIABLE REWARD CYCLE: y A q

Noel Tyl describes y's twelve-year cycle to an individual's natal q as the reward cycle.34 Working toward one's goals at other times may or may not be rewarded, but preparation in gear with this cycle invariably is rewarded when y is conjunct one's q. South Sea Company was incorporated at the time of a q D y, y ahead of the q zodiacally. The company's "ship" was therefore due to come in exactly ¾ of a reward cycle, or 9 years, later: 1720. Furthermore, y to South Sea's q meant it was conjunct UK's 8th house w simultaneously. Good news for the company had to be good news for UK's debt burden problem as well, signaling the public's "era of good feeling" sentiment!
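The reward-cycle timing reduces to a simple proportion: incorporated with y 90º ahead of the q, the company had to wait three quarters of an idealized 12-year cycle for the conjunction. A purely illustrative Python sketch (the function name is ours, not from any astrology library):

```python
# Illustrative arithmetic for Tyl's 12-year "reward cycle" of y
# (transiting Jupiter) returning to conjunction with the natal q.

JUPITER_CYCLE_YEARS = 12.0  # idealized cycle length

def years_until_conjunction(lead_degrees: float) -> float:
    """If y sits lead_degrees AHEAD of the natal q zodiacally, it must
    travel the remaining (360 - lead_degrees) to reach conjunction."""
    return (360.0 - lead_degrees) / 360.0 * JUPITER_CYCLE_YEARS

# y 90 degrees ahead at incorporation (1711): the conjunction falls
# due three quarters of a cycle, i.e. nine years, later, in 1720.
print(1711 + years_until_conjunction(90))   # 1720.0
```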
FIRST PASS: y CROSSES SOUTH SEA'S q (& UK'S w)

There would be three passes of y to South Sea Company's q and UK's w. The first was on October 23, 1719, followed quickly by y's opposition to South Sea's MC on November 2nd. This was the time that South Sea Company and government negotiators were finalizing details of the debt reduction scheme that would be included in the King's opening speech to Parliament on November 23rd. The ambitious scheme sought to convert the entire national debt, amounting to 31 million pounds sterling, into South Sea shares. The reward cycle brought more good news late in the year when peace was once again declared with Spain on good terms, breathing new life into South Sea's trading hopes!

South Sea's proposal was formally offered to Parliament on January 22, 1720 by John Aislabie, the current Chancellor of the Exchequer. Not only did he argue in favor of the proposal, he made a quick trade on South Sea's stock based on his inside information of the proposal's timing,35 but one instance of the huge scale of government corruption during the bubble year. Formal proposal sparked broad public interest in South Sea shares, which began to move. The increased public interest is signified well by Solar Arc w's (public) degree-for-a-year contact with the Descendant to oppose South Sea's Ascendant (w = j). Meanwhile, SA o's opposition to 5th house r-t (o = r/t) spiced up the speculative juices of that public as South Sea shares took on more allure. England's market was just about to heat up for another reason as well. Mid-January brought news of a breakdown in Law's Mississippi bubble. Capital would start to flee from Paris to stoke the building speculative interest in London. The proposal was offered six days shy of a Solar Eclipse on the 28th. Things initiated within proximity to an eclipse invite unexpected outcomes, or at least encounter a kink in planned outcomes. Sure enough, one day before the eclipse the Bank of England, rival to South Sea Company and its banker, Sword Blade Bank, made a counter offer to Parliament for its own debt reduction scheme. The eclipse at 19º b fell on the Bank of England's w, motivating their offer; at the same time it opposed UK's y A “ and squared its q, forcing it to consider matters. Blunt had no choice but to sweeten his offer four days later. South Sea's reward cycle prevailed. Blunt's sweetened offer took the House by storm February 2nd. The bill would stay before the House of Commons for two months. A Full Moon lunar eclipse at 4º 33' h fell on South Sea Company's 3rd House i A “ on February 11th. Blunt got busy in mid-February bribing MPs with SSC stock options in return for their consideration for the bill's passage. A similar bribe was also made to friends of the Royal Family, including George I's mistress.36

34. Noel Tyl, 1994, pp. 496-7.
INGRESSES OF 1720 PACE THE BUBBLE

The 1720 bubble marched so closely to the beat of the seasons that John Carswell's chapter titles divide the action up by season. Of course most students of the markets are aware of the downward seasonal bias into the fall from a summer top, but price action in this bubble slavishly moved in lock step with each of the q's seasonal turns: shortly after the a Ingress South Sea's share price springs ahead into a parabolic curve right into Midsummer's f Ingress. From there a corrosive decline draws South Sea lower, falling back finally to a near-vertical crash whose midpoint centers on the Autumnal equinox in z. The decline continues into a low for 1720 virtually to the day of the Winter Solstice in ¦.

Shortly after the Aries Ingress of March 10 the bubble began its inflation in earnest. The a Point was conjunct South Sea's MC, as it naturally would be every year; only this year y directly opposed the a Point and South Sea's MC. The company's prominence during the coming year was clinched. Shares surged following March 19 reports of setbacks in France for John Law. During the second reading of the debt conversion bill March 21st, just a day before retrograde y's exact opposition to the company's MC, supporters fought off Walpole's arguments to hamstring the company with wording specifying fixed terms of conversion.

35. Malcolm Balen. The Secret History of the South Sea Bubble, The World's First Great Financial Scandal. London & New York: Fourth Estate, HarperCollins, 2002, p. 77.
36. John Carswell, 2001, p. 96.
SECOND PASS OF y OVER SOUTH SEA'S q (& UK's w)

On the day of retrograde y's 2nd pass over South Sea Company's q, April 7th, an exact partile conjunction (!), South Sea Company's debt reduction scheme received royal assent. On April 14th Blunt held the first "money subscription". This was technically illegal, as money subscriptions should have followed subscription by debt holders. This was the intake phase of Blunt's "financial pump". A week later he offered loans to subscribers for share purchases, the output phase of his pump. At the same time he announced plans for a midsummer dividend of 10%. A second money subscription followed on April 30. Blunt arranged a well-orchestrated propaganda campaign during conversion week (beginning May 19). Favorable news stories were seeded in newspapers, and rumors were strategically spread by company personnel as they made their coffee house rounds.37 By now stories of conspicuous consumption, a hallmark of most bubbles, began to spread as luxury goods, estates and carriages (think BMWs) were bid higher. "Unlike France there was not set off an inflation of cash, but one of credit,"38 resulting in a land boom.

Few Men, who follow Reason's Rules,
Grow Fat with South-Sea Diet;
Young Rattles, and unthinking Fools,
Are those that flourish by it.39

37. John Carswell, 2001, p. 126.
38. John Carswell, 2001, p. 131.
39. From: Edward Ward. A South-Sea Ballad.
THIRD & FINAL PASS OF y

y's beneficent and expansive energy lifted share prices relentlessly higher in a speculative frenzy up to near their peak by y's 3rd and final conjunction with South Sea's q on June 14. y opposed the company's MC on June 30, one day shy of the all-time high recorded by Castaing's share price listings on July 1st. The advance would immediately begin to wither as y separated from these favorable aspects, mainly because the building undertow of inhibiting transits and solar arcs quickly seized control of public sentiment.
[Tri-wheel chart. Inner wheel: South Sea Company, 0:00 AM LMT, 10th September 1711 OS, London. Middle wheel: q f Ingress, 9:56:15 AM LMT, 10th June 1720 OS, London. Outer wheel: Market Top, 12:00 PM LMT, 1st July 1720 OS, London.]
Once again the advance was drawn higher into a seasonal marker, the f Ingress of June 10. Even as y's conjunction to South Sea's q marks an exhilarating peak, the Ingress chart foreshadows the disaster that lies just ahead. Ingress w, tightly quindecile Ingress q, applies to a conjunction with South Sea's w and UK's ASC. The quindecile (165º) is an aspect of upheaval, obsession, and separation. A disconnect between public expectation and the implicit promises of government and company was about to break out. Ingress t in f conjunct the Company's ASC would soon bring public acrimony down on company directors. Meanwhile, Ingress o conjuncts UK's IC (21º 29' s) to dissolve the country's moorings in the face of gathering storm clouds.

Our mid-summer astrological cluster of measurements also includes one of 1720's six eclipses. In fact the mania's high as recorded by Castaing's records (July 1st) fell one week prior to the 4th eclipse of 1720 (July 8), yet another example of events encountering unexpected outcomes when falling near eclipses. This eclipse's significance is obvious: this Full Moon lunar eclipse at 27º ¦ fell within 1º of South Sea's ASC-DSC axis and less than 3º from UK's q/w midpoint.

Concerned about competition with rival bubble companies, South Sea Company supporters had lobbied for the enactment of the "Bubble Bill", which would restrict legality only to companies granted a Royal Charter. The bubble had inspired many "IPOs" of unchartered joint-stock companies. On June 11 the Bubble Act received royal assent. A 3rd money subscription followed y's conjunction to South Sea's q by one day.
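The quindecile mentioned above (a 165º separation between two points, within some orb) can be expressed as a small angular computation. The sketch below is illustrative only; the function names and the 2º orb are our assumptions, not taken from the article or any astrology package:

```python
# Minimal sketch of testing an aspect such as the quindecile
# (165 degrees) between two zodiacal longitudes, within an orb.

def angular_separation(lon_a: float, lon_b: float) -> float:
    """Smallest angle between two ecliptic longitudes (0 to 180)."""
    diff = abs(lon_a - lon_b) % 360.0
    return 360.0 - diff if diff > 180.0 else diff

def in_aspect(lon_a: float, lon_b: float,
              aspect: float = 165.0, orb: float = 2.0) -> bool:
    """True when the separation falls within `orb` of `aspect`."""
    return abs(angular_separation(lon_a, lon_b) - aspect) <= orb

# Points 166 degrees apart lie within a 2-degree orb of the
# 165-degree quindecile; points 150 degrees apart do not.
print(in_aspect(10.0, 176.0))   # True
print(in_aspect(0.0, 150.0))    # False
```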
THE TIDE TURNED

The major Solar Arc measurement that pricked the bubble just as transiting y's beneficent influence ended was o = u, exact on June 19th: "Losing focus of ambition; depression; sense of being wronged." This Solar Arc was bracketed by two others. On June 2nd there is q = u/“: "Threat of loss; hard work; enforced change; separation…" and on July 8th there is r = u/“: "Troubled by the loss of love."

But should our South-Sea Babel Fall,
What Numbers would be Frowning,
The Losers then must ease their Gall
By Hanging or by Drowning.40

A feeling of apprehension built as news was received of French mob action and violence at John Law's Bank on July 7th. Anxiety deepened further with news of an outbreak of bubonic plague in Marseilles July 20th. This plague would eventually kill 50,000 in France in this year, which witnessed a u S o, a planetary pair often related to the emergence of new diseases and epidemics. It is also interesting that financial crises that move from one nation to another are often likened to a spreading disease. The most recent example in our time was the "Asian contagion" outbreak of 1998, when international capital flows exited one Asian market after another to bring down their stock markets and economies.

As share prices eroded, the Bubble Act was invoked (August 18) against competing bubbles, ostensibly to support South Sea Company in preparation for a fourth money subscription on August 24th. Bubble companies' share prices collapsed as a result. What Blunt and company had not foreseen was the resulting cash crunch, as traders who owned bubble company shares on borrowed money had to sell South Sea shares to cover their shortfall. Besides, even had the Bubble Act succeeded, the scheme still suffered from a cash drain as capital followed the focus of speculation to Amsterdam and other European centers, just as it had originally moved from Paris to London. Blunt then tried to bolster confidence with the announcement of a large dividend on August 30. But the time of willful suspension of disbelief had passed. Now in a more realistic frame of mind, shareholders saw the improbability of such a dividend and began to sell in earnest. A General Court of South Sea Company investors was hurriedly called to meet at Merchant Taylor Hall on September 8th. Cheerleading speeches that had worked their magic in the past now rang hollow. Instead these efforts kicked off a stupendous crash, described well by the two solar arcs of August 24th, “ = w/o: "Emotional shock, upheaval, change," and September 5th, l = w/“: "Emotionalism disrupts teamwork and getting along with others." Blunt's reputation and power over fellow directors quickly collapsed with England's paper fortunes. Emergency pleas by South Sea Company directors to the Bank of England for help resulted in an agreement on September 18th to support shares at a price of 400. This halted the slide for only a couple of days. A bank run on South Sea's financier, the Sword Blade Bank, forced its failure on September 24. The Bank of England had finally defeated its rival. The Solar Arc i = w/t, exact on October 3rd, reads: "Intensified temperament; eruption; anger; arguments leading to emotional wear and tear." On October 18th another meeting at Salter Hall nearly turned into a riot by disgruntled shareholders.

40. From: Edward Ward. A South-Sea Ballad.
All decorum in this gathering disappeared when:

a gentleman who, 'tis said, had lost a great deal of money and was perhaps a little in Liquor, cry'd out that the Directors were a Cabal of Sharpers and that he knew them to be such. He added, Must we compound our debts with them? Where will be the Honour of our King and the Sanction of our Laws in Parliament?41

A Sheriff, fearing that the Monarch's name was about to be impugned, at this point stood up and threatened to read the Proclamation against rioting if the meeting did not immediately disband. On November 9th the Bank of England withdrew from its earlier agreement to support South Sea stock. Even the King's return from Hanover in light of the deteriorating conditions across his realm did nothing to stop South Sea stock from falling to 135 in early November.

41. Viscount Erleigh. The South Sea Bubble. Manchester: Peter Davies Limited, 1933, p. 124.

At length corruption, like a general flood
(So long by watchful ministers withstood),
Shall deluge all; and avarice, creeping on,
Spread like a low-born mist, and blot.42
BLAME GAMES & REPRESSION

Following a bubble's popping, financial distress and disillusionment cry out for political scapegoats to carry all the blame, despite the mutual collusion behind most victim-and-victimizer relationships (o). Finally, authority (u) punishes offenders and tightens regulation of markets, effectively ending the phase of technological innovation. Part of the paradigm of every bubble is the post-bubble investigation and witch hunt. On December 8th Parliament met and launched an investigation that would come close to bringing down the government. By early 1721 it uncovered widespread corruption and fraud among the directors, company officials and government officials. Robert Walpole used the crisis to put into place a plan for the rescue of the nation's credit. This maneuver would advance his own political career, eventually making him England's first Prime Minister. Just as in our own times, 1720's bull market heroes now became targets of investigation and invective. Some despondent losers committed suicide while others sought revenge. Mobs attacked company directors. The acquittal of England's Secretary of Treasury on charges of misconduct, secured by Walpole, triggered rioting and arson throughout the city. The sending of one director to the Tower was celebrated with street parties and the building of bonfires. Politicians were accosted and booed in the streets. Attacks by pamphleteers and critics brought on government repression and bans on theatrical and other satire.43 Due to Robert Walpole's skillful manipulations, retribution was mostly visited on his political enemies and South Sea directors, many of whom had their property confiscated while others were thrown in the Tower. After throwing enough sacrificial victims into the maw of an enraged populace, he then gradually put into place a grand cover-up, a firebreak to protect what remained of government and Crown.

His biggest headache had to do with South Sea Company's treasurer Robert Knight, who escaped across the channel with key ledgers in his possession. So incriminating were these records that his capture threatened to bring down the entire government, including the King. Some of the amazing details of this story remained hidden until recent times, not even making it into John Carswell's first edition of The South Sea Bubble, published in 1960. Historical research can now document the great lengths Walpole went to for damage control. Following Knight's capture by the Austrian government, Walpole and Parliament continuously issued demands to that government for the return of the miscreant to England. Meanwhile Walpole sent secret correspondence to the Austrian Emperor offering various political concessions and favors to do precisely the opposite: to publicly refuse extradition and eventually to actually arrange Knight's "escape" from prison! Interestingly, a similar situation in France probably saved John Law.44 The French Regent had genuine affection for John Law even as the country increasingly called for his head, but he was under severe pressure to turn Law over to investigators. Law was increasingly worried about his own physical survival as his system imploded about him. As in England, however, the government realized that other hands than Law's had been on the printing presses. They realized that John Law was one of the few people who knew where all the money that had been printed had gone. Under torture he might implicate others. So his safe exile from France was arranged. Both Law and Knight would wander Europe safe from prosecution because of what they knew and whom they might implicate.

The result of the crash of 1720 was default and misery, spread widely and touching every class and locale. Though advances made in credit and a paper currency system represented real progress from the largest human perspective, financial affairs would for a time regress to those of an earlier generation.

42. Alexander Pope. Moral Essays. (EP III, I. 135)
43. South Sea Bubble Riots.
Carswell likens the collapse in credit to what would happen today if bank notes ceased to be legal tender.45 Ship building and home construction ceased. Coal prices declined unseasonably. English coffee houses in Amsterdam were burned by infuriated Dutchmen. In France the collapse of John Law's system forced investors there to liquidate their overseas holdings, effectively spreading the pain internationally. Holland's Dutch West India Company collapsed with great speed. The French experience with paper currency ended in bonfires. Such a bitter legacy was left in France that banks came to be deeply distrusted. It was not until 1793 (i S “) that France again adopted a paper currency system. In England the reaction against joint-stock companies embodied in the Bubble Act would suppress the evolution of limited liability corporations for decades, arguably retarding the Industrial Revolution's potential.

44 Janet Gleeson. Millionaire: The Philanderer, Gambler, and Duelist Who Invented Modern Finance. New York: Simon & Schuster, 1999, p. 232.
45 John Carswell, 2001, p. 160.

Gross: Neptune & the South Sea Bubble

The long term record of stock prices shows the resulting bear market in share prices would last over half a century, the longest downturn for which we have statistical records. Not until the 19th century would the former pace of advance resume. Only then would the instinct of timidity acquired in only a few tumultuous months finally fade and the Industrial Revolution gather steam to begin another long cycle.

On the other hand the South Sea debacle led to significant financial innovations we depend on today. For example, economist Larry Neal views the crash and aftermath of the South Sea bubble as the route by which a key ingredient of the evolving capitalistic system came to be. The crisis in the wake of the bubble led to the creation of the perpetual annuity, a financial innovation that "…completed a structure of marketable, liquid financial instruments for the British government that proved its worth in each war for the next two centuries."46

South Sea Bear Market47
South Sea Bubble 64 Year Bear Market
Graph Courtesy of Foundation for the Study of Cycles
46 Larry Neal. "How the South Sea Bubble Was Blown Up and Bust", in Eugene N. White, ed. Crashes and Panics: The Lessons from History. New York University, 1990, p. 53.
47 Economic History Illustrated. Foundation for the Study of Cycles. Wayne, PA, 1996.

The astrology of the South Sea Bubble on balance argues in favor of crowd psychology and emotionally-based decision making, not the rationality academic economists insist upon. This is clearly shown by y's transit over the UK's w and o over its q throughout the first half of 1720. It seems to me more plausible that choices are rational, but that rationality is relative to the larger imperative of the mass psychology of the times.48 A mania therefore represents rational decisions within a "foolish" context or zeitgeist. A depression represents rationality in the face of fear. It is not so much that economists are wrong to assume that consumers or investors seek advantage in their choices, but that net social mood affects, for example, how much risk they are willing to take. At all times our social herd inclination affects choice.

The irrational, whether it manifests as fear or as foolishness, is an anarchic force. Rational decision makers would not make the same historical mistakes twice. The circularity of history argues for an irrational imperative, even if from the standpoint of history these forces may be said to have provided the impetus to evolve otherwise rational institutions and technology. Speculative bubbles seem to follow the same playbook every time. Perhaps the astrology is saying that times of innovation (i) require a temporary relaxation of conservative, critical judgment (u). Visionary expectation (o) invariably gets ahead of itself and results in a heightened vulnerability to fraud (o), the price of progress. Aside from granting vision, o usually gets a bad rap, but Noel Tyl insists that bewilderment—another of his favorite key words for o—is necessarily present at the cusp of any critical change: "[Bewilderment] is part of the natural process as the indeterminate emerges from chaos and tries to take form and meaning in our life. Bewilderment is necessary before illumination. We don't know until we do."49

Despite the lessons learned by the example of historic manias, the memory of the experience always fades or the participating generation simply passes away. And even were this not so, an irrational imperative would eventually bring on a similar developmental sequence. All is flux, change. And the medium of change is the cycle, cycles within cycles.
History repeats because it must, and we are surprised every time.
48 I am basically persuaded by the tenets of Socionomics, a new sociology whose originator, Robert Prechter, demonstrates the primacy of mass psychology in all historical change. Mass psychology follows waves that adhere to a fractal-like geometric imperative based on the Golden Section. It is not the reigning scientific paradigm, but it is scientific. Astrology is by comparison "something completely different," either outside any possible scientific order or at any rate elusive to scientific grasp. Yet, if I may paraphrase Noel Tyl, neither is Astrology something to believe in; it is rather something to know about.
49 Noel Tyl. 1994, p. 122.
The Arrest of Saddam Hussein NICOLE GIRARD
I HAD worked earlier on Saddam Hussein's chart and concluded that 8:55 AM local time on 28th April 1937 at Tikrit was correct for his time of birth, but then stopped looking at it. His arrest on 13th December 2003 reawakened my interest.
Figure 1
Inner wheel: Saddam Hussein, 8:55 AM LMT (05:55 UT), 28th April 1937, Tikrit, Iraq: 34N36, 43E42
Outer wheel: Arrested, 8:30 PM LMT (17:30 UT), 13th December 2003, Tikrit
On the day of his arrest the only three difficult aspects in his natal chart, the components of his r-y-“ T-square, were all present in the sky:
28th April 1937        13th December 2003
r 21º D “ 26º          r 20º C “ 20º
r 21º D y 26º          r 20º F y 18º
y 26º S “ 26º          r 18º D “ 20º
Saddam Hussein's solar return (see Fig. 2) has u in d close to the Ascendant, an indicator of possible limits on his freedom and perhaps jail, with the l and ^ in the 12th house and the 12th ruler, e, on the cusp of the 12th.
Figure 2:
Solar Return
8:27 AM LMT (05:27 UT) 28th April 2003 Tikrit, Iraq: 34N36, 43E42
w A r in a in the 10th house of the solar return, F y, cannot be considered a good aspect because of the natal r D y—y here rules the 7th house of open enemies and is opposed by t and closely squared by the q in a difficult T-square. o in the 9th house close to its cusp and square to the s bodies (o = q/e) is an indicator of foreigners, possibly spies, while i, also in the 9th, suggests unexpected difficulties with them (with US forces). At the time of this solar return the mean – is A q. At the time of Hussein's capture the true – at 24º c was conjunct the transit q at 20º c. The – always plays with karma; culpability makes a spot on the q.

The secondary progressions calculated to the day of Saddam Hussein's arrest (see Fig. 3) are more significant, with 29º 53' g at the progressed Ascendant (by Naibod in RA), where it is opposed by transit i at 29º 24' b. The q is still his Ascendant-ruler, but only just. The progressed Ascendant is about to move into h, a sign of a much smaller, humbler existence. g is departing; there will be no more king-like living. In h his ruler will become e, which is D u by secondary progression—he will be deprived of his freedom of movement, and may also experience a nervous breakdown.
Figure 3:
Secondaries
at 13th December 2003
The secondary MC has come to conjunct secondary r, the most difficult of Hussein’s natal planets, at 25º s (under the baleful gaze of Algol) and r A k aspects the y S “. Thus the secondary MC is aspecting Hussein’s three ‘bad’ planets and does so under the influence of ‘the severed head’ Algol, the star unanimously agreed to be ‘the most malefic star in the heavens’. To say the least, Saddam Hussein is experiencing ‘bad fate’ at this time. The progressed w is closely conjunct progressed i in the progressed 9th house, the area of life that represents his main problem: foreigners. The conjunctions and oppositions the secondary w makes as it moves around the wheel in its 28-year cycle through the signs invariably time major events in an individual’s life and here we see that its conjunction with secondary i does just that. w A i by progression need not normally indicate an unpleasant event but it does so here because r, the ruler of s, the sign in which this progressed conjunction occurs, is so badly afflicted. There is a wide q D u by progression which becomes particularly meaningful when we realize that transit u at 11º 15’ f was crossing the place of the secondary q at the time of Saddam Hussein’s arrest.
These tight and very appropriate secondary progressions confirm the correctness of 8:55 AM local time (05:55 UT), or very close thereto, as the moment of Saddam Hussein's birth in 1937. Hussein's long rulership over Iraq may be related to his natal q in the 8th degree of s being conjunct the 8º 19' s MC of the 14th July 1958 chart (7 AM LMT at Baghdad) for the Iraq Republic (see Fig. 5). It was however a surprise to read Volasfera's symbol for Hussein's s placement in light of his appearance at the time of his arrest, which occurred close to the River Tigris. Volasfera wrote for this 8th degree of s: "An old man, poorly clad, stands by the side of a river, from which he collects bits of wood and straw with a rake." This symbol had no obvious meaning during Hussein's years in power but it is appropriate now.
Figure 4:
Diurnal
8:55 AM LMT (05:55 UT) 13th December 2003 Tikrit, Iraq: 34N36, 43E42
Hussein’s diurnal chart for the day of his capture (see Fig. 4) has $ rising with the Ascendant at 16º ¦. $ is said by some to relate to wisdom, science and kindness, none of which attributes are usually associated with Saddam Hussein. However, the mythological $ did live in a cave, in a hole, which is where Saddam Hussein was found that day. At Hussein’s birth, $ was located in his 12th house, an indication perhaps that his life began in a retired, restricted space; perhaps its placement in this diurnal indicates that now wisdom will come to him.
Figure 5:
Republic of Iraq
7:00 AM LMT (04:00 UT) 14th July 1958 Baghdad, Iraq: 33N21, 44E25
The chart for the Republic of Iraq (see above) resounds well with this event. At the time of the capture the w was transiting 15º g, the Republic’s Ascendant degree and the place of its e. Transit q and “ were crossing the Republic’s u and opposing its w A r; transit o was opposing the Republic’s rising i; and transit u was opposing the Republic’s ^.
Will I be Able to Join the Society? RUTH BAKER DTAstrol. QHP. CMA
THE QUERENT had applied for membership of a prestigious society, but was very uncertain as to whether she would qualify for acceptance.
8:48 AM GMT, 1st April 2002 51N48, 1E09
t hour, w day
She is signified by the Ascendant ruler, e, and the w is her cosignificator. e is in the 11th house of societies and of her hopes, but is very weak, being combust and peregrine—obviously the querent is not very optimistic about her chances. The society she wishes to join is represented by the 11th house ruler, y. y is angular and very strong in its own exaltation and term. Clearly, this society holds itself in high esteem. The q, natural ruler of authority, is also in the 11th and represents the person who will probably have the final word on the querent's acceptability. The q is not only exalted, but also in its own triplicity and face. This society is certainly a very high-ranking one—in its own estimation too.
Baker: Will I be Able to Join the Society?
The position of y in the 1st house suggests that the society will approach the querent (emplacement). The w and y are in mutual reception by sign as well as in mixed reception. The querent's significator, e, is in the exaltation and triplicity of the q—another hopeful sign, especially as the q rules the end-of-matter 4th house. Although e is combust, it is fast moving and heading for cazimi, the very heart of the 11th house q. With e so weak it seemed to me that the querent was putting far too low a value upon herself, and that far from being rejected as she feared, she would receive a very positive invitation to become a member. e applies by square aspect to y, showing that there may be some initial difficulty to be overcome, but the w applies to a nice trine aspect with e in the 11th. These factors confirm a positive answer. As an added bonus, r, positioned in the 11th house, is the strongest planet in the chart. ^ is in the 10th house (within 5º of its cusp) of honor and preferment.1 Every planetary significator in this chart is fast moving, which means that the querent should not have to wait long for a decision to be made.

When I gave her this affirmative answer she looked at me with what I can only describe as an extremely skeptical expression. Nevertheless, only a few days after the question was asked (w to e in 4º) she received a very cordial invitation to become a member of the society. Unfortunately, though, she was disappointed to find that she had an engagement on the suggested interview date which could not be cancelled or postponed (e D y), but another date was arranged and the querent was formally accepted, much to her delight.
Essential Dignities
Planet: q w e r t y u ^
Sign: t y t r r w e u
Exalt, Trip, Term, Face: q q r q q y e q q y t w r r e w r y u y t y r u y t u r e w
Status: Exalted, Peregrine, Detriment, Exalted
MR y. w from S t to F e

1 Christian Astrology, p. 55.
More Time Twins KEN GILLMAN
Beniamino Gigli and Lauritz Melchior
TWO operatic singers who were renowned throughout the world in their day. They were diurnal babes, born the afternoon of Thursday 20th March 1890, an hour or so before an equinoctial new w, one in Italy and the other in Denmark. Their horoscopes differ with respect to the angles, the house positions of the planets, and the sign placement of the q: at the very end of n for Melchior and just over the cusp into a for Gigli.

The Italian tenor Beniamino Gigli had h rising. He was considered the greatest exponent of the art of bel canto (beautiful song, lyrical) singing in the 20th century. He was most popular when performing in the operas of Verdi and Puccini.

The Danish singer Lauritz Melchior was a dramatic tenor, a heldentenor, born with g rising. He was considered the greatest Wagnerian singer of the 20th century. Besides Wagner's operas, Melchior was also famous for his interpretations of the songs of Schubert, the music of Grieg, and the work of the Danish composers Heise and Lange-Muller.

As boys in church choirs both Gigli and Melchior attracted attention with their sensational singing voices. Both men were long-time stars of New York's Metropolitan Opera: Gigli for twelve seasons before he quit after a bitter quarrel over salaries in the early 1930s; Melchior for twenty-four seasons between 1926 and 1950. He had hoped to perform for a record twenty-fifth season but Rudolf Bing, the Met administrator, refused to renew Melchior's contract despite protests from his many fans.
Gillman: More Time Twins
Figure 1:
Lauritz Melchior 12:51 PM LMT (12:01 UT) 20th March 1890 Copenhagen, Denmark: 55N40, 12E35
Figure 2:
Beniamino Gigli 4:30 PM LMT (15:40 UT) 20th March 1890 Recanati, Italy: 43N24, 13E32
With an exact y F “ in their charts, it is not at all surprising that both tenors did very well financially: at the peak of his popularity Gigli was the highest paid singer in the world, while Melchior received the highest salary given by the Met. Both were heavy men. Melchior, with y angular, was 6' 3" tall, weighed nearly three hundred pounds, had an 18-inch collar and wore size 13 shoes. I don't have similar details for Gigli.

As befits his g Ascendant, Melchior was a great practical joker and he loved hunting and fishing. Shortly before his death in 1973, he explained that there were two major disappointments in his life. One was his rejection by Bing in 1951. The other was that, despite his enormous success elsewhere, he had never been popular in his native Denmark—his 4th ruler, r, is weak by sign and a retrograde i is positioned in his 4th house—his joking with the local media had invariably misfired. Gigli, who was idolized in his native Italy, had t at the IC in c, a sign in which t is always comfortable by virtue of its trine to a. It is informative to see how the exact r F t in these two charts—this aspect is very important in explaining the nature of their identical occupations—manifested in these two contrary ways when the different ends of the aspect are related to the respective ICs.

The tenor is the highest non-falsetto adult male voice. There are two types of tenor: the lyrical tenor, with the highest and lightest tone (Gigli), and the tenore robusto (German Heldentenor, "heroic tenor") with dramatic vigor and pathos (Melchior). Because they were born within a few hours of each other on the same day and achieved immense success in the same specialized field—they were the two greatest tenors of their generation—it is tempting to suggest their two charts contain the astrological archetype of the successful operatic tenor. In both charts r trines an angle: the Ascendant for Melchior, the IC for Gigli.
There are also two sets of tight sextiles, which are more easily seen in Gigli’s chart. There we have l G u G i to the east of the meridian, and t G y G r G “ mainly to the west. e at Gigli’s Descendant forms a mundane T-square with the t S “ on the meridian, and its sesquiquadrature to i forms a bridge between the eastern and western sets of planets.
Figure 3:
Enrico Caruso 3:08 AM LMT (02:10 UT) 27th February 1873 Naples, Italy: 40N51, 14E17
Figure 4:
Luciano Pavarotti 1:40 AM MET (00:40 UT) 12th October 1935 Modena, Italy: 44N40, 10E55
Figures 3 & 4 are the horoscopes of two more superstar tenors: Enrico Caruso and Luciano Pavarotti. They were born 62 years apart so are definitely not time twins. Like Melchior and Gigli they achieved immense success. Do the planetary configurations on 27th February 1873, the day Caruso was born, and those on 12th October 1935, Pavarotti's birthday, echo those present on 20th March 1890, the day suggested as the archetype?

I admit to feeling a distinct sense of awe viewing Caruso's horoscope, such is the legend he has become. Like Gigli and Melchior, he was born during the dark of the w, the w applying to A q each time. Note the similar sign positions of the w, e and r in all three charts: the w and e in n, r in a. Also the q is in n for Melchior and Caruso, and has just left it for Gigli. There is also a very similar u-l tight aspect: the sextile from 28º d to 28º g at Gigli's and Melchior's birth resonates with the exact trine from 28º ¦ to 28º s in Caruso's chart. Again we find r closely aspecting an angle (A IC) and in a tight aspect to y, the trine from a to g.

Pavarotti was also born very close to a syzygy, in his case a full w, and here again the w is applying to the partile aspect. u G i is present again (Caruso had u S i), as is a e-i aspect (not present for Caruso). The connection between these three planets points to the many years of arduous training a successful operatic tenor must undergo in his early years.

We might also consider the planetary placements at the births of the Spanish tenors Placido Domingo and Jose Carreras, partners with Pavarotti in their highly successful The Three Tenors act. Their charts are shown at Figures 5 and 6. Neither Domingo nor Carreras was born during a syzygy of the lights. They do not have r on or closely aspecting an angle, and neither e nor u aspects i. Both do however have close r-y aspects, Carreras the conjunction, Domingo a trine. Are there other connections not previously noted?
Figure 5:
Placido Domingo 10:00 PM LMT (21:00 UT) 21st January 1941 Madrid, Spain: 40N24, 3W41
Figure 6:
Jose Carreras 4:00 AM MET (03:00 UT) 5th December 1946 Barcelona, Spain: 41N23, 2E11
t appears meaningful. Domingo has it conjunct the IC just as Gigli did, and it is in c in both charts. Indeed, t is in c in five of the six charts. Caruso's chart is the single exception—he was born with t in the 10th house in x, closely F e in n.

I don't believe the placement of Pavarotti's o smack-dab on Domingo's rising degree tells us Pavarotti was Domingo's drug supplier. Instead, it suggests Pavarotti has been an inspiration to the younger tenor and, by including him as one of The Three Tenors, has helped Domingo become a media idol. Additionally, this connection may be another clue to identifying the astrological indicators of a successful tenor singer.

Placido Domingo has the 16th degree of h rising. Carreras has nothing in h or n but he was born with two tight oppositions from c to d: t S i and q S l, the midpoint of these oppositions centering on the 16th degree of h-n. Pavarotti has a wide r A o in h, u in n is widely opposed by r, and t in c (F w and G q) squares o.

The ties between these three charts may of course only serve to explain how these three tenors came to form and succeed in The Three Tenors act, but let's see how they relate to the other three charts. Caruso has the w, q and e in n, as does Melchior. Gigli has the w and e in n, and additionally has e closely opposing his n Ascendant. There may therefore be something in this h-n connection, especially involving the second decanate.

In choosing the charts of Melchior and Gigli as examples of similar people born around the same time, there was no thought of identifying the astrological criteria associated with becoming a world-renowned operatic tenor. Yet this can emerge from a study of time twins; the connections we've noted running through these six charts, t in c and the emphasis on h and n, for example, suggest this may be done.
All Saints’ Day, 1755
Figure 7:
Lisbon Earthquake 9:30 AM LMT (10:07 UT) 1st November 1755 Lisbon, Portugal: 38N43, 9W09
PURE, focused destruction and terror—the most destructive earthquake in recorded history shook the city of Lisbon to pieces on All Saints' Day, 1755. The city's cathedrals were packed with kneeling worshippers as the city was hit by a sudden sideways lurch of the earth, now estimated at magnitude 9.0, and shaken ferociously for seven full minutes—the San Francisco earthquake of 1906, by comparison, measured an estimated 7.8 on the Richter scale and lasted less than thirty seconds. The convulsive force was so great that the water rushed out of the city harbor and returned in a wave fifty feet high, adding to the quake's destruction.

When at last the earth's motion ceased, survivors enjoyed just three minutes of calm before a second shock came, only slightly less severe than the first. A third and final shock followed two hours later. At the end of it all, more than 60,000 people were dead (some reports say 100,000) and virtually every building for miles was reduced to rubble. Shock waves from the quake were felt throughout Europe and North Africa. Immense sea waves hit a vast area stretching north to Finland and across the Atlantic to Barbados.
Figure 9:
Solar Eclipse 8:02:43 AM UT, 6th September 1755 Lisbon, Portugal: 38N43, 9W08
The prior solar eclipse in September was just about as bad as it could be. It occurred exactly—'exactly' here means within a minute of arc!—opposite the as-yet undiscovered i and it also squared the place of the not-even-dreamt-of “. These aspects are so tight that we tend to overlook the eclipse's square to t, which makes this a particularly nasty grand cross in mutables. At Lisbon's earthquake “ is rising and w A y is across the MC.

Why Lisbon and why this particular day? y seems to have been the culprit. y is ever out of sorts in h, in its detriment, and such a virulent eclipse in that sign while y was moving cautiously through it can only have made matters far worse—think of it as nervously being on your way to an IRS audit (that's the y in h part) when your car gets rear-ended by a speeding truck (the i involvement), the only copies of your financial records and receipts are destroyed in the resulting fire (t), and you need several months of painful physical therapy (the eclipse in h) before being able to again stand on your own two feet. This is not a happy, smiling y. Now, two months after the eclipse, y has moved to the place of the eclipse's Ascendant at Lisbon, the 6th degree of z, where it culminates as “ rises; the approach of the transit w prompting this massive quake.

The day following Nature's destruction of Lisbon a girl child was born in the royal palace of Vienna, 25º of longitude east of Lisbon. The baby's mother was the Empress Maria Theresa, her father was the Holy Roman Emperor Francis I, and the baby would grow up to become her parents' favorite child, the queen of King Louis XVI of France, be blamed for the corruption of the French court and die at the guillotine at the age of 37.

There is a distinct similarity between the Lisbon earthquake of 1755 and the French Revolution that began in 1789. Both brought about pure focused destruction and terror. Like the earthquake, the French Revolution was sheer uncompromising violence, a phenomenon of such uncontrollable power that it swept away all that stood in its path. These two times of terror are linked by the seemingly innocent girl child who was born the day after the earthquake.
Capricious and frivolous, Marie-Antoinette aroused criticism by her extravagance, disregard for conventions, devotion to the interests of Austria at the expense of those of France, and opposition to reform. From the outbreak of the French Revolution, she resisted advice and helped to alienate the monarchy from the people. She was imprisoned in 1791 and guillotined two years later.
Figure 10:
Queen Marie-Antoinette 8:05 PM LMT (19:00 UT), 2nd November 1755 Vienna, Austria: 48N13, 16E20
At Marie-Antoinette's birth, i, retrograde in n, is conjunct the MC, in a grand trine with the q and r in x and t rising in f. Each of the bodies forming this watery grand trine is in turn severely afflicted by other aspects. i is part of a T-square with the l and “. t is clearly involved in a T-square with u and w A y. And the q (with r) is closely D o. The t position in f, r in x, and e in c all denote weakness, problems in terms of the houses these planets occupy and those they rule. The only planet in the chart that is at all content in the sign in which it is placed is u, which is in the 7th, rules the 7th, 8th and 9th houses, and afflicts her overly emotional t and will destroy her children (w in 5th, D u & t)—a cold marriage, powerful enemies, problems with the money of others, and difficulties in foreign parts.

At the time of the Lisbon earthquake, when pure focused destruction and terror ravaged the Portuguese capital, the w was high in the sky, at the local MC. When Marie-Antoinette's MC came by primary direction (Naibod measure) to exactly oppose her natal w (exact on 16th October 1793) she experienced the pure focused terror of the French Revolution. Dressed in a thin, white negligee, a cap over her shorn head, she was taken to the scaffold riding backwards in a common cart, her hands tied behind. On reaching the platform, she stepped on the executioner's foot. Her last words were "Pardon, sir. I did not do it on purpose."
Uranus Treading On Your Toes PRIER WINTLE
AT 8.53 p.m. Greenwich Mean Time on 10th March 2003 i entered n, which he will be traversing for the next seven years apart from a brief dip back into b between 15th September and 30th December this year. He does it every 84 years, the last entry being on April Fools' Day 1919, the year of the Peace of Versailles after the First World War, the peace which led on so neatly to Hitler and the Second World War. The two previous transits of i in n were in May 1835 and June 1751, when the Industrial Revolution first really got under way.

But there is something unusual about this present entry. For the past six years of i's seven-year transit through his own Sign b, the planet o has been in that Sign too. And o will continue to be there during the whole seven years that i will be in n. So what is the significance of that?

Well, n is o's Sign, just as b is i's, and these two planets will be in Mutual Reception for the whole of the next seven years, each in the Sign ruled by the other. And that gives them extraordinary strength, for any planet that transits either of those two Signs during the whole of those seven years will come under their rulership, since it will be "disposed of" by them. And so will any other planet that happens to be in the Sign ruled by the planet being disposed of. Thus suppose the q is transiting either b or n and that t or y is in the sign the q rules, g. i and o will have power over the q and also over t or y (whichever it happens to be). And it doesn't even stop there. Any other planet that is in a Sign ruled by t or y (that is a or c) will also be ultimately subject to the influence of i and o. In fact it is not impossible for every other single planet to be ultimately influenced by them in this way, as is in fact happening just at the present time as I write this in March 2003.

And what threatens us now?
Is it not a cyber war (i) to which (according to President Bush and Prime Minister Blair) there will probably be retaliation with chemical and biological weapons (o)? And it won't just be one small nation that will be affected. We shall all be under the threat of it. However, that doesn't mean that we can now sit back self-satisfied and say we have
found the astrological reason for war in Iraq! Namely, i entering n! What is it about this particular war threat that fits this particular celestial configuration?

President George W. Bush and President Saddam Hussein happen to lead and personify two different and contrasting attitudes or concepts concerning what life on this planet is really all about. On the one hand there is the American way of life, materialistic and scientific, boasting that it is democratically based and so justified, and claiming that it has the effective answers to all the world's problems, or at least the most efficient intellectual attitude and know-how to work them out for the future. Most of this is Uranian, though an idealistic component owes something to o and a money-based driving force derives ultimately from the planet Minos, ruler of s. Minos is not well known, but its position was determined by the German astronomer, Professor Kritzinger, in 1961. At the present time it is also in n, so it comes directly under the influence of i and o in mutual reception. It explains the way the idealistic American democracy is using all its Uranian scientific know-how to bring the whole world under its financial thumb. With o in the picture, the world's oil naturally occupies a prime position in this effort.

In opposition to this idealistic-materialistic American dream world there stands the Muslim religious world, not yet completely united, but teetering towards unity and basing its world-view upon absolutely rigid and not-to-be-questioned religious dogma. It has less Uranian scientific strength than the American world does, though it will use whatever scientific crumbs it can gather from the rich man's table. It is ruled by o's brand of religious fervor, totally impervious to intellectual analysis or argument, with a touch of “, ruler of the Sign Scorpio—the Sign opposite to Minos's s—added in as well.
“, discovered in 1930, is the planet that motivated the Nazi and Fascist dictatorships of the 1930s and 1940s and their Japanese ally. Traditional ruler of Hades, the realm of the dead, he encourages fanatical Muslims to be prepared to die for their faith, as we see today in the suicide bombers in Israel. The kamikaze pilots of Japanese aircraft in 1945, who were prepared to dive their aircraft, loaded with bombs, down the funnels of allied warships, were exactly similar. America has the edge in a cyber war, but we have yet to see what surprises the Muslim world may reveal in reply. And every apparent or surface American success leads to a surge in recruitment to the underground terrorist movements. But i in n doesn't only affect or rule the political world. He affects us all, privately as well as publicly, and we need to look at that aspect as well. Both i and o are “outer” planets, moving in orbits beyond that of u, who represented the limit known in Classical and Medieval times—until March 1781, when the astronomer William Herschel first spotted i. i
Wintle: Uranus Treading on Your Toes
represents new science and o new intuition (Spiritualism among other things), and both have affected us profoundly. Thus 90% of humanity lived in the countryside before Uranian industry changed things. Now it is the other way round—90% or more live in towns. And o rules the films and the TV culture that hypnotize 99.999% of today's humanity. A mutual reception is like a conjunction of the planets, but whereas actual visible conjunction only lasts a very short time—a few days in the case of the faster moving planets and not more than a month or two even with the slower, outer ones—a mutual reception can last years, as in the present case. And when it does, some things may become explicit and generally accepted which previously were hardly acknowledged at all, or only spoken of behind closed doors. Sexuality and variations of sexuality are one instance of this. i is an outspoken planet and the planet of explicit unorthodoxy, bringing about political revolutions and revolutions in things generally, so in the sexual field as in every other he has been doing away with the wraps that Saturnian orthodoxy had draped around everything throughout the 19th Century and the first half of the 20th Century. This has been particularly noticeable while he was in his own Sign b from 1995 till the present year 2003. Gay (homosexual) culture now advertises itself more openly than it has ever done since Classical Greek times. In the 4th Century BC, Aristotle could write matter-of-factly about small city states that deliberately introduced the love of boys as a remedy for overpopulation problems, but no one else has been quite as explicit and non judgmental about it since then—not even in Rome, although homosexuality was widely practiced there. But we now have Gay marches and Gay Film Festivals. "So there you are", i is saying. But the link with o introduces another aspect. o doesn't like explicit definitions and demarcation lines. 
Why should a man only be a man and a woman only be a woman? What if he feels inwardly he is female, or she inwardly feels she is male? Are they therefore ipso facto 'wikkid'? If God made them with male or female bodies and genitals, must they accept them as God-given limits which must never be argued with? What about hermaphrodites, born with the genitals of both sexes, and what about people born with defective arms, legs, lungs, livers or oesophagi? Must they never have operations to correct these? o rules feelings, and to him it is intensely important if a man feels he is a woman or a woman feels she is a man. On his own he might not be able to do much about it except behind closed doors, or in the psychiatric hospital wards where he rules through his sign n, but now in mutual reception with i it's a different kettle of fish! (Forgive the Neptunian pun.) i will broadcast it all out in the open, and set up groups and societies and advocacy bodies in his own inimitable b 11th House manner, through which transsexuals can both receive support and also advertise themselves, so that others with similar feelings who were previously too scared or shy to acknowledge them, either to other people or even to themselves, can
flock to join them. It's a challenging world, and while George Bush and Saddam Hussein and the Born Again Christians and the Jihadic Muslims fight it out on the open surface, a great deal will be going on beneath that surface which would send them rushing to the toilet if they could only spare a moment to look at it. It is necessary to realize that on his own i is not well suited to the Sign n. He is an extremely masculine, externally orientated planet, absolutely brilliant at defining concepts scientifically so that those who work with them can go on to apply them and produce material results from them in the external world. He does well in masculine Signs and is also pretty comfortable on the whole in feminine Earth ones, s, h and ¦. And he can also get by in the tough Water Sign x. But f and n, especially n, are too indefinite and emotionally and subjectively caring. They take account of considerations that seem to him to be side issues—time wasting and irrelevant. This is why the Treaty of Versailles in 1919 was such a disaster, in contrast to that of Vienna in 1815 after the wars with Napoleon. In 1815 the victorious statesmen sat down and worked out the best solution for the whole of Europe, including France whom they had been fighting, and as a result there was peace between the leading nations until Bismarck rocked the boat with his wars for the unification of Germany in the 1860s and 1870s. In 1919, by contrast, the allied powers (France especially) were consumed with rage and resentment against Germany whom they accused of having been the cause of all the pain and suffering of the 1914-18 war. As a result they tore whole provinces from her and imposed crippling reparations payments which were impossible to pay and simply bankrupted her. Inevitably there would be a reaction, and sure enough it came fourteen years later with Adolf Hitler. And that meant another war—a worse war. Not that you can blame everything on i of course. 
All the other planets were also involved. But i has the faculty to define things and bring them to a head and encourage swift action on current issues. So he certainly played a leading part. What of today then, when we have another war on our hands, in Iraq? The difference is the mutual reception with o. i still wants swift, unilateral action, as shown by Bush's final 48-hour ultimatum to Saddam Hussein—after a year of unsuccessful pushing for UN support. But the way the war goes and its ultimate results will not be what Dr i ordered, even if the initial conflict is over swiftly. There are far too many other considerations this time, and the whole complex Muslim world has become involved, directly or indirectly. It is a vast emotional religious issue, the domain of o. Really it was already that, even if Bush hadn't attacked them and thrown his torch into the powder keg. But now it can only become more and more explicit as Muslims the world over feel identified with Saddam and Iraq, and feel a non-intellectual gut hate for America. There will be emotional-religious terrorism everywhere. Bush
wants to go on and attack Syria next, and Libya, and North Korea. It all seems logical. But religious war is not a matter for logic. It comes in waves, like a Neptunian sea. The winner of it must eventually be someone, or some nation, who can constructively combine both Uranian know-how and decisiveness and Neptunian subtlety and awareness of a hundred different sides to a matter all at once. And since there is this mutual reception, we must also consider o's own individual part in the whole picture. How does he like being in b, and what is he doing in a general sense? o is a much more subtle dissimulator than i, so it will not be so easy to spot when he is not happy in a Sign as when the latter is. What he will usually do is to appear to be doing something totally in the character of the Sign he dislikes while actually working it so that it turns out in precisely the opposite way. In general he gets on well in all the negative Signs, with the possible exception of h, which is too precise and prissy for him. He is in Detriment there. (In a similar way i is in Detriment in g, but he usually manages to bash his way along when he is in that Sign without it troubling him too much. o is more troubled by h.) Very positive Signs like a and d are also distasteful to o, and he is not too happy in c, though as I have said above, you won't always know it. b is an in-between case. As the Sign which is most completely home to i it is associated with learning and libraries and with the general dissemination of specialized information to as many people as possible without consideration of race, color or creed. Here we are not concerned simply with the means of communication of anything and everything, as we are in the case of d. i and b assume you are interested in what they have to offer and they offer it in an 11th House way, as to a friend, though without any deep emotional commitment. 
They just take it that you will be a friend if you are able to look at a matter detachedly and objectively. So how would you expect o to fit in with that? How many people are really able to look at things—any things—with detachment and objectivity? Despite all of i's sincere desire to define and clarify everything scientifically, and change everything, this world remains an intensely emotional place. And that is how o enters the picture. His home Sign is n, the Sign that lives first and foremost by and for emotion. It does not define anything, but it empathizes with everything and everybody. From the point of view of almost all the other Signs (f is a possible exception), it is a great big mess and muddle. But of course that is also a very good description of the state of the world in general today. And o won't change it. He'll empathize with it. He'll enthusiastically go along with i's Aquarian ideal of disseminating advanced information, but it will almost certainly turn out to be another case of the final result being the exact opposite of what i originally expected and intended. And isn't that an extraordinarily good description of the role of the
whole world's news media today? The newspapers, the radio, the TV, the films, the political speeches by politicians of every nation over the whole globe, are all giving us information we need to live on and by, but it is always the information they want to get across, presented in such a way as to appear to be saying what they think we want to hear. It has been going on like this for years, more and more so, but it has been particularly noticeable ever since o entered b on 29th January 1998. Listen to the news coverage of the present Iraqi war, moment by moment, tank by tank, rocket by rocket. We hear both sides. Here are the Iraqis saying that the invading Americans will be tried in Iraqi courts as mercenaries. But always the push is towards what we are supposed to know and expect will be the eventual result of it all, a New World Order, a stage-managed democracy and Minossian financial control of all the world's Neptunian oil. The only reply to that will be a world religious revolution, almost certainly Muslim, also stage-managed by o, with the aid of “—presently in the assertive religious Sign c, natural ruler of the 9th House of learning and spiritual doctrine. Do you think it will happen? Do you think it won't? A mutual reception between i and o only happens very rarely. In 1919 o was in g, not b, when i entered n, and that then intensified the emotional intransigence and assertiveness of France, a g nation. Even when o is in b (as in 1835) it may not be for the whole period that i is transiting n, as now. Religious fanaticism has admittedly existed on its own during many periods of world history, but this mutual reception era is an exceptional one because of the exceptionally advanced science that i has achieved during the years since his discovery or re-discovery. Such unimaginable science and industry has never been known before, unless possibly in some ancient time of which all accepted records have been lost. 
And now it opposes religious fanaticism. What can therefore happen will be equally exceptional—not just “anybody's guess”, for we can all guess and most guesses will be as wide of the mark as most Lotto entries are, but it is not likely to be a complete victory for Science and the money-dominated modern world over the world which confronts it. Nor will any Muslim state, or even a whole, unified Muslim world, be able to achieve a victory over Science, at least at the present time. The future may be different, since the Western world as we know it is actually a dying world. Its birth rate is less than that required even for it to maintain itself at its present size and strength over the next fifty years, let alone a century. By contrast, the Muslim states have birth rates which could enable them to double or treble in size within fifty years. And though they lag scientifically far behind the West, at present they are striving desperately to catch up. Already Pakistan and Iran possess nuclear weapons and only a few of them are needed to cause mass destruc-
tion or even world destruction. It may have happened before. Ancient Indian records such as the Mahabharata contain descriptions of wars that until recently were regarded as pure fantasy but can now be unmistakably seen to be sober accounts of nuclear conflict. And similarly, when the Chinese detonated their first nuclear device 25 years ago, it produced a field of molten glass beads over the sandy area where it exploded. They subsequently discovered an almost identical field nearby which had existed for thousands of years, unexplained. This is the trouble with i. He is absolutely brilliant but he is also a menace, because he doesn't know when to stop. He is like an adult giving children toys that are far, far too dangerous; he doesn't realize they will be hurt by them. It seems O.K. if only a few “responsible” nations have nuclear capability and there is a nuclear non-proliferation treaty that all the others must sign, but how many of them are really going to honor that? Why should Johnny have an air gun and not me? All nations resent others who lord it over them and tell them what they must or must not do, and at the present time hatred of America is worldwide because of her proclaimed superiority and arrogance. Russia and China hate her as much as the Muslims do, and are only restrained from saying so as openly as the Muslims do because they recognize that for the moment they are not quite as strong as she is, and also because they too dislike the Muslims. For the time being it is better to let America get on with dealing with them! But expect chaos to get worse and worse over the next seven years while i is in n. Today it looks bad but it is only a ripple. Soon it will be a wave. i can't stop. If he can't sail placidly on the surface of the sea he'll try a submarine, or an aircraft that can fly just as well under the ocean as in the air. Always he thinks in terms of external equipment and techniques. 
He's really out of his depth in n, but he'll never realize it or admit it to himself. He's had enormous success changing the world since he was discovered or re-discovered in 1781, so why, he asks, shouldn't he go on doing it? And doing it his way, materially, intellectually and externally. The answer, of course, is the mutual reception with o. People are only just now beginning to understand what o is all about. It is like Freud, at the end of the 19th Century, discovering the unconscious mind that no one had ever realized existed before. For the first time psychologists were able to perceive that people say things and do things and live their whole lives for reasons quite different from those they understand and believe in consciously. And so do nations. It is all o working through the 12th House. i is in n, but o rules n. o is in b and i rules it, but actually he only rules it consciously—that is, the tip of the iceberg above the surface of the water which the word b really defines. o is like Noden, the ancient
Celtic god who sleeps a perpetual sleep but dreams the fate of all mankind. Only psychics and mystics can intuit it, for he rules the 90% of the iceberg which is perpetually under the water. The future which we shall experience over the next seven years while i is in n will not be the future that i himself and the scientists and financiers who back him and rely on him in America and the modern Western and Far Eastern worlds generally imagine. It will be a future in which unconscious spiritual influences come to the surface and eventually take over. They will have a Plutonian backing and will be fanatical and cruel. It will be the beginning of a world so changed as to be unrecognizable by comparison with anything we have known over the past 2000 years.
Let’s Consider

Nicole Girard comments on astrological time twins:

Day twins have played a significant role in my life. I had two college friends born the same day as me (7th June) but not the same year. We were each married to men with the q in s. In 1981, during the very same week, the three s men left their three d wives to go away with three other women. At the time an astrologer explained that the transit of i in x had destabilized s. That was the stimulus for me to begin to study astrology.
—Gonneville, France

John Norris writes:

You call people born on the same day time twins; John Addey called them astrological twins. He once quoted the following example in the Journal of the Astrological Association:
“King Umberto I of Italy was introduced to the proprietor of a restaurant in which he was dining and expressed astonishment at their similarity of appearance.
“The proprietor was able to tell King Umberto that they were both born on the same day and at the same time, had married a wife of the same name on the same day, that both had a son called Vittorio, and that he (the proprietor) had gone into business on the day of the king’s accession.
“After hearing of occasions when they had been in close contact in the army, King Umberto learned that his opposite number would be taking part in a shooting contest the next day at which he (the king) was to present the prizes. He expressed the wish to meet him again then.
“But when the time came, news was brought to the king that his twin had accidentally shot himself dead while preparing his gun. Before the king could be taken to the scene of the accident he too was shot dead, by a foreign anarchist.”
—London, England
Fate & Fixed Stars
BARBARA KOVAL, D.F. Astrol. S.
PEOPLE SEEM to think that astrology is based on the notion that the planets and stars cause us to do things, and that we have little or no control over our actions. Not only is that incorrect, it is totally unrealistic given the distance away of all celestial bodies, especially the Fixed Stars. Because of their distance it takes years for even their light to reach us. Stars could be exploding and disappearing daily, but we would not know for hundreds of years. In truth we really do not know if we are looking at celestial bodies or old light. Astrology is about timing, not about material influence. If you think about the natal chart and the timing of events in our lives, we can see that the current light of the planets that we do know exist is "affecting" the "old light" of the heavenly bodies on the natal chart. In such a case one cannot logically or materially influence the other. In terms of influence the stars are like a calendar. In bygone days the housewife always did her wash on Monday. Monday did not cause her to do her laundry. It was the culturally chosen day. In the same manner, but on a longer range, similar to signs, Ramadan in Islam and Lent in Catholicism are times of fasting and prayer. These periods are not the causes of the fasting. They are the periods religiously designated to put the body under the control of the divine spirit. In the financial forecasting of the Stock and Futures Markets you can literally see a price change drastically when an angle is conjunct a fixed star. A futures price can jump as much as four dollars, a great moneymaking rarity. The stars also time important turning points in the stock averages over the year, and often mark highs and lows for the year or for the contract period.
WHILE the important daily contact is with the angles, especially the Midheaven, the longer periods and peaks are defined by the day of culmination. Culmination is when a planet or star transits the MC. The time of culmination is 9:00 PM for the location of the trading arena. Make sure this is Local Mean Time, not Standard Time or Daylight Saving Time. One of the easier ways to calculate a chart for this time is by erecting the chart for Greenwich Mean Time and then subtracting the distance in time from the sidereal time of the event. Once you know the appropriate ST you can play with your locally reconstructed chart. Of course the easiest way on your computer is to use LMT. You will not always get your exact star placement because of two factors. One is the difficulty of knowing the exact star co-
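The Local Mean Time correction described above amounts to offsetting clock time by the observer's longitude, at 15 degrees per hour. A minimal sketch (my own illustration, not from the article; the function names are mine, and longitudes are taken as east-positive):

```python
# Rough sketch: Local Mean Time differs from Greenwich (UTC) by the
# observer's longitude, at 15 degrees per hour (east of Greenwich is ahead).
from datetime import datetime, timedelta

def lmt_offset_hours(longitude_deg: float) -> float:
    """Offset of Local Mean Time from Greenwich time, in hours (east positive)."""
    return longitude_deg / 15.0

def to_local_mean_time(utc: datetime, longitude_deg: float) -> datetime:
    """Convert a Greenwich time to Local Mean Time for the given longitude."""
    return utc + timedelta(hours=lmt_offset_hours(longitude_deg))

# Example: at longitude -74 degrees (roughly New York), LMT runs about
# 4h56m behind Greenwich, so 01:56 Greenwich is 9:00 PM LMT the evening before.
print(to_local_mean_time(datetime(2004, 3, 15, 1, 56), -74.0))
```

This only handles the mean-time offset; finding the sidereal time of culmination itself would still need an ephemeris or sidereal-time formula.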
ordinates, because of the precession of the equinoxes. Because the movement is quite slight over the period of a year it can be difficult to list the correct coordinates. The second problem is that of getting the coordinates exactly in right ascension. The right ascension of the MC (RAMC) is the true contact because it is a measurement on the Equator. The Equator is an earth measurement, and we are looking for an earth event. Celestial longitude is the path of the q; we do not trade and operate on the q. All Fixed Stars have some effect, but not all give the best results for timing. Fixed Stars also define some intense personality or physical problems. A young blind girl's chart had no serious bad aspects to either light, but her natal q was conjunct the Pleiades, which are associated with blindness because they are a nebula. The Pleiades did not cause her blindness, but they did describe it. Each constellation has at least one star that is of major importance. The star often defines the start or end of the area the constellation covers. This is important because we really cannot draw specific lines to encompass the sidereal zodiac the way we can with the tropical. While there is no sense at all to having a physical kind of pressure from the stars, there is some sense to an effect from the planets. Since they revolve so close to us their effect can be similar to that of wind, that invisible movement of air, which we all can feel from a slight breeze all the way up to a hurricane. Einstein posited a theory of grooves in space in which the planets rolled, as opposed to gravity and mutual attraction, but that would not account for the "influence" either. The stars don't move, so they can't even stir up a breeze. One might argue the precession of the equinoxes and their slight movement each year. It is not the stars that are moving but the whole galaxy. It is revolving around a central point, currently about 24° c in tropical notation. 
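The yearly drift the paragraph mentions can be put in numbers: precession carries the equinox backwards, so a fixed star's tropical longitude creeps forward by roughly 50.29 arc-seconds a year, about one degree every 72 years. A rough sketch (my own, not from the article):

```python
# General precession in ecliptic longitude, in arc-seconds per year
# (approximate modern value; assumed here, not given in the article).
PRECESSION_ARCSEC_PER_YEAR = 50.29

def precess_longitude(lon_deg: float, years: float) -> float:
    """Advance a star's tropical longitude by `years` of precession."""
    return (lon_deg + years * PRECESSION_ARCSEC_PER_YEAR / 3600.0) % 360.0

# Example: a star at 203.82 degrees of tropical longitude (about 23 deg 49'
# of z, roughly where Spica was listed in the late 1990s) moves forward
# by about 0.7 of a degree over fifty years.
print(round(precess_longitude(203.82, 50) - 203.82, 2))  # ~0.7
```

This is why a star table's tropical positions slowly go stale, while the underlying stars have not moved at all.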
In converting star positions into right ascension it is extremely difficult to get the exact position on a 360° scale, particularly if the position is given in hours rather than degrees. Although we know the movement in what might be called a distance measurement, its conversion to equatorial coordinates can be hard to assess. The table below gives the right ascension for the stars as of about six years ago. It is still useful to see how different that can be from celestial longitude.
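The hours-to-degrees conversion involved is simple arithmetic: the sky turns 360° in 24 hours, so one hour of right ascension equals 15°. A minimal sketch (the function name is mine):

```python
def ra_hours_to_degrees(h: float, m: float = 0.0, s: float = 0.0) -> float:
    """Convert right ascension given in hours, minutes, seconds to degrees.
    The celestial sphere turns 360 degrees in 24 hours, so 1h of RA = 15 deg."""
    return (h + m / 60.0 + s / 3600.0) * 15.0

# Example: beta Ceti (Deneb Kaitos) at roughly RA 0h 43m 35s
print(ra_hours_to_degrees(0, 43, 35))  # about 10.9 degrees
```

Catalogue values quoted in hours and minutes therefore need this multiplication by 15 before they can be compared with a table given in degrees.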
THE FIXED STARS define not only the constellations, but both the Hindu and Arabian Mansions. The stars in this list are mainly those which define either decanates or mansions. Mansions are approximately thirteen degrees in length, the daily motion of the w. This is not to say that they are rigidly 13°; some are shorter, even by half. One is the degree of only one star, Wega. Many of the problems of defining mansions have come from the attempt to put them in celestial longitude in either the tropical or sidereal system. When mansions were first defined, they were calculated on the equator, not the ecliptic, because they were viewed from the earth and relevant to
Koval: Fate & Fixed Stars
earthly matters. The biggest clue to dividing mansions by right ascension, measurement on the equator, is the one degree mansion between Spica and Arcturus. They are over 12° apart in right ascension. The ancient Egyptians first defined the mansions. Their starting point was Alcyone, one of the Pleiades, in 29° s. This is significant on a couple of counts. Egypt flourished in the Age of Taurus, so it is not unlikely that the first star in the constellation, s, going backwards, should become the fiduciary star. Mansions have a tendency to define good periods and bad within the trading year. Important stars within that system often time tops and bottoms. The decanates also came from ancient Egypt. There are 36 of them. Sirius is the fiduciary star for the decanates. Though it is associated with decanate thirty-six, we may be seeing a backward motion here, too, as the end of the 36th decanate is also the beginning of the first. Sirius was, of course, an extremely important star in ancient Egypt. Its culmination signaled the flooding of the Nile. As to the question of influence, it did not cause the Nile to flood and fertilize the fields, it merely timed it. The decanates also have a tendency to define relatively short trends in trading averages. There is a discrepancy in the decanates. If you notice, decanate 31 is followed by decanate 34. The stars given as decanate indicators did not quite fit the whole 36. It may be that the decanates follow ten-day periods, because their right ascension degrees do not give an even number. There is no D5 either, although Regulus would fit the space, which makes that star even more important.
CERTAIN STARS seem to have a life and power of their own. In ancient times the four Royal Watchers of the heavens were Antares, Aldebaran, Fomalhaut, and Regulus. Each Great Age has its defining star and constellation. In the Age of Taurus the defining point was Alcyone, a star of the Pleiades in the constellation s. Because Regulus' name means regulator, it may well have been the fiduciary star in the Golden Age, when man and gods interacted as told in the Iliad and Odyssey. The fiduciary star of our age is Spica, which is the brightest star in the constellation h, a sign of precision and opposite our defining constellation. The thought occurs that the fiduciary star is the one that should start the parade of Mansions and Decanates, but the complications and interpretations would be extremely difficult in the light of the question of the 1° mansion and its rulership, as well as subsequent ones. If you notice the sequence of rulerships you see that the nodes occupy a position of dividing the sequence into a balance. In the case of the Stock Market they often mark a mid-point in a price move. To move those mansions and rulerships could mean the loss of centuries of observation and experience. To go back to Regulus: it is not only the brightest star in the constellation g, it is also on the ecliptic, so its function as a regulator is timeless. It may well become the fiduciary star of the b Age
Star            Right Ascension  Tropical  Constellation
Deneb Kaitos    10:51            2:31 a    η Cetus
Baten Kaitos    27:49            21:53 a   β Cetus
Alrisha         37:51            29:20 a   α n
Menkar          45:31            14:17 s   α Cetus
Algol           46:59            26:08 s   β Perseus
Alcyone         56:49            29:57 s   η s
Aldebaran       29:03            9:44 d    α s
Rigel           19:31            16:47 d   β Orion
Al Hecka        84:22            24:45 d   ζ Corvus
Betelgeuse      88:45            28:43 d   α Orion
Alhena          99:23            9:04 f    γ d
Sirius          101:15           14:02 f   α Canis Major
Aludra          110:59           19:25 f   η Canis Major
Castor          113:36           20:12 f   α d
Naos            120:52           18:32 g   ζ Puppis
Al Tarf         124:05           4:13 g    β f
North Asellus   130:47           7:29 g    γ f
Acubens         134:35           13:36 g   α f
Alphard         141:51           19:24 g   α Hydra
Regulus         152:03           29:48 g   α g
Foramen         161:51           22:06 g   η Carina
Zosma           168:29           11:17 h   δ g
Labrum          169:48           26:40 h   δ Crater
Denebola        177:13           21:35 h   β g
Zavijava        177:39           27:08 h   β h
Porrima         190:22           10:07 z   γ h
Spica           201:15           23:49 z   η Ursa Major
Arcturus        213:52           24:12 z   α Bootes
Seginus         217:59           17:38 z   γ Bootes
South Scale     222:40           15:03 x   α z
Kochab          226:40           13:17 x   β Ursa Minor
Unukalhai       236:01           22:03 x   α Serpens
Dschubba        240:02           2:30 c    δ x
Antares         247:18           9:44 c    α x
Sabik           257:32           17:56 c   β Ophiuchus
Aculeus         263:20           24:33 c   M6 x
Acumen          268:27           28:43 c   M7 x
Kaus Medius     275:11           4:33 ¦    δ c
Facies          279:02           8:16 ¦    M22 c
Wega            279:12           4:33 ¦    α Lyra
Dheneb          286:18           19:46 ¦   ζ Aquila
Rukbat          290:54           16:36 ¦   α c
Altair          297:39           1:45 b    α Aquila
Terrebellum     298:54           25:49 ¦   ω c
Deneb Adige     310:19           5:17 n    α Cygnus
Dorsum          316:26           13:49 b   θ ¦
Sadalsuud       322:50           23:22 b   β b
Sadalmelek      330:42           3:43 n    α b
Homan           340:19           16:07 n   ζ Pegasus
Fomalhaut       344:21           2:49 n    α n Australis
Kerb            350:06           1:01 a    τ Pegasus
Markab          346:08           23:27 n   α Pegasus
Scheat          345:54           29:19 n   β Pegasus

Mansion/Decanate sequence, as printed: D27, D29, D30, r 28, D31, q 1, w 2, D34, t 3, D35, l 4, D36, D1, y 5, D2, u 6, D3, e 7, D4, L 8, D5?, D6, r 9, D7, q 10, D8, w 11, t 12, l 13, D12, y 14, D13, D14, u 15, e 16, D16, L 17, D17, q 19, D18, 20, r 18, D19, w 21, D20, D21, t 22, D22, l 23, D24, y 24, D25

Sidereal boundaries, as printed: 0 a; 10 s, 20 s; 0 d, 10 d, 12 d, 20 d; 0 f, 20 f; 0 g, 10 g, 20 g; 0 h; 20 z; 0 x, 10 x, 20 x; 0 c, 10 c, 20 c; 0 ¦, 10 ¦, 20 ¦; 0 b, 10 b, 20 b
since our equinox is now in the opposite constellation. Markab and Scheat in late tropical n are very powerful and quite malefic. Although they do not define a mansion or decanate they are worth keeping in mind when interpreting charts. The way to assess the brightness of a star is by the Greek letter associated with its placement in the constellation. The α stars are the brightest, and on down through the alphabet. For instance, Kerb has a τ designation, well into the alphabet and thus rather dim, but it is in the right place for a decanate. While some of these stars appear to be small and insignificant, the ancients would use whatever stars were in the right place. One might argue that without telescopes it might be difficult to see some of these stars, but if you consider the clear desert atmosphere and the likely superior eyesight of our forebears, who never had to read a book under the light of an electric bulb or deal with smog, you can see how they managed. It was said that Galileo's mother could see the Moons of t unassisted by his telescope. We have all sorts of support mechanisms because we need them. In the Hindu system the star Revati has some importance. It is supposed to be in the constellation n, but a star at its coordinates is impossible to find on a celestial globe and does not appear in many star catalogues. The above-mentioned Kerb is the closest to its placement. Although Kerb is in Pegasus, Pegasus has a position that straddles n and a, so it can be associated with either. We see in this list many stars that do not fit in the constellations on the ecliptic or equator, but they do line up in the right places for decanates and mansions. Some stars are known by different names in different systems or catalogues. The star Unukalhai is called Benetnash in the Ebertin Catalogue. The Hindu name for Spica is Chitra. 
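The Greek-letter (Bayer) convention above orders stars within a constellation roughly by brightness, alpha first. A trivial sketch of that ordering (my own illustration, not from the article):

```python
# Bayer designations rank stars within a constellation nominally by
# brightness: alpha is brightest, then beta, gamma, and so on.
GREEK = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta", "eta", "theta",
         "iota", "kappa", "lambda", "mu", "nu", "xi", "omicron", "pi", "rho",
         "sigma", "tau", "upsilon", "phi", "chi", "psi", "omega"]

def bayer_rank(letter: str) -> int:
    """1 = nominally brightest (alpha); larger numbers are nominally dimmer."""
    return GREEK.index(letter) + 1

print(bayer_rank("alpha"))  # 1  (e.g. Regulus, alpha of g)
print(bayer_rank("tau"))    # 19 (e.g. Kerb, tau of Pegasus: quite dim)
```

The ranking is only nominal; in some constellations the historical lettering does not follow brightness exactly.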
Although "the stars" is a name commonly used in reference to astrology as a whole, the fixed stars, like the wandering stars, the planets, do have a place as important timers of, and parallels to, the life and times on our little planet, Earth.

Bibliography:
Allen, Richard Hinckley. Star Names, Their Lore and Meaning. Dover Publications, New York, 1963. This excellent book is one of the most comprehensive and detailed accountings of the stars anywhere.
Robson, Vivian. The Fixed Stars and Constellations in Astrology. Samuel Weiser, Inc., New York, 1923. This is absolutely the best book on the different mansion systems: Hindu, Western, and even Chinese. His star catalogue is very good, too, but not entirely comprehensive.
DeVore, Nicholas. Encyclopedia of Astrology. Philosophical Library, New York, year unknown. If you want to know something about almost anything, this is the book to get.
Johndro, L. Edward. The Stars. Samuel Weiser, Inc., New York, 1970. While his work is excellent and comprehensive about all the stars and their "effects", his star tables are almost impossible to read because they give no names, only positions.
Random Notes on i & o

MICHAEL ZIZIS

This discussion of the roles of Uranus and Neptune in historical and personal evolution follows the interwoven threads of a trail into, and a return from, the underground of believing and knowing. From the mist to a journey underground and then stepping into the sharp breaking light of day, we emerge where we began, but with the adventure of experience enriching our lives. It is easier to believe than to know. Obviously both paths are honorable and necessary. Both issues blur in the act of creating and using brain, mind, and spirit.
OVERVIEW OF THE UNDERGROUND

In astrology o represents a formless, boundless sense of the infinite, a Platonic view of Heaven and Earth, whereas i represents Hykeetos, a Greek word for "thingness", the specific pattern recognition of stuff, an Aristotelian view of the nomenclature and ecological economy of objects. For the astrologer, this divides into our warring selves of belief versus knowing.

Getting This Way: Each culture and every era creates its own myths and blind spots by which the limits of experience are defined. In astrology we define these issues through external archetypes which indicate states of being. Of secondary importance would be the predictive aspects of our craft. Western astrology focuses on "How do you feel about this?" or "This is probably how you are going to feel with y transiting your fifth house..." Astrology, seen in its historical context, seems to be about discovering or imposing some sense of order on events that appear chaotic and unconnected on the surface. Thus we find Sumerian advice about planting new crops in the first fourteen days of the lunar cycle; t, when conjunct the q, indicated acts of war between princes, and so on. These events were perceived through gradual means, by patient observation. There was no thought of the separation of secular and sacred thought. All this sets the stage for the drama of "modern" astrology. Let's suppose that we are in a "great year": a fuzzy transition from the Age of n ("I believe"), with its modern ruler o, into the Age of b ("I know"), with its equally modern ruler i.

The Wars: We are all a battleground for beliefs, something culturally or personally unquestioned, and knowledge, an event or state that is simply and provably true. Here we, as individuals and counselors, are on tricky plate tectonics. o can be characterized as a trance state, a submission to healing or suspending disbelief, or being inspired or deceived.
Easy examples would be incense at church, music that lulls,
fiction that does not risk, possibly the houses in astrology or astrology itself. Trance occurs when we hand over definitions to the group, or processes seen as giving up our lives as separate and unique individuals; a merging or submerging of the self. The i suddenly-awakened eureka state often feels like the W. B. Yeats statement that "Everything happens in a flash of light." This stuff occurs when we feel as if we "get it", meaning seeing the whole connected process. We conclude that we not only know, but we perceive how that "knowing" happens. This may begin a new cycle of believing, which in turn leads eventually to a confrontation with the erosion of that belief. This is a hand-over-hand swim to knowledge. The alchemists believed that death could be regenerated as life, and that lead could turn to gold. The lead/gold metaphysical metaphor aside, we gained immeasurably in chemistry and science, starting with the philosopher's stone. There are, however, "war" zones here that interfere with the most effective conduct of our craft. When science is used to expose or destroy astrology, for example, then science becomes a belief system. When an astrologer proclaims that one must believe in astrology (rather than asking "is this of any use?") in order for it to work, then we are in league with Tertullian, who proclaimed "I believe because it is absurd to believe!" But we could observe the work of Liz Greene, who predicted the demise of the Soviet Union through its chart. It works without the added burden of believing, or of the disbelief, that it works. When I suspect that men or women or lawyers are the problem, or that my local astrology group is somehow a family, or its polar opposite, then I am believing without the test or the work of knowing. The examples are myriad.

An Anecdotal Journey: We live under illusions and myths in every age.
During an astrological lecture that examined this issue, I asked how many people in the room "believed" that we create our own reality. Twenty-three people put up their hands. Then I suggested that we are in a grip of belief, a "high", a current mythos called Relativity.1 The group agreed to test this Aquarian Age "trance". I requested that everyone raise their arm straight up above their heads, to "reach" the ceiling. Then I asked the group to tell themselves that every time they
1 Somewhat naively, this myth goes: anything can be anything else; red is green, up is down, sweet is sour, one woman's fish is another woman's poisson. You know, the I-am-different-from-you stuff. Yes, okay, do you want food? No, I am different from you. You see, I am a Serb, and I take baths; I wash. Those Muslims, they are different from us, they are filthy. So are their women. We rape them because they are animals. It's time to ask: who benefits from blowing this smoke in our faces? Who benefits from promulgating the cosmetic of difference? So it goes.
Considerations XIX:1
were feeling pain that they should tell themselves that it was actually pleasure or joy. The results were astonishing. Immediately, eight people in their 40s to 60s put down their arms. The rest of the group put down their arms more gradually, until by the fourth minute of the experiment all of the astrologers' arms were in their resting positions. My expressed concern, then and now, leads me to conclude that we are gripped by myth-making at every turn, in every age. After all, if pain is absolute (all organisms try to escape from it) then we must see that difference is the soul's cosmetic. Everybody likes to be well fed and loved, and to be dry as opposed to being wet, hungry, and unconnected to others most of the time. As you like it.
ORIGINS

The ancients ascribed the rulership of n to y, and u was given authority over b. The quest for a force larger than the frightened and puny self can be found in Jovian swirling immensity. y, in representing the 9th and 12th houses, reflects belief, the search for belief, and displacing belief with belief.

Colors In The Dark: What is the real color of the sky? Blue, says belief, while knowledge tells us that the refractive properties of the atmosphere create the illusion of a blue sky. Our real sky is actually black. The real color of the sky is cosmos. y, the exaggeration balloon, the excess gadget, the ultimate swirl machine in these parts, The Big Huge, gives us the terracentric notion that we are big, sky is blue, and we are a centre and we are alone. Ain't that sky a nice Piscean blue, ocean blue too? u, ancient misery-meister, bringer of wisdom, served as our cold necessary mother of science through misfortune. u turns the difficult in our lives to the impossible. Then the old way is shattered. So u has had an Aquarian effect ("Why are things this way?") through history. My w is A u, so I tend to see that happiness is a nice place in history but it doesn't move invention forward. Your w is A y, and you believe that things are just so damn swell that we'll invent the chariot or the sewing machine. i as ruler of b tells us something about the 11th house. As well as the nicey-nice groups, clubs and associations, it also stands for membership in the galaxy's group of sentient space-faring beings that common sense says must be out there. Oh. And o... Well, Poseidon, also known in Greek mythology as the Earth Shaker, gives us regeneration through illusions and seductive, or disturbing, dream journeys that take us through the inversion of consciousness. New material from the infinite deep is washed up on our moonlit shores. Can you say hard-to-diagnose symptoms? Proliferation anyone?
Examine a cubic yard of sea water carefully. Science says there are viruses there that we can't even begin to classify. New bacteria and viruses abound on the planet. The accidental destruction of our ecosystems is enough to make AIDS appear benign by comparison.

Zone of the Unknown: o, in astrology, disperses t's energy. Thus those with, say, a sextile or trine between t and o often display a curious Zen-like quality to their consciousness. They seem always to be in touch with the timing or tuning of the moment. The astronomer Edwin Hubble (born 20th November 1889 in Marshfield, Missouri; time unknown) was born with t F o. He was a pioneer in the study of extragalactic astronomy. He devised (1925) the classification scheme for the structure of galaxies that is still in use today, and provided the conclusive observational evidence for the expansion of the universe. People with difficult aspects to o have everything from allergies to asthma to skin problems. There is also mental instability, if indicated by house and sign. These people can suddenly be subjects of 'miracle cures' when they channel energy to higher, less ego-oriented goals. They can achieve an extraordinary wellness when they choose to serve the common good. Mohammed Ali (born 6:32 PM on 17th January 1942 in Louisville, Kentucky) was born with t V o. He had one of the boxing world's most meteoric successes. He converted to Islam and later suffered brain deterioration. He counsels ghetto children now, and seems to be in remission. Another famous ¦, Stephen Hawking (born 8th January 1942, time unknown), is the twentieth century's most famous cosmologist. He has t V o and suffers from Lou Gehrig's disease (amyotrophic lateral sclerosis), and is also in miraculous remission. o can be seen as the illusive seed of renewal contained in self-sacrifice; in other words, the ruler of the regenerative aspects of the 12th house.
Thus belief becomes a kind of connectedness to all things that defy the known earthly concept of life as simply a long staggering to the grave. Whoda thunk it!?
ARIADNE'S THREAD

From an attack on astrology to the Oklahoma bombing, we can see that the Cave, the Plutonic Earth, leads us to dark journeys. And we return enriched by the experience. As we study condemnation versus utilization, we come across many of the structures of belief- and knowledge-based systems. If O. J. Simpson had been proved guilty, then some fundamental illusions about women, men, spousal abuse, and race would have been brought to a sudden light. Similarly, we will find that the logical extension of a belief that the government is the enemy will lead to justifying murder. Replacing mystery and complexity with self-righteous angry outrage ends up with concerns like protecting Jesus and Motherhood and Islam.
In our charts we look to Mystery not to dismantle it or replace it with Cartesian "get real" sterility, but to communicate that life has a genuine sense of the boundless and the infinite about it. Astrologers, while doing their work, also consult every person's true uniqueness as it manifests in the wholeness of the living wheel, a personal horoscope.

Threading a Life: In life's weave, we find that the archetype of Ariadne's thread reveals much about our planetary consciousness. In the story, Ariadne falls in love with Theseus and conspires to release him from the labyrinth. o's inspiration and self-deception inhabit her decision. Then Theseus slays the monster in the underground, a Plutonic journey. Theseus follows the thread that Ariadne's love and genius has provided out to the new day. He then abandons her on the island of Naxos, completing her illusions. Dionysus takes pity on her and marries her, and her wedding gift is that of the grape, o's drunkenness. Dionysus also embodies i's theme of coming to sudden revelation, and the tale itself hinges on awakening: from the illusion of Minos' power (y/u), to the surrender to love (o), to the underground struggle (“), to freedom and liberation from past conditions (i).

The Great Year: As we, by tropical zodiac, move into the Great Year called the Age of b, we begin to leave behind two thousand years of Piscean dilemmas and scaffolding. The age of n was mythically characterized by the death of Christ. The ubiquitous Roman Empire (o rules invisible control, manipulation, coercion) and the established priesthood (n: I Believe) set the tone for the rise of Christianity, Islam, the persecution of women (witch hunts, etc.) and the construction of great institutions. These times set the stage for the exploitation of the Earth and the belief in the supremacy of humanity over the other 'lesser' creatures that inhabit this space ship with us. Looking for the bad guy is also a Piscean era activity. I pray, then I kill.
For instance, I believe that blacks and Jews and the people in the next valley are inferior so I will arrange for their elimination through male priests and warriors. I also might believe that I am not really harming Mother Earth with my factory until lead or dioxins proliferate. One day the Earth vomits poisons back in my face, and melanoma shows up in my child's body. I have taught other religions how to be enemies so that bombings and zealots are now everywhere. o's journey has been the exaggeration and multiplication of belief until something simply has got to be done. Curiously in this century, and for the first time on the planet, wars are fought for something other than religious reasons. This may be a manifestation of the age of b in its demonized g shadow form, the dictator or the dictatorial state. When o's energies are not conscious, we have damage done on behalf of a country's rights, corporation's rights, and "God's" protection. We have o the Earth Shaker. The Rainbow Warrior faces whaling boats. This is not the clash of belief systems. It is a nation or a corporation
(Neptunian constructs) believing in the right to destroy, in the name of The Name versus a knowledge of the gift of cohabiting with the largest creature ever to grace the planet. In individual charts, we endeavor to recognize the charitable impulse, and what the subconscious is healing without the need to create awareness-healing by letting go, by house and sign, the placement of o, and n. And we stare at the bizarre glyph of Herschel, Humanity, the "H" of i, somewhere on the wheel, and we proceed to counsel our client or the patient on the operating table—ourselves—to make this one journey, with the eyes open. b by house and sign tells us the surgeon is also us.
i: Sir William Herschel first observed the planet between 10 and 11 in the evening of 13th March 1781, from Bath, England, seeing a featureless bluish-green disk that he nevertheless recognized as a highly unusual object.

o: In Berlin, at 0:14 AM LMT on 24th September 1846, Johann Gottfried Galle and Heinrich Louis d'Arrest found the new planet.

“: Pluto was discovered by Clyde Tombaugh at 4 PM on 18th February 1930 in Flagstaff, Arizona.

The three charts all have the Moon in Scorpio! Moon in Scorpio concerns itself with the desire for focused curiosity and the dissection of powerful or hidden truth; in other words, a search for knowledge, as well as the will to power.
April in Paris & Elsewhere

KEN PAONE
FOR CENTURIES, the ever-changing face of the weather has been the object of man’s attention and study. His continual observation of the shifting seasons, the winds and rains, the storms and frosts, soon led him to conclude that local weather conditions were synchronized with the movements of the planets. Each planet became known for its special influence on the weather. u was equated with cold; y with moderate temperatures. t ruled over heat and dryness; r over warmth and gentle showers. e stirred the winds, while the w governed the tides and the distribution of moisture. The q brought heat and marked the general character of the seasons. At some point, it became apparent that when these planets were found on the angles of key charts their influences were manifested in the weather. The strongest angle is that of the Midheaven, but the IC as well as the Ascendant and Descendant are effective in this regard. This April will offer us a number of opportunities to see important planetary aspects at work, both on the angles and on the weather. Some of these celestial configurations assume angular positions over Paris and, of course, affect France in general as well. We’ll also find them at work over different areas of the United States, and we’ll note the most likely weather patterns that will result.

6th April 2004 brings a square between t and y. Since both these planets raise temperatures, this combination usually produces warmer conditions. The square aspect has a disruptive effect on weather patterns, and therefore t-y at 90º is also considered an acute storm breeder. The 6th of April brings e’s retrograde station as well. We usually expect increased wind velocities and cold fronts at these stations, due to e’s affinity with air currents. The long-range weather forecaster’s arsenal also includes solar eclipse charts, which at times are spurred to life by the transits of the outer planets. Two such previous eclipses are triggered around this time.
The solar eclipse of 31st May 2003 is set off by the conjunction of t on 4th April, while on 8th April the solar eclipse degree of 4th December 2002 will receive an opposition by t. In order to localize the influence of the t D y and e’s retrograde station on the earth’s surface, we refer to the Spring Solar Ingress chart and the New Moon chart, both of which begin on 30th March 2004.
The Solar Ingress reveals an ascending e over eastern France while the New Moon chart shows y on the MC through Paris. The May 2003 eclipse chart activated on the 4th places t ascending over western France and the December 2002 eclipse triggered on the 8th has t running across northern France and through Paris. With t, y and e dominating the astro-meteorological scene over France, we can conclude that the classic clash between hot and cold air masses should erupt in storms producing gusty winds. Meanwhile, across the Atlantic, these same planetary combinations are affecting atmospheric conditions over the United States. Both the Solar Ingress and the Full Moon chart of 5th April place e over Boston. This double whammy should result in very strong wind velocities over the New England area.
Aries Ingress
6:48:36 AM UT, Paris: 48N52, 2E20

Relocated to: Boston, New York, Chicago, Denver, Los Angeles
MC:  1º26’x 28º22’z 13º42’z S e 25º01’h A w q 10º22’z A y
ASC: 6º26’¦ S u 4º11’¦ S u 20º56’c A “ 7º56’c 29º44’x S t
The t D y is placed mostly over the western US when observed from these same charts. This points to stormy conditions over the Great Basin and Rocky Mountains around the 6th. Just a few days later, on the 8th, when t activates the solar eclipse of 4th December 2002, this same area is under the gun. t will be on the Midheaven through southern California, Nevada and Idaho. On 16th and 17th April, the q will conjoin retrograde e. This combination correlates with high barometer and strong winds. The Solar Ingress chart shows the q and e ascending through Paris and central France. The Last Quarter Moon chart that starts on the 11th may help set up this weather event, since the r D y of the 14th places y setting through Paris only four degrees from the opposition of i. There is more happening than meets the eye with this r-y combination. They exactly conjoin and square the solar eclipse degree of 31st May 2003. At the time of the eclipse, the q and w were rising through western France. r and y are warm in nature, so the beginning of this period around the 14th may be marked by an increase in temperature before the q A e takes effect around the 16th. Stateside, the Northeast looks like it will bear the brunt of the q A e. The Solar Ingress places the windy combo on the IC between Washington DC and New York. Their influence is again doubled, since the Full Moon chart has them rising over New York. Even the Last Quarter Moon gets into the act by placing r on the descendant through the Northeast as it semi-squares e on the 17th.
Astrology & Quantum Physics

MARTIN PIECHOTA
TWO THEORIES that contain similarities in their breadth and scope are the quantum theory and the theory of astrology. Both are abstract studies that rely on both the scientific approach and the imagination. Drawing on scientific journals and astrological research, I will attempt to compare the two theories. The modern description of quantum theory: “…objects exist in a twilight state of all positions and velocities; particles of matter are waves of energy; and one particle can indeed exert a ghostly influence on another at the other side of the universe.”1 Astrologers declare that planets exert an immediate influence on earthly subjects across great distances. These influences originate in imaginary points of the sky known as astrological sun signs. According to Occam's razor, entities such as subatomic particles and black holes are inferred to exist even though they cannot be perceived directly via sense perception, nor indirectly via sophisticated scientific apparatus, because only by postulating their existence can certain known and observable phenomena be explained. Even so, those subatomic particles and black holes are inferred to be part and parcel of the material, physical world, so their existence does nothing to establish the existence of anything supernatural. While many scientists resolutely reject the notion of supernatural phenomena, their own discoveries seem to reveal what they refuse to acknowledge.2

There are two sources from which knowledge of any kind is received: one is subjective, the other objective. The former gives us knowledge of the spiritual or causal side of the cosmos. The latter gives us knowledge of the material side, which is the world of effects, which evolve out of the former. Every religion has an astrological foundation, and every science the human mind is capable of elaborating springs from, returns to, and ultimately becomes lost within the starry realms.3

1 Colin Bennett, “Politics of the Imagination”, Critical Vision, 2002, p. 98, quoting article by Roger Hatfield in The Sunday Telegraph, UK, August 2001.
2 John Paul Jones, “What Evil Is and Why It Matters” in Paranoia, issue 33, Fall 2003, p. 62.

Astrology is the science of cause and effect. Nature and society, it is believed, can both be understood by reducing complex problems to more simple elements. The result has been a science that has made considerable strides in explaining, predicting, and controlling the natural world. On the other hand, these scientific explanations sometimes fail to capture the essence of the actual experience, for they are unable to address our subjective reactions to nature. Therefore, despite its power to shape the modern world, science has little to say about the way in which people live their daily lives; enter into relationships; experience love, birth and death; give value to the world; and respond to new situations in creative ways.4 Astrology attempts to capture the essence of actual experiences by prediction, along with observations made in the past, to help explain situations that are currently underway, both on an individual and a global scale.

Practitioners of astrology claim that the planets influence people, and since the planets are physical objects operating within the framework of natural laws, the clear indication is that the subject should be within the laws of physics. Furthermore, people are biological organisms, so any effects would have to be within the framework of biology. Together they indicate that there should be explanations in terms of biophysics. The aim is to show that material forces operating via the planets can affect people; in technical terms, this would be "astrobiophysics."5 Throughout the world, a basic uniformity can be observed among astrological interpretations. According to astrologers, this occurs because they work. Modern quantum physics implies an instantaneous linkage between two separated quantum systems that had once been together.
Assuming one rules out faster-than-light signaling, the result implies that once two particles have interacted with one another they remain linked in some way, effectively parts of the same individual system. This property of "non-locality" has sweeping implications. We can think of the universe as a vast network of interacting particles, and each linkage binds the participating particles into a single quantum system. Although in practice the complexity of the cosmos is too great for us to notice this subtle connecting except in special experiments, nevertheless there is a strong holistic flavor to the quantum description of the universe.6 Astrologers attempt to put two birth charts together and combine them into a theory of what will happen to the individuals involved if they should get together for an endeavor in life.

The 19th century philosopher Johann Wolfgang von Goethe once declared, "Experiments are not designed to prove the truth, nor is it their intention. The only point professors prove is their own opinion. They conceal all experiments that would reveal the truth and show their theories untenable."7

In 1937, the mathematical physicist Paul Dirac pointed out that the characteristic strength of gravity in our universe is roughly equal to the time needed for a ray of light to cross a sub-atomic particle divided by the age of the universe.8 No age of the universe was given by Dirac. This is an example of how scientific theories are exclusionary, selective and prejudicial, because they ignore all data that contradicts them. Anyone trying to figure out the strength of gravity would have to make their own estimate of the universe's age, a number that would contain many zeros and would have to be rounded off to the nearest millionth of a year.

The superposition principle is one of the basic axioms of quantum theory. It says that if a quantum system can be found in one of two "states," with different properties, it may also be found in a combination of them. Each combination is called a superposition, and each is physically different. The word "state" contains almost the full mystery of the quantum theory: the state of its configuration at a particular moment. This state consists of all the information needed to completely describe a system at an instant in time.

3 Thomas H. Burgoyne, The Light of Egypt, Vol. 1, The Book Tree, California, 1999, pp. 15 & 70.
4 E. David Peat, Synchronicity: The Bridge Between Matter and Mind, Bantam, NY, 1988, p. 113.
5 Frank Glasby, Planets, Sunspots and Earthquakes, Universe, Nebraska, 2002, pp. 145-146.
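The superposition principle described above can be stated compactly for a two-state system. The following is standard textbook notation, added here for illustration; it does not appear in the original article:

```latex
% A quantum system with basis states |0> and |1> may occupy
% any normalized combination (superposition) of them:
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
% |alpha|^2 and |beta|^2 give the probabilities of finding the
% system in state |0> or |1> when it is measured.
```

Each distinct pair of coefficients (α, β) is a physically different state, which is the sense in which "each combination is physically different."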
Much depends on the context the system finds itself in; how it is connected to, or correlated with, other things in the universe.9 The superposition principle is similar to an astrologer's casting of a birth chart. It must be evaluated as to the positioning of planets within it and how they relate to modalities, conjunctions of planets to imaginary lines like the l’s position, and house positions. The clue to an individual birth chart appears to be in the positioning
6 Paul Davies & John Gribbin, The Matter Myth, Viking, London, 1991, pp. 217-218.
7 Daniel Ross quotes Goethe in UFOs and the Complete Evidence from Space, Pintado Pub., California, 1987, p. 89.
8 Colin Bennett, op. cit., p. 152.
9 Lee Smolin, The Road to Quantum Gravity, Basic Books, UK, 2000, pp. 35 & 37.
of the planets that affect the birth field. This is the individual's bio-field. By this process, the earth field is continually changing, and these changes would cause different effects on the biological organisms living within it. Each individual has a slightly different field and would therefore react differently. At birth, the individual acquires an individual field which forms from the prevailing earth field, an imprint of the magnetic field of the earth at that point in time, at that place.10

Protons and neutrons make up the nucleus of the atom, which electrons orbit around. But if we want to understand how atoms work, we need to investigate which forces operate between its building blocks. We know that since the proton and electrons have opposite electric charges, they are attracted to each other by the electro-magnetic force. Within the nucleus all protons repel each other, and yet something keeps the nucleus together. This something is called the "strong force," which is roughly one hundred times stronger than the electro-magnetic repulsion the protons feel. The reason we don't know about the strong force in everyday life is that it is a very short-range force that only operates within nuclear distances.11

Old alchemic texts affirm that the keys to the secrets of matter are to be found in u. By a strange coincidence, everything we know today in nuclear physics is based on a definition of the "Saturnian" atom. According to Nagaoka and Rutherford, the atom is a central mass exercising an attraction, surrounded by rings of revolving electrons. It is the "Saturnian" conception of the atom which is accepted today by scientists all over the world, not as an absolute truth, but as the most fruitful working hypothesis.12

Quantum physicists believe that portals may exist between our world and other worlds.
Cutting-edge physicists have proposed the existence of alternate dimensions or parallel universes.13 Astrologers rarely go as far as to bring in parallel universes to explain their theory. Some astrologers, however, bring in past lives to explain the unexplainable facets of an astrological interpretation. Both theories have extremist factions that veer off course in their studies of the unknown. Some people have ideas that are revolutionary but do not know it. And if they know it, they may not completely believe them. Torn between exposing themselves to outside opinion and keeping a defensive low profile, they opt for the latter. History is full of these reluctant figures. Johannes Kepler, for example, believed some hidden mystical cause lay behind the supposed success of astrology. He said, "That the sky does something to man is obvious enough; but what it does specifically remains hidden."14

There is an inherent uncertainty in the sub-atomic world, which is variously characterized as murky, fuzzy, blurred, irrational: "a maelstrom of fleeting ghostly images."15 Nowhere is empty; even the spaces in atoms are thought to be full of "virtual particles" which appear out of nowhere, interact and vanish. Their presence is only inferred from their effects on other particles.16 The above description of influences by imaginary forces reminds us that astrology, too, depends on imaginary lines and points. The twelve astrological signs are nothing more than equal sectors of the ecliptic plane marked by the passage of the q. The signs are not stellar patterns that can be observed in the night sky. They are not patterns at all, but timed phases of the seasonal or tropical cycle. No stars except the sun are involved in defining the sectors. The entire framework of the signs floats in a starless void, independent of the surrounding heavens. Each sign is a uniform sector of thirty degrees on the ecliptic rim, the apparent path of the sun. The proper nomenclature for the signs should be the ecliptic signs. The ecliptic sectors are empty slices of an abstract field, like global time zones. No one has ever seen, or will ever see, a sign, because the signs are non-observable sectors on the invisible plane of the ecliptic.17

We cannot say whether electrons are waves or particles, only that it depends on the observer. Whatever experiment we conduct to observe either waves or particles observes only that aspect of the electron, as if the experiment, the act of observation, determines what we really observe. At the same time, we cannot know for sure what any electron will do; identical electrons in identical experiments may do different things.

10 Frank Glasby, op. cit., p. 150.
11 Marcello Gleiser, The Dancing Universe, Penguin, NY, 1998, pp. 292-293.
12 Louis Pauwels & Jacques Bergier, The Morning of the Magicians, Dorset Press, NY, 1988, p. 89.
13 George Knapp, “Path of the Skinwalker” in UFO Magazine, Vol. 23, No. 2, February 2003.
Werner Heisenberg formulated his Uncertainty Principle to describe the sub-atomic world: everything we measure is subject to random fluctuations; we can either measure a particle's position or its speed, but not both. Worse still, a particle simply does not possess a definite position and momentum simultaneously - only by measuring one or the other does the fuzziness, as the physicists say, clear to a result.18 Astrologers experience the same problems in chart interpretations.

14 Marcello Gleiser, op. cit., pp. 68 & 78.
15 Paul Davies, God and the New Physics, Touchstone, London, 1992, p. 102.
16 Paul Davies & John Gribbin, op. cit., p. 139.
17 John Lash, "Decoding the Stars" in Phenomena, Issue No. 1, Nov-Dec. 2003, pp. 23-25.
18 Paul Davies & John Gribbin, op. cit., p. 210.
Considerations XIX: 1
The exact time of birth of the subject may be inaccurate by a few minutes. The person may be born in Chicago, where the stockyards are comparable to the size of Rhode Island, making timed charts for that area difficult. Then the interpretation becomes fuzzy because the ascendant is wrong.

Physicists conceive the fundamental forces of nature to be: the gravitational force, the electromagnetic force, the strong nuclear force, and the weak nuclear force. The electromagnetic force acts on any particle with an electric charge; the strong nuclear force is a building force; the weak nuclear force promotes radioactive decay of unstable nuclei. The gravitational and electromagnetic forces operate at long ranges, while the strong and weak nuclear forces operate within nuclear distances. The fundamental forces are also described in terms of particles, messengers carrying information about the interactions between different matter particles. Quarks are the proposed constituents of protons, neutrons and all other particles that interact via the strong force. They are called up, down, strange, charm, bottom and top. Gluons are the mediators of the strong force that binds quarks together to form hadrons.

In medical studies the genetic code is classified as guanine, adenine, cytosine and thymine. Education has four stages of institutional learning: primary, secondary, collegiate and doctorate. The Jungian types are feeling, thinking, intuitive and sensual. Four categories seem to be the normal numerical group that forms the basis for classifying data in all types of studies. The table below indicates that science is following astrology in its classification of elementary particles.

The Three Families of Elementary Particles

    Electron    E-neutrino    Up        Down
    Muon        M-neutrino    Charm     Strange
    Tau         T-neutrino    Bottom    Top

Science is building a model that appears to be based on astrological research models.
Astrologer Carl Payne Tobey designed a similar but more elaborate table to describe the dynamics of the astrological q signs. Interpreting Tobey's table provides some interesting observations. For example, a is the sign that attempts to change the family structure by marrying into another race to build a new blood line of its own. n, the social survivor, is involved in politics, preserving the social status quo from being destroyed by the conservative forces set out to destroy social programs by cutting off funding. In the future, scientists, when their table is complete and accurate, will have the ability to compare reactions between their dynamics and what's happening in the real world. To date, their discoveries are mimicking astrological studies when they were in their infancy. Like astrology of days gone by, the best minds are involved in studying quantum theory. As they progress, it wouldn't be unusual if they discovered angular positions to be imperative in their work.

Carl Payne Tobey's Astrological Dynamics

                Survivors    Reactors    Changers    Guides
    Individual  f            ¦           g           b
    Family      x            s           a           z
    Social      n            h           c           d

The 120º angle between the q and y occurs every 243 days. According to astrologers this is a peaceful time. A wealth of evidence exists to document the validity of astrological theory concerning the trine aspect. The table below shows that trines are dominant aspects when peace is at the forefront of human activity, the trine between the q and y being most relevant.

    Date             Aspect      Peaceful Days
    27th Nov 1895    qFy         Alfred Nobel signs his final will and testament, creating the Peace Prize.
    11th Nov 1918    qFy         World War I ends.
    7th May 1945     qFy         World War II ends in Europe.
    1st March 1961   qFo         Peace Corps proposed by President J. F. Kennedy.
    22nd Sept 1961   qFy         Peace Corps founded by JFK signing legislation enacted by Congress.
    14th Sept 1973   eFy rFu     Laos Peace Treaty signed at 9:40 AM.
    31st March 1979  yF“ tFy     President Jimmy Carter, Egyptian Anwar Sadat and Israeli Menachem Begin sign Camp David peace accord.
    4th Nov 1989     qFy         East Germany opens its borders to Czechoslovakia, ending the Cold War.
    13th Sept 1993   qFi         President Bill Clinton, Israeli Itzhak Rabin and Palestinian Yasser Arafat sign peace accord.
The Laos peace treaty was signed at 9:40 AM precisely on September 14, 1973 because an astrologer told Prime Minister Souvanna this gave the pact the best chance of success. The treaty signed by the Laos government and the communist Pathet Lao ended ten years of war. Diplomats from the United States, China, North and South Vietnam witnessed the ceremony, which ended in a champagne toast.19

19 Fortean Times, Vol. 1, No. 2, January 1974, quoting Daily Mail, 9/15/1973.
Who?

Ruth Baker, a regular.

John Gross is currently working to complete Noel Tyl's Master Degree course in counseling astrology. He is otherwise obsessed by the study of history & zeitgeist by way of cycles, waves and astrological signatures.

Besides being a fine astrologer and teacher, Charles Jayne was a prolific and insightful writer, mainly on technical matters. In addition to his many books and articles he was the editor of In Search (1958-62), the first international journal of astrology, and later of the Cosmecology Bulletin. Charles died on the last day of 1985.

Barbara Koval is the author of The Lively Circle and Time & Money. She lives in Cambridge, Massachusetts.
Ken Paone has been actively studying planetary influences on the weather for about 13 years. His analyses of important past weather patterns have appeared in several astrological magazines. The August 1998 edition of The Guadalajara Colony Reporter carried his prediction of Hurricane Isis that formed off the west coast of Mexico, and the April/May 2003 issue of The Mountain Astrologer featured his predictions of Hurricanes Jimena and Fabian, as well as Tropical Storm Grace. Ken can be contacted at kensweather@msn.com.

Martin Piechota researches sun-sign astrology from his home in Pennsylvania.

Prier Wintle is a consulting astrologer with many years' experience. His writings have appeared in the leading astrological journals. Prier has worked in England, New Zealand and South Africa. He currently lives in Cape Town.
Michael Zizis is a past vice-president & editor of Midheaven, Astrology Toronto's newsletter. He has been a practicing astrologer for the last 31 years and can be contacted at michael.zizis@sympatico.ca
Persistent torrent client for web browsers and web workers
Perma-Torrent persists everything to IndexedDB so the torrent data does not disappear when the web page is closed. This also means that the torrent data is shared across all web pages so only one page has to do the downloading while any number of web pages can stream the downloaded data. The torrent data is even accessible from within web workers!
Why
Perma-Torrent makes extensive use of WebTorrent, so why not just use WebTorrent? The main reason Perma-Torrent was created is that Service Workers and other Web Workers do not have access to the WebRTC API, so they cannot use WebTorrent to download torrents. Perma-Torrent bridges this gap by allowing the downloaded torrent data to be streamed from all web workers and web pages.
Usage
Add a torrent then stream a file
var pt = new PermaTorrent()
pt.startSeeder()
pt.add(torrentFileBuffer).then(torrent => {
  // 'example.txt' stands in for the path of a file inside the torrent
  return torrent.getFile('example.txt').getStream()
})
Add a torrent then stream a file in a web worker
In a web page:
var pt = new PermaTorrent()
// Only web pages can download torrents since
// web workers do not have access to the WebRTC api
pt.startSeeder()
pt.add(torrentFileBuffer)
In a web worker:
var pt = new PermaTorrent()
return pt.getAll().then(torrents => {
  // Stream a file from a torrent that a web page has already added
  return torrents[0].getFile('example.txt').getStream()
})
APIAPI
var pt = new PermaTorrent([opts])
All instances by default share the same underlying storage, so adding a torrent to one instance adds it to all instances. To separate instances from one another, define opts.namespace. The opts argument can take the following properties:

opts.namespace - Unique string used to insulate instances from each other. Defaults to 'permatorrent'
pt.add(torrentBuffer).then(torrent => {})
Adds the given torrent to the instance and all other instances within the same namespace. torrentBuffer must be a buffer of the .torrent file. torrent is an instance of Torrent, which is documented below.
pt.getAll().then(torrents => {})
Returns all the torrents that PermaTorrent holds. torrents is an array of Torrent instances.
pt.startSeeder()
Starts the seeder which does the actual downloading and uploading of torrents. This method can only be called in web pages and not in web workers.
pt.remove(infoHash).then(() => {})
Removes the torrent with the given infoHash.
pt.destroy()
Frees the internal resources of the instance.
API - Torrent
torrent.name
The name of the torrent
torrent.length
The size of the torrent in bytes
torrent.infoHash
The content based hash of the torrent that uniquely identifies it
torrent.files
An array of File instances that allow for the streaming of the files' data.
var file = torrent.getFile(filePath)
Returns the File instance whose path in the torrent matches the given filePath. If no file is found, then undefined is returned.
API - File
file.name
The file name
file.path
The path of the file within the torrent
file.length
The size of the file
var stream = file.getStream([opts])
Returns a NodeJS Readable stream for the file. The file can be streamed while it is still being downloaded, but be sure to call pt.startSeeder() in at least one instance.
opts can take the following properties:
opts.start- The byte offset to start streaming from within the file
opts.end- The byte offset to end the streaming
var webStream = file.getWebStream([opts])
Returns a WhatWG Readable stream for the file. This type of stream is new and only available in a few browsers; this method throws if it is called and the browser does not support this type of stream.
opts can take the following properties:
opts.start- The byte offset to start streaming from within the file
opts.end- The byte offset to end the streaming
file.getBlob([opts]).then(blob => {})
Returns a Blob of the file's data.
opts can take the following properties:
opts.start- The byte offset to start the blob from
opts.end- The byte offset to end at
package org.apache.commons.compress.compressors;

import java.io.InputStream;

public abstract class CompressorInputStream extends InputStream {
    private long bytesRead = 0;

    /**
     * Increments the counter of already read bytes.
     * Doesn't increment if the EOF has been hit (read == -1)
     *
     * @param read the number of bytes read
     *
     * @since 1.1
     */
    protected void count(int read) {
        count((long) read);
    }

    /**
     * Increments the counter of already read bytes.
     * Doesn't increment if the EOF has been hit (read == -1)
     *
     * @param read the number of bytes read
     */
    protected void count(long read) {
        if (read != -1) {
            bytesRead = bytesRead + read;
        }
    }

    /**
     * Decrements the counter of already read bytes.
     *
     * @param pushedBack the number of bytes pushed back.
     * @since 1.7
     */
    protected void pushedBackBytes(long pushedBack) {
        bytesRead -= pushedBack;
    }

    /**
     * Returns the current number of bytes read from this stream.
     * @return the number of read bytes
     * @deprecated this method may yield wrong results for large
     * archives, use #getBytesRead instead
     */
    @Deprecated
    public int getCount() {
        return (int) bytesRead;
    }

    /**
     * Returns the current number of bytes read from this stream.
     * @return the number of read bytes
     *
     * @since 1.1
     */
    public long getBytesRead() {
        return bytesRead;
    }
}
Today's lab will introduce you to how we could implement OOP using only functions and dictionaries, which means that Python's class and object syntax isn't necessary, but is rather just a syntactic convenience.
Note: If you're working on your own machine, instead of the lab machines, then you'll need to first copy the following file into your current working directory on your class account, and then transfer this file to your laptop/desktop:
$ cp ~cs61a/lib/python_modules/oop.py .

However, if you're working on a lab machine (or SSH'd into your class account), then you don't need to copy these files over.
To start off, let's cover message passing.
Observe the following interactive session:
>>> christine = make_person("Christine", 20)
>>> christine("name")
'Christine'
>>> christine("age")
20
>>> christine("want to go out for drinks?")
"I don't understand that message."
>>> christine("well, i like your shoes")
"I don't understand that message."
christine is a function that accepts two messages, "name" and "age". Thus, we are able to pass messages to christine, and christine will respond accordingly. When we pass a message to christine that she doesn't understand, the string "I don't understand that message." is returned.
Question 1
cp ~cs61a/lib/lab/lab11/lab11.py .
Implement a make_person function that would make the above interactive session work.
def make_person1(name, age):
The general format of a dispatch function is as follows:
def local_scope_creating_function():
    def dispatch(message):
        if message == <message1>:
            ...
        elif message == <message2>:
            ...
        ...
        else:
            <how to handle other messages>
    return dispatch
Hopefully, your solution to number 1 looks something like:
def make_person(name, age):
    def dispatch(message):
        if message == "name":
            return name
        elif message == "age":
            return age
        else:
            return "I don't understand that message."
    return dispatch
We use the frame created by the call to make_person to create the state that the dispatch function can refer to. In this example, the dispatch function represents a person.
Question 2
>>> christine = make_person("Christine", 20)
>>> christine("name")
'Christine'
>>> christine("buy", "porsche")
'I just bought a porsche.'
>>> christine("inventory")
['porsche']
>>> christine("change name", "Steven")
>>> christine("name")
'Steven'
In the lab11.py file, complete the make_person2 function such that the above interactive session works. Notice that christine now takes an optional additional argument: you can implement this by creating another parameter that has a default value, making it optional.
def make_person2(name, age):
christine now functions quite similarly to an object. Hopefully you used nonlocal to allow for the dispatch function to accept the message "change name". By using nonlocal and the dispatch function, we have successfully created persistent state. This is starting to look like an instance of a class. However, we're still missing a few important pieces, such as class variables.
To summarize, we just used the idea of message passing to create persistent state along with other behaviors, such as changing names or buying things. The dispatch function is a function that has some local state (created by the function that encompasses it). The idea of message passing is to organize computation by passing "messages" to each of these dispatch functions. The messages are strings that correspond to particular behaviors, which, if you think about it, is quite similar to how Python OOP functionality works.
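To make this concrete, here is one possible make_person2 along these lines. The message strings come from the interactive session above, while the parameter name value and the use of a list for the inventory are implementation choices of this sketch:

```python
def make_person2(name, age):
    inventory = []  # persistent state, shared by every call to dispatch

    def dispatch(message, value=None):
        nonlocal name  # lets "change name" rebind the enclosing name
        if message == "name":
            return name
        elif message == "age":
            return age
        elif message == "buy":
            inventory.append(value)
            return "I just bought a " + value + "."
        elif message == "inventory":
            return inventory
        elif message == "change name":
            name = value
        else:
            return "I don't understand that message."

    return dispatch
```

Because dispatch closes over name, age, and inventory, the state persists between calls, which is exactly what makes christine behave like an object.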
Let's look at a new variation of message passing. We're going to create something called a dispatch dictionary. A dispatch dictionary is a Python dictionary whose keys are considered as messages, and whose values are functions that correspond to the messages.
Take a few minutes to read over and understand the following code:
def make_person(name, age):
    attributes = {'name': name, 'age': age, 'inventory': []}

    def get_name():
        return attributes['name']

    def get_age():
        return attributes['age']

    def get_inventory():
        return attributes['inventory']

    person = {'name': get_name,
              'age': get_age,
              'inventory': get_inventory,
              }
    return person
Instead of returning a dispatch function, this version of make_person returns a dispatch dictionary. In essence, dispatch functions and dispatch dictionaries both respond to messages being passed to them. In this case, the person variable is the dispatch dictionary, and it responds to the messages 'name', 'age', and 'inventory' by returning a corresponding function.
State is created by using another dictionary, called attributes in this example.
Question 3.0
Take a look at the following interactive session, and verify in your head how each line works:
>>> christine = make_person("Christine", 20)
>>> christine["name"]()
'Christine'
>>> blah = christine["age"]
>>> blah()
20
>>> christine["inventory"]()
[]
Notice how christine is now a dictionary rather than a function, and when we look up a value in christine, a function is returned, which we have to call.
Continuing the interactive session:
>>> christine["buy"]("porsche")
'I just bought a porsche.'
>>> christine["inventory"]()
['porsche']
>>> christine["change name"]("Steven")
>>> christine["name"]()
'Steven'
Question 3
def make_person3(name, age):
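As with Question 2, here is one possible shape for make_person3. The message names have to match the interactive sessions above; the helper-function names are just this sketch's choices:

```python
def make_person3(name, age):
    attributes = {'name': name, 'age': age, 'inventory': []}

    def get_name():
        return attributes['name']

    def get_age():
        return attributes['age']

    def get_inventory():
        return attributes['inventory']

    def buy(item):
        attributes['inventory'].append(item)
        return "I just bought a " + item + "."

    def change_name(new_name):
        attributes['name'] = new_name

    # The dispatch dictionary: each message maps to one of the functions above.
    person = {'name': get_name,
              'age': get_age,
              'inventory': get_inventory,
              'buy': buy,
              'change name': change_name,
              }
    return person
```

Note that mutating the attributes dictionary gives us persistent state without nonlocal, since we never rebind the names themselves.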
The rest of this lab covers below the line OOP. We refer to functions in oop.py, which you can view here or copy to your current directory by the method outlined at the beginning. If there's a piece of code that's confusing you, don't hesitate to ask your TA for help!
The Python object system uses special syntax such as the class statement and the use of dot notation. We will soon see, however, that it is possible to implement classes and objects using only functions and dictionaries.
In order to implement objects, we will abandon dot notation (which does require built-in language support) and create dispatch dictionaries that behave in much the same way as the objects of the built-in object system.
If you think about it, there are really only two things you can do with an object: get values and set values. Thus, we will represent an instance as a dispatch dictionary that accepts only two messages: 'get' and 'set'.
Take a few moments to read over the following code.
def make_instance(cls):
    """Return a new object instance."""
    def get_value(name):
        if name in attributes:
            return attributes[name]
        else:
            value = cls['get'](name)
            return bind_method(value, instance)

    def set_value(name, value):
        attributes[name] = value

    attributes = {}
    instance = {'get': get_value, 'set': set_value}
    return instance

def bind_method(value, instance):
    """Return a bound method if value is callable, or value otherwise."""
    if callable(value):
        def method(*args):
            return value(instance, *args)
        return method
    else:
        return value
That seems like a lot to take in! Remember, a bound method is simply a function that is "bound" to an object, meaning that when the method is called, the object is automatically passed in as the first argument self. That's what the function bind_method is doing.
Note: callable is a built-in function that, given an argument thing, returns True if and only if thing is a function object (i.e. can be called).
Let's take a closer look at get_value.
def get_value(name):
    if name in attributes:
        return attributes[name]
    else:
        value = cls['get'](name)
        return bind_method(value, instance)
From this definition, we can infer that the attributes dictionary only contains instance variables, meaning it does not include bound methods. Methods are stored in the class, and thus we must 'get' the value of the method from the class. Note that we might also be getting a class variable instead, so bind_method does a check to see if the value is callable or not and reacts accordingly.
Let's look at classes.
Notice that make_class takes in two parameters, attributes and base_class. The parameter attributes is a dispatch dictionary of methods and class variables, and base_class would refer to a parent class (if specified).
Thus, to create a class, you would need to call make_class with a dispatch dictionary of attributes.
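The make_class code itself is not reproduced above; a version consistent with this description (and with the make_instance shown earlier) looks roughly like this — the details may differ slightly from the oop.py you copied:

```python
def make_class(attributes, base_class=None):
    """Return a new class, represented as a dispatch dictionary."""
    def get_value(name):
        if name in attributes:
            return attributes[name]
        elif base_class is not None:
            # Fall back to the parent class for inherited attributes.
            return base_class['get'](name)

    def set_value(name, value):
        attributes[name] = value

    def new(*args):
        return init_instance(cls, *args)

    cls = {'get': get_value, 'set': set_value, 'new': new}
    return cls

def init_instance(cls, *args):
    """Return a new instance of cls, initialized by its __init__ (if any)."""
    instance = make_instance(cls)  # make_instance is defined above
    init = cls['get']('__init__')
    if init is not None:
        init(instance, *args)
    return instance
```

Notice how inheritance falls out of a single elif: looking up an attribute that a class doesn't have simply delegates the 'get' message to its base class.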
Question 4
An account should have the following attributes: holder, the name of the account holder, and balance, which starts at 0.
The Account class has the following class variable: interest, whose value is 0.02.
Fill in the missing parts in your lab11.py file.
def make_account_class():
    """Return the Account class, which has deposit and withdraw methods."""
    def __init__(self, account_holder):
        self['set']('holder', account_holder)
        ***FINISH INIT DEFINITION HERE***

    def deposit(self, amount):
        """Increase the account balance by amount and return the new balance."""
        ***YOUR CODE HERE***

    def withdraw(self, amount):
        """Decrease the account balance by amount and return the new balance."""
        ***YOUR CODE HERE***

    # Finish the return statement below.
    # Note that there should be a class variable named interest, whose value should be 0.02
    return make_class({*** COMPLETE THE DICTIONARY HERE ***})
Interactive session:
>>> Account = make_account_class()
>>> jim_acct = Account['new']('Jim')
>>> jim_acct['get']('holder')
'Jim'
>>> jim_acct['get']('interest')
0.02
>>> jim_acct['get']('deposit')(20)
20
>>> jim_acct['get']('withdraw')(5)
15
>>> jim_acct['set']('interest', 0.04)
>>> Account['get']('interest')
0.02
To verify your solution works, be sure to type in the code from the interactive session or run the doctests to see if you get the same results.
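If you get stuck, here is one completed make_account_class that reproduces the session above. The oop.py machinery (make_instance, bind_method, make_class, init_instance) is repeated so the sketch runs on its own; your copy of oop.py may differ in details, and the balance attribute with its starting value of 0 is inferred from the session:

```python
def make_instance(cls):
    """Return a new object instance (a 'get'/'set' dispatch dictionary)."""
    def get_value(name):
        if name in attributes:
            return attributes[name]
        else:
            value = cls['get'](name)
            return bind_method(value, instance)
    def set_value(name, value):
        attributes[name] = value
    attributes = {}
    instance = {'get': get_value, 'set': set_value}
    return instance

def bind_method(value, instance):
    """Return a bound method if value is callable, or value otherwise."""
    if callable(value):
        def method(*args):
            return value(instance, *args)
        return method
    return value

def make_class(attributes, base_class=None):
    """Return a new class (also a dispatch dictionary)."""
    def get_value(name):
        if name in attributes:
            return attributes[name]
        elif base_class is not None:
            return base_class['get'](name)
    def set_value(name, value):
        attributes[name] = value
    def new(*args):
        return init_instance(cls, *args)
    cls = {'get': get_value, 'set': set_value, 'new': new}
    return cls

def init_instance(cls, *args):
    """Return a new, initialized instance of cls."""
    instance = make_instance(cls)
    init = cls['get']('__init__')
    if init is not None:
        init(instance, *args)
    return instance

def make_account_class():
    """Return the Account class, which has deposit and withdraw methods."""
    def __init__(self, account_holder):
        self['set']('holder', account_holder)
        self['set']('balance', 0)
    def deposit(self, amount):
        """Increase the account balance by amount; return the new balance."""
        new_balance = self['get']('balance') + amount
        self['set']('balance', new_balance)
        return new_balance
    def withdraw(self, amount):
        """Decrease the account balance by amount; return the new balance."""
        new_balance = self['get']('balance') - amount
        self['set']('balance', new_balance)
        return new_balance
    return make_class({'__init__': __init__,
                       'deposit': deposit,
                       'withdraw': withdraw,
                       'interest': 0.02})
```

Note how jim_acct['set']('interest', 0.04) only shadows the class variable on that one instance; Account['get']('interest') still returns 0.02.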
Question 5
Define a function make_checking_account_class that makes a class that inherits from the Account class from Question 4. There are two differences between a checking account and a regular account: the interest rate is 0.01 instead of 0.02, and each withdrawal incurs a $1 fee.
def make_checking_account_class():
You should not re-write any unnecessary code.
Example interactive session that should work after you implement question 5:
>>> CheckingAccount = make_checking_account_class()
>>> jack_acct = CheckingAccount['new']('Jack')
>>> jack_acct['get']('interest')
0.01
>>> jack_acct['get']('deposit')(20)
20
>>> jack_acct['get']('withdraw')(5)
14
Note that the withdrawal incurred a $1 fee, as this is a checking account.
If you finished this lab, CONGRATULATIONS! You've been in a computer science course for 5 weeks, and you just implemented an object oriented programming system using only functions and dictionaries!
by Daniel Berger
I find Ruby’s current warning system, if you can call it that, lacking. Warnings are controlled by the -W flag on the command line, and are generated via the Kernel#warn method within code. There are a host of problems with this approach to warnings.
First, warnings aren’t currently testable. With Test::Unit, for example, I can ensure that specific errors are raised in certain conditions via the assert_raise method. There is no analogue for warnings. It would be nice if there were so I could test them.
Second, there is no backtrace information provided with warnings. If I discover a warning I have to wade through the source and figure out where it was generated, because a Kernel#warn call does not provide a line number or method name that I can refer back to.1 For large code bases that can be problematic and generally annoys me.
Third, and most significantly, with warning flags it’s all or nothing. I cannot enable or disable specific kinds of warnings. Perl, on the other hand, implements warning control through pragmas. So, for example, I can specify “no warnings uninitialized” in a Perl program and warnings about uninitialized variables go away.2 With Ruby it’s off, on, or even-more-on (-W0, -W1 or -W2).
One of the things I’ve pushed for in the past in Ruby is structured warnings.3 By ’structured warnings’ I mean a system analogous to the Error class, except that a warning would only emit text to STDERR, not cause the interpreter to exit. In our hypothetical Warning class you still have backtrace information available. And, like Exceptions, there would be a standard hierarchy, with Warning at the top, StandardWarning, UninitializedWarning, RedefinedMethodWarning, DeprecatedMethodWarning, etc. Whatever we can think of.
Such a system would allow you to raise specific warnings within your code:
class Foo
def old_method
warn DeprecatedMethodWarning, 'This method is deprecated. Use new_method instead'
# Do stuff
end
end
The ability to explicitly raise specific types of warnings then makes them testable:
require 'test/unit'
class TC_Foo_Tests < Test::Unit::TestCase
def setup
@foo = Foo.new
end
# Assume we've added an assert_warn method to Test::Unit
def test_old_method
assert_warn(DeprecatedMethodWarning){ @foo.old_method }
end
end
And, for the sake of backwards compatibility and convenience, a call to Kernel#warn without an explicit warning type would simply raise a StandardWarning in the same way that raise without an explicit error type raises a RuntimeError.4
Unlike Exceptions, you could permanently or temporarily disable warnings to suit your particular preferences in the system I have in mind. For example, in the win32-file library I'm well aware that I've gone and redefined some core File methods. When I run any code that uses win32-file with the -w flag, I get "method redefined" warnings. I don't want to see those because I neither need nor want to be reminded about them.
So, using our hypothetical RedefinedMethodWarning class, I could disable them like so:
RedefinedMethodWarning.disable # No more warnings about method redefinitions!
Or, with block syntax, we could disable a particular warning temporarily:
# Don't bug me about deprecated method warnings within this block, I know
# what I'm doing.
#
DeprecatedMethodWarning.disable{
[1,2,3,4,5].indexes(1,3) # Array#indexes is a deprecated method
}
# But here I would get a warning since it's outside the block:
[1,2,3,4,5].indexes(1,3)
Unlike the current warning system, this would allow users to still receive other types of warnings, instead of the on/off switch we have now.5 And, in case you were wondering why I don't just create a 'warnings' library that defines a bunch of warning classes and redefines Kernel#warn, the answer is that I still can't hook into the existing warnings being raised in core Ruby via rb_warn(), like uninitialized variables or redefined methods.
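To make the proposed semantics concrete, here is a rough pure-Ruby sketch of the enable/disable behavior described above. It is hypothetical: the root class is named BaseWarning here only to avoid colliding with the Warning module that later versions of Ruby define, and, as just noted, a pure-Ruby version like this still cannot intercept the warnings core Ruby emits via rb_warn().

```ruby
# Hypothetical sketch of the proposed warning hierarchy.
class BaseWarning
  class << self
    # A warning class counts as disabled if it, or any of its
    # warning-class ancestors, has been disabled.
    def disabled?
      @disabled || (superclass.respond_to?(:disabled?) && superclass.disabled?)
    end

    # Disable this warning class, either permanently or only for the
    # duration of a block.
    def disable
      if block_given?
        previous, @disabled = @disabled, true
        begin
          yield
        ensure
          @disabled = previous
        end
      else
        @disabled = true
      end
    end

    # Re-enable this warning class ("last call wins").
    def enable
      @disabled = false
    end

    # Emit the warning text to STDERR unless disabled. Warnings are
    # purely informational: this never raises, so it cannot alter
    # program flow.
    def warn(message)
      Kernel.warn("#{name}: #{message}") unless disabled?
    end
  end
end

class DeprecatedMethodWarning < BaseWarning; end
class RedefinedMethodWarning  < BaseWarning; end
```

With this in place, DeprecatedMethodWarning.disable { ... } behaves like the block form shown earlier: the warning class is silenced inside the block and restored afterwards, and disabling the root class silences every subclass at once.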
With our warning system in place we could use it for other nefarious purposes down the road, like implementing an advisory typing system. But, I’ll save that for the next post. :)
See you next Wednesday!
1Curiously, a line number is provided by the rb_warn() function, but Kernel#warn itself does not.
2However, I do not remember if you can disable them temporarily in Perl, or if there’s a way to explicitly test them. Someone feel free to fill me in.
3
4At this point I’m sure one of you is wondering about rescue/retry semantics. My opinion on the matter is that warnings should not be rescuable. They are meant to be informational. They are not meant to control program flow. This also lets us avoid having to worry about retry semantics. Not that anyone would retry based on a warning in practice.
5Yes, there would have to be a Warning.enable method as well, for those times you want to trump some third party library that has them disabled, in a “last call wins” arrangement.
use warnings;
...
{
no warnings 'uninitialized';
print $may_be_undef + 0;
}
At the end of the block, all warnings are once again enabled. As far as testing them, of course you can, because you can "catch" warnings with $SIG{__WARN__}, and since this is Perl, there's a module on CPAN to help you, Test::Warn.
I just released a gem called "structured_warnings" that provides this functionality. I hope I did not step on your toes with this.