Locating New Polyhedral Systems

The first new polyhedron consists simply of one octahedron with a tetrahedron on two opposite sides (Fig. 9-8). The result, a rhombohedron, can be seen as a partially flattened cube. A toothpick-marshmallow model demonstrates the transition effectively, because the marshmallow joints have sufficient stiffness to hold either inherently unstable shape. The rhombohedron's direct relationship to the cube suggests a space-filling capability, which we shall explore in greater depth in Chapter 12. The next candidate, the VE, is too familiar to warrant further description at this point. Twelve cuboctahedral vertices can be located around every point in the IVM, thereby embracing eight tetrahedra and six half octahedra. Furthermore, higher-frequency versions of any of the above polyhedra (tetrahedron, octahedron, rhombohedron, and VE) can be easily located within the matrix, thus establishing the foundation for truncated polyhedra. Subtract a half octahedron from each of the six corners of a three-frequency octahedron to yield a symmetrical "truncated octahedron" with fourteen faces: six squares and eight regular hexagons (Fig. 9-9b). A "truncated tetrahedron," with four hexagons and four triangles, is left after a single-frequency tetrahedron is removed from each corner of a three-frequency tetrahedron (Fig. 9-9a). The same procedure applies to higher-frequency versions of any of the above shapes, as well as further truncations of truncated shapes. Such transformations can be plotted indefinitely.

Duality and the IVM

We now introduce a new level of flexibility with the addition of a new set of vertices, in the exact center of each tetrahedron and octahedron. These new vertices are connected to the original IVM vertices, thereby introducing radial vectors into each of the original cells. Figure 9-10 shows a single octahedron and tetrahedron with these central nodes. The central angle of a tetrahedron is approximately 109° 28', exactly equal to the octahedron's dihedral angle. No longer surprised by such relationships, we go on to look inside the octahedron and note its central angle of 90 degrees, which is the surface angle of a cube. The octahedron's three body diagonals, or six radii, thus form the XYZ axes. Figure 9-10 highlights these central angles by showing these two polyhedra with central nodes and radii. Right angles are thus integrated into the IVM system as by-products of the (stable) triangulated octahedron, rather than by an arbitrary initial choice of a network of unstable cubes. Since the IVM complex of octahedra and tetrahedra emerges automatically as a consequence of its unique property of spatial omnisymmetry, the array is not the product of an arbitrary choice. We now observe considerable expansion of our inventory of generated shapes. Starting with the most familiar, we isolate the minimum cube. Formed by a single tetrahedron embraced by four neighboring eighth-octahedral pyramids, or octants, the cube is once again based on the tetrahedron. We first encountered this relationship in "Structure and Pattern Integrity" (using the tetrahedron to establish the minimum stable cube), and now we have determined the exact shape of the leftover space: four eighth-octahedra. This observation indicates that "degenerate stellation" of the tetrahedron forms a cube. The four vertices of the tetrahedron, together with the four centers of neighboring octahedra, provide the eight corners of this basic building block.
Its six square faces are created by two adjacent quarters of the square cross-sections of single-frequency octahedra (Fig. 9-11). As with other IVM systems, larger and larger cubes will be outlined by more remote octahedron centers. Next, we embrace a single octahedron by eight quarter tetrahedra, thereby outlining the rhombic dodecahedron, whose twelve diamond faces have obtuse angles of 109° 28' and acute angles of 70° 32', generated by the tetrahedral central angle and two adjacent axial angles, respectively. Its eight three-valent vertices are the centers of embracing tetrahedra, while its six four-valent vertices are the original octahedron vertices (Fig. 9-12). Once again, we observe the relationship of duality between the VE and rhombic dodecahedron. The former has fourteen faces (six four-sided and eight three-sided) corresponding to the four-valent vertices and three-valent vertices of the latter. Likewise, the twelve four-valent VE vertices line up with the twelve rhombic faces. (Refer to Fig. 4-12.) The duality between the VE and rhombic dodecahedron illustrates the relationship between duality and domain. Having already seen that spheres in closest packing outline the vertices of the VE, we now turn our attention to the domain of individual spheres. The domain of a sphere is defined as the region closer to a given sphere's center than to the center of any other sphere. This necessarily includes the sphere itself, as well as the portion of its surrounding gap that is closer to that sphere than to any other. Imagine a point at the exact center of an interstitial gap; this will be the dividing point between neighboring domains, that is, a vertex of the polyhedron outlined by the sphere's domain. This domain polyhedron happens to be the rhombic dodecahedron. As each sphere in cubic packing is by definition identically situated, each domain must be the same. Therefore, the shape of this region consistently fits together to fill space. Fuller's term for the rhombic dodecahedron is "spheric" because of this relationship to spheres in closest packing. We now have an experiential basis for the VE-rhombic-dodecahedron duality. Twelve vectors emanate from any point in the IVM, locating the vertices of the VE, while poking through the centers of the twelve diamond faces which frame the point's domain. We were introduced to duality as exact face-to-vertex correspondence, and now we see how duals can be instrumental in locating a system's domain. Our investigation of space-filling in Chapter 12 will explore this relationship more fully. Returning to the IVM, we observe that four rhombic dodecahedra, or "spherics," come together at the center of each tetrahedron, such that the tetrahedron central angle becomes the obtuse surface angle of the spheric. In the same way, eight cubes meet at the center of each octahedron, as allowed by the shared 90-degree surface and central angles, respectively. For clarity, we shall refer to the new network, interconnecting the centers of all octahedral and tetrahedral cells, as IVM', and we can draw the following conclusion. If the vertices of a given polyhedron are located in the IVM, then that system's dual will be outlined by IVM', and vice versa. Similarly, if a polyhedron is centered on a vertex of the IVM, its dual will be centered on a vertex in IVM'. For example, we recall our first case of duality: the vertices of the octahedron's dual, the cube, are supplied by octahedron centers, which are nodes of IVM'. This discovery leads us to another assumption.
As truncation of our familiar polyhedra yields shapes contained within the IVM, the dual operation, stellation, should produce polyhedra outlined by IVM'. The assumption is valid: the additional IVM' vertices provide the loci for the vertices of stellated versions of these basic shapes. Actually, this observation is not new, for we have already seen that quarter-tetrahedral pyramids affixed to octahedron faces produce Fuller's spheric, or, in other words, that a degenerately stellated octahedron becomes a rhombic dodecahedron. The three-valent vertices of this diamond-faceted shape are tetrahedron centers, by definition nodes of IVM'.
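As a quick check on the 109° 28' figure that recurs throughout this section, the tetrahedral central angle can be derived from coordinates (a standard computation added here for the reader; it is not part of the original text). Place the tetrahedron's vertices at alternating corners of a cube, so its center is the origin:

\[
\cos\theta \;=\; \frac{(1,1,1)\cdot(1,-1,-1)}{\lvert(1,1,1)\rvert\,\lvert(1,-1,-1)\rvert}
\;=\; \frac{1-1-1}{\sqrt{3}\cdot\sqrt{3}} \;=\; -\frac{1}{3},
\qquad
\theta \;=\; \arccos\!\left(-\tfrac{1}{3}\right) \approx 109^{\circ}\,28'.
\]

Its supplement, 180° - 109° 28' = 70° 32', is the acute angle of the rhombic dodecahedron's diamond faces discussed above.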
Are the No Free Lunch theorems useful for anything? I have been thinking about the No Free Lunch (NFL) theorems lately, and I have a question which probably everyone who has ever thought about the NFL theorems has also had. I am asking it here because I have not found a good discussion of it anywhere else. The NFL theorems are very interesting theoretical results which do not hold in most practical circumstances, because a key assumption of the NFL theorems is rather strong. This assumption is, roughly speaking, that the performance of an algorithm is averaged over all problem instances drawn from a uniform probability distribution. In realistic applications, the problems an algorithm typically encounters are NOT drawn from a uniform distribution, and are instead drawn from what is likely a very interesting and complicated distribution specific to the general problem setting. So, while the NFL theorems are quite interesting results, do they have any practical implications? Or are they merely theoretical results? EDIT: By practical implications, I mean novel algorithms or improvements over existing ones, improved hyper-parameter selection, and things of that nature. I would even be interested to learn of NFL-inspired theorems that do apply to realistic search/optimization/learning problems. See this question: https://cs.stackexchange.com/questions/21758/what-is-the-no-free-lunch-theorem?rq=1 ; the answers go into the usage of NFL. Also, note that the NFL theorems are 'negative' results, so all practical use of them is limited to saying we cannot achieve something. Some call this inherently theoretical. You should specify what you mean by 'theoretical result'. @Discretelizard None of the responses go into applications; they are concerned with interpretations. As for the distinction between a theoretical and a practical result, I am relying on colloquial understandings of the terms. @D.W. It is admittedly a broad question, but I would be happy to have ANY implication pointed out to me for ANY of the NFL theorems. After spending some time searching myself, I am starting to doubt whether there are any practical implications whatsoever. I feel like you have essentially already answered your own question. Indeed, real-world inputs are arguably not arbitrary but drawn from some specific distribution, since they model human interaction, a physical process, or so on. @SurgicalCommander Well, I most certainly don't know its precise colloquial meaning in this context. Given that your background is in physics, while most people here have a background in CS or math, the meaning could very well differ between you and a potential answerer. @juho Right, the NFL theorems do not apply to realistic settings. I want to know if the NFL theorems have led to other research that does apply. @SurgicalCommander This is just a hunch, but my guess is that NFLs hold for any non-trivial realistic setting. It's just that we can't express them (formally), let alone prove them, since those settings are not formally defined. This is not useful for anything. Even the statement "there is no silver bullet" is not really true, simply because there are many problems we are not interested in but we can't really name them. For example, let's say we have a classification problem and have the constant classifier which classifies everything as class 0. Surely, every reasonable non-constant classifier is better than that?
A practical implication is that there is no silver bullet: we shouldn't expect any single optimization method to be perfect for all problems. Rather, we should try to design optimization methods that are tailored to the problem we're trying to solve. For instance, if you want to use local search, you'll probably need to define a neighborhood relation (a set of moves that makes "small" changes to the current solution) that is informed by the problem domain. See, e.g., https://cs.stackexchange.com/a/88016/755 for a recent example of this here on this site. A practical implication is that machine learning won't work if there is no structure at all on the space of possible models/hypotheses. Instead, we need some kind of prior that makes some models more likely than others. Often, we assume a prior where "simpler" models are more likely than complex ones (Occam's razor: all else being equal, the simpler explanation is more likely to be true). This leads to the use of regularization in machine learning, as it effectively applies Occam's razor to candidate models. So, you can think of the NFL theorems as providing some kind of theoretical justification for regularization, or a theoretical understanding that helps us see what the role of regularization is and provides some partial explanation for the empirical observation that it seems to often be effective. You could characterize these as "NFL theorems tell us some directions that won't work", which arguably has value, as it helps us avoid wasting time on something that won't work and helps point us towards directions that are more likely to work.
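To make the regularization point concrete, here is a minimal sketch of how an L2 penalty encodes a "simpler models are more likely" prior. The toy data, feature count, and lambda value are arbitrary choices for illustration, not anything prescribed by the NFL literature:

import numpy as np

# Toy data: y depends only on the first feature; the other nine are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=50)

def fit_linear(X, y, lam):
    # Least squares with an L2 penalty lam * ||w||^2 (ridge regression).
    # lam = 0 is ordinary least squares; lam > 0 expresses the prior that
    # small ("simple") weight vectors are more likely a priori.
    d = X.shape[1]
    # Closed form: w = (X^T X + lam * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

print("unregularized:", np.round(fit_linear(X, y, 0.0), 3))
print("ridge, lam=10:", np.round(fit_linear(X, y, 10.0), 3))

Running it shows the penalized fit shrinking the weights on the pure-noise features while the informative first weight survives largely intact, which is exactly the Occam bias described above.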
Re: Newbie comments & queries On Sun, Nov 04, 2001 at 10:24:13AM +0200, Ian Balchin wrote: | dman et al | .za stands for Zuid Afrika which is from the Dutch who founded the | colony at the Cape Of Good Hope. | Last night I went through the info documentation system which i | thought would be a good start. Then read and played with the ls | command. I uncommented the lines in the .bashrc in root to give me | colours for ls. I must also do that for my normal login shell. In | directories where there are lots of files they whizz off the top of | the screen but piping thru less or more strips off the color. If | you add in a --color switch then more is OK, but less gives out | hidden codes. As Karsten said, use the '--color=auto' option so that no terminal escape codes are output when using a pipe or file redirection. Also the console has some nice keyboard shortcuts: Shift-PgUp and Shift-PgDown scroll back and forth a little. | Have gots lots of paper, and printer ribbons. Found a whole box of | ribbons for my printer free (what a luck) which the clothing chain | store just down the road dumped when they bought a new printer. Cool. Those line printers are very cheap to print on and the ribbons last a long time. | So have printed out a couple of HOWTO files - the Config-HOWTO and | the Printing-Usage-HOWTO. Some good bedtime reading there. Yep :-). The printing HOWTO is a bit dated though -- it doesn't mention CUPS at all, which I think is the best spooling system (and is the one I use). | I print these out via the lpr command, having done gzip -d to undo | them. I see the .gz file is not there anymore, so did a gzip to zip | up the .txt file again after printing. Lots of work but zcat | redirected to /dev/lp0 does nothing, either from root or my normal By default gzip uncompresses the file and removes the .gz to indicate that. Unlike Windows, the extension has no real meaning, but only serves to assist you in identifying files. There are some programs that use extensions to mean something, though (e.g. gcc, java, python). Either of these will do what you want: gunzip --stdout <file> | lpr zcat <file> | lpr Suppose you wanted to print all the files named *.txt.gz in the current directory: for FILE in *.txt.gz ; do zcat "$FILE" | lpr ; done (this is explained in the bash manpage, but it is very long and takes some wading through) | Several of you have been positively eulogic about emacs so have been having a | look at the beginners guide. heh. I used emacs for a while. It was too complicated for me :-). Do you know what it stands for? Eighteen Megs and Constantly Swapping <wink>. (maybe not really, but it is quite heavyweight)
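If the goal is just to read the compressed HOWTOs rather than print them, there is an even shorter route; assuming they live under the usual Debian doc path (an example location -- adjust to wherever yours are installed):

zless /usr/share/doc/HOWTO/en-txt/Config-HOWTO.gz    # zless pages a .gz file in place

zless decompresses on the fly, so there is no need to gunzip first or re-gzip afterwards.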
How did the humans make "Transformium" come to life? I don't know if this was explained in Transformers 4: Age of Extinction or I missed it, but how did Joshua Joyce and his organisation make "Transformium" come to life? From previous Michael Bay Transformers movies, all technology and Transformers not alive (active?) required the AllSpark/Energon to give them life. Basically, what power source was making the Transformium come to life? All of the Transformers are powered by bad writing. You mean, Transformers 4 contains a plot hole?! All the Transformers movies are, as Richard said, bad writing. In the first movie they get their life from the AllSpark. In the second and third, it's Energon. In the fourth, it's Transformium. Honestly, Bay butchered his own series of movies by not sticking to his own canon. Joshua is not actually giving power to the Transformium. He finds that Transformium can be manipulated by programming the Transformium matter. We write computer programs which control the electrical signals inside the devices of a computer to make them do the tasks we need. As such, we can program Transformium matter and make it do whatever we want, like change shape. Any and every Transformer has a Spark. This contains all the memory of a Transformer (as Optimus Prime describes in the Transformers 4 movie) and it is also the controlling power of the Transformium within a Transformer's body. The Spark of a Transformer can be thought of as the soul/mind of a human (whether it is the soul or the mind is another question, and we do not need to pursue it here), which controls and powers the body. We believe the soul goes away or the mind disappears when a human dies. As such, a Transformer will not live without a Spark. Without a Spark, the Transformium is not controlled, and it will fall into parts and lie on the ground, since there is no authority to bind it together into the form of a Transformer. This can be seen very clearly in the scene where Jetfire dies. So in nature, the Spark powers up a Transformer and controls the Transformium. What Joshua does is control the Transformium using computer programs. So a computer program does the controlling part of the Spark. But it doesn't power up the man-made Transformer. I'm sorry, but I couldn't find any reliable source which describes the power source of a man-made Transformer. The AllSpark, on the other hand, is a power source which can create, repair, or re-energize a Spark. It is not exactly coming to life. Joshua is led to believe that he is the one controlling the Transformers he created by using computers, by Megatron, who is manipulating Joshua into building himself a new body. There is no life source shown that gives life to those Transformers he built. So we can assume that each is just like an electrical appliance that uses electricity to function, since it is Joshua who is creating these Transformers. Galvatron, the first Transformium-bodied Transformer, doesn't have a Spark at all. It even tells Optimus Prime this fact when the latter stabs into the former's chest. I would believe that it's powered by electricity, since it's essentially a robot, although the mechanics producing that electricity inside it would be made of Transformium itself so as to make those special transformations possible, as no other explanation is given in the movie.
In Transformers: Age of Extinction, I believe it was implied that the creators used Transformium to build Cybertron: the cube has a super-sophisticated programming tool which turned the Transformium into Cybertronians and organized it into a planet, Cybertron. Michael Bay destroyed his own continuity. Joshua Joyce, the CEO of KSI, states that Transformers are made of Transformium. This is complete nonsense. If they're made of this substance, how is Cade Yeager able to repair Optimus Prime? How is the AllSpark able to create Transformers out of ordinary machines, as we see in the first movie? Transformium is a rare earth metal. That means it's rare! None of the machines turned into miniature Transformers in the first movie were made out of Transformium. It's complete and utter nonsense.
Most of us have heard about 3-D printing, but I didn't realize how accessible and how useful the technology actually is. There's a thriving open source community innovating on this, and applications that really make a difference. Amazing. Check out these links on the subject: Printrbot, a kit to assemble your own 3-D printer – one capable of printing out a copy of itself (courtesy of Kickstarter). There's a growing community of hackers advancing the state of the art. Check out the wiki. And finally, an example of inspired application: check out this video on 3D Printed Magic Arms. Image courtesy of Core77.com One of my favorite roles as a parent is to be the one who explains phenomena to my son – how stuff works, why things are the way they are. In support of this I've often looked for good online resources for science education. Here's one of the best: the San Francisco Exploratorium's list of Ten Cool Sites. Don't be fooled by the tagline “Bringing you the coolest since 1995”; they have links to some really good content. Among the great resources at PerceptualEdge, consultant Stephen Few offers some very good discussions of individual data visualization problems, and proposes solutions for each of them. This isn't a comprehensive treatise, but his commentary on individual cases is highly educational in itself. Here's a sample of a poorly done chart. Stephen rightly points out two major issues that make this misleading: a non-zero baseline, and a discontinuous time frame on the X axis. I would probably add that the side-by-side column format makes aggregate comparisons of the two data series hard to digest. Now here is Stephen's redesign of the chart. The problems identified have all been fixed in an elegant and highly functional chart. My only *small* nit with the reworked version is that the years only show up on the bottom-most chart of the three, forcing the viewer to scan down to the bottom of a long graphic in order to understand the time scale of data points at the top. For numerous additional examples of Stephen's great work and sound commentary, check out Stephen's Examples page here. Since human beings relate to the world spatially, maps are a powerful tool for analysis and sense-making. They can also be beautiful works of art in their own right. Here's a wonderful resource: Places and Spaces: Mapping Science, a 10-year effort to build a collection of maps to encourage a “cross-disciplinary discussion on how to best track and communicate human activity and scientific progress on a global scale.” The maps are physical artifacts but the online gallery is deep and very well done. The variety of representation modes in the collection is very broad, ranging from things we would recognize as maps, to Minard's “Napoleon's March to Moscow” chart made famous by Tufte, to some visualizations whose beauty may outstrip their explanatory power, such as this one by Ingo Günther. For a good book on the subject of maps and science, check out Atlas of Science: Visualizing What We Know by Katy Borner (link goes to Amazon). Here are a couple of interesting resources for connecting technologists to matters of public interest: - Code for America – a group that puts developers in touch with cities to help accelerate change. - Sunlight Labs – an open source community dedicated to making government and public data available and accessible online. If you're looking to build a data-access capability, these folks can probably help.
I just discovered a great blog for data geeks and fans of visual thinking: FlowingData, by Nathan Yau, the author of Visualize This: The FlowingData Guide to Design, Visualization, and Statistics (link to Amazon). Among other things, Nathan has built a list of Data and Visualization Blogs Worth Following, which is a great resource in its own right. While Nathan shares a wide variety of serious tools, there's humor in there too. Check out this one, reblogged from Doghouse Diaries: A few months ago I was listening to another Seminars About Long Term Thinking podcast from the Long Now Foundation: “Mapping Time”, by David Rumsey. In his written summary, Stewart Brand introduces Rumsey's talk as follows: “Once an artist, long a real estate success, now one of the world's leading historic map collectors and THE leading online map innovator, David Rumsey gives an exceptionally deft graphic talk. Complex and elegant things kept happening with his images, always on cue with never a hesitation or false move. I've never seen a tighter weaving of ideas, words, and persuasive images.” It's a great lecture, and offers just a taste of the incredibly powerful resource Rumsey has built through years of passionate collecting, visionary sharing, and financial support. Check out Rumsey's website: http://www.davidrumsey.com/
Unable to read back bits set in Pilosa I'm running a Pilosa docker container (https://hub.docker.com/u/pilosa/) on my OSX host. When running the code from the quick overview on your GitHub page, I do not retrieve any results back. However, I can see in the web UI of Pilosa that bit 42 is set on row 5 of myindex, myframe. The code itself is not capable of reading the bit back out. It does not return an error, but rather empty result arrays: go run cmd/test2/main.go Got bits: [] [] [] I have the same behaviour in the real application I'm developing. I can set bits and retrieve them from the web UI, but simple Bitmap queries from code do not work. @maartenheremans Were you using the Go client from the master branch? There's a pretty big change coming to the Pilosa server, and master of the Go client was modified to work with that. We've just merged a change to master that makes the Go client work with both the current Pilosa and the new one. Can you give it a try with the latest Go client on master? @maartenheremans Thank you for the report. It appears that gopilosa master is not compatible with v0.8.8 of Pilosa (which you get with the docker image). This unfortunately happens occasionally as we evolve APIs in our pre-1.0 state. You have two options to work around this: 1. Use the master branch of Pilosa (building from source is fairly straightforward, and running the built binary is as easy as pilosa server). 2. Check out version 0.8.0 of go-pilosa and use that: git checkout v0.8.0. Depending on what you're using for dependency management, you may wish to pin go-pilosa to this revision in Gopkg.toml or similar. If you're just experimenting with Pilosa and want to track the latest and greatest features, I recommend route 1. If you need to depend on Pilosa in production, I recommend the second route, and you can wait to upgrade until we have tagged 0.9 releases of both Pilosa and the clients. Ah, I didn't see @yuce's message before I posted. You're welcome to try pulling the latest go-pilosa and using that, but I would actually recommend you follow one of the two paths I outlined, as go-pilosa:master and pilosa:v0.8 (or whatever the last release is) are not guaranteed to stay compatible in general. Thanks for the quick reply. Since we are building production code, I assume path 2 is the best. Our build pipeline however relies on 'go get'. Is there a versioned package available via gopkg.in? We'd recommend using dep, but assuming that's not possible we can look into supporting gopkg.in. @yuce It now gives a proper error indicating a version mismatch. I believe this is a good solution. 2018/02/27 21:21:40 Pilosa server's version is 0.8.8, does not meet the minimum required for this version of the client: >=0.9.0. @maartenheremans We probably need to change/remove that message. It just tells you that the client is now working in "legacy" mode (Pilosa <= 0.9), but everything should still work. You shouldn't be getting empty results now. @maartenheremans Does using the master branch of the client solve the empty results problem? Can we close this issue? @yuce No, it does not solve the problem. The following is with the 0.8.8 Pilosa version: 2018/02/28 21:12:08 Pilosa server's version is 0.8.8, does not meet the minimum required for this version of the client: >=0.9.0. Got bits: [] [] [] However, with the latest build of Pilosa (built manually; ignore the 0.0.0 version): 2018/02/28 21:14:02 Pilosa server's version is 0.0.0, does not meet the minimum required for this version of the client: >=0.9.0.
Got bits: [42] [42] [] That log is just a warning; it doesn't affect your results, which I believe are correct. @yuce we should probably remove that log now that both versions are supported, right? @maartenheremans The protobuf definitions in the Pilosa server changed after v0.8.8 was released, so current versions of the Pilosa clients are not compatible with that version. We test our clients against both Pilosa master and the Pilosa cluster-resize branch (which is going to become v0.9) and missed testing against v0.8.8, which as I mentioned has different protobuf definitions. The web UI uses JSON, so that change doesn't affect it. As you mentioned, the Go client on master is compatible with Pilosa on master, so you can use those together; or use Go client v0.8.0 and Pilosa v0.8.8 together. In that case a small change is required in your code: you will need to change result.Bitmap() to result.Bitmap. Please refer to the README of the Go client on the v0.8.0 tag. Thank you for bringing this issue to our attention, and I am sorry for this error. I will remove the Pilosa v0.8 compatibility badge and restore the original "incompatibility" message for the Go client. @yuce Thanks again for picking this up so quickly. I can indeed continue for now with both a latest build of Pilosa and the Go client.
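For anyone else who lands here and needs the pinning route: with dep, the constraint suggested above would look roughly like this in Gopkg.toml (a sketch; the exact constraint spelling may vary with your dep version):

# Gopkg.toml -- pin the client to the release that matches Pilosa v0.8.x
[[constraint]]
  name = "github.com/pilosa/go-pilosa"
  version = "=0.8.0"

That keeps the build on the v0.8 protobuf definitions until tagged 0.9 releases of both the server and the client are available.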
Wine 5.0 Stable Released. The Wine team announced the latest stable release, 5.0, on Jan 21, 2020. Its source code is available for download from the official site. You may also use the package manager to install Wine. Wine is an open source implementation of the Windows API and will always be free software. Approximately half of the source code is written by volunteers, with the remaining effort sponsored by commercial interests, especially CodeWeavers. An official apt repository is available to install Wine on Ubuntu systems. You just need to enable the repository on your Ubuntu system and install the latest Wine packages using apt-get. This tutorial describes how to install Wine on Ubuntu 18.04 LTS and related Ubuntu releases. Step 1 – Setup PPA First of all, if you are running a 64-bit system, enable the 32-bit architecture. Also install the key that was used to sign the Wine packages: sudo dpkg --add-architecture i386 wget -qO - https://dl.winehq.org/wine-builds/winehq.key | sudo apt-key add - Use one of the following commands to enable the Wine apt repository in your system, based on your operating system version. ### On Ubuntu 19.10 sudo apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ eoan main' ### On Ubuntu 18.04 sudo apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ bionic main' sudo add-apt-repository ppa:cybermax-dexter/sdl2-backport ### On Ubuntu 16.04 sudo apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ xenial main' Step 2 – Install Wine on Ubuntu Use the commands below to install the Wine packages from the apt repository. The --install-recommends option will install all the packages recommended by winehq-stable on your Ubuntu system. sudo apt update sudo apt install --install-recommends winehq-stable If you face an unmet-dependencies error during installation, use the following commands to install winehq-stable using aptitude: sudo apt install aptitude sudo aptitude install winehq-stable Step 3 – Check Wine Version The Wine installation is now complete. Use the following command to check the version of Wine installed on your system: wine --version wine-5.0 How to Use Wine (Optional) To use Wine, log in to the GUI desktop of your Ubuntu system. Then download a Windows .exe file such as PuTTY and open it with Wine, either by right-clicking the file and choosing Open With Wine Windows Program Loader, or with a command like the one sketched below. This tutorial helped you to install Wine on Ubuntu systems.
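The launch command itself did not survive in the text above; assuming the PuTTY executable was saved to ~/Downloads (a hypothetical path; substitute wherever you downloaded it), it would be:

wine ~/Downloads/putty.exe    # path and filename are examples

On first run Wine also creates its ~/.wine prefix (a private Windows-like directory tree) before starting the program.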
Entity Framework code-first migration not always working well I have created a migration for my recent changes to my model, which resulted in the following script: DropIndex("dbo.InboundActions", new[] { "ReferredFrom_Id1" }); DropIndex("dbo.InboundCopyActionLogs", new[] { "InboundReferToEmployee_Id" }); DropIndex("dbo.InboundCopyActions", new[] { "ReferredFrom_Id1" }); DropColumn("dbo.InboundActions", "ReferredFrom_Id"); DropColumn("dbo.InboundCopyActions", "ReferredFrom_Id"); RenameColumn(table: "dbo.InboundCopyActionLogs", name: "InboundReferToDivision_Id", newName: "InboundRefer_Id"); RenameColumn(table: "dbo.InboundCopyActions", name: "ReferredFrom_Id1", newName: "ReferredFrom_Id"); RenameColumn(table: "dbo.InboundCopyActionLogs", name: "InboundReferToEmployee_Id", newName: "InboundRefer_Id"); RenameColumn(table: "dbo.InboundActions", name: "ReferredFrom_Id1", newName: "ReferredFrom_Id"); RenameIndex(table: "dbo.InboundCopyActionLogs", name: "IX_InboundReferToDivision_Id", newName: "IX_InboundRefer_Id"); Now when I try to update the database, I get the following errors: The index 'IX_ReferredFrom_Id' is dependent on column 'ReferredFrom_Id'. The object 'FK_dbo.InboundActions_dbo.Divisions_ReferredFrom_Id' is dependent on column 'ReferredFrom_Id'. ALTER TABLE DROP COLUMN ReferredFrom_Id failed because one or more objects access this column. This is not the first time I have hit this kind of error with migrations. It forces me to drop all migrations and the database and start over with brand-new migrations, which of course isn't practical. Is there a problem with EF, or with me? You are running into this issue. If you are in a stage of development where you are frequently renaming columns and drastically changing the models, you might consider holding off on migrations and using the database initializers that drop and recreate your database. Use seeding to keep your essential data. But the model change I just made might very easily happen after deployment; in that case I would be in great trouble, since I would not be able to drop and recreate the database. Any solutions? Yes, as the link mentions, you are running into this because there are constraints that must be dropped before the rename of the FK can take place. This is a problem with EF, so you will need to code your own "ALTER TABLE ... DROP CONSTRAINT ..." ahead of the rename(s); see the sketch below. This is what I wanted to hear. It is another Microsoft flaw! Just wanted to make sure I am not doing anything wrong.
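For later readers, here is roughly what the manual workaround could look like in an EF6 code-first migration. The class name is invented for the example, and the index/constraint names are copied from the error messages above; verify yours (e.g. with sp_helpindex or sys.foreign_keys) before running this:

using System.Data.Entity.Migrations;

public partial class FixReferredFromRename : DbMigration
{
    public override void Up()
    {
        // Drop the dependent index and FK constraint by hand, since the
        // scaffolded migration does not do it before touching the column.
        Sql("DROP INDEX [IX_ReferredFrom_Id] ON [dbo].[InboundActions]");
        Sql("ALTER TABLE [dbo].[InboundActions] DROP CONSTRAINT [FK_dbo.InboundActions_dbo.Divisions_ReferredFrom_Id]");

        // The scaffolded operations can now run without the
        // "is dependent on column" errors.
        DropColumn("dbo.InboundActions", "ReferredFrom_Id");
        RenameColumn(table: "dbo.InboundActions", name: "ReferredFrom_Id1", newName: "ReferredFrom_Id");
    }
}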
folder is commonly used by different memory cards for digital cameras. I think it stands for Digital Camera IMages. Sounds like pictures were unloaded from a memory card to your hard drive. Here are instructions for attaching multiple pictures to a Yahoo email: Yahoo! Mail > Emailing: The Basics > How do I add an attachment to an email? http://help.yahoo.com/l/us/yahoo/mail/yaho.../basics-41.html I don't know which version of Outlook you're using, so here are a few tutorials for Outlook: Step-by-Step: Sending E-Mail Attachments http://www.learnthenet.com/english/html/94attach.htm Attach a file or message to an e-mail message (applies to Microsoft Office Outlook 2003) http://office.microsoft.com/en-us/outlook/...2746711033.aspx Attach a file or other item to an e-mail message (applies to Microsoft Office Outlook 2007) http://office.microsoft.com/en-us/outlook/...2319501033.aspx For me, adding multiple attachments (photos) to an email requires flipping back and forth between a graphics viewer (so I can see which picture is which) and the email that I am attaching the pictures to. You can use My Computer or Windows Explorer (Thumbnail view) to see tiny "thumbnail images", or you can use a graphics viewer program. In the menus across the top (File, Edit, View, Favorites, Tools, Help), click View, Thumbnails. A free graphics viewer program called IrfanView is listed here: Post #3, under Graphics Design & Editing, in Freeware Replacements For Common Commercial Apps http://www.bleepingcomputer.com/forums/topic3616.html IrfanView home page Here are screenshots so you can see what the IrfanView program looks like: http://www.snapfiles.com/screenshots/irfan.htm As far as organizing your photos, you can decide whether you want them all on your hard drive or all on your external hard drive. (Important ones could be backed up/burned to a CD in case of hard drive failure.) When I helped a friend of mine organize her photos (an elderly lady in her late 70s), we created a main folder called Photos, and then created sub-folders inside the Photos folder like this: Photos - Home Inventory You can drag/drop the photos to the appropriate folder, or you can select multiple files at once, either by using the Ctrl key or the Shift key. Ctrl is for non-sequential files, like the first one, the third one, the fifth one, etc. Shift selects the first file you click, the last file you click, and all files in between. Here's a little more info on selecting multiple files: How do I select or highlight multiple files? http://www.computerhope.com/issues/ch000771.htm Some people are comfortable with drag/drop. Some people end up frustrated with drag/drop, because if your finger "stutters" on the mouse, you can accidentally drop your files in an unintended location, and you might not know where the files got dropped. If that happens, click Edit, Undo Move and it puts the files back where they were. Some people prefer to use Cut/Paste, to avoid the finger-stuttering-on-the-mouse issue. To arrange files by date, when you are in a My Computer or Windows Explorer screen, click View, Arrange Icons by, Modified. Another quick way: when you see the column header (that says Name, Size, Type, Date Modified), you can right-click on the column header and select "Date Created". It adds a "Date Created" column. You can then left-click and drag the Date Created column to where you want it (perhaps before Date Modified). Then, left-click (one time) on the column header that says Date Created, and it puts the files in date order.
If you click it again, it puts them in date order in the other direction (ascending vs descending order). Hope that gives you what you need. Just curious, have you considered getting the Windows Update for Windows XP SP3 (Service Pack 3)? Information about Windows XP Service Pack 3: http://support.microsoft.com/kb/936929 Steps before you install Windows XP SP3: To view some recommended steps to take before you install Windows XP SP3, see the Microsoft Knowledge Base article "Steps to take before you install Windows XP Service Pack 3". List of fixes that are included in Windows XP Service Pack 3: http://support.microsoft.com/kb/946480/
The term “provably fair” gaming has become a buzzword in blockchain-based gaming applications. While it is a catchy term, it doesn't fully depict what goes on “under the hood” in applications. Many gaming applications use on-chain RNG (random-number generators) and extol their virtues, but most of the time don't fully explain the underlying mechanics. In this post (content taken from our white paper), the Virtue Poker team describes how card shuffling and gameplay work in our application. We'd like to move away from the terminology of “provably fair” and instead show the underlying mechanics of how our game client works. Random Number Generator Certification Practices Online poker differs from live games in a key domain: in a live game, players can see the dealer shuffle the deck of cards, whereas in the online sphere players must trust that the RNG of the operator is operating properly. Nearly every online operator has their RNG certified by a pre-approved third party. RNG testing companies include iTech Labs and Gaming Laboratories International. The integrity of these companies rarely comes into question, and generally speaking these auditors complete their jobs sufficiently. More interesting is the lack of oversight after an operator receives their certificate. The Malta Gaming Authority uses the following language on their website: “After the certification process required for issue of the full five year licence, the gaming system need not be tested regularly, but there will be follow up audits by the Gaming Authority when deemed prudent.” The Isle of Man uses the following language in their Guidance for Online Gambling: “While many operators may have their games' RNG checked on a more frequent periodic basis, the GSC will have an operator's RNG checked at least twice in a licence's 5 year lifespan.” This lack of oversight has contributed to a prevalent belief among online poker players that the games in certain instances may not be entirely fair. In 1978 the cryptographers Adi Shamir, Ron Rivest, and Leonard Adleman published a paper in response to a question that had been posed by the computer scientist Robert W. Floyd: “Is it possible to play a fair game of ‘Mental Poker’?” This paper proposes an encryption scheme and communications protocol that allows two people at different locations to shuffle and deal virtual “cards” in a way that allows a game to be played without the need for a trusted third party. Over the ensuing years there have been numerous other papers published on the subject, expanding upon the ideas, offering alternative methods, and providing analysis and critique. There have been, however, very few practical software applications employing Mental Poker techniques. In large part, this is because the cryptography involved can require enormous amounts of computational power and communications resources, and software using it simply runs too slowly for consumer use. In addition, the inherently “peer to peer” nature of Mental Poker can be difficult to manage and doesn't blend well with traditional server-based online game models. The Virtue Poker team has spent the past two years examining how the use of blockchain and distributed storage technologies, in concert with cooperative peer-to-peer networking, can address some of these difficulties. The result is a downloadable application that can play a 6-handed game at speed and manage real player stakes using the Ethereum blockchain.
Mental Poker ensures the deck is unreadable to any single player by encrypting and shuffling the cards cooperatively, in a way that lets each card be “opened” by one player, some players, or the entire group. The protocol uses commutative encryption: cards can be encrypted or decrypted in any order. The basic algorithm is outlined below. Mental Poker Algorithm: The Two-Pass Shuffle Three players, Bob, Alice and Ted, are seated at a table and are playing a game of Texas Hold'em. Bob is the dealer, and he generates a deck of 52 cards on his machine; only he can view the cards. Using a Fisher-Yates shuffle seeded from /dev/urandom, he shuffles the deck of cards and then encrypts the deck with the same encryption key on each card, making the deck unreadable to anyone but himself. He then passes the now-encrypted deck to Alice, who does the same thing: shuffles the deck of cards and then encrypts it. Finally, Alice passes the deck to Ted, who goes through the same process. The deck is now in its final ordered state, 1 through 52, and this order does not change throughout the course of the hand. Ted passes the now triply encrypted deck of cards back to Bob, who takes off his “shuffle lock” and then encrypts each individual card with a different encryption key: B1, B2, ..., B52. He passes the deck to Alice, who does the same thing: removes her “shuffle lock” and encrypts the deck with unique encryption keys A1, A2, ..., A52. Alice then passes the deck back to Ted, who completes the same process. Bob is assigned the first and second cards in the deck, but he only possesses his own encryption keys that correspond to these cards. Alice and Ted therefore share their encryption keys that correspond to the first two cards (A1 and A2, and T1 and T2, respectively), so that Bob holds all three decryption keys for his private cards. This enables Bob, and no one else, to view his private cards. This process is repeated for each player at the table, so each player can only view their own private cards. All players call and the hand goes to the flop. The flop is denoted by cards 7, 8 and 9 in the deck. All players must share their encryption keys that correspond to the community cards, so that everyone can see these shared cards. This process continues until the end of the hand, where the winning player is awarded the pot, and all players reach consensus (described in detail in Section 4.2) by signing the end result of the hand, which is sent to the Ethereum blockchain to update the game state (chip totals) for all players seated at the table. See Figures 1 through 4, which depict this process. Two Rounds of Encryption: Shuffling the Deck and Indexing the Deck “Multi-party shuffling” only requires that one of the peers perform a “proper” random shuffle in order to ensure that the entire deck is randomly ordered. If a player trusts that his own machine shuffled the deck properly, then he can have confidence that the game is fair. Figure 1: Shuffling and Encrypting the Deck Figure 2: Indexing the Deck Decryption and Gameplay Figure 3: Player Key Sharing Figure 4: Community Cards The look and feel of a hand on Virtue Poker is similar to the experience players have come to expect from online platforms. As long as one player shuffles the deck correctly, the shuffle is sufficiently random. Therefore, as long as you trust your own machine, you can trust the game is fair.
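Here is a minimal sketch of the two-pass commutative shuffle described above. This is our illustration only, not Virtue Poker's actual implementation: it uses SRA-style encryption (modular exponentiation, so layers from different players commute), a toy-sized prime, and Python's non-cryptographic random module, and it compresses "remove shuffle lock, add per-card locks" into a single pass per player, which commutativity permits. It needs Python 3.8+ for the three-argument modular inverse:

import random
from math import gcd

P = 2**127 - 1  # a Mersenne prime; toy-sized, NOT large enough for real security

def keygen():
    # SRA key pair: exponent e coprime to P-1, plus its inverse d.
    while True:
        e = random.randrange(3, P - 1)   # toy randomness; use a CSPRNG in practice
        if gcd(e, P - 1) == 1:
            return e, pow(e, -1, P - 1)

def crypt(m, k):
    # Both encryption and decryption are exponentiation mod P, so
    # crypt(crypt(m, a), b) == crypt(crypt(m, b), a): the layers commute.
    return pow(m, k, P)

players = ["Bob", "Alice", "Ted"]
deck = list(range(2, 54))                # 52 cards encoded as the integers 2..53

# Pass 1: each player shuffles the deck, then encrypts every card with ONE key.
shuffle_keys = {}
for name in players:
    e, d = keygen()
    shuffle_keys[name] = (e, d)
    random.shuffle(deck)
    deck = [crypt(c, e) for c in deck]

# Pass 2: each player strips the shuffle lock and re-encrypts every position
# with its own per-card key; after this the deck order is fixed for the hand.
card_keys = {}
for name in players:
    e, d = shuffle_keys[name]
    deck = [crypt(c, d) for c in deck]                 # remove the shuffle lock
    card_keys[name] = [keygen() for _ in deck]         # (e_i, d_i) for every slot
    deck = [crypt(c, card_keys[name][i][0]) for i, c in enumerate(deck)]

def open_card(i, helpers):
    # Reveal card i to whoever receives the listed players' slot keys d_i.
    c = deck[i]
    for name in helpers:
        c = crypt(c, card_keys[name][i][1])
    return c

# The flop is cards 7, 8, 9 (indices 6..8): everyone shares their slot keys.
flop = [open_card(i, players) for i in (6, 7, 8)]
assert all(2 <= c <= 53 for c in flop)
print("flop:", flop)

Because every layer is an exponentiation modulo the same prime, each player's decryption key cancels only that player's matching layer regardless of the order in which keys are applied, which is exactly the property the prose relies on when keys are shared for hole cards and community cards.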
They can help and guide you step by step in what to do to clean it up. Object: CN=Administrator,CN=Users,DC=bbcoxgate,DC=local Network address: 63cbee44-35c3-43d9-8c1f-d318c7988d3d._msdcs.bbcoxgate.local ----------------------------------------------------------------------------------- Under the gc folder we have the IP addresses of our two DCs. At the same time, maybe 4 months ago, I did a metadata cleanup on the current main DC to get rid of references to the obsolete DC. We use Threadmaster to throttle it so we can work, or procexp.exe to suspend it for a while sometimes. As for the SOA: with Active Directory integrated zones, the SOA will change to any DC/DNS server in the infrastructure that receives a change, such as a DNS registration attempt. https://social.technet.microsoft.com/Forums/en-US/a035fe04-f857-4d27-ad8d-84b7f67c347b/cannot-delete-computer-from-active-directory-and-pc-cannot-join-domain?forum=winserverDS The second DC seems to have no forwarders configured, and has itself down as SOA - not sure if that is correct. The main DC also has itself as SOA. To fix it, do this... Martin. The AD was replicated from the bad DC to the new one... Note that it contains a count of how many DCs have not replicated in a day, week, month, two months, or the tombstone lifetime. If you don't have the support tools installed, install them from your server install disk. The script is located on my website at http://www.pbbergs.com/windows/downloads.htm Just select both dcdiag and netdiag and make sure verbose is set. (Leave the default settings for dcdiag as set when selected.) The resolution for WTEC-DC1 is to remove it from the network, manually demote it, clean up the server object in Active Directory, wait for replication, and re-promote it. OK, demoting and cleaning up the metadata may do something, or wiping the machine and re-installing Windows may do it, but it is really a last resort on a production machine... http://windowsitpro.com/windows/how-can-i-delete-active-directory-ad-object-unknown-type There was one reference to the rogue BBC-15 in the reverse lookup zone, so I deleted that. The remaining "bad" DC, our main one, was cleaned up to get rid of the references to the wiped DC. Uninstall all antivirus and other security apps to eliminate them as a possible cause. In this scenario, you'll notice the delete option doesn't even appear on the right-click menu. Could you provide the following: ipconfig /all from your DCs and from the workstation that is failing.
The only problem now is this bogus entry in AD... DNS request timed out. http://knowaretech.com/active-directory/active-directory-the-dsa-object-cannot-be-deleted.html Re: AD restored incorrectly. I rebooted the server in DS Restore Mode and did the following: 1) Took a backup of C:\Windows\NTDS. 2) From the command prompt: ntdsutil files info 3) The database files... While there are already some good articles out there describing lingering objects, I'd like to put my own spin on the issue based on experiences I've had with them. Problem solved. :) Hard repair? From the main Security tab, grant Full Control permission to your account or group, then click OK. Check the error log at C:\Program Files\Microsoft Windows Small Business Server\Support\delusr.log And the error listed in the error log: -------------------------------------------------------------------------------- Remove user on date: Fri Jan 26 09:03:04 2007 User account... I'll be away all next week. You may have deleted an account called RBrown several months ago and now another person joins the company with a similar name. http://knowaretech.com/active-directory/active-directory-services-cannot-find-the-web-server-windows-7.html If you look under the bbcoxgate.local zone in DNS, do you see an _msdcs.bbcoxgate.local and all the SRV records, including a 'gc' folder under it? I think in researching why Net Logon doesn't start automatically after a reboot I found that AD having been restored incorrectly was given as the reason. I'm not sure what it is that is not working, but I don't believe it has anything to do with your environment. If one DC or DNS goes down, does a client log on to another DC? Dnslint crashed, however. In a large forest with multiple domains, however, it isn't so easy. Is the object in Sites and Services, too? I still get the same error message: 'Windows cannot delete object whatever because: The specified directory service attribute or value does not exist.' Martin
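Since metadata cleanup comes up several times in this thread, here is the classic interactive ntdsutil sequence for removing a dead DC's metadata, reconstructed from the standard documented procedure rather than from this thread; the server name and the selection numbers are placeholders for your own environment (on Server 2008 R2 and later you can instead simply delete the dead DC's computer object or its NTDS Settings object in the GUI, which performs the cleanup for you):

ntdsutil
metadata cleanup
connections
connect to server GOOD-DC01
quit
select operation target
list domains
select domain 0
list sites
select site 0
list servers in site
select server 1
quit
remove selected server
quit
quit

Afterwards, check DNS (including the _msdcs zone and any reverse lookup zones) and Sites and Services for leftover references to the old DC, as discussed above.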
Many common problems (beyond those addressed by J2EE application servers) have been solved well by open source or commercial packages and frameworks. In such cases, designing and implementing a proprietary solution may be wasted effort. By adopting an existing solution, we are free to devote all our effort to meeting business requirements. After commenting that existing frameworks can mean a slightly steeper learning curve, Rod later motivates why this trade-off is worthwhile to gain a strong application infrastructure. On page 395, he clearly explains the benefits: using a strong standard infrastructure can deliver better applications, faster. A strong infrastructure makes this possible by achieving the following goals:

- Allowing application code to concentrate on implementing business logic and other application functionality with a minimum of distraction. This reduces time to market by reducing development effort, and reduces costs throughout the project lifecycle by making application code more maintainable (because it is simpler and focused on the problem domain). This is the ultimate goal, which many of the following goals help us to achieve.
- Separating configuration from Java code.
- Facilitating the use of OO design by eliminating the need for common compromises.
- Eliminating code duplication, by solving each problem only once. Once we have a good solution for a problem such as a complex API, we should always use that solution, in whatever components or classes encounter the problem.
- Concealing the complexity of J2EE APIs. We've already seen this with JDBC; other APIs that are candidates for a higher level of abstraction include JNDI and EJB access.
- Ensuring correct error handling. We saw the importance of this when working with JDBC in Chapter 9.
- Facilitating internationalization if required.
- Enhancing productivity without compromising architectural principles. Without adequate infrastructure, it is tempting to cut corners by adopting quick, hacky solutions that will cause ongoing problems. Appropriate infrastructure should encourage and facilitate the application of sound design principles.
- Achieving consistency between applications within an organization. If all applications use the same infrastructure as well as the same application server and underlying technologies, productivity will be maximized, teamwork more effective, and risk reduced.
- Ensuring that applications are easy to test. Where possible, a framework should allow application code to be tested without deployment on an application server.

Several existing application frameworks provide ready-to-use implementations of the kind of strong application infrastructure that Rod recommends. If you use frameworks, you won't have to design, code, debug, and maintain your own infrastructure code. In this whitepaper, we examine two existing J2EE frameworks by studying a working sample application. By patterning the sample application after the "classic" Java Pet Store Demo, we've made it easier for readers familiar with the original demo to compare the developer productivity that a framework-based J2EE development approach can provide.

Rebuilding a Web Storefront with Struts and ADF

The ADF Toy Store demo is a simple web storefront application adhering to the Model/View/Controller (MVC) design pattern. It is implemented using two existing J2EE application frameworks: Apache Struts and Oracle ADF.
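To make the JDBC point above concrete, here is a minimal sketch of the template/callback idea that such infrastructure typically uses. This is our illustration only: the class and interface names are invented for the example and are taken neither from the whitepaper nor from any particular framework:

import java.sql.*;
import java.util.*;

/** Callback that maps one row of a ResultSet to an object. */
interface RowMapper<T> {
    T mapRow(ResultSet rs) throws SQLException;
}

/** Solves connection, statement, and cleanup handling once, for every query. */
class SimpleJdbcTemplate {
    private final String url;

    SimpleJdbcTemplate(String url) { this.url = url; }

    <T> List<T> query(String sql, RowMapper<T> mapper, Object... args) {
        List<T> results = new ArrayList<>();
        // try-with-resources guarantees cleanup even on error, so no
        // application code ever leaks a Connection again.
        try (Connection con = DriverManager.getConnection(url);
             PreparedStatement ps = con.prepareStatement(sql)) {
            for (int i = 0; i < args.length; i++) {
                ps.setObject(i + 1, args[i]);
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    results.add(mapper.mapRow(rs));
                }
            }
        } catch (SQLException e) {
            // One place to translate checked SQLExceptions into an unchecked
            // exception, ensuring consistent error handling everywhere.
            throw new RuntimeException("Query failed: " + sql, e);
        }
        return results;
    }
}

With this in place, application code shrinks to the business-relevant part, e.g. template.query("SELECT name FROM pets WHERE species = ?", rs -> rs.getString("name"), "cat"), which is exactly the "concentrate on business logic" and "solve each problem only once" payoff described above.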
Swift cannot use -enable-experimental-cxx-interop on Linux with Foundation Description Foundation can't be used on Linux when -enable-experimental-cxx-interop is enabled. Steps to reproduce import Foundation from a Swift file compiled with -enable-experimental-cxx-interop on Linux. Environment Swift compiler version info: Swift version 5.7.3 (swift-5.7.3-RELEASE), Target: aarch64-unknown-linux-gnu Xcode version info: N/A Deployment target: Ubuntu Linux I'm also getting wild errors here. I don't want to flood this issue with another batch of errors, but apparently in my case I get a segfault when compiling for Linux (and a humongous error output, which I will share if asked for); I also get a quick failure on macOS, which I can share since it's fairly quick and small: Rocska-MBP:xxxxx-xxxxxxxxxx rocskaadam$ swift build -c release --product xxxxxxxxxx --static-swift-stdlib -Xswiftc -enable-experimental-cxx-interop Building for production... remark: Incremental compilation has been disabled: it is not compatible with whole module optimization remark: Incremental compilation has been disabled: it is not compatible with whole module optimization remark: Incremental compilation has been disabled: it is not compatible with whole module optimization remark: Incremental compilation has been disabled: it is not compatible with whole module optimization remark: Incremental compilation has been disabled: it is not compatible with whole module optimization remark: Incremental compilation has been disabled: it is not compatible with whole module optimization /Users/rocskaadam/xxxxx/src/xxxxx-xxxxxxxxxx/.build/checkouts/swift-log/Sources/Logging/Locks.swift:65:39: error: missing arguments for parameters '__sig', '__opaque' in call var attr = pthread_mutexattr_t() ^ __sig: <#Int#>, __opaque: <#(CChar, CChar, CChar, CChar, CChar, CChar, CChar, CChar)#> Darwin._opaque_pthread_mutexattr_t:2:12: note: 'init(__sig:__opaque:)' declared here public init(__sig: Int, __opaque: (CChar, CChar, CChar, CChar, CChar, CChar, CChar, CChar)) ^ [1/9] Compiling CStbImage image_io.c Sure, it may look like it's an issue with swift-log, and that may be the case, but I only get this failure when I try to enable the experimental C++ interop. If this comment was merely noise, please accept my apologies. @adam-rocska This is great feedback. We will look into it.
CC @etcwilde similar error here /usr/bin/../lib/gcc/aarch64-linux-gnu/9/../../../../include/c++/9/bits/stl_pair.h:442:40: error: redeclaration of deduction guide template<typename _T1, typename _T2> pair(_T1, _T2) -> pair<_T1, _T2>; ^ /usr/bin/../lib/gcc/aarch64-linux-gnu/9/../../../../include/c++/9/bits/stl_pair.h:442:40: note: previous declaration is here template<typename _T1, typename _T2> pair(_T1, _T2) -> pair<_T1, _T2>; ^ <module-includes>:2:10: note: in file included from <module-includes>:2: #include "CoreFoundation.h" ^ /usr/lib/swift/CoreFoundation/CoreFoundation.h:33:10: note: in file included from /usr/lib/swift/CoreFoundation/CoreFoundation.h:33: #include <math.h> ^ /usr/bin/../lib/gcc/aarch64-linux-gnu/9/../../../../include/c++/9/math.h:36:11: note: in file included from /usr/bin/../lib/gcc/aarch64-linux-gnu/9/../../../../include/c++/9/math.h:36: # include <cmath> ^ /usr/bin/../lib/gcc/aarch64-linux-gnu/9/../../../../include/c++/9/cmath:1927:12: note: in file included from /usr/bin/../lib/gcc/aarch64-linux-gnu/9/../../../../include/c++/9/cmath:1927: # include <bits/specfun.h> ^ /usr/bin/../lib/gcc/aarch64-linux-gnu/9/../../../../include/c++/9/bits/specfun.h:45:10: note: in file included from /usr/bin/../lib/gcc/aarch64-linux-gnu/9/../../../../include/c++/9/bits/specfun.h:45: #include <bits/stl_algobase.h> ^ /usr/bin/../lib/gcc/aarch64-linux-gnu/9/../../../../include/c++/9/bits/stl_algobase.h:64:10: note: in file included from /usr/bin/../lib/gcc/aarch64-linux-gnu/9/../../../../include/c++/9/bits/stl_algobase.h:64: #include <bits/stl_pair.h> ^ /usr/bin/../lib/gcc/aarch64-linux-gnu/9/../../../../include/c++/9/bits/stl_pair.h:448:5: error: redefinition of 'operator==' operator==(const pair<_T1, _T2>& __x, const pair<_T1, _T2>& __y) ^ /usr/bin/../lib/gcc/aarch64-linux-gnu/9/../../../../include/c++/9/bits/stl_pair.h:448:5: note: previous definition is here operator==(const pair<_T1, _T2>& __x, const pair<_T1, _T2>& __y) ^ <unknown>:0: error: too many errors emitted, stopping now <unknown>:0: error: could not build C module 'CoreFoundation' Similar errors here: Building for debugging... 
error: emit-module command failed with exit code 1 (use -v to see invocation) <module-includes>:2:10: note: in file included from <module-includes>:2: #include "CoreFoundation.h" ^ /opt/swift-5.9-DEVELOPMENT-SNAPSHOT-2023-07-10-a-ubuntu22.04-aarch64/usr/lib/swift/CoreFoundation/CoreFoundation.h:33:10: note: in file included from /opt/swift-5.9-DEVELOPMENT-SNAPSHOT-2023-07-10-a-ubuntu22.04-aarch64/usr/lib/swift/CoreFoundation/CoreFoundation.h:33: #include <math.h> ^ /usr/lib/gcc/aarch64-linux-gnu/11/../../../../include/c++/11/math.h:36:11: note: in file included from /usr/lib/gcc/aarch64-linux-gnu/11/../../../../include/c++/11/math.h:36: # include <cmath> ^ /usr/lib/gcc/aarch64-linux-gnu/11/../../../../include/c++/11/cmath:1935:12: note: in file included from /usr/lib/gcc/aarch64-linux-gnu/11/../../../../include/c++/11/cmath:1935: # include <bits/specfun.h> ^ /usr/lib/gcc/aarch64-linux-gnu/11/../../../../include/c++/11/bits/specfun.h:45:10: note: in file included from /usr/lib/gcc/aarch64-linux-gnu/11/../../../../include/c++/11/bits/specfun.h:45: #include <bits/stl_algobase.h> ^ /usr/lib/gcc/aarch64-linux-gnu/11/../../../../include/c++/11/bits/stl_algobase.h:64:10: note: in file included from /usr/lib/gcc/aarch64-linux-gnu/11/../../../../include/c++/11/bits/stl_algobase.h:64: #include <bits/stl_pair.h> ^ /usr/lib/gcc/aarch64-linux-gnu/11/../../../../include/c++/11/bits/stl_pair.h:59:10: note: in file included from /usr/lib/gcc/aarch64-linux-gnu/11/../../../../include/c++/11/bits/stl_pair.h:59: #include <bits/move.h> // for std::move / std::forward, and std::swap ^ /usr/lib/gcc/aarch64-linux-gnu/11/../../../../include/c++/11/bits/move.h:77:5: error: redefinition of 'forward' forward(typename std::remove_reference<_Tp>::type& __t) noexcept ^ /usr/lib/gcc/aarch64-linux-gnu/11/../../../../include/c++/11/bits/move.h:77:5: note: previous definition is here forward(typename std::remove_reference<_Tp>::type& __t) noexcept ^ If I remove cxxLanguageStandard: .cxx17 then I get a bit further, except the C++ headers I'm importing need things in C++17, so I don't get too much further. yeah it appears it's only breaking starting with C++17 and above. Investigating. @adam-rocska fwiw, your issue has been fixed, we now provide (deprecated) default constructors again. I have a potential fix. Thank you, will this make it into 5.9?
6.5.0 on NCSA systems (JYC, Blue Waters)

Original issue: https://charm.cs.illinois.edu/redmine/issues/98 No body.

Original date: 2013-03-28 03:24:23 The known issues blocking some of these builds (#119, #131) have been dealt with. We want to get everything from the 6.5 release wrapped up so we can announce the installed paths for users. If I don't see an issue complete and a path recorded tomorrow, I'm going to start nagging.

Original date: 2013-04-01 17:07:03 Ping: this was supposed to get done last week. Please do the builds and post a path to them, or note that you can't so the issue can be reassigned.

Original date: 2013-04-02 17:58:52 Was on vacation last week. Revising build script for Blue Waters now.

Original date: 2013-04-05 18:14:36 Hacked the script to express suitable options on Gemini. Note: there is still some kind of race condition which can make parallel builds fail. This crops up in optimized craycc builds due to the incredible slowness of the Cray compiler. Results in h2ologin:~bohm/stable/charm

Remaining issues: verify functionality of the builds, tweak per-compiler optimization flags, copy to JYC, move to more public directory space.

#!/usr/bin/perl
# Script to build a range of Charm++ targets, as specified by the variables below
# - Most recent system libraries/drivers/network stack
# - Builds for each supported compiler (e.g. xlc, gcc, icc, pgcc, craycc, clang) on the system
# - Builds for different purposes (debug -g, devel -optimize --disable-error-checking,
#   production --with-production)
".$thiscompilermodule.";";
$previouscomp = $thiscompilermodule;
foreach my $THISSUBARCH (@SUBARCH) {
    print "subarch: $THISSUBARCH\n";
    foreach my $PURPOSE_NAME (keys %PURPOSE) {
        print "purpose $PURPOSE_NAME\n";
        my $PURPOSE_ARG = $PURPOSE{$PURPOSE_NAME};
        my $PURPOSE_PRE = $$PURPOSE_ARG{"pre"};
        my $PURPOSE_POST = $$PURPOSE_ARG{"post"};
        my @command = ($moduleloadstring.$compileloadstring.'./build', $TARGET, $ARCH);
        # push(@command, $comp) if ($comp);
        push(@command, $THISSUBARCH) if (@SUBARCH);
        push(@command, $PAR);
        push(@command, "--suffix=".$thiscompilermodule."-".$PURPOSE_NAME);
        push(@command, split(' ', $PURPOSE_PRE));
        push(@command, split(' ', $PURPOSE_POST));
        print "@command\n";
        `@command`;
        $compileloadstring = '';
    }
}

Original date: 2013-04-08 21:56:00 At what path can the builds be found?
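For context, a single iteration of the inner loop effectively expands to a command of this shape (the module and architecture names below are illustrative, not the actual Blue Waters values):

module load PrgEnv-gnu; ./build charm++ gemini_gni-crayxe smp -j8 --suffix=gnu-production --with-production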
When we create any Windows-based application, we need to deploy it on a target machine. For deploying a Windows-based application on a target machine we have several options, such as the xcopy command utility provided by MS-DOS, or the ClickOnce method. Another method for deploying an application to a target machine is creating a Windows Installer, which is supported by the Microsoft Visual Studio IDE. In this session I will show you how to create a setup for a Windows-based application.

Steps for Creating a Windows Installer
- Open the Windows project, in a solution, for which you want to create an installer.
- Click the File menu option in the VS IDE.
- Then click Add → New Project in the VS IDE.
- By performing these steps, the Add New Project dialog box appears.
- From the project types, click Other Project Types, then click Setup and Deployment, and then choose Visual Studio Installer.
- From the project templates, select Setup Wizard.
- Provide a name for your project.
- Choose the location where you want to store your setup project.
- Then click the OK button and follow the wizard that appears.
- After clicking OK, the Setup Wizard opens.
- Click the Next button and choose the project type that you want to create. For this example I chose to create a setup for a Windows application.
- Click the Next button and choose the project outputs to be included. For this example I chose two options: Primary Output and Content Files from Tree View Control Demo.
- Click the Next button and add any additional files that you want for your project. For this demonstration I did not add any additional files.
- Click the Next button, which shows a summary of your project, and then click the Finish button.
- After clicking Finish, a new window named "File System on Target Machine" opens. This is the File System editor. The Visual Studio setup installer provides six editors, each with a different task. Choose editors according to your requirements.

Creating Shortcuts on the User's Desktop
- Click the View menu, then the Editor submenu, then click File System Editor. By performing these steps, the File System editor opens.
- From the right pane of the File System editor, select Primary Output from Tree View Control.
- Then click the Action menu and click Create Shortcut to Primary Output from Tree View Control.
- Finally, drag the shortcut that you just created and drop it where you want the shortcut, such as the User's Desktop or the User's Programs Menu.

Modifying the User Interface of the Setup
- Click the View menu, then the Editor submenu, and then select User Interface Editor. The User Interface editor opens. In this editor you will see two sections: an Install section and an Administrative Install section. If you want to disable certain options for ordinary users while giving extra privileges to administrators, disable those options in the Install section and add the additional options to the Administrative Install section.

Adding a License Dialog Box
For adding a License dialog box, follow these steps.
- Right-click on the Start section and then click Add Dialog. After performing these steps, the Add Dialog box opens, which contains several dialog boxes such as License Agreement and Register User. Select the License Agreement dialog box and click OK to add it.
- Select the License Agreement dialog box and move it up to where you want it to appear.
- Right-click on the License Agreement dialog box and then go to the Properties window.
- In the Properties window, click the LicenseFile option and browse to the license file that you have created. Make sure that the license file has an .rtf extension. When you click browse, the Add New Item dialog box opens. Click the Application Folder option, then click the Add button and select the license file.

Adding Register User Information
For adding a Register User dialog box, follow these steps.
- Right-click on the End section and click Add Dialog. After performing these steps, the Add Dialog box opens; select the Register User dialog box and then click OK.
- Right-click on the Register User dialog box and then go to the Properties window.
- In the Properties window, click the Executable option and then choose the exe of the Register User form that you have created as a separate project.

Building Your Application
Open Solution Explorer, right-click on your setup project, and then click Build. After building your application, the setup of your project is created and you can install it.

Output of setup
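As an aside, if the setup project ever needs to be built outside the IDE (for example, on a build machine), Visual Studio's own devenv executable can build it from the command line; a sketch with placeholder solution and project names:

devenv MySolution.sln /Build Release /Project MySetupProject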
During discussions in SIMPLE meetings, a concern that came up early in the Fall 2015 semester was that prerequisite mathematics classes are not homogeneous, which means students have varied levels of preparation, and in some unfortunate circumstances, students are not completely ready for their current course. Since instructors can fall behind, and classes are sometimes canceled (e.g., snow days), we considered which topics could be listed as optional in Calculus 1 and 2, and we discussed the possibility of providing instructors of these courses with target schedules. Little discussion happened in the SIMPLE meetings regarding the course into which many students place when they enter GMU: Precalculus Mathematics. Consequently, I decided to collect, create, refine, and organize resources for future instructors of Precalculus.

The first time I taught Precalculus was in Fall 2015, and the only reason I managed to cover all of the required topics was that colleagues with experience teaching this course provided me with a suggested schedule. In the fall, I made minor adjustments to the schedule I had been given, but before the Spring 2016 semester, I was worried because Trigonometry, the topic with which students struggle the most, was covered at the very end of the semester. Since trigonometry cannot be cut short, in the Spring 2016 semester I experimented with teaching trigonometry in the middle of the semester as Unit 2 (after linear functions and quadratics, but before rational, exponential, and logarithmic functions). This change meant that I had to introduce some topics differently than laid out in the textbook. For example, vertical asymptotes are first encountered in detail with the tangent, secant, cotangent, and cosecant functions, not rational functions. While it would be difficult to define, and even more so to measure, the success of this idea quantitatively, there are two benefits: in addition to getting through all of the required sections on trigonometry, this new concept map meant that I covered trigonometry at a time in the semester when students had more mental energy than they do in the last two weeks.

Because students are expected to be familiar with how technology may aid in solving mathematical problems, I wrote assignments for my precalculus students to complete in Mathematica, a popular computer algebra system (an illustrative example follows at the end of this post). These assignments were also used by at least one colleague. This semester, I rewrote most of the assignments to expose students to more commands in Mathematica and more applications of the mathematical concepts. A colleague and I also put together a suggested syllabus to help encourage uniform policies (e.g., whether calculators are allowed on assessments) so students have similar levels of expectations and preparation in prerequisite courses.

To bring all of the above together, I consulted with yet another colleague about a good way to establish a repository for the materials we now have for precalculus instructors. For now, we decided to use a Blackboard group to host the materials. As this semester draws to a close, I will place the administrative resources I have noted, along with the exams and quizzes I used for the last two semesters, in this Blackboard group. I hope that these resources will be beneficial to future instructors of precalculus who are daunted by the volume of material that needs to be covered and the need to have a technology component.
This organizational effort is already extending to other courses: I have drafted a target schedule for my section of Calculus 1 for the Fall 2016 semester.
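For a flavor of the technology component, here is a small illustrative exercise of the kind these assignments contain (my own sketch, not one of the actual assignments): plotting the tangent function so students can see its vertical asymptotes directly.

(* Vertical asymptotes of tangent occur where Cos[x] == 0 *)
Plot[Tan[x], {x, -2 Pi, 2 Pi},
 Exclusions -> Cos[x] == 0,
 ExclusionsStyle -> Directive[Red, Dashed],
 PlotRange -> {-10, 10}]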
Tuesday, August 30, 2005

Q: For a web part that displays Outlook contacts in a SharePoint site, will it require Exchange?
A: You can use the "My Inbox" web part and tweak it a bit to point to any mail folder in your Exchange mailbox (e.g. https://exchange.mycompany.com/exchange/1602/contacts/). In this case, it does require Exchange. On the other hand, there are special list types called Contacts Lists built into SharePoint. These are not integrated directly with Outlook; the data sits in the SharePoint SQL database. However, there is an option to "Link to Outlook", which gives you a link to the SharePoint Contacts list from your Outlook client. You can also import contacts from Outlook into the SharePoint contact list. In both cases, Exchange is probably not required, but this has not been tested.

Q: How do you deploy an application from QA to production?
A: First, build a CAB file to contain assemblies (.DLL), content files (.XML, .DWP), etc. Second, use STSADM.EXE in WSS to back up an existing site, if needed. For more information on STSADM.EXE see http://support.microsoft.com/?kbid=889236. Third, unzip the CAB file to install all the required files on the SharePoint server.

Q: Are there any issues with installing VS.Net Team System and SharePoint 2003 on the same machine?
A: As of now, VS.Net Team System is available only in a beta version, and anytime a beta version is installed, issues may be encountered. VS.Net 2005 Team System requires SharePoint, and the installation notes that come with it explain how to do this. If you're doing a clean install you will need to install SharePoint with the MSDE database, because SharePoint doesn't work with SQL 2005 yet. The SQL 2005 DB and MSDE can coexist, because MSDE is installed to a different instance. If you're going to install VS.Net 2005 on an existing system with SharePoint, you need to make sure the SQL 2005 install doesn't update the existing SQL 2000 environment.

Thanks to Nate B. and other members of the Berbee team for helping answer some of these questions.

Friday, August 26, 2005

"Visual Studio .Net has detected that the specified web server is not running ASP.Net version 1.1. You will be unable to run ASP.Net Web Applications or services."

There are several ways to go about fixing this error. From "C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322", run "aspnet_regiis.exe -r". The following messages will appear:

Start replacing ASP.NET DLL in all Scriptmaps with current version (1.1.4322.0).
Finished replacing ASP.NET DLL in all Scriptmaps with current version (1.1.4322.0).

When execution has completed, you can start VS.Net and load a web application without any problems.

Wednesday, August 24, 2005

In the receiving page (newpage.aspx), use the Request.QueryString() function to extract the desired variable (Dim strPassed As String = Request.QueryString("variablex")). When executed, the value of strPassed will be "newvalue".

Tuesday, August 23, 2005

The article is well written and easy to understand, but it fails to mention two crucial points. Within Login.aspx, you need to specify "Imports MyApp.FormsAuth". This will import the FormsAuthAD namespace; otherwise, declaring a variable of type "LDAPAuthentication" will result in a compile-time error. Also, if you cut & paste the code, be sure that any references to "FormsAuthAD" are changed to the name of your project.
This will ensure you import the correct class from the correct assembly. Aside from these points, the article is nicely written and easy to understand.

Monday, August 22, 2005

A: SQL Server does not support storing a timezone in a datetime field. However, to overcome that issue, you can convert all date/time values to the UTC timezone so you have a standard timezone to work with. When displaying the data to the user, you would then convert the time to the client's timezone. Regarding storing the value '12/1/2005 12:31:23 -5:00' into SQL Server, try using the format '2005-08-09 00:00:00.000'. (A short sketch of this appears at the end of this post.)

For more information or to download LibCheck, go to http://www.microsoft.com/downloads/details.aspx?familyid=4B5B7F29-1939-4E5B-A780-70E887964165&displaylang=en

Friday, August 19, 2005

To take it a step further and interface with .Net code, Tony Northrup wrote "Home Hacking Projects for Geeks" (http://www.homehacking.com). This book discusses coding .Net projects for home control. Links to sample chapters are available on the left margin of the page.

Wednesday, August 17, 2005

This month's SIG meeting will be discussing SharePoint 2003 and its many uses. The meeting is scheduled for Tuesday, August 23, 2005, 6:30 PM - 8:30 PM. It will be held at Cuyahoga Community College (Corporate College Campus - 25425 Center Ridge Road, Westlake, OH 44145), room 211.

Thursday, August 11, 2005

The speech is very fluid and character movement is very realistic. Absolutely no programming knowledge is required to use SitePal. However, SitePal does interface with various programming languages for those who are technically inclined. For more information, check out http://www.sitepal.com

Wednesday, August 10, 2005

The easiest way to create a Linked Server is through Enterprise Manager.
- In the left pane, expand the objects down to the desired server, where the Linked Server will be created.
- Expand Databases of the desired server and click on the "Security" folder.
- Right-click on "Linked Servers" and select "New Linked Server". Specify all the parameters for the server to be linked to.

Tuesday, August 2, 2005

Listed below are the benefits of EAP. The Early Access Program (www.intel.com/ids) is Intel Developer Services' comprehensive web-based resource, helping developers take advantage of the processing power available from cutting-edge Intel processors. Some of the highlights include the use of a next-generation development system, software optimization tools, training courses, technical support, business development, and marketing and promotional activities. Please look over the website, as it goes into greater detail about the opportunities as well as the advantages of being in the program.

Costs associated with the Early Access Program: there is an annual membership fee of $500.00.

ISVs can register for EAP by following the steps below. Note that these agreements require a Director or higher signature.
- Go to www.intel.com/ids/EAP.
- Click the blue "Enroll Now" on the right-hand side of the screen, in the Members Options box.
- You will be directed to the Intel Login page. Click the blue "click here to register."
- Put your information in the appropriate fields (the password is case sensitive and requires at least one number and one special character, e.g. * / +).
- Fill in all the fields marked with a blue asterisk. To gain access to the program, the contact person must have a job title of Director or higher. Examples of job titles are VP, President, CEO, CTO, CIO, etc.
Due to the sensitive information that will be traded, this requirement protects your company and Intel. Once your company is a member of the Early Access Program, a different contact person within your company can work with the Intel account manager.
- When all three pages are completed, click "Finished".
- Check the boxes of all the programs you are interested in participating in and click "Continue".
- This page will ask if you already have a CNDA and IPLA agreement with Intel. Please click "NO" for both, since these online agreements are newer and updated.
- It will ask you "How do you expect to perform your optimization/porting?" Choose "Unsure" at this point. Then click "Continue".
- The next page gives you two options: choose "Option 1 - Online click through" or "Option 2 - Download, Print, and Mail" by checking the box next to the desired option. Click the "Continue" button after choosing one of the options. Option 1 is preferred and will expedite your membership process. This is where the VP (or higher) title is required. If you choose the click-through membership agreements, you must check the "I accept" box located directly below the text.
- After completing all membership agreement pages, you will find yourself on a "Membership Agreements" page thanking you, plus a bunch of other great stuff. At the bottom of this page you will have three options to choose from and a Continue button. Choose "Go to My Early Access Program Home Page" and click "Continue". At this point it will give you the option to download the agreements that you just signed.
- You have now completed the registration process! Please email me so that I may inform our administrator that your application has been submitted.

Please contact me with any questions/comments/concerns. I look forward to hearing from you.

Monday, August 1, 2005

A: Yes, it's definitely possible. Usually SharePoint is set up with Integrated authentication in IIS. If your web app uses the same login/password to access the app, then these credentials can be "passed through" to the app from SharePoint. Just set up your app's virtual directory to have integrated security. Also, here are some things to consider:
1. Do you want the application to run within a SharePoint page? Then consider writing a .NET web part.
2. Do you want the application to run within the context of SharePoint (with access to the SharePoint object model)? Then you have to put the app in a specific vdir location (the _layouts folder) in order for it to work with SharePoint. This is documented in the SharePoint SDK.
3. Is the app running on the same server as SharePoint? If so, then you either need to exclude the virtual directory for the app from the managed paths or create a new web site and put it there.

Thanks to Nate B. for his help with this issue.
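Returning to the August 22 timezone question, here is a minimal sketch of the store-as-UTC approach (the table and column names, and the conn variable, are hypothetical):

' Store the value as UTC, using a parameter rather than string concatenation.
Dim utcValue As DateTime = DateTime.Now.ToUniversalTime()
Dim cmd As New System.Data.SqlClient.SqlCommand("INSERT INTO Events (OccurredAt) VALUES (@at)", conn)
cmd.Parameters.Add("@at", System.Data.SqlDbType.DateTime).Value = utcValue
cmd.ExecuteNonQuery()

' When displaying, convert the stored UTC value to the client's local time.
Dim localValue As DateTime = storedUtcValue.ToLocalTime()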
How do gaseous molecules arrange themselves in liquids when they are "dissolved"?

To provide some context, I am trying to understand in better detail how oxygen from the lungs passes through a membrane and enters the vasculature in order to be transported via blood. I have generalized the question so that it is purely a conceptual physics question and does not require any sort of biological background... please ignore hemoglobin and red blood cells.

I am trying to picture, at the molecular level, what it means for a gas to be soluble in a liquid (on a scale from 0% solubility to 100% solubility). I will be relying on pictures to offer clarity.

Imagine we have a large planar membrane that is permeable to gaseous oxygen but impermeable to "liquid". On one side of the membrane is a purely gaseous-oxygen-rich environment. On the other side of the membrane is a liquid flowing in one direction. (I'm not really looking for a fluid dynamics analysis... so you can largely dismiss the "flowing" behavior of the liquid; I only included the flowing part to try to somewhat recreate the blood vessel description.)

What I am trying to understand is the following: how does dissolved oxygen arrange itself within a "liquid" depending on the oxygen's solubility in that "liquid"? For example, what would the molecular arrangement of O2 within a liquid that exhibits 25% solubility for O2 (called Liquid A) look like compared to a liquid that exhibits 50% solubility for O2 (called Liquid B)? Is it as straightforward as depicted in the following two pictures? That is, for a given cross section, is the O2 simply more densely distributed throughout?

In the event that this is true, I then wonder what exactly is happening that allows more oxygen to enter. Is solubility of a gas in a liquid largely determined by surface effects (i.e., solubility can be viewed as the probability of a gaseous molecule passing through the surface of the liquid)? Or is solubility of a gas in a liquid largely determined by "inside" effects (i.e., solubility can be viewed as the probability of a gaseous molecule remaining inside a liquid after it has passed through the surface)?

I think (some of them) dissociate. I could be wrong, but NaCl has a dissociation energy of 769 kJ/mol, while O2's is 498 kJ/mol, so it could dissolve the same way as NaCl.

I'm unfamiliar with the physical meaning of "dissociate". Are you saying that O2 becomes 2 oxygen ions, and then these ions subsequently distribute equally throughout the solvent?
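A pointer that may help frame the question (standard physical chemistry, not from the thread itself): for dilute solutions, the equilibrium concentration of dissolved gas follows Henry's law,

$$ c = k_H \, p $$

where $c$ is the dissolved concentration, $p$ is the gas's partial pressure above the liquid, and $k_H$ is a constant of the particular gas-solvent pair. On this picture, solubility is an equilibrium (bulk) property, closer to the "inside" effect above: surface passage rates set how fast equilibrium is approached, while $k_H$ sets how much gas the liquid holds once there.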
How to Create GitHub Draft Pull Requests

Hello, everybody! We'll talk about Git, GitHub, and how to make a draft pull request today. I was told to create a "draft PR," but I had no idea what that meant because I didn't realize the feature was available on GitHub. I'll go over some Git basics and how to make a draft pull request.

Table of Contents
- Introduction to Git
- What is GitHub
- How to make a draft pull request

Introduction to Git

Git is a distributed version control system that is open source. Let me dissect it and clarify the terminology:

Control System: This means that Git is a content tracker. So Git can be used to store content — it is mainly used to store code due to the other features.

Version Control System: The code stored in Git keeps changing as more code is added. Also, many developers can add code in parallel. So a version control system helps handle this by maintaining a history of what changes have happened. Git also provides features like branches and merges, which I will be covering later.

Distributed Version Control System: Git has a remote repository stored on a server and a local repository stored on the developer's computer. This means that the code is stored on a central server, but a full copy of the code is also present on every developer's computer. That is why Git is called a distributed version control system. I will explain the concept of remote and local repositories later in this article. ("Introduction to Git" explanation by Aditya Sridhar.)

What is GitHub

According to Wikipedia, GitHub provides Internet hosting for software development and version control using Git. It offers the distributed version control and source code management (SCM) functionality of Git, plus its own features. It provides access control and several collaboration features such as bug tracking, feature requests, task management, continuous integration, and wikis for every project.

I hope this has given you a good understanding of Git and GitHub. You should be familiar with Git, GitHub, and PRs to get the most out of this post.

How to make a draft pull request

Pull requests allow you to notify others about improvements you've made to a branch of a GitHub repository.

a) Go to the repository where you want to make a pull request. I will be using the repository shown above. 👆
b) Navigate to the Pull Requests tab, then select the options shown below.

That is how simple it is. Draft pull requests are available in public and open source repositories, as well as in private repositories for teams using GitHub Team and Enterprise Cloud.

Yes, indeed! I hope you now have a better understanding of how to create a draft pull request. Isn't that a short procedure? See you in my next blog article. Take care!!!
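By the way, if you prefer the terminal, the official GitHub CLI can open draft pull requests too:

# Create a draft pull request from the current branch
gh pr create --draft --title "My feature" --body "Work in progress"

# Later, mark it ready for review
gh pr ready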
Distribution of Salesforce Platform License for AppExchange App

We recently started selling an app through the AppExchange, and I couldn't find any information as to what needs to be done in order to get Salesforce Platform Licenses. My understanding is that when submitting an order using the Channel Order App, the licenses, if any, are automatically added to the target client Org based on the products in the order. Unfortunately, we cannot test this without submitting an order. I was wondering if there was any in-depth information about how this works (I couldn't find any, apart from a quick mention of those elements in some Dreamforce video on YouTube)?

Also, is there any difference between submitting an order for licenses for a customer that is already with Salesforce but looking for Platform licenses only for our app, and a customer who is completely new to Salesforce and looking only for our licenses? The difference seems to be simply the concept of ISV vs OEM deployment, but the OEM material only states that it is ideal for new customers; there is no mention that an app initially developed for an OEM deployment cannot simply be deployed to an existing, functioning Org.

Closing this question as I forgot about it but got the answer. Hope it might help some other people later.

"My understanding is that when submitting an order using the Channel Order App, the licenses, if any, are automatically added to the target client Org based on the products in the order. Unfortunately, we cannot test this without submitting an order."

Correct. This is exactly how it works. Basically, when making an ISV/OEM contract and linking it to the Channel Order App, products will be associated on the Salesforce side with a type of product (if any) to deliver to the targeted org of the order. This cannot be seen on the Product objects of the Channel Order App (unless the representative who processed the contract added it to the description), so you need to know your contract or ask someone at Salesforce if no one knows the technical details.

"Also, is there any difference between submitting an order for licenses for a customer that is already with Salesforce but looking for Platform licenses only for our app, and a customer who is completely new to Salesforce...?"

No, there is no difference in the deliverability of the products between an OEM and an ISV contract. As mentioned above, it's all about a product to deliver being associated with one of your products in the Channel Order App. The only difference is the terms of the contracts, for things like products and the share Salesforce takes for the service provided (usually lower in ISV contracts compared to OEM ones).

All this information was acquired by talking to technical people on the Salesforce side and by trying it.
How does the NES start up? I have an NES IC in front of me, and I want to dump its internal program, but I don't know what the IC is, and I have no dump procedure. Can someone help me? Sorry, my English is not very good; this was translated into English with translation software.

When I've finished my studies in electronics engineering I could definitely do something like that. Now that the patents have expired for the NES, I guess this is legal (even if the official patents are inaccurate, which is a cheat from Nintendo).

Gonna make your own Famiclone and make millions? Wink

Anyway, Kevin Horton was the first to successfully do something like that.

kyuusaku wrote: One can dream

But there is still a lot of $$$ to be made from Famiclones, I'm guessing especially ones with 720p/1080p output or other trendy features. The NES architecture assumes 240p output. I can see a few advantages of 480p or bigger output:
- emulating more than one NES at once,
- a hint screen displayed above the game window like in PlayChoice,
- less lag from typical entry-level flat TVs' upscalers, and
- the possibility of Scale#x or hq#x output.

MottZilla wrote: Any Famiclone that actually does sound and video and many other things right that all the Famiclones FAIL at would be nice. It's a shame the market is stuffed full of shitty NES clones, as many average users will think there is nothing wrong when there is so much wrong.

It will be VERY hard for me to get that accurate, at least for a long time. The clones actually are clones after all; they just have a few sloppy parts, probably from mistakes viewing the mask. Right now my clone doesn't really even have a CPU yet, just someone else's HDL core I haven't tested as a placeholder. Currently I'm working on the PPU so I can hit the ground running. I've got a prototype for all the components designed except for sprite evaluation; I just need to put it all together and design a basic upscanner to see some results. I think also, if things work out, I can use my design to write a visual tech document, like a more accurate update to Brad Taylor's, and integrate newer info from the forum.

Code: Select all
Scanline:     Odd   Even
Odd frames:   0     180
Even frames:  180   0

Also, it should have the luminance and chrominance signals filtered before they are combined to form the composite output. If you don't understand what I'm getting at, let me know.

CartCollector wrote: The S-Video/composite board should have the color carrier rotate 180 degrees every scanline and switch off every frame.

That's actually the point of the 20th scanline being short by one pixel. (I'd point you at the wiki but it's down for me.)

It doesn't always look good, though -- especially with scrolling content. (The last time I played with a modern ATI card's composite/s-video out, it included a bunch of different color modulation options, because different pictures look better with different characteristics.)

I recently started working on it again. I'm trying to pipeline the PPU, and implement audio.

kyuusaku: I am also using someone else's CPU. I got mine from www.birdcomputer.ca. After thorough testing, I only found one small bug involving the RTI instruction. I've been meaning to e-mail them to get it fixed and just haven't gotten around to it yet. I like your idea about creating a new tech document. All of the documents I've found have been incomplete, and a few even contradict each other.
The S-Video/composite board should have the color carrier rotate 180 degrees every scanline and switch off every frame.
That's actually the point of the 20th scanline being short by one pixel. (I'd point you at the wiki but it's down for me.)

Actually, being short by one pixel only rotates the phase by 240 degrees, not 180, since the CPU and the color carrier oscillate at a ratio of 2 color carrier cycles to 3 PPU cycles: 3.58/5.37 = 2/3, and (2/3)*360 = 240. Visually:

Code: Select all
Original color carrier:
--- --- --- --- ___ ___ ___ ___
CPU clock:
-- -- -- -- -- -- __ __ __ __ __ __
Color carrier -1 CPU cycle: (4 dashes)
- --- --- --- -- ___ ___ ___ ___
Original color carrier: (again)
--- --- --- --- ___ ___ ___ ___

Look at Nestopia with the NTSC filter on and you'll notice that color and luminance decoding errors are still noticeable on still images, which they wouldn't be if the phase rotated by 180 degrees. For instance, if one white pixel surrounded by black pixels created interference in the chroma bandwidth that caused it to be yellow-tinted, it would be blue-tinted in the next frame if the color carrier shifted 180 degrees. Since yellow and blue are exactly opposite each other, when averaged out by flickering, the pixel looks colorless (white). However, if the color carrier shifts 240 degrees, the pixel would be cyan-tinted in the next frame, and would look light-green-tinted when averaged out.

Stief21774 wrote: I got mine from www.birdcomputer.ca. After thorough testing, I only found one small bug involving the RTI instruction. I've been meaning to e-mail them to get it fixed and just haven't gotten around to it yet.

Good to hear, because that's the one I'm going to test my PPU with! ;)

BTW, I'm using a Digilent Nexys 2 + breadboard for I/O.
My closet display

Posted 20 February 2010 - 12:30 PM
Where do you get your display heads?

Posted 20 February 2010 - 07:33 PM

Posted 20 February 2010 - 07:40 PM
I bought the display heads at www.buystorefixtures.com; it is their male mannequin head #7. It can be hard to find: you have to click on "related products" below the picture of another style of head. The price is $27.00 plus shipping. That is why I only have 3 of them. I have seen them on eBay for about the same price.

Thanks Steve, and again, super nice collection.

Posted 22 February 2010 - 08:06 AM

Posted 04 May 2010 - 12:43 PM

Posted 04 May 2010 - 12:53 PM

Posted 04 May 2010 - 12:59 PM

Posted 04 May 2010 - 01:50 PM
You're not going to make the bed up like a 1940s USAAF man's bed, ready for inspection?

Posted 04 May 2010 - 05:00 PM
Watch out for the direct sunlight. I learned the hard way that the sun is an icky thing and will fade your stuff. You can get GREAT (custom-ordered) blinds from Home Depot or Lowes which will block almost all light from the room. The good thing is that they're custom-fit to the window(s), so they don't let much light in around them, either. GREAT collection, AND display!

Posted 04 May 2010 - 09:02 PM

Posted 06 May 2010 - 06:02 AM
Fortunately, or unfortunately, depending on your views, I am divorced, so my stuff is spread all over the house. I am even thinking of taking Sgt Rock's bathroom idea and running with it too! :think:

Posted 06 May 2010 - 07:39 AM

Posted 07 May 2010 - 12:37 PM

Posted 07 May 2010 - 03:03 PM

Posted 05 June 2010 - 09:49 AM

Posted 05 June 2010 - 09:57 AM

Posted 05 June 2010 - 10:00 AM
@docusaurus/core is a modern static website generator that allows you to create documentation websites easily. It is built on top of React and supports Markdown files, which makes creating and maintaining documentation a breeze. @docusaurus/core comes with built-in search functionality and is highly customizable with themes and plugins.

Nuxt is a powerful framework for building server-side rendered (SSR) Vue.js applications. It provides a seamless development experience by abstracting away the complex configuration required for SSR and pre-rendering. With Nuxt, you can easily create universal Vue applications that can be rendered both on the server and the client side.

Popularity
Nuxt is a popular framework for building server-side rendered and static websites in Vue.js. It has a large and active community with extensive support and resources, and its popularity has been steadily increasing in recent years. @docusaurus/core, on the other hand, is a popular static site generator specifically designed for documentation websites. It has gained popularity within the documentation community, especially for open-source projects.

Scalability
Both @docusaurus/core and Nuxt can scale well for medium to large-sized projects. Nuxt provides a more flexible architecture and can be used for various types of applications, including static and server-rendered websites. @docusaurus/core is optimized for building documentation websites and may not provide as much flexibility for other types of projects. However, it can handle larger documentation sites with ease.

Performance
In terms of performance, Nuxt is known for its excellent server-side rendering capabilities, which can lead to faster initial page load times and better search engine optimization (SEO). @docusaurus/core also prioritizes performance and provides features like static site generation to ensure fast and efficient rendering of documentation pages.

Developer Experience
Nuxt offers a smooth developer experience with its powerful CLI, extensive documentation, and a wide range of plugins and modules available from the Nuxt ecosystem. It provides great tooling for Vue.js and has good support for routing, state management, and other common features. @docusaurus/core, being focused on documentation websites, offers an intuitive configuration system and sensible defaults, making it easy to get started with building documentation sites with minimal effort.

Extensibility and Customization
Both @docusaurus/core and Nuxt offer extensibility and customization options. Nuxt has a robust plugin system and supports a vast ecosystem of community-created modules. This allows developers to easily add functionality and customize their projects. @docusaurus/core also provides a plugin system and allows for customization through themes and configuration options. However, the focus of @docusaurus/core is primarily on building documentation websites, which may limit its extensibility for non-documentation project requirements.

Documentation and Support
Nuxt has comprehensive documentation with detailed guides, examples, and references, making it easy for developers to learn and use the framework effectively. @docusaurus/core also has well-documented guides and references specifically tailored for building documentation websites. Both packages have active communities with forums and support channels to help developers with any questions or issues.
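To give a feel for @docusaurus/core's configuration-first approach, a minimal docusaurus.config.js might look like this (all values are illustrative placeholders):

// docusaurus.config.js
module.exports = {
  title: 'My Project Docs',
  url: 'https://example.com',
  baseUrl: '/',
  presets: [
    [
      '@docusaurus/preset-classic',
      {
        // Point the classic preset at a sidebar definition
        docs: { sidebarPath: require.resolve('./sidebars.js') },
      },
    ],
  ],
};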
In this post, I propose that elections in large democracies would give better outcomes if the voting rules were changed as follows: ONLY A SMALL RANDOM SUBSET OF THE POPULATION GETS A VOTE. For each election, only a random subset of citizens, notified far in advance, can vote. Nothing else is changed. Why would this help? First, we need to discuss the problem with voting in large democracies.

THE ECONOMIST'S CRITICISM OF LARGE DEMOCRACIES

Why should I spend the effort to become informed before voting? My vote has an extraordinarily low probability of changing the winner of a national election. Suppose that one candidate is much better for the country. Numerically, imagine that candidate A would be $100,000 "better" per citizen. The probability that my vote changes the election is very small: by one estimate, voters in New Mexico in 2004 had approximately a 1 in 10,000,000 chance of their vote determining the election, and all other states had much smaller probabilities. Thus, if I make a large investment of time to carefully evaluate the candidates' positions, decide who is better for the country, and then vote, I will receive an expected return of $100,000 × 1/10,000,000 = $0.01!

Of course, the issue with the above is that I am only comparing the costs and benefits to me. If one candidate is really $100,000 better per citizen, the total expected benefit of informing myself should be multiplied by the number of citizens (~300 million). In this case the total expected benefit of figuring out which candidate is better and then voting is $0.01 × 300,000,000 = $3 million! (This number is surprisingly high because New Mexicans had the highest chance of influencing the election in 2004.) The problem is that almost none of that $3 million goes to the person who put in the work. In economist lingo, educating oneself before voting has large "positive externalities". I think the central problem with large democracies is that voters are uninformed, not because they are stupid or uneducated (1), but because they are rational.

(1) I strongly reject the idea that people are stupid.

THE COMMON-SENSE REBUTTAL OF THE ECONOMIST'S CRITICISM

The above assumes, of course, that people are strictly selfish beings only seeking to maximize their own utility. Most standard economics is built on this assumption, but in this case, it doesn't look so good. The rebuttal is: if the above were true, wouldn't society descend into anarchy? I find this rebuttal pretty compelling. Democracy does work fairly well, much better than the above argument would suggest. There are two explanations for this. First, people may choose to learn about candidates simply because they find it fun. Second, people choose to learn about candidates because they feel a sense of responsibility: they do care about how their vote affects other people, and act accordingly. That is altruism.

Still, I think we can't dismiss the economist. Why? Ask yourself this: suppose you had the only vote. You get to choose, yourself, who should be the next US president. Would you learn more about the candidates than you have bothered to learn now? I think any honest person would answer, "yes, a great deal more."

THE ARGUMENT FOR A RANDOM SUBSET

The argument for leaving the decision up to a small number of people is clear: those people will have a greater stake in the election, so they will take their choices more seriously. Would this really happen? I think so.

IOWA AND NEW HAMPSHIRE

If you are not familiar, the US political parties choose their nominees through a staggered process.
Different states do not vote simultaneously. Iowa and New Hampshire vote first, and thus wield huge influence on which candidate is chosen. As a result, the people in those states have developed a culture that takes the primaries very seriously. It is fair to say that the average Iowa primary voter is better informed than the average voter in a state that votes on Super Tuesday.

INFLUENCE OF MONEY

The problem, of course, is that it is totally unfair that people in two states have so large a say in who is president! The obvious solution would be to move to a simultaneous, national primary. Ignoring the fact that we will take New Hampshire's primary from their cold, dead hands, would we want this? Candidates with little money can compete and win in Iowa and New Hampshire through "retail politics", launching them into a national campaign. With a national primary, it would be even more difficult for a poorly funded candidate to win.

Now, imagine that votes were given to a random subset of the population, say 10,000 people. What would happen?
- The candidates would personally meet each voter. Those voters would demand and get meetings with the candidates in small groups. The candidates would be forced to actually answer the questions asked of them. To meet every voter in a year, a candidate would need to talk to only 10,000/365 ≈ 27.39 voters per day.
- Each voter would both have a personal interest in the election and feel a strong sense of personal responsibility for the outcome. With only 10,000 voters, every vote counts. If you have a real chance of changing the winner, you have a real interest in making your choice correctly.
- The influence of money would be reduced. Initially, I thought it might be reduced almost to zero, but upon reflection this probably wouldn't happen. The reason is that the voters would be subject to intense lobbying from their friends and neighbors. Thus, candidates might still choose to run TV ads in the hopes of indirectly influencing voters.
- Unfair negative attacks would lose impact, while fair negative attacks would gain impact. Think of how negative attacks work now: suppose lying candidate A runs a 30-second TV ad falsely accusing honest candidate B of shooting penguins for fun. B then runs a 30-second ad pointing out that this is nonsense. I, the disengaged voter, am left with a cloudy doubt about the truth: perhaps B likes to kill penguins, perhaps not. I am also left more cynical about politics, leading to my further disengagement. Under my proposal, imagine that A makes the same attack. I then meet with B, who personally explains their deep love for the penguin species. B's staff sends me information on all the pro-penguin legislation B has supported. I am now left only with a distaste for A's dishonesty.

The biggest problems with this suggestion are:
- It is undemocratic.
- It will never happen.

"Disenfranchise 99.9% of the population, are you insane?" Well, maybe. However, I believe that the trouble with democracy is that people already feel, and are, disenfranchised. You are a fool to spend hours and days educating yourself when your vote will just be a drop in a sea of uninformed votes. 10,000 highly enfranchised voters would do a better job than 200,000,000 distracted ones.

Most discussion of voting systems focuses on game-theoretic issues: assuming the whole population knows which candidate they want, and voters vote strategically, how can the system be structured to pick the candidate with the highest appeal?
There are mathematical results showing that, when there are more than two candidates, no voting system can satisfy certain intuitive criteria. I doubt the importance of this. That model ignores the large cost to a voter of choosing to inform themselves about the candidates. I think, instead, that voting systems should be designed to give voters an incentive to pay that cost.

Postscript: The ancient Greeks apparently often filled offices through sortition: randomly assigning the office to a citizen. My proposal here could be seen as these two steps: expanding the electoral college to 10,000 members, and choosing those members by sortition.
Tracing .Net Framework 1.0 usage

In order to check whether I can safely uninstall the .Net Framework 1.0, I was wondering if I could track the framework's usage.

1) Would it be a good idea to scan every "exe" and "dll" and check if they force the loading of .Net Framework 1.0? I'm afraid it's time-expensive and hard-disk stressing.
2) Would it be a good idea to monitor (in real time), for a week, the loading of .Net applications and log those that load .Net 1.0? How would I monitor this: hook the loader, or what?
3) Is there already a tool that does what I'm looking for?

I'll have to take a look at: Assembly Binding Log Viewer (Fuslogvw.exe). Regards

Maybe I could write some software that periodically scans the active processes and checks which of them has loaded mscorlib.dll. If mscorlib's version is 1.0.xxxx, that's a hit in .Net 1.0 usage (a sketch of this idea appears at the end of this question). I don't know if this catches all the cases in which the framework is involved, though.

@MystereMan - Repartitioning is an option I was considering, but I don't feel very comfortable playing with full partitions.

Just out of curiosity, why are you trying to uninstall the .NET 1.0 Framework? It isn't "harmful" to have it there. This sounds to me like a strange variant of "premature optimization". Do you really have to uninstall it?

I have to install .Net 4.0. It seems it takes 600-850 MB. I need to find some space. I see .Net 1.0 is using 865 MB!

There are lots of ways to clear up space, but maybe you should just get a bigger hard drive... they're not that expensive. Still, have you tried turning on disk compression for folders you don't use often? Have you cleared all your temporary folders? Run the disk space optimization wizard?

It's Win XP, installed years ago. I thought a 10GB partition would have been enough. And, at the time, it was.

@BlueMoon - so repartition. There are lots of tools to do that. Or you can span two partitions to make a single one with NTFS.

I agree with the others, there is no reason to uninstall it, unless you're short on disk space. Even then, it's better to upgrade your disk in my opinion. However, doesn't it seem obvious that the answer is: just uninstall it and see if anything breaks? You can always reinstall it if something does.
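A minimal sketch of the process-scanning idea mentioned above (assuming sufficient privileges; module enumeration commonly throws for system processes and bitness mismatches, so those are skipped):

using System;
using System.Diagnostics;

// Sketch: list running processes that have loaded a 1.0.x mscorlib.
class MscorlibScanner
{
    static void Main()
    {
        foreach (Process p in Process.GetProcesses())
        {
            try
            {
                foreach (ProcessModule m in p.Modules)
                {
                    if (m.ModuleName.Equals("mscorlib.dll", StringComparison.OrdinalIgnoreCase))
                    {
                        string version = m.FileVersionInfo.FileVersion;
                        if (version != null && version.StartsWith("1.0."))
                            Console.WriteLine("{0} (PID {1}) uses mscorlib {2}",
                                p.ProcessName, p.Id, version);
                    }
                }
            }
            catch (Exception)
            {
                // Access denied or 32/64-bit mismatch: skip this process.
            }
        }
    }
}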
(function(glob) {

    // Timeline index chunk size (in milliseconds)
    var TIMELINE_CHUNK_SIZE = 100;

    /**
     * Timeline Logic
     */
    var TimelineLogic = glob.TimelineLogic = function( clock, fps ) {

        // Prepare variables
        this.clock = clock;
        this.fps = fps || clock.fps;
        this.frameWidth = 1000 / this.fps;

        // Bind clock events
        this.clock.onTick( (function(time, delta) { this.handleTick(time, delta); }).bind(this) );

        // Register a notification when the clock starts
        $(this.clock).on('clockStart', (function() {
            // Let all timeline objects know that we started
            for (var i = 0; i < this.timelineObjects.length; i++) {
                this.timelineObjects[i].onPlaying( this );
            }
        }).bind(this));

        // Register a notification when the clock stops
        $(this.clock).on('clockStop', (function() {
            // Let all timeline objects know that we are paused
            for (var i = 0; i < this.timelineObjects.length; i++) {
                this.timelineObjects[i].onPaused( this );
            }
        }).bind(this));

        // The timeline objects and the lookup table for them
        this.timelineObjects = [];
        this.timelineFrames = [];

        // Timeline info
        this.frameCount = 0;
        this.currentFrame = 0;
        this.loop = false;
    }

    /**
     * Convert time (in milliseconds) into a frame index
     */
    TimelineLogic.prototype.frameOf = function( time ) {
        return Math.floor( time / this.frameWidth );
    }

    /**
     * Snap time (in milliseconds) to frame-sized chunks
     */
    TimelineLogic.prototype.frameSnap = function( time ) {
        return Math.floor( time / this.frameWidth ) * this.frameWidth;
    }

    /**
     * Clock ticks
     */
    TimelineLogic.prototype.handleTick = function( time, delta ) {

        // If we have no frames, do nothing
        if (this.timelineFrames.length == 0) return;

        // Get the frame we are going to enter
        var nextFrame = this.frameOf(time);

        // Check for out-of-bounds
        if (nextFrame > this.frameCount) {
            if (!this.loop) {
                this.clock.stop();
            } else {
                this.clock.stop();
                this.clock.set(0);
                this.clock.start();
            }
            return;
        }

        // Check if we really changed frame
        if (this.currentFrame != nextFrame) {

            // Let everybody know that the frame has changed.
            // Note: jQuery's .trigger() expects extra parameters as a single array.
            $(this).trigger('frameChanged', [ nextFrame, this.currentFrame ]);

            // Find the objects removed
            for (var i = 0; i < this.timelineFrames[this.currentFrame].length; i++) {
                var o = this.timelineFrames[this.currentFrame][i];
                if (this.timelineFrames[nextFrame].indexOf(o) == -1) {
                    // Object deleted
                    this.timelineObjects[o].onExit();
                    $(this).trigger('objectHidden', [ this.timelineObjects[o], o ]);
                }
            }

            // Find the objects added
            for (var i = 0; i < this.timelineFrames[nextFrame].length; i++) {
                var o = this.timelineFrames[nextFrame][i];
                if (this.timelineFrames[this.currentFrame].indexOf(o) == -1) {
                    // Object added
                    this.timelineObjects[o].onEnter();
                    $(this).trigger('objectShown', [ this.timelineObjects[o], o ]);
                }
            }

            // Change frame
            this.currentFrame = nextFrame;
        }

        // Update every active timeline object, on every tick
        for (var i = 0; i < this.timelineFrames[nextFrame].length; i++) {
            var o = this.timelineFrames[nextFrame][i];

            // Update timeline object values in order to match the new values
            this.timelineObjects[o].onUpdate( time - this.timelineObjects[o].beginTime(), nextFrame, time );

            // Let everybody know that an object has changed
            $(this).trigger('objectChanged', [ this.timelineObjects[o], o ]);
        }
    }

    /**
     * Re-index the given timeline object
     */
    TimelineLogic.prototype.reIndex = function( object, lastIndex ) {
        var index = object;
        if (typeof(index) != "number") {
            index = this.timelineObjects.indexOf(object);
        } else {
            object = this.timelineObjects[index];
        }

        // Remove from previous indices
        // (bounds are inclusive, so the last frame must be cleaned up too)
        if (object.__timelineBounds !== undefined) {
            var past = object.__timelineBounds;
            for (var i = past[0]; i <= past[1]; i++) {
                var j = this.timelineFrames[i].indexOf(index);
                if (j != -1) this.timelineFrames[i].splice(j, 1);
            }
        }

        // Find first and last frame from the timeline object
        var firstFrame = this.frameOf(object.beginTime()),
            lastFrame = this.frameOf(object.endTime());

        // Stretch timeline (ensure a bucket exists for every frame up to lastFrame)
        if (lastFrame >= this.timelineFrames.length) {
            for (var i = this.timelineFrames.length; i <= lastFrame; i++) {
                this.timelineFrames.push([]);
            }
            this.frameCount = lastFrame;
        }

        // Register it on timeline
        for (var i = firstFrame; i <= lastFrame; i++) {
            this.timelineFrames[i].push( index );
        }

        // Update bounds
        object.__timelineBounds = [ firstFrame, lastFrame ];

        // Fire object changed for this object
        $(this).trigger('objectChanged', [ object, index ]);
    }

    /**
     * Re-build the timeline index
     */
    TimelineLogic.prototype.rebuildIndex = function() {

        // Reset timeline frames
        this.timelineFrames.splice(0);
        this.frameCount = 0;

        // Update frames
        for (var index = 0; index < this.timelineObjects.length; index++) {
            var object = this.timelineObjects[index];

            // Find first and last frame from the timeline object
            // (use frameOf() so indexing and lookup agree on the rounding rule)
            var firstFrame = this.frameOf(object.beginTime()),
                lastFrame = this.frameOf(object.endTime());

            // Stretch timeline (ensure a bucket exists for every frame up to lastFrame)
            if (lastFrame >= this.timelineFrames.length) {
                for (var i = this.timelineFrames.length; i <= lastFrame; i++) {
                    this.timelineFrames.push([]);
                }
                this.frameCount = lastFrame;
            }

            // Register it on timeline
            for (var i = firstFrame; i <= lastFrame; i++) {
                this.timelineFrames[i].push( index );
            }

            // Update bounds
            object.__timelineBounds = [ firstFrame, lastFrame ];
        }

        // Timeline changed
        $(this).trigger('timelineChange');
    }

    /**
     * Add an object on the timeline
     */
    TimelineLogic.prototype.add = function( object ) {

        // Store object on timeline
        var objectIndex = this.timelineObjects.length;
        this.timelineObjects.push(object);
        object.setTimeline( this );

        // Editable objects need to provide a way of reporting back to the timeline.
        // If they have the updateTimeline placeholder, the timeline is going to
        // register a callback function, appropriate for re-indexing the timeline object.
        if (object.updateTimeline) {
            object.updateTimeline = (function() {
                // Rebuild index
                this.rebuildIndex();
                //this.reIndex( object, objectIndex );
            }).bind(this);
        }

        // Let the object know that it was placed
        object.onPlace();

        // Rebuild index
        this.rebuildIndex();

        // Check if the object exists in the current frame and show it.
        var fBegin = this.frameOf( object.beginTime() ),
            fEnd = this.frameOf( object.endTime() );
        if ((this.currentFrame >= fBegin) && (this.currentFrame <= fEnd)) {
            object.onEnter();
            $(this).trigger('objectShown', [ object, objectIndex ]);
        }

        // Let people know that we have added an object
        $(this).trigger('objectAdded', [ object, objectIndex ]);

        // If the clock is already running, let the object know
        // (pass the timeline, matching the clockStart/clockStop notifications)
        if (this.clock.running) {
            object.onPlaying( this );
        } else {
            object.onPaused( this );
        }
    }

})(window);
javah 'Could not find class file' for multiple paths

I have a class file with a native method on one path while its dependencies live on a separate path in a different package. My tree looks something like:

[build/classes]$ tree -L 3
.
├── main
│   └── com
│       └── foo
└── test
    └── com
        └── foo

My dependencies live on the main path while the class file I'm trying to build the header for is on the test path. The files look something like:

// FooTest.java: class file will go to build/classes/test/com/foo/
package com.foo;
import com.foo.bar.Depend;

public class FooTest {
    private native void baz(int i);

    public FooTest() {
        Depend depend = new Depend();
        baz(depend.get());
    }
}

// Depend.java: class file will go to build/classes/main/com/foo/bar
package com.foo.bar;

public class Depend {
    public int get() { return 3; }
}

Now back to the build/classes dir. Let's invoke our javah command:

[build/classes]$ javah -classpath "test/" com.foo.FooTest
Error: Class com.foo.bar.Depend could not be found.

Darn. It couldn't find the dependency. Shouldn't be surprised: it's not on the path! We'll use the classpath separator ; to pass multiple search paths (the separator is platform-dependent: ; on Windows, : on Unix-like systems).

[build/classes]$ javah -classpath "test/;main/" com.foo.FooTest
Error: Could not find class file for 'com.foo.FooTest'.

What? It can't find the class file in the test dir now. Flip the order of the paths? Turn on verbose? Write the full path?

[build/classes]$ javah -verbose -classpath "main/;test/" com.foo.FooTest
Error: Could not find class file for 'com.foo.FooTest'.

[build/classes]$ javah -verbose -classpath "/the_full_path/main/;/the_full_path/test/" com.foo.FooTest
Error: Could not find class file for 'com.foo.FooTest'.

Blast! I tried all the combinations! Verbose gives me nothing and I'm getting the same error. I've read quite a few of the similar questions including this highly voted answer but have not found a solution that works for me.

— You have to include all the paths with compiled classes/jars in the classpath. I suggest using a build tool like Maven.

— @user85421 you're right. I think that should be so. I really thought that might work, but using : yields the same error. I even tried exporting the full path to CLASSPATH without any luck.

— @LazarPetrovic I'm using Gradle.
Original source: Tuo end data tribe official account

In this article, I derive the conditional posterior distributions required for block Gibbs sampling of a Bayesian multiple linear regression model. I then code the sampler and test it with simulated data.

Suppose we have a sample of n subjects. Bayesian multiple regression assumes the outcome vector is drawn from a multivariate normal distribution; by using an identity matrix (scaled by a common variance) for the covariance, we assume independent observations. So far this is no different from classical multivariate normal regression, whose solution is obtained by maximizing the likelihood. The Bayesian model is obtained by specifying prior distributions; in this example I place a multivariate normal prior on the coefficient vector and an inverse-gamma prior on the variance.

Before coding the sampler, we need to derive the posterior conditional distribution of each parameter. The conditional posterior of the coefficient vector takes a bit more linear algebra, but the result is beautiful and intuitive: its covariance matrix is a precision-weighted combination of the prior covariance and the data (least-squares) estimate. Note also that this conditional posterior is a multivariate normal distribution, so in each iteration of the Gibbs sampler we draw a complete coefficient vector from it at once; that is the "block" in block Gibbs.

Running the Gibbs sampler on my simulated data generates estimates of the true coefficients and the variance parameter. 500,000 iterations were run, with a burn-in of 100,000 and the chain thinned to every 10th draw. Below is a plot of the MCMC chains, where the true values are shown as red lines.

    # Calculate posterior summary statistics
    post_sum_stats <- post_dist %>%
      group_by(param) %>%
      summarise(median = median(draw),
                lwr = quantile(draw, .025),
                upr = quantile(draw, .975))

    # Merge the summary statistics back in
    post_dist <- post_dist %>%
      left_join(post_sum_stats, by = 'param')

    # Plot the MCMC chains
    ggplot(post_dist, aes(x = iter, y = draw)) +
      geom_line() +
      geom_hline(aes(yintercept = true_vals))

These are the posterior distributions of the thinned parameter draws:

    ggplot(post_dist, aes(x = draw)) +
      geom_histogram(bins = 50) +
      geom_vline(aes(xintercept = true_vals))

It appears we obtain reasonable posterior estimates of the parameters. To make sure the Bayesian estimator works properly, I repeated this process for 1,000 simulated data sets. This yields 1,000 posterior means and 1,000 95% credible intervals. On average, the 1,000 posterior means should be centered on the true values, and the true parameter value should fall within the 95% credible interval about 95% of the time. Below is a summary of these evaluations: the "estimated average" column is the average of the posterior means across all 1,000 simulations. The percentage bias is less than 5%, and for all parameters the coverage of the 95% credible interval is about 95%.

We can extend this model in many ways. A distribution other than the normal could be used to fit different types of outcomes; for example, with binary outcome data we would model the outcome through a link function and place priors on the coefficients, which extends Bayesian linear regression to the Bayesian GLM. In the linear case outlined here, the covariance matrix could also be modeled more flexibly. Here it is assumed to be diagonal with a single common variance, the equal-variance assumption of multiple linear regression; if the data are clustered (for example, multiple observations per subject), we could instead place an inverse-Wishart prior on the entire covariance matrix.
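For concreteness, here is a minimal sketch of the block Gibbs sampler in Python (the original post uses R; the prior settings below are illustrative assumptions, namely beta ~ N(0, (1/tau0) I) and sigma^2 ~ Inverse-Gamma(a0, b0)):

    import numpy as np

    def gibbs_blr(y, X, iters=5000, a0=1.0, b0=1.0, tau0=1e-2, seed=0):
        # Block Gibbs sampler for Bayesian linear regression.
        # Assumed priors: beta ~ N(0, (1/tau0) * I), sigma^2 ~ Inv-Gamma(a0, b0).
        rng = np.random.default_rng(seed)
        n, p = X.shape
        XtX, Xty = X.T @ X, X.T @ y
        sig2 = 1.0
        draws = np.empty((iters, p + 1))
        for t in range(iters):
            # beta | sigma^2, y ~ N(m, V), with V = (tau0*I + X'X/sig2)^{-1}
            V = np.linalg.inv(tau0 * np.eye(p) + XtX / sig2)
            m = V @ (Xty / sig2)
            beta = rng.multivariate_normal(m, V)
            # sigma^2 | beta, y ~ Inv-Gamma(a0 + n/2, b0 + 0.5*||y - X beta||^2)
            resid = y - X @ beta
            sig2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * resid @ resid))
            draws[t] = np.append(beta, sig2)
        return draws

    # Quick check on simulated data: true beta = (1, -2), true sigma^2 = 0.25
    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(200), rng.normal(size=200)])
    y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.5, size=200)
    print(gibbs_blr(y, X)[1000:].mean(axis=0))  # roughly [1, -2, 0.25]

Each iteration draws the whole coefficient vector in one block from its multivariate normal conditional, then updates the variance from its inverse-gamma conditional: exactly the two steps derived above.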
What needs to happen for one to ingest radioactive particles, and how likely is this?

There are many stories about radioactivity and its relative danger in the news lately, but very little actual information. The radioactivity levels around Fukushima Daiichi are high, but seem negligible in even somewhat removed locations. The real danger seems to stem from ingesting radioactive particles. Just how likely is that to happen at any considerable distance from the reactor, say in Tokyo, and how dangerous for the human body is it really? How far can these particles travel in any dangerous concentration?

Right, eating or inhaling is bad. Inhaling plutonium, which appeared in Fukushima a day ago, is a lung cancer risk. See the other 4 or so major radioactive isotopes and their impact at http://motls.blogspot.com/2011/03/radioactivity-sieverts-and-other-units.html

I would like to add that radioactive isotopes are a constant background in all natural environments, and more so in stone and concrete buildings. The relevant question is how much more than the natural background is the artificial radiation induced by human activities. In a recent viewpoint on BBC News, Professor Allison gives relevant numbers. For example, the human body has about 50 becquerel per kilogram in its natural state. An interesting chart that puts radiation in perspective is here, and this is a graph from the scientist who provided the numbers for the chart. So when you see animated maps of how the radiation is spread from Japan, read the scales. You will see that the numbers are within natural variations. The immediate danger of death comes only with huge doses; look at the chart. Even for people next to the reactors there has been no such exposure. There are long-term effects of ingesting or breathing in isotopes for the people in the region, which have to do with a higher cancer rate over twenty or thirty years. The rest of the world is nowhere close to such levels of exposure.

Ingesting alpha emitters is bad, but fortunately, we don't do that very often. Ingesting beta emitters at LOW levels is something we have evolved to cope with, since even as necessary an element as potassium has naturally occurring beta-emitter isotopes. But both beta and alpha emitters are more harmful to us once ingested or inhaled. Fortunately, in everyday life, we get exposed to very little of that. Even after Fukushima and Chernobyl, most of us are more exposed to such emitters left over from atmospheric testing than from these two accidents. But even that source is swamped by naturally occurring isotopes (again, at low levels) in our food. Now true, Cs-137 is being found in fish from the Tohoku area, but still at low levels, too low to keep me from enjoying sushi! Besides: the biological half-life of these elements (I and Cs) in the body is low enough (1-4 months) to keep them from accumulating in humans at a dangerous rate. As for the reports of plutonium appearing after Fukushima, that was another initial, unreliable report: later reports showed that the trace amount was from earlier fallout, most likely from that dark period of atmospheric nuclear bomb tests.

How far they travel depends mostly on weather. How much you ingest also depends on chemistry: you are more likely to ingest something that is water soluble than a heavy metal atom.
Then how much damage it does depends on biochemistry. A heavy metal atom that doesn't have any biological role may just go straight through you, while something like iodine or strontium that is used in the body will be absorbed and may stay in you long enough to do some damage. Remember, a few particles don't do much harm; we can detect radioactivity at amazingly low concentrations, down to single atoms, so simply detecting radioactivity on the other side of the Pacific doesn't mean much from a health perspective. The most dangerous radioisotope for health is radon (in terms of deaths per year). It's a gas that comes out of the ground and gets trapped in houses, and because it's a gas you breathe it in, so it's easily ingested.

As a Tokyo resident I'm actually more concerned about this side of the Pacific… ;-)

@deceze - that wasn't a comment on the danger level, just on the ability to detect radioisotopes at very low levels. As in the media: OMG, we have detected radiation!! So what, I have detected quasars and those are a lot more dangerous!

Source for the radon claim? AFAIK the biological risk of low levels of radiation is quite uncertain. In particular, we don't even know if there is a level below which there's no risk, and radon exposure might very well be below such a limit.

@MSalters Well, living is a risk, and what's more, we all die :). There are natural levels of radiation with which humans have evolved over millennia, both from the ground and building materials and from cosmic rays. I have given some links in my answer to put radiation dangers in perspective.

@anna v: I'm familiar with those, thank you. They're either legal limits, actual exposures, or the medical effects of very high doses. I was specifically questioning the medical effects of a low dose.

@MSalters The US EPA reckons 25,000/year (http://www.epa.gov/radon/risk_assessment.html); of course it's very dependent on where you live. Most models assume a zero-threshold linear risk, although there is pretty good evidence that very low levels DECREASE your cancer risk, possibly by boosting your body's ability to spot damage.

The EPA indeed assumes zero threshold: "Since the risk is presumed to be proportional to dose", but points out (Cohen 1995) that "lung cancer rates are negatively correlated with average radon concentrations across U.S. counties". That's what I referred to as the difference between legal and medical risks.

@MSalters - Yes, environmental studies with such low incidence are hard to unwrap from other effects. A farming/outdoors/mountain state with lots of granite and radon might well have lower lung cancer than a rust-belt state with lots of smokers.

The big danger is less from eating radioactive substances than from breathing them. The gut is a continuous "tube", or in other words our body plans are a torus. As a result you are more likely to pass ingested material out within 24 hours or so. The respiratory system is an "in and out" system, where the lungs form a fractal branching pattern that reaches to the cellular level, so a piece of dust carrying radioactive substances can get lodged in the lung. An alpha emitter can sit there and do lots of damage.

This is an overgeneralization. The effects depend on the chemistry of the radioactive substance. You really don't want to breathe radioactive plutonium, but ingesting it isn't anywhere near as bad. However, you really don't want to ingest radioactive iodine.
There are a myriad of answers to your question available on every TV channel and website, but most have only vague suggestions on how to stay safe. It's better to focus on the size of the particulates of contamination. See a video referring to a form of radiation here: http://www.youtube.com/watch?v=WgQ79-oDX2o&feature=related Also, don't let rain contact your skin directly when there's a report of radioactivity in an area. Radioactive substances are in our everyday life and some reports may be false positives, but it is always better to practice caution. The length of exposure is just as critical. With the Japan leakage, the exposure to the rest of the world is minimal so far. Keep in mind that it is an airborne particulate breathed in that is worrisome, and anything to reduce that exposure is best. As for food, we already irradiate it with small doses.

Voluminous literature is available on the effects of ionizing radiation on atomic bomb survivors in Japan and on the Chernobyl reactor accident of April 1986. The harmful effects take place in the whole body, down to the cellular level. Some exposed people have developed cancer. The detailed reports are available on the Internet. Take the instance of I-131 accumulated in the thyroid gland immediately after the Chernobyl reactor accident in 1986. Since it has a half-life of 8.0207 days and is excreted through urine, it is not considered dangerous at low activity levels.
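To put the half-life remarks above on a quantitative footing (my addition; the 66-day biological half-life figure for iodine in the thyroid is a commonly cited assumption, not taken from the answers above), the residence time of an ingested isotope combines its physical and biological half-lives:

    \frac{1}{T_{\mathrm{eff}}} = \frac{1}{T_{\mathrm{phys}}} + \frac{1}{T_{\mathrm{bio}}}
    \qquad\Longrightarrow\qquad
    T_{\mathrm{eff}} = \frac{T_{\mathrm{phys}}\, T_{\mathrm{bio}}}{T_{\mathrm{phys}} + T_{\mathrm{bio}}}

For I-131, taking T_phys ≈ 8.02 days and T_bio ≈ 66 days gives T_eff ≈ (8.02 × 66)/(8.02 + 66) ≈ 7.2 days, so radioactive decay rather than excretion dominates how quickly it leaves the body; this is consistent with the point above that these isotopes do not accumulate for long.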
using System;
using System.Collections.Generic;
using LLVMSharp;
using System.Diagnostics;

namespace CSharpLLVM
{
    public class EmulatedState
    {
        private List<EmulatedStateValue> evaluationStack = new List<EmulatedStateValue>();
        private List<EmulatedStateValue> evaluationStackAtStart;

        public int StackSize { get { return evaluationStack.Count; } }

        public EmulatedState() { }

        // Clones the evaluation stack of the origin block's state, so this
        // block can be emulated independently of its predecessor.
        public EmulatedState(LLVMBuilderRef builder, BasicBlock origin)
        {
            var state = origin.GetState();
            foreach (var value in state.evaluationStack)
            {
                evaluationStack.Add(new EmulatedStateValue(builder, origin, value));
            }
            evaluationStackAtStart = new List<EmulatedStateValue>(evaluationStack);
        }

        public void StackPush(EmulatedStateValue value)
        {
            Debug.Assert(value != null);
            evaluationStack.Add(value);
        }

        public EmulatedStateValue StackPop()
        {
            int index = evaluationStack.Count - 1;
            EmulatedStateValue value = evaluationStack[index];
            evaluationStack.RemoveAt(index);
            return value;
        }

        public EmulatedStateValue StackPeek()
        {
            int index = evaluationStack.Count - 1;
            return evaluationStack[index];
        }

        // Merges the values flowing in from another predecessor block.
        // Both stacks must have the same depth at the merge point.
        public void Merge(LLVMBuilderRef builder, BasicBlock origin)
        {
            var otherState = origin.GetState();

            if (evaluationStackAtStart.Count != otherState.evaluationStack.Count)
                throw new InvalidOperationException("Cannot merge stacks with a difference in size");

            for (int i = 0; i < evaluationStackAtStart.Count; ++i)
            {
                evaluationStackAtStart[i].Merge(builder, origin, otherState.evaluationStack[i]);
            }
        }
    }
}
Section: New Results

Computational Cardiology & Image-Based Cardiac Interventions

Cardiac Electrophysiological Model Learning and Personalisation

Participants: Nicolas Cedilnik [Correspondant], Ibrahim Ayed [Sorbonne, LIP6, Paris], Hubert Cochet [IHU Liryc, Bordeaux], Patrick Gallinari [Sorbonne, LIP6, Paris], Maxime Sermesant.

This work is funded by the IHU Liryc, Bordeaux.

modelling, electrophysiology, ventricular tachycardia, ischemic cardiomyopathy

This project aims at making electrophysiological model personalisation enter clinical practice in interventional cardiology. During this year:
- we evaluated a fully automated computed tomography-based model personalisation framework in the context of post-ischemic ventricular tachycardia,
- we developed a model personalisation methodology based on invasive data for our participation in the STACOM2019 modelling challenge.

Deep Learning Formulation of ECGI for Data-driven Integration of Spatiotemporal Correlations and Imaging Information

Participants: Tania Marina Bacoyannis [Correspondant], Hubert Cochet [IHU Liryc, Bordeaux], Maxime Sermesant.

This work is funded within the ERC Project ECSTATIC with the IHU Liryc, in Bordeaux.

Deep Learning, Electrocardiographic Imaging, Inverse problem of ECG, Electrical simulation, Generative Model.

Electrocardiographic imaging (ECGI) aims at reconstructing the electrical activity of the heart using body surface potentials. To achieve this, one has to solve the ill-posed inverse problem of torso propagation. We propose a novel deep learning method, based on a conditional variational autoencoder, able to solve the ECGI inverse problem in 2D. This generative probabilistic model learns geometrical and spatio-temporal information and can generate the corresponding activation map for a specific heart. 120 activation maps and the corresponding body surface potentials (BSP) were generated using the dipole formulation; 80% of the simulated data was used for training and 20% for testing. We generate 10 probable solutions for each given input using our model. The mean squared error (MSE) over all tests was 0.095. The results show that the reconstruction performs well. Next, we will extend the model to 3D and test it on real data provided by the IHU Liryc.

Discovering the link between cardiovascular pathologies and neurodegeneration through biophysical and statistical models of cardiac and brain images

Participants: Jaume Banus Cobo [Correspondant], Marco Lorenzi, Maxime Sermesant.

This work is funded by the Université Côte d'Azur (UCA).

Lumped models - Biophysical simulation - Statistical learning

The project aims at developing a computational model of the relationship between cardiac function and brain damage from large-scale clinical databases of multi-modal and multi-organ medical images. The model is based on advanced statistical learning tools for discovering relevant imaging features related to cardiac dysfunction and brain damage; these features are combined within a unified mechanistic framework to provide a novel understanding of the relationship between cardiac function, vascular pathology and brain damage.

Parallel transport of surface deformations from pole ladder to symmetrical extension

Participants: Shuman Jia [Correspondant], Nicolas Guigui, Nicolas Duchateau, Pamela Moceri, Maxime Sermesant, Xavier Pennec.

The authors acknowledge the partial funding by the Agence Nationale de la Recherche (ANR)/ERA CoSysMedSysAFib and ANR MIGAT projects.
We proposed a general scheme to perform statistical modeling of the temporal deformation of the heart, directly based on meshes. We encoded the motion and the inter-subject shape variations with diffeomorphisms parameterized either by stationary SVFs or by time-varying velocity fields in the LDDMM framework. Experiments on a database of 4D right-ventricular endocardial meshes demonstrated the stability of our transport algorithm, which is important for the assessment of pathological changes. The method is adaptable to other anatomies with temporal or longitudinal data.

Machine Learning and Pulmonary Hypertension

Participants: Yingyu Yang [Correspondant], Stephane Gillon, Jaume Banus Cobo, Pamela Moceri, Maxime Sermesant.

cardiac modelling, machine learning

Right heart catheterisation is considered the gold standard for the assessment of patients with suspected pulmonary hypertension. It provides clinicians with meaningful data, such as pulmonary capillary wedge pressure and pulmonary vascular resistance; however, its usage is limited due to its invasive nature. Non-invasive alternatives like Doppler echocardiography can provide insightful measurements of the right heart but lack detailed information related to the pulmonary vasculature. In order to explore non-invasive means, we studied a dataset of 95 pulmonary hypertension patients, which includes measurements from echocardiography and from right-heart catheterisation. We used data extracted from echocardiography to conduct cardiac circulation model personalisation and tested its power to predict catheter data. Standard machine learning methods were also investigated for pulmonary artery pressure prediction. Our preliminary results demonstrated the potential predictive power of both data-driven and model-based approaches. This work was published as "Non-Invasive Pressure Estimation in Patients with Pulmonary Arterial Hypertension: Data-driven or Model-based?", accepted at the 10th Workshop on Statistical Atlases and Computational Modelling of the Heart, Oct 2019, Shenzhen, China.

Style Data Augmentation for Robust Segmentation of Multi-Modality Cardiac MRI

Participants: Buntheng Ly [Correspondant], Hubert Cochet [IHU Liryc, Bordeaux], Maxime Sermesant.

Image Segmentation, Multi-modality, Cardiac Magnetic Resonance Imaging, Late Gadolinium Enhanced, Deep Learning

We propose a data augmentation method to improve the segmentation accuracy of a convolutional neural network on multi-modality cardiac magnetic resonance datasets. The strategy aims to reduce over-fitting of the network toward any specific intensity or contrast of the training images by introducing diversity in these two aspects, as shown in figure 23.

Towards Hyper-Reduction of Cardiac Models using Poly-Affine Deformation

Participants: Gaëtan Desrues [Correspondant], Hervé Delingette, Maxime Sermesant.

Model Order Reduction, Finite Elements Method, Affine Transformation, Meshless

Patient-specific 3D models can help in improving therapy selection, treatment optimization and interventional training. However, these simulations generally have an important computational cost. The aim of this project is to optimize a 3D electromechanical model of the heart for faster simulations. The cardiac deformation is approximated by a reduced number of degrees of freedom represented by affine transformations (frames in Figure 24b) located at the centers of the AHA regions (Figure 24a). The displacement of the material points is computed using region-based shape functions (Figure 24c).
It makes interesting reading (details can be found here, though there is a cost to buy the full report), covering a number of areas such as Yammer, Office 365 and mobile. With the new year arriving we have been discussing internally the types of projects we are likely to see in 2014, and it was with that in mind that two areas of the report jumped out at us. These tie in with our wider predictions for 2014 (you can find them here) and we thought they were worth expanding upon.

Yammer and sticky SharePoint

The first area concerned a question about Yammer: "Are you using or planning to use Yammer to augment SharePoint?"

|Answer||% of respondents|
|No plans to adopt Yammer||53|
|No, we see Yammer as a separate project from SharePoint||22|
|Yes in the next 12 months||8|
|Yes in the next 6 months||4|
|Yes, we are using Yammer now and plan to expand Yammer usage||5|
|Yes, we are using Yammer now||8|

The striking thing here was that 53% of the people who answered this survey don't plan to adopt Yammer. This seems very high, and is pretty much at odds with what our clients have been telling us. As 2013 came to a close there was a real buzz building around Yammer, and we think 2014 will be a big year for the product. Some of our teams have already begun working on exciting projects that really start to fuse social elements with the traditional intranet. Whilst the concept of 'Intranet 2.0' has been around for a while, we think we are finally starting to see some really groundbreaking concepts.

We've done a lot of work in the past with SharePoint MySites. This feature of the platform, giving users their own corner of the intranet, has always been popular. But it is often hard to keep users engaged. We think Yammer could be the 'social glue' needed to make MySites sticky. Yammer can bring real people, regular updates, and fresh content directly into MySites. Why do people check back into Facebook so often? To see what has changed. We think Yammer might bring this to SharePoint.

SharePoint on the go

A second question asked: "Do you have a SharePoint mobility solution?"

|Answer||% of respondents|
|No, but we have a strategy to implement SharePoint for mobile devices||23|
|No, we are waiting for Microsoft to support the devices we use||16|
|Yes, we licensed a third party solution||8|
|Yes, we built a custom solution||7|

Now, only 30% of respondents were using a version of SharePoint 2013, which we think accounts for the poor showing of mobile in this question. SharePoint 2010 and 2007, whilst perfectly usable on a mobile device, don't offer the 'app experience' that many users now expect. SharePoint 2013 is much better equipped in this area, and again we expect 2014 to be a good year for SharePoint in this respect.

Type 'SharePoint' into your iPhone app store and the following official Microsoft apps pop up:
- SharePoint Newsfeed - An easy way to monitor your SharePoint activity feeds
- Microsoft OneNote - Note-taking version of the desktop software that syncs with SharePoint
- Office Mobile for Office 365 - Access, edit and view Office docs stored in your Office 365 environment
- SkyDrive Pro - Access your SharePoint documents offline and on the move

There are also some really impressive apps from respected names like Colligo, Harmonie and SharePlus. Not only are these stable, mature tools that really add to what you can do with SharePoint on the move, but they all interact with older versions of the platform.
We are excited to see the use cases grow and grow for mobile in 2014, and we anticipate that the results of Forrester's 2014 survey will show a much more positive attitude in this area.
I work in the Department of Theoretical Linguistics of the University of Amsterdam. For my research, I am attached to the Amsterdam Centre for Language and Communication. At present I teach Optimality Theory, Phonology & Morphology, Historical Linguistics and Minority Languages of Europe. My research is divided between Phonology and Creole Studies, in the broadest sense of both sub-disciplines. I am the Programme Director for the M.A. in General Linguistics, and joint Research Co-ordinator for the Language Creation programme.

I am Scottish, i.e. I come from Scotland. In Scotland three autochthonous languages are spoken at the present day: Scottish Gaelic, Scots and (Scottish) English. Formerly, a Welsh dialect, Cumbric (till ca. 1200), Pictish (till ca. 1000) and Norn (till ca. 1800) were also spoken. I am interested in all these languages. Of these six languages three are Celtic, and three Germanic. In addition there are a number of languages formerly used by (semi-)nomadic groups: "Scottish Gaelic Shelta", Scottish Travellers' Cant, and "Scoto-Romany". To what extent these are still spoken is completely unknown. Also the degree to which they formed independent linguistic systems, or were just "secret" add-ons to Gaelic and Scots respectively, is unclear. I no longer wear "ethnic" garments, but as a child I did on formal occasions.

1) Phonology

I am firstly interested in questions of segmental and syllabic structure. The theoretical model I use is Dependency Phonology, within an Optimality framework. Additional concerns are the representation of Vowel Harmony, questions of Lenition, and the relationship between syllable and foot structure. For these aspects of my research I participate in the ACLC Research Group on Bidirectional Phonology and Phonetics. Another research interest is pitch-accent languages; particular questions are Level Stress phenomena and tonal polarity. An article on Level Stress in Wursten Frisian will appear in Nowele in 2007. For this I participate in the ACLC Research Group on Franconian Tones.

2) Creole Linguistics

The morphology and phonology of Atlantic English-based creole languages, and their substrates. For this I participate in the ACLC Research Group on Language Creation, of which I am joint coordinator.

3) Phonological Reconstitution

I am very interested in what can be got out of pre-modern grammatical descriptions, pieces of text in naive orthography, and the linguistic field-notes of 19th-century and early 20th-century amateur linguists and anthropologists. Projects include work on extinct Yokuts dialects, extinct Frisian dialects, older forms of Gbe (West Africa), and extinct Scottish Gaelic dialects. For this I participate in the ACLC Research Group Linguistics at Two Removes.
In particular, smart grid technology powered by the Internet of Things (IoT) is a significant tool for the sustainable and secure energy future we need. Smart grids represent the application of IoT technology in the energy sector. Advanced metering infrastructure is one of the key components of smart grid technology, and smart meters are the devices that bring the solution to life.

Energy theft can result either from direct theft, where consumers connect directly to the main supply and bypass metering, or from tampering with meters.

In a sector aching for innovation, smart grid technology powered by the IoT is leading the digital transformation for utilities and consumers. One of the main advantages of the smart grid for utilities is that it allows them to provide incentives for consumers to monitor their consumption. With smart grid technology, utilities can exploit innovative billing solutions at scale without missing a beat. Central to the promise of the smart grid is the idea of a more secure electrical grid.

Abstract: the Internet of Things (IoT) is a rapidly emerging field of technologies that delivers numerous cutting-edge solutions. The system described here is an energy consumption monitoring and measuring system: with the help of the smart grid, the consumer and the owner get daily electricity consumption readings. The proposed system integrates various renewable sources with the help of a microcontroller, and the energy and its parameter data are sent to the utility.

The IoT smart grid enables two-way communication between connected devices and hardware that sense and respond to user demands. A smart grid is more resilient and less costly than the current power infrastructure. The IoT smart energy grid project is based on an ATmega-family controller which controls the various activities of the system; the system communicates over the internet using Wi-Fi. In this project, one bulb is used to demonstrate a valid consumer and another bulb to demonstrate an invalid consumer.

A related goal is to propose a scalable lightweight blockchain-integrated model (LightBlock) that meets the requirements of IoT devices and offers end-to-end security.

Learn how IoT provides the foundation for smart grids, the specific applications it can unlock, and how smart energy solution providers can build connected systems. The materials of the training part of the study course ITM1 "IoT for Smart Energy Grid" were developed in the framework of the Erasmus+ ALIOT project.
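To make the energy-theft point concrete, here is a minimal Python sketch (my own illustration, not part of the project above; all names and the 5% loss allowance are assumptions): it compares the energy measured at a feeder with the sum of the downstream smart-meter readings, since a persistent gap suggests a bypass or a tampered meter.

    from dataclasses import dataclass

    @dataclass
    class MeterReading:
        meter_id: str
        kwh: float  # energy reported by the smart meter for the billing interval

    def detect_theft(feeder_kwh: float, readings: list[MeterReading],
                     loss_factor: float = 0.05) -> bool:
        """Flag a feeder when metered consumption plus an allowance for normal
        technical losses cannot account for the energy actually delivered."""
        metered = sum(r.kwh for r in readings)
        unaccounted = feeder_kwh - metered
        return unaccounted > loss_factor * feeder_kwh

    # Example: the feeder delivered 120 kWh but the meters only report 100 kWh.
    readings = [MeterReading("m1", 60.0), MeterReading("m2", 40.0)]
    print(detect_theft(120.0, readings))  # True: ~17% unaccounted > 5% allowance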
Pivot-Based Bilingual Dictionary Induction

UNESCO estimates that, if nothing is done, half of the more than 6,000 languages spoken today will disappear by the end of this century. Consequently, humanity would lose the cultural heritage and ancestral knowledge embedded, in particular, in indigenous languages. Enriching a language's resources is one way to help prevent its extinction. Pivot-based bilingual dictionary induction is the most convenient method to create a bilingual dictionary for a low-resource language, i.e., a language that has inadequate language resources for computational linguistics. When two bilingual dictionaries, Malay-Indonesian and Indonesian-Minangkabau, are connected via the pivot language Indonesian to induce a Malay-Minangkabau dictionary, the precision can sometimes be very low (0.36) due to polysemy of the pivot words, as shown in Figure 1. Finding a way to prune incorrect translation pair candidates is the research challenge in the pivot-based bilingual dictionary induction approach.

Figure 1. Example of Pivot-based Bilingual Dictionary Induction

The first work on pivot-based bilingual dictionary induction is the inverse consultation method, which identifies equivalent candidates of source-language words in the target language by consulting the source-pivot and pivot-target dictionaries. These equivalent candidates are then looked up and compared in the inverse target-source dictionary. Unfortunately, for some low-resource languages, it is often difficult to find machine-readable inverse dictionaries to identify and eliminate the erroneous translation pair candidates.

Inspired by the inverse consultation method, our team proposed to treat pivot-based bilingual dictionary induction as an optimization problem, with a pruning process that involves a set of constraints and heuristics rather than inverse dictionaries. The assumption was that lexicons of closely related languages offer instances of one-to-one mapping and share a significant number of cognates (words with similar spelling/form and meaning originating from the same root language). To be a one-to-one pair, a source-language word and a target-language word should be symmetrically connected via pivot word(s). Some new edges can be added to the graph if no symmetrically connected pair is available, with some cost to be paid based on defined heuristics. However, this so-called one-to-one approach, which prioritizes precision, leads to low recall, since many other potential translation pair candidates are ignored. Therefore, we generalized the constraint-based bilingual dictionary induction by extending the constraints and translation pair candidates from the one-to-one approach to attain higher recall while maintaining good precision. Firstly, we identify one-to-one cognates by incorporating more constraints and heuristics to improve the precision. We then identify the cognates' synonyms to obtain many-to-many translation pairs. In each step, we can obtain more cognate and cognate-synonym pair candidates by iterating the n-cycle symmetry assumption until all possible translation pair candidates have been reached. After conducting experiments, we found that our generalized approach works better on closely related languages and outperformed the inverse consultation and one-to-one approaches.
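As a rough illustration of the symmetry idea, here is a minimal Python sketch (my own, not the authors' implementation; the dictionaries are hypothetical toy entries represented as word-to-set-of-translations maps):

    def induce_via_pivot(src2piv, piv2tgt, tgt2piv):
        # Compose source->pivot and pivot->target entries, keeping a pair (s, t)
        # only when t also maps back to the same pivot word; this symmetry check
        # prunes many wrong pairs introduced by polysemous pivot words.
        induced = {}
        for s, pivots in src2piv.items():
            for p in pivots:
                for t in piv2tgt.get(p, set()):
                    if p in tgt2piv.get(t, set()):
                        induced.setdefault(s, set()).add(t)
        return induced

    # Hypothetical toy entries (Malay -> Indonesian -> Minangkabau)
    src2piv = {"air": {"air"}}                 # Malay "air" (water) -> Indonesian "air"
    piv2tgt = {"air": {"aia", "ayie"}}         # Indonesian "air" -> two target candidates
    tgt2piv = {"aia": {"air"}, "ayie": set()}  # only "aia" links back symmetrically
    print(induce_via_pivot(src2piv, piv2tgt, tgt2piv))  # {'air': {'aia'}}

The full method layers cost-based heuristics and cognate constraints on top of this basic symmetry filter, but the sketch shows why symmetric connectivity alone already removes many pairs caused by pivot polysemy.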
Troubleshooting Semgrep Cloud Platform

If a project reports the last scan 'never started'

This status means that your CI job never authenticated to Semgrep Cloud Platform. Check your CI provider (such as GitHub Actions) for the latest Semgrep job execution.

…and you can't find a Semgrep CI job

The issue is likely with the CI configuration.
- Make sure that the branch you committed a CI job to is included in the list of branches the job is triggered on.
- Make sure that the CI configuration file has valid syntax. Most providers have a tool for checking the syntax of configuration files.

…and a Semgrep CI job exists

Check the log output for any hints about what the issue is.
- If the logs mention a missing token or an authentication failure, you can get a new token from the Settings page of Semgrep Cloud Platform, and set it as SEMGREP_APP_TOKEN in your CI provider's secret management UI.
- Alternatively, if this is the first scan after adding a new GitHub repository, and the repository is a fork, check your Actions tab to see if workflows are enabled. Enable workflows to allow Semgrep to scan.

If a project reports the last scan 'never finished'

This status means that your CI jobs start and authenticate correctly, but fail before completion. Check your CI provider (such as GitHub Actions) for the log output of the latest Semgrep job execution. In most cases you will see an error message with detailed instructions on what to do.

…and the job is aborted due to taking too long

Many CI providers have a time limit for how long a job can run. Semgrep CI also aborts itself if it runs for too long. If your CI scans regularly take too long and fail to complete:
- Please reach out to the Semgrep maintainers for help with tracking down the cause. Semgrep scans most large projects with hundreds of rules within a few minutes; long runtimes are typically caused by just one rule or source code file taking too long.
- To drastically cut run times, you can use Semgrep CI's diff-aware scanning to skip scanning unchanged files. For more details, see Semgrep CI's behavior.
- You can skip scanning large and complex source code files (such as minified JS or generated code) if you know their path by adding a .semgrepignore file. See how to ignore files & directories in Semgrep CI (a minimal example appears at the end of this page).
- You can increase Semgrep CI's own run time limit by setting a semgrep ci --timeout <seconds> flag, or by setting a SEMGREP_TIMEOUT=<seconds> environment variable. To fully disable the time limit, set this value to 0.

If you're unable to comment on Semgrep Registry pages

Our comments are powered by an external service called utteranc.es. If you aren't able to authenticate to leave comments, please make sure you don't have an ad blocker interrupting requests to their domain.
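Returning to the ignore-list tip above: a .semgrepignore file uses gitignore-style patterns. A minimal example, with illustrative paths (adjust to whatever generated or minified files exist in your repository):

    # .semgrepignore - skip generated and minified sources
    dist/
    vendor/
    *.min.js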
#include <sched.h>
#include <pthread.h>

#include "atomic.h"
#include "spin_lock.h"

// Reader/writer lock backed by pthreads.
RWLock::RWLock()
{
    pthread_rwlock_init(&rwlock, NULL);
}

RWLock::~RWLock()
{
    pthread_rwlock_destroy(&rwlock);
}

bool RWLock::Lock(LockMode mode)
{
    if (mode == READ_LOCK) {
        return pthread_rwlock_rdlock(&rwlock) == 0;
    } else {
        return pthread_rwlock_wrlock(&rwlock) == 0;
    }
}

bool RWLock::Unlock(LockMode mode)
{
    return pthread_rwlock_unlock(&rwlock) == 0;
}

// Spinning reader/writer lock. The 32-bit word is split into two counters:
// the low 20 bits (0x000fffff) count active readers, the high 12 bits
// (0xfff00000) count writers (in units of 0x100000).
SpinRWLock::SpinRWLock() : m_lock(0)
{
}

SpinRWLock::~SpinRWLock()
{
}

bool SpinRWLock::Lock(LockMode mode)
{
    if (mode == READ_LOCK) {
        while (true) {
            // Wait until no writer holds or is acquiring the lock
            while (m_lock & 0xfff00000) {
                sched_yield();
            }
            if ((0xfff00000 & atomic_add_uint32(&m_lock, 1)) == 0) {
                return true;
            }
            // A writer slipped in between the check and the increment: back out
            atomic_sub_uint32(&m_lock, 1);
        }
    } else if (mode == WRITE_LOCK) {
        while (true) {
            while (m_lock & 0xfff00000) {
                sched_yield();
            }
            if ((0xfff00000 & atomic_add_uint32(&m_lock, 0x100000)) == 0x100000) {
                // We are the only writer; wait for active readers to drain
                while (m_lock & 0x000fffff) {
                    sched_yield();
                }
                return true;
            }
            atomic_sub_uint32(&m_lock, 0x100000);
        }
    } else {
        return false;
    }
}

bool SpinRWLock::Trylock(LockMode mode)
{
    if (mode == READ_LOCK) {
        if (m_lock & 0xfff00000) {
            return false;
        }
        if ((0xfff00000 & atomic_add_uint32(&m_lock, 1)) == 0) {
            return true;
        }
        atomic_sub_uint32(&m_lock, 1);
        return false;
    } else if (mode == WRITE_LOCK) {
        if (m_lock & 0xfff00000) {
            return false;
        }
        // Note: the original added 1 (a reader increment) here yet backed out
        // with 0x100000; use the writer increment consistently instead.
        if ((0xfff00000 & atomic_add_uint32(&m_lock, 0x100000)) == 0x100000) {
            if (m_lock & 0x000fffff) {
                // Readers are still active: a trylock must not wait, so back out
                // (the original returned false while still holding the write bit)
                atomic_sub_uint32(&m_lock, 0x100000);
                return false;
            }
            return true;
        }
        atomic_sub_uint32(&m_lock, 0x100000);
        return false;
    } else {
        return false;
    }
}

bool SpinRWLock::Unlock(LockMode mode)
{
    if (mode == READ_LOCK) {
        atomic_sub_uint32(&m_lock, 1);
        return true;
    } else if (mode == WRITE_LOCK) {
        atomic_sub_uint32(&m_lock, 0x100000);
        return true;
    }
    return false;
}

// Simple test-and-set spin lock.
SpinMutexLock::SpinMutexLock() : m_lock(0)
{
}

SpinMutexLock::~SpinMutexLock()
{
}

bool SpinMutexLock::Lock()
{
    while (!atomic_cmp_set_uint32(&m_lock, 0, 1)) {
        sched_yield();
    }
    return true;
}

bool SpinMutexLock::Unlock()
{
    m_lock = 0;
    return true;
}
When a client approaches Access with a web project, a common scenario is that they already have a website but want it "freshened up a bit." And there is nothing at all wrong with that. In fact, as the web continues to evolve rapidly, a website can begin to look stale pretty quickly. Sites created just two years ago can appear dated to visitors and thus create an undesired impression. There's nothing like some minor updates to spruce things up a bit. Realign some menu items, maybe add a slideshow. That should be quick (and inexpensive), right?

It depends. Was the existing site created using a CMS, or was it completely custom built? Was Access the creator of the site, or was it developed by someone else? Exactly how extensive are the "minor" changes desired? Would it, in fact, be simpler to just build a new site?

When a client comes to us with a desire to freshen up their site a bit, I follow these steps to determine the best way forward:

How old is the current site?
- If the answer is five years or more, the best option is almost certainly to create an entirely new site. There are a number of reasons for this, but the biggest one is that the web has changed so much in the last five years that any changes made now would simply be a delaying tactic. It would be best to start from scratch, create a new responsive site in a modern, well-supported CMS, and move forward from there.
- If the site is two to five years old, I would still recommend building a new site in most cases. However, if the site is strictly a brochure site, you could probably get away with some minor changes for a fresh look.
- If the site is less than 2 years old, changing the existing site is probably the best option. This assumes it was not a completely custom-built site.

Does the site use a CMS?
- CMSs (Content Management Systems) have progressed a lot in the last few years. They continue to evolve rapidly and introduce new features that make sites more dynamic and engaging. Even simple sites can benefit from a CMS, such as WordPress. More complex sites almost certainly need a CMS, such as WordPress or Drupal, to manage them.
- It is typically pretty straightforward to determine whether a site is using a CMS. If it isn't, or if it is just a collection of HTML files, in all likelihood a new site should be created rather than modifying the existing one. Sites that were not built using a CMS tend to be difficult (i.e. expensive) to modify, especially if the original developer is no longer available to make those changes.
- If the site is using a CMS, is the CMS up to date? The downside to using a popular CMS is that they are big targets for hackers. In most cases, a hacker is not interested in trying to get into one site that was completely custom built when, for the same effort, they can hack thousands of WordPress sites. For this reason, most CMSs regularly release bug, security, and feature patches. It is important that these patches are applied regularly, but they are often ignored. If a site is running on an especially old version of a CMS, it may not be possible to easily update the site to the current version of that CMS. In those cases, the best option may be to create a new site and start from scratch with the latest version of that CMS (or another CMS).

Was Access the original developer of the site?
- If Access created the original site, then we will have all of the original site files and understand exactly how the site was created.
This can be a huge benefit when making any requested changes, as we are already familiar with the code behind the site.
- If the original site was created by someone else, it will be more difficult to make changes to it. The reason is that each developer has their own style of coding. They tend to put files in particular places in the site structure, and so on. Some developers are particularly good about documenting code, but most are not. If the code is not well documented, any developer coming in behind the original to make changes ends up on a hunt, searching for the code that controls the item that needs to be changed.

Is the current site responsive?
- By responsive, I mean: does the site behave properly on devices of all screen sizes? If the current site simply looks like a very small version of the desktop site when viewed on a smartphone, it most likely is not responsive. Making a non-responsive site responsive is often a very difficult task that ends up costing more than just building a new site that is responsive to begin with. If a site is not responsive, my recommendation is almost always going to be to build a new site.
- If a site is mostly responsive but doesn't play nice on a few devices, this is generally something that can be fixed without having to create a new site.

If you just want to know whether you should tweak your existing site or build a new one, consider this:
- Is my current site less than five years old?
- Does my current site look good on my phone?
- Am I able to log in to an admin area and change the content of my site myself?

If the answer to any two of these questions is "no," your best option will probably be to create an entirely new site. If you answered all three questions "yes," you are probably okay to freshen up your current site. Regardless, feel free to contact us and I will be happy to make a recommendation based on your unique circumstances.
User switching is broken
xaeth at fedoraproject.org
Tue Mar 13 03:26:18 UTC 2012

2012/3/12 Sérgio Basto <sergio at serjux.com>
> On Mon, 2012-03-12 at 23:04 +0100, nodata wrote:
> > User switching between different users on X is broken.
> > It's not just broken for me, everyone I have asked has experienced the
> > same problem:
> > Clicking "Switch user" will often or sometimes lead to a hung screen.
> > The switcher doesn't show the correct virtual terminal. Killing the
> > switcher with ctrl+alt+backspace is one solution, the alternative is
> > manually switching virtual consoles.
> I haven't had any problems, but I also don't switch much.
> What Fedora release?
> What does X11 load? (cat /var/log/Xorg.0.log | grep drivers)
> What is your graphics card?
> What is your window manager?
> Do you use kdm, gdm, or something else?

I had the exact same experience. My wife and I switch a lot. I changed from nouveau to nvidia and my problem _mostly_ went away. It's only happened to me once since, so I assumed that was the problem/fix. Unfortunately, I've trained my wife on the ctrl+alt+bksp, so now I don't know how often she has the issue.

Release: I had the issue on F16 (I skipped 15, so I can't speak to that).

[499406.597] (II) Loading /usr/lib64/xorg/modules/drivers/nouveau_drv.so
[499406.598] (II) Loading /usr/lib64/xorg/modules/drivers/vesa_drv.so
[499406.598] (II) Loading /usr/lib64/xorg/modules/drivers/fbdev_drv.so
[499406.648] (II) Loading /usr/lib64/xorg/modules/drivers/nouveau_drv.so
[ 216.734] (WW) Hotplugging is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled.
[ 218.200] (II) Loading /usr/lib64/xorg/modules/drivers/nvidia_drv.so
[ 218.300] (II) Loading /usr/lib64/xorg/modules/drivers/nvidia_drv.so
01:00.0 VGA compatible controller: nVidia Corporation GT218 [GeForce G210]

The bug I found was: https://bugzilla.redhat.com/show_bug.cgi?id=739361
but I don't know if that is the one the OP was referring to.
Available now: Control your Raspberry Pi from Simulink

Online Capabilities and Features

Simulink Support Package for Raspberry Pi™ lets you develop algorithms that run standalone on your Raspberry Pi. The support package extends Simulink with blocks to drive Raspberry Pi digital I/O and to read and write data from it. After creating your Simulink model, you can simulate it and download the completed algorithm for standalone execution on the device. One particularly useful (and unique) capability offered by Simulink is the ability to tune parameters live from your Simulink model while the algorithm runs on the hardware.

The support package includes:
- Library of Simulink blocks that connect to Raspberry Pi I/O, such as audio input and output, video input and display, GPIO read and write, and ThingSpeak read and write
- Hardware setup screens to configure Raspberry Pi hardware and the Wi-Fi network interface
- Customization of existing Raspbian OS images to make them compatible with the Simulink Support Package
- Data logging from sensors and signals into MAT files saved on the Raspberry Pi
- UDP and TCP/IP blocks to let your Raspberry Pi communicate with Arduino®, LEGO MINDSTORMS® EV3, and mobile devices (Android™)
- Read and write blocks to communicate with peripherals over serial and the SPI and I2C protocols (Raspberry Pi board as the master)
- Publish and subscribe blocks for MQTT client support for machine-to-machine and IoT applications
- Blocks to read input from the Raspberry Pi Sense HAT used in Astro Pi, such as humidity, pressure, and acceleration, user input from the joystick, and a block to write to the RGB LED matrix display
- Audio file read block to read audio files as PCM data, and multichannel support for Audio Capture and Playback blocks
- Access to audio and video algorithms through add-on products such as Audio Toolbox and Computer Vision Toolbox
- Model deployment for standalone operation
- Interactive parameter tuning and signal monitoring of applications running on Raspberry Pi
- Documentation that guides you on how to create a device driver block to access specific features of your hardware board
- Simulink Coder lets you access the C code generated from Simulink and trace it back to the original model.
- Embedded Coder lets you generate optimized code, use code replacement libraries, and perform software-in-the-loop and processor-in-the-loop verification.
- Examples of how to use the MATLAB Function block in Simulink models to deploy algorithms based on MATLAB code.

Learn more about Raspberry Pi programming with MATLAB and Simulink.

The following Raspberry Pi models are supported by the support package.

|Raspberry Pi Model|Simulink Releases Supported|
|---|---|
|Raspberry Pi 1 Model B (discontinued)|R2014a - Current|
|Raspberry Pi 1 Model B+|R2014b - Current|
|Raspberry Pi 2 Model B|R2014b - Current|
|Raspberry Pi 3 Model B|R2016a - Current|
|Raspberry Pi Zero W|R2018a - Current|
|Raspberry Pi 3 Model B+|R2018b - Current|
|Raspberry Pi 4 Model B|R2020a - Current|

Note: Raspberry Pi 1 Model A, Raspberry Pi 1 Model A+, and Raspberry Pi Zero are currently not supported.

About Raspberry Pi

Raspberry Pi is a popular, low-cost, credit-card-sized single-board computer that supports embedded Linux operating systems, such as Raspbian. Raspberry Pi is powered by ARM® Cortex® A processors and provides peripheral connectivity for stereo audio, digital video (1080p), USB, and Ethernet, with optional camera board and sensor board add-ons.
See the hardware support package system requirements table for current and prior version, release, and platform availability. View enhancements and bug fixes in release notes.
Write up your notes about the "Dark Ages", the Anglo-Saxon times. Upload to Manaba by Wednesday midnight as usual. Try to use your new APA Template. You should include:
- When the Anglo-Saxons came
- Where they came from
- Why they came
- Where they settled (talk about and explain Wessex, Sussex and Essex)
- What language they spoke
- A brief introduction to King Arthur and his legend
- The Viking/Danish invasions (when, where from, where to)
- King Alfred the Great (who he was, when he lived, what he did that was famous, why he is called "the Great")
- If possible, use the template you made last week with APA formatting.

- Were you able to see my two files (*.htm and *.rtf) with my corrections and comments on your writing? Were they useful to you? (If you were absent from class today, please email me your answer to this question.)
- Release forms from students:
  - for this class
  - for public use (including publication)
- Academic writing rules:
  - Avoid non-academic language (conversational English, contractions like "don't", "isn't", etc.)
  - Don't begin new sentences with "And", "But" or "Because"; instead, keep them as part of the same sentence.
  - One paragraph, one topic. New topic, new paragraph. Keep sentences about the same topic together.
- The Anglo-Saxons. Key points:
  - Where they came from (see the maps here)
  - King Arthur and the Knights of the Round Table
  - Merlin the Magician
  - Sir Lancelot
  - Excalibur (Arthur's sword)
  - They spoke Anglo-Saxon (not Celtic, not Latin), which is also called Old English.
  - Here is part of an Anglo-Saxon poem called The Battle of Maldon, about a battle between the Anglo-Saxons and the Vikings (Danes) which took place in 991 A.D. (The Vikings won!) Can you recognize any modern English words in this Old English text?
  - The monk Bede and his History of the English Church (written in Latin, of course, because Bede was a monk), later translated into Anglo-Saxon by King Alfred.
  - King Alfred the Great, Danegeld, Danelaw

Bonus – Celtic videos

Express your opinion! Let me know what you think of today's class, the written materials, the activities, the teacher's lecture, etc. Have a question about English or about today's class? Click the words "Leave a comment" below, or send me an email. Thank you for visiting.
In JMeter, after login, a graph is displayed along with some menus

Here is the scenario: when I log in to the application, a graph is displayed with some main menus on the left side of the page. The graph takes some time to be displayed, say 3 to 4 seconds, but the buttons are displayed earlier. I want to measure how much time the graph takes to be displayed after a successful login.

It looks like your application is using AJAX request(s) to display the graph. You need to capture the requests somehow and execute them along with the main GET request to the page. You can use a Transaction Controller to measure and record the whole sequence's execution time. As for the AJAX itself, JMeter doesn't provide a sampler that exactly simulates a browser's behaviour; you will need to implement the logic yourself. Some approaches are listed in the How to Load Test AJAX/XHR Enabled Sites With JMeter guide.

If I understood your issue correctly, there is a workaround: you just need to know or find out which HTTP requests are made to display the graph, or which requests are made after login to display the navigation links on the left. Once this is known, just move those requests (cut and paste) to a different controller (you can use a Simple Controller for this), and when you execute the script, the listener will show you the response time for the graph requests (the new controller) separately. It is easy to know which requests are made for displaying the graph if the graph section is a separate page displayed after login; for example, if your application under test is structured like this:
1. Login.aspx: login page required for login
2. Home.aspx: homepage with left navigation links
3. GraphContainer.aspx: page containing the graph, displayed inside the homepage
If the structure is like this, it is easy to separate the requests and add them to separate controllers in JMeter; even without adding a new controller, you can tell the response time for the graph requests in this kind of application structure. If not, then apply another workaround: jot down the sequence numbers of the requests sent before the graph is displayed (JMeter 2.13 displays a sequence number before each request URL); once that sequence number is known, all major requests after it will be related to the graph feature only. This workaround works reliably; I have used it for a couple of performance tests where the same action produces multiple results, or where I needed to separate response times, as in your scenario.

You can try the WebDriver plugin, which interacts with UI elements. It supports explicit waits on conditions like IsElementVisible.

This doesn't seem in line with what JMeter is built to do: create load on a server. It sounds like you'd be better off doing this sort of thing in Selenium. For manual testing, just use the network tab of DevTools in most modern browsers.
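If you do take the Selenium route, the measurement itself is straightforward: submit the login, then explicitly wait for the graph element to become visible and time the gap. The sketch below is only an illustration of that idea; the URL, the form field names, the credentials, and the #graph-container selector are placeholder assumptions you would replace with your application's real values.

import time

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # placeholder URL

    # Fill in the login form (placeholder locators and credentials).
    driver.find_element(By.NAME, "username").send_keys("testuser")
    driver.find_element(By.NAME, "password").send_keys("secret")

    start = time.monotonic()
    driver.find_element(By.ID, "login-button").click()

    # The menus render quickly, so wait specifically for the graph container.
    WebDriverWait(driver, 30).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, "#graph-container"))
    )
    elapsed = time.monotonic() - start
    print(f"Graph became visible {elapsed:.2f} s after login was submitted")
finally:
    driver.quit()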
In Windows operating systems, the Command Prompt is a program used to execute entered commands and perform advanced administrative functions. Some helpful commands are listed below:

1. ASSOC: Fix File Associations
This command lists all file extensions registered on the computer and the file types associated with them, which also makes it possible to spot associations that are broken.
Syntax: assoc .txt

2. FC: File Compare
The FC command compares two files, as either ASCII or binary, and lists all of the differences that it finds.
Syntax: fc /a filename1 filename2 (compares two ASCII files)
Syntax: fc /b Picture1.jpg Picture2.jpg (does a binary compare on two images)

3. IPCONFIG: IP Configuration
This command helps to find the IP address and related network configuration.

4. NETSTAT: Network Statistics
This command lists established connections and listening TCP and UDP ports.

5. PING: Send Test Packets
This command helps while troubleshooting LAN connectivity issues.
Syntax: ping google.com or ping ipaddress (171.24….)

6. TRACERT: Trace Route
This command helps to find the footprint of routing hops.
Syntax: tracert google.com

7. SHUTDOWN: Turn Off Computer
This command helps to shut down the system.
Syntax: shutdown /i (opens the remote shutdown dialog)
shutdown /s (shuts down the current system)

8. SYSTEMINFO: System Information
This command helps to view the hardware information of the system.

9. SFC: System File Checker
This command helps to check operating system file integrity.
Syntax: sfc /scannow
Other useful SFC switches are listed below:
/VERIFYONLY: Check the integrity but don't repair the files.
/SCANFILE: Scan the integrity of specific files and fix them if corrupted.
/VERIFYFILE: Verify the integrity of specific files but don't repair them.
/OFFBOOTDIR: Use this to do repairs on an offline boot directory.
/OFFWINDIR: Use this to do repairs on an offline Windows directory.
/OFFLOGFILE: Specify a path to save a log file with scan results.

10. NET USE: Map Drives
If you have a shared folder on a computer on your network called \\OTHER-COMPUTER\SHARE\, you can map this as your own Z: drive by typing the command:
Syntax: net use Z: "\\OTHER-COMPUTER\SHARE" /persistent:yes

11. CHKDSK: Check Disk
This command helps to find and fix logical file system errors.
Syntax: chkdsk c: /f /r /x

12. SCHTASKS: Schedule Tasks
This command helps to schedule a task for a specific time, which is useful for scripting or installations. Note that /TN takes the task name and /TR takes the program to run.
Syntax: schtasks /Create /SC HOURLY /MO 12 /TN Example /TR c:\temp\sample.bat

13. ATTRIB: Change File Attributes
This command displays or changes file attributes (such as read-only, hidden, and system) for files and drives.
Syntax: attrib +r file.txt

14. COLOR: Change the Background Color
This command changes the background color of the Command Prompt window.
Syntax: color 3f
Color attributes are specified by TWO hex digits: the first corresponds to the background, the second to the foreground. Each digit can be any of the following values:
0 = Black        8 = Gray
1 = Blue         9 = Light Blue
2 = Green        A = Light Green
3 = Aqua         B = Light Aqua
4 = Red          C = Light Red
5 = Purple       D = Light Purple
6 = Yellow       E = Light Yellow
7 = White        F = Bright White

15. ROBOCOPY: A powerful file copy utility built right into Windows
This command enables robust command-line file transfer.
Syntax: robocopy sourcepath destinationpath
This thread was archived. Please ask a new question if you need help.

How to install the Firefox Metro preview on Windows 8?

I've downloaded the Firefox Metro preview "firefox-18.0a1.en-US.win32.zip" and tried to install it according to the instructions, but I didn't get the Firefox tile on my Windows 8 PC's start screen. I've tried it with the installer as well. Nothing happened.

All Replies (16)

There've been reports that the preview fails to install/run correctly if you're using a 32-bit version of Windows 8. Also, can you look in the "All Apps" list that you can access by right-clicking a blank spot on the start screen and choosing "All Apps" from the bar that appears, and see if Nightly shows up in there?

It also requires the RTM version of Windows 8, so if you're trying to do this from the Release Preview, it probably won't work...

A final thing to check is the default program settings. Go to the start screen and start typing "default", then choose "Default Programs" from the list. In the window that pops up, click "Set Your Default Programs". In the list of programs, select "Nightly", and then click "Set this program as default" to give it all of the defaults. Then go back to the start screen and see if the tile appears there.

I downloaded the latest nightly build, 19.0a1. When I changed the default program to Nightly, it still opens in desktop mode. Is this a Nightly bug?

It isn't yet in the normal Nightly builds, you'll have to download the Metro Preview build: http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-elm/firefox-18.0a1.en-US.win32.installer.exe Beyond that, the Metro version will only open from the tile on the start screen, and only if it shows the new icon on the tile.

The requested URL /pub/mozilla.org/firefox/nightly/latest-elm/firefox-18.0a1.en-US.win32.installer.exe was not found on this server. :-( Help me please

Oh. Sorry, we're up to Firefox 19 now: http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-elm/firefox-19.0a1.en-US.win32.installer.exe The Nightly channel has been 19.0a1 ever since the October 9 build.

I already downloaded it. I'm on Win 8 Pro x86, but it doesn't open in Metro preview and I don't know why :/

As you have 32-bit Windows and not 64-bit, you may be able to run it in desktop mode but not in Metro mode.

Modified by James

So this isn't a bug, but it only works on x64 systems? I hope Firefox can open in Metro mode on x86 someday.

I've installed the latest nightly build on Windows Pro x64, and it does NOT open in Metro mode from the start screen. Regardless of how I open it, it reverts to desktop mode. Any suggestions?

Try adding the -metrodesktop switch. "C:\Program Files\Nightly\firefox.exe" -metrodesktop

I added the -metrodesktop argument from the command line and got the same result. Nightly is installed in the Program Files (x86) directory. Is there a 64-bit version that I need?

It doesn't work; maybe it can be fixed someday. I need a solution for Nightly, the Metro mode doesn't work, thanks

Just set Nightly as the default, then Nightly turns into the Windows 8 Modern UI. Done.
I wanna use this theme but it has lots of module problems. I have tried to update the modules to the latest versions, but they might be incompatible with the whole theme config. Will there be a new version? If not, can somebody help me edit this theme so it will be compatible with the latest LS build? I'm still 70% newbie, haven't finished with all the docs. Argghh. If anyone has spare time, pls help ok? I'll be eternally grateful ;)

sure :) if you have a problem, yo, i'll solve it ;)

how come you are always the first to answer questions?? are you a moderator?

nope, i'm lucky :D actually i guess i'm just an active community member *said proudly* :D no work to do at the moment and i've got bored with all the games, so when i sit down behind my comp i'll try to solve other people's problems, cause i haven't had many of those lately.

Everything is fine when I load nonstep. But when I recycle it, it says 'exception while quitting module grdtray3.dll', 'exception while quitting module jdesk.dll', 'unable to register desktopbackground class' and 'grdtray: unable to register window class'. After that, only some parts of the theme are functional. Then the theme crashes with this message >>
AppName: litestep.exe AppVer: 0.24.6.1 ModName: jdesk.dll ModVer: 0.0.0.0 Offset: 00002e50
I used the zip file, which has a bad CRC or something. How do I update the modules and step.rc to be compatible with the latest LS build? I know it will be tedious to reconfigure the modules, but it is a good theme, don't wanna waste it.

i've been modifying nonstep for months now .. so if there's anything i can help with ... :) first i'd replace $ModulesDir$grdtray3.dll with $LitestepDir$systray2.dll .. do not update to the latest layercuts.dll .. weird on-top problems appeared when i did that. but i think the module with the most stability issues used in nonstep is grdtray. oh, and update your indiestep build too :)

Use systray2 instead of grdtray3.

garland, is your modified nonstep working? Can you please upload it somewhere so I can download it? I'm still not good at editing this stuff. :-0

I tried replacing grdtray with systray and it worked! However, the system tray now sticks at the upper location when I switch to bottom bar mode. How do I configure it again??

add this to your nonstep\config\bottom.cfg: ;; SYSTRAY ;; i think that should do.. and for uploading it, hmm i think i'll contact ursula2k and ask her (him?) for permission to release my stuff. screenshot (so far): http://www.informatik.uni-oldenburg.de/~troggan/weee_ver2.png

hmm, garland beat me to it :P been mowing grass all day long. arrgh!! i hate grass. on the other hand now it's so nice to watch :)

garland, nice white colour scheme! What are the 3 boxes under 'notes'?

1. The 'SendTo' command for Run doesn't work when using LiteStep!
2. Where can I edit the contents of 'quick launch'? I am using a theme which has quick launch but can't drag new items into the list.
3. Can I use LSxCommand as a calculator?
4. Is 'send shortcut to desktop' disabled?
5. How do I add a 'show desktop' shortcut to minimise all windows, like the show desktop shortcut in quick launch?
Thanks for bearing with me. ^.^

1. dunno. maybe it's a feature of explorer. i'm sure ilmcuts or someone else smarter than me knows :P
2. that folder should be specified in your config somewhere. check the quicklaunch module's docs.
4. well, if you don't have desktop icons then there's no point in sending shortcuts there ;)
5. create a shortcut which launches the bang !MinimizeWindows (and be sure you have the latest indiestep).

1.
I don't quite get what you mean by "for Run".
2. The quick launch dir should always be "%APPDATA%\Microsoft\Internet Explorer\Quick Launch" - where %AppData% is something like "C:\Documents and Settings\username\Application Data" on 2K/XP.
5. Alternatively you could use showdesk.dll (since !MinimizeWindows doesn't work for everyone).

oh thanks for the link for quick launch. I couldn't find it before! I mean the 'run' in Windows that lets you open a file/program by typing in its location, like I used it to launch the link to the Quick Launch folder.

My "run" prompt doesn't have a "Send To" feature though...?

Typing "Send To" at the run prompt will open the "Send To" folder on Win9x systems. It's not a utility or anything, it's just a folder under c:\windows\, so when you enter its name it automatically opens. In Win 2000 and XP the Send To folder is in your user profile, so you will need to type the full path to open the folder. When I first installed LS I remember that I couldn't "run" c:\ and have it open the contents of C:\ in Windows Explorer. I expect this is the same problem rojak is having, and it is a side effect of not having Explorer as the shell. Since then I've changed file managers and it works as expected. I don't know of a solution for this if you wish to stick with Explorer as your file manager.
[FEATURE]: Separate build storage cargo capacity

Describe the solution you'd like:
"Fill Cargo Hold %" is a useful feature to prevent mostly empty runs. However, when fulfilling buy offers from player-owned build storages, it would be better if such an order were completed instead of simply ignored for all eternity. I suggest a new value "Player build storage cargo mod" (with a better name), with default 100, that acts as a percentage modification to "Fill Cargo Hold %". For example, setting FCH to 75% and the new value to 50 means build storage offers must only fill up to 37.5% of cargo space.

Describe alternatives you've considered:
A simple boolean flag instead of another slider that disables the "Fill Cargo Hold %" check for player-owned build storages. However, at worst it would mean an Incarcaruta delivering 300 energy cells.

Additional Context (if any)
The only way with the current version to ensure that build storage is served is by setting "Fill Cargo Hold %" to 0. Such a trader would either sit idle a lot (if you only use it for build storage) or would do insanely inefficient normal trades (basically rushing to every station that completes a single production cycle).

The supply mule is supposed to ignore the cargo hold slider if the trade is for build storage and finish off the ware. Is that not working?

I am pretty sure it does not work, but I will try to test it again under controlled conditions and report back.

The second clause of this if statement should let the build storage trades through: https://github.com/Misunderstood-Wookiee/Mules-and-Warehouses-Extended/blob/eef767121a549f7405c6ba248465043139516a80/aiscripts/mule.lib.evaluate_tradeoffers.xml#L135

What is $targetOfferedAmount in this case? If it is the volume required by the build storage, wouldn't that mean this check only ever returns true if the amount offered by the source is exactly the amount required by the target?

Yes, but if the amount were bigger, then it wouldn't get clipped by the $minCargoSize check. I will have to think through the logic, but I am heavily distracted at the moment. I will try to think through it later today.

Ok, and I will test it in-game again as soon as I have enough money in my current game.

It does "feel" like something looks wonky in the if statement, but I can't quite put my finger on it without sitting down and writing out a truth table.

I tested it in game and it did not work.
Scenario: HQ with small dock queued buy offers for 185 hull parts at 274 Cr. The assigned Vulture Vanguard is just sitting there, even with in-system sell offers below the buy offer price (880 hull parts for 260 Cr sell offer). I then assigned a Courier Vanguard with Fill Cargo Hold % at 25% and it still did not complete the trades.

I took a look into the code, and the code you mentioned SHOULD work:
and not ($buyoffer.buyer.isclass.buildstorage and ($tradeAmount == $targetOfferedAmount))
$tradeAmount is what we want (or can) trade: the minimum of what is affordable, what we want to buy, and available cargo space; $targetOfferedAmount is what we want to buy. Take hull parts in the above case: the trade amount is 185, because 185 is the lowest out of what we want, our cargo space (10200), and whatever is affordable.

I think the problem comes from restricting station construction trades to your own faction (which I activated so the AI would not snatch up the trade offers while I did my test). The trade is skipped if either the buy or the sell offer is faction restricted.
In this case the sell offer was Teladi, which was not allowed to fulfill the buy offer of my station. I am unsure whether this is a bug or a feature. On the one hand, I understand that you might not want to trade with a specific faction at all, in which case it makes sense to apply the restriction to the SELL offer as well as your station's buy offer. But on the other hand, that makes it impossible to prevent the AI from fulfilling your build storage buy offers while still allowing your ships to buy wares at the lowest possible price.

Thanks. I think I was not expecting the build storage trade setting to act that way. I even double-checked with a vanilla build storage trader, and he acts the same way, i.e. not finding trades as long as a faction restriction is in place for build storage trade. Anyway, my initial assumption was wrong, and even the reason why it did not work for me was intended behavior, so I am closing this. Thanks again!

I should maybe add that tidbit to the readme for the build storage.

Was going to say a simple "ignore build storage" checkbox would probably resolve this.
2023-03-15: APP 2.0.0-beta14 has been released! IMPROVED FRAME LIST: sorting on column header click, and you can now move the columns, which will be preserved between restarts. We are very close now to releasing APP 2.0.0 stable with a complete printable manual...

Astro Pixel Processor Windows 64-bit
Astro Pixel Processor macOS Intel 64-bit
Astro Pixel Processor macOS Apple M Silicon 64-bit
Astro Pixel Processor Linux DEB 64-bit
Astro Pixel Processor Linux RPM 64-bit

Help with combining Ha and OSC from a different session

Hello all - I've purchased APP and I find it quite useful for simple single- and multi-session work, gradient removal, Ha/OIII extraction, etc., when all of the sessions are of a similar nature (e.g. they use the same filter). I may not know how to ask this question very well, so I will just state what I am trying to do, and I'm looking for some clear direction (don't assume I know all of the inner workings of APP - I basically take all of the defaults, let APP do its thing, and hope it is correct).

I am using a Fuji camera with a dual-wavelength narrowband filter for the Ha signal generation, and the same setup minus the filter for the "OSC" color information. This is for nebula processing. I have two sessions of images collected with the duo-NB filter and I want to extract the Ha signal, which I know how to do. I also want to process another (third) session of data that was acquired with no filter. I'd like to use the full color image from the third session and boost the red channel with Ha from the first two sessions. I know about the Combine RGB tool, which I have used for Ha/OIII-extraction HOO-type recombinations, but I don't know how to do what I am now trying to do. I'd like to do as much of the work as possible in APP.

I know that it is possible to simply process sessions 1 & 2 together and extract the Ha signal, then perform an Adaptive Airy Disc integration of the third session separately, and then fiddle with the rotation in Photoshop and do channel manipulation and layer blending to get the result I am trying to achieve (if I can figure it out - ha!). However, I'd like to see how far I can get with APP instead of leaning on Photoshop so much (I'm not a Photoshop expert either). Can someone explain to me, without assuming that I'm a world-class APP user, how to do this? Thanks!

Edit: Perhaps this is what I am trying to do: https://www.astropixelprocessor.com/community/postid/7610/

Yes, you can follow that workflow indeed. I also think you can just load the RGB into RGBCombine and it will be separated into R, G and B. That's something I didn't try much, so that's from me remembering it used to work. 🙂 However, nowadays my workflow is more similar to what you found; that way I can nicely correct all channels before combining, which gives a better starting position for further correction as well.

Thank you Vincent...
Discourse relations such as 'contrast', 'cause' or 'evidence' are often postulated to explain how humans understand the function of one sentence in relation to another. Some relations are signaled rather directly using words such as "because" or "on the other hand", but often signals are highly ambiguous or remain implicit, and cannot be associated with specific words. This opens up questions regarding how exactly we recognize relations and what kinds of computational models we can build to account for them. In this talk I will explore models capturing discourse signals in the framework of Rhetorical Structure Theory (Mann & Thompson 1988), using data from the RST Signaling Corpus (Taboada & Das 2013) and a richly annotated corpus called GUM (Zeldes 2017). Using manually annotated data indicating the presence of lexical and implicit signals, I will show that purely text-based models using RNNs and word embeddings inevitably miss important aspects of discourse structure. I will argue that richly annotated data beyond the textual level, including syntactic and semantic information, is required to form a more complete picture of discourse relations in text.

Amir Zeldes is assistant professor of Computational Linguistics at Georgetown University, specializing in Corpus Linguistics. He studied Cognitive Science, Linguistics and Computational Linguistics in Jerusalem, Potsdam, and Berlin, receiving his PhD in Linguistics from Humboldt University in 2012. His interests center on the syntax-semantics interface, where meaning and knowledge about the world are mapped onto language-specific choices. His most recent work focuses on computational discourse models which reflect common ground and communicative intent across sentences. He is also involved in the development of tools for corpus search, annotation and visualization, and has worked on representations of textual data in Linguistics and the Digital Humanities.

February 23, 2018 @ 12:00 pm – 1:15 pm
Hackerman Hall B17
3400 N Charles St
Baltimore, MD 21218

Naturally, the literary scholar is often concerned with more context than can be conveniently displayed in a KWIC concordance, which is why most literarily oriented concordance interfaces offer hyperlinking functionality between concordances and expanded context views of the corpus. The advantage of using both views in conjunction is that potentially interesting results can be reviewed easily in the plain-text concordance, possibly with helpful highlighting functions and annotations, whereas a detailed view navigated to from this list can contain both more text, and representations that are more taxing to interpret, such as aligned facsimiles. A good example of this mode of operation can be found in the Canterbury Tales Project, which also offers special marking for variants in the collation, so that different versions of a search result can be navigated to on the fly. Although these functions have been developed largely with literary computing in mind, they are entirely applicable to corpus linguistics as well. Many linguistic domains require relatively large contexts, and many corpora correspondingly offer not only adjustable context width for concordances, but also dedicated text-length context views, which are especially appropriate for studying text-wide dependencies. The rhetorical structure annotated in the above-mentioned Potsdam Commentary Corpus, for example, cannot be adequately interpreted without very large context, and often requires reading an entire text.
Corpora consisting of short news stories or essays can also be studied at text level, using searches to retrieve texts containing interesting phenomena. This allows researchers, for instance, to study constructions typical of the beginning or end of a text, and their dependence on various features being found in, or absent from, the entire text. This means that the same corpus can be exploited by researchers in different fields, or even used to examine interdependencies between different layers (for example the effect of information structure on syntax). More and more types of annotation, often created by work-intensive manual methods, are proliferating, for example verbal argument annotations in PropBank and discourse annotations for connectives like because or although in the Penn Discourse Treebank. New research methods taking advantage of several such annotations simultaneously may reveal as yet unknown interactions between different linguistic levels.
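To make the concordance idea concrete, here is a toy keyword-in-context (KWIC) function. It is a minimal sketch of my own, not part of any corpus or tool mentioned above, and it assumes the corpus has already been tokenized into a list of strings.

def kwic(tokens, keyword, width=5):
    """Return KWIC lines: `width` tokens of context on each side of the keyword."""
    lines = []
    for i, token in enumerate(tokens):
        if token.lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            lines.append(f"{left:>40} | {token} | {right}")
    return lines

# Tiny demonstration corpus.
tokens = ("the connective because is annotated because discourse "
          "relations are often implicit").split()
print("\n".join(kwic(tokens, "because", width=3)))

A real concordancer would add the hyperlinking to expanded context views described above; the KWIC list is just the entry point.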
Variable type? This doesn't look correct?

I have the same problem. After trying various commits, it looks like this one broke it: https://github.com/hrsh7th/cmp-cmdline/commit/d8738f0104a8e2fd71e7e0ecef229423107fa11a

I've got the same problem.

@uga-rosa @hrsh7th Any chance we can revert https://github.com/hrsh7th/cmp-cmdline/commit/d8738f0104a8e2fd71e7e0ecef229423107fa11a to fix this issue?

What makes it VARIABLE is that it is defined as such. https://github.com/hrsh7th/cmp-cmdline/blob/5af1bb7d722ef8a96658f01d6eb219c4cf746b32/lua/cmp_cmdline/init.lua#L72 What would this revert mean? The number that comes right after the : represents a range, and it is correct to line up commands that can take a range (:h :range).

I go to a line all the time with :150, for example, which used to work fine but now creates a very incorrect completion list, as you can see above. Perhaps reverting isn't the right solution, but that commit did break something (as seen in the images above). Note the first image too, where it happens with non-number prefixes. I've pinned my plugin to the commit prior to this one, but would prefer not to do that if there is an appropriate fix.

It is true that there are some commands out there that do not allow a range, but it is correct in itself that the command is completed with :{number}. For example, :1w is a valid command. And isn't this issue a report that the kind is variable? I explained in my earlier reply that that is the spec. I have no idea what you think the problem is, or how reverting back to before that commit would fix it. I can't reply any further if you don't explain it to me in detail from scratch. You and I do not share the same premise.

Here's how it worked before https://github.com/hrsh7th/cmp-cmdline/commit/d8738f0104a8e2fd71e7e0ecef229423107fa11a: typing just :150 shows no completions yet; then when you type another character, it starts to complete. But now typing just :150 shows a huge list. While I understand that the large list in the second case is technically correct, in that it's showing every possible completion, I would argue that the previous behavior was more useful. Similarly, if I just type :, it doesn't show me everything it can complete with yet; it waits until I have typed one character to start completing. The way numbers complete now feels equivalent to just showing me every possible command right off the bat. At least I think the option to complete using the old behavior should be made available.

ah, I think my issue is different from the one from the original poster (though I originally thought it was the same). I can move it to its own issue, though I don't think I can move the discussion.

Let's make another issue.

I can no longer reproduce this and assume a recent update must be responsible for the fix.

The original issue is back now that I've updated to commit 8ee981b4a91f536f52add291594e89fb6645e451

Please create a new issue. The Variable type issue is not a valid issue.
My 8800GTS 512 is running extremely hot. I haven't really had any issues with it crashing due to overheating, but it still reaches obscene temperatures. I think the only reason I'm able to keep it running is because I leave the fan at 85-100% all the time, because it'll get above 90C idle otherwise. At 100% it idles at about 77-78C, and at 88% it idles at 84C. My ambient temperature around the card is usually about 65C, which is fairly high. It can get over 90C at load, sometimes up to 96-97C, although only if the fan is below 90% or on auto. While I haven't had any issues so far, this can't be good for the longevity of the card (although I've had it for two years and don't plan on keeping it much longer).

I also have a similar problem with my CPU, although it doesn't get quite that hot. I do have dust problems, due to having animals, but I regularly clean all the fans. My case is well ventilated (Lian-Li PC-61), so I don't really think that's the issue. Mainly, I was just wondering if there were any suggestions to help get the temperature down. I've recently applied thermal paste to the CPU, which helped a bit, but not as much as I would have hoped (about a 10 degree difference). I'm thinking about buying a new fan; do you guys have any suggestions? I don't really want to spend more than ~$30 on one, though. I've also just ordered some more thermal paste, so I'll see how that works.

Other relevant information: the room is usually between 18 and 22C, and there is a crapload of wires in the case that might block airflow, but I've done the best I can to secure them; they're fairly neat. The PSU I'm running is an Antec Trio 650W. I have a stock MSI 8800GTS 512, and my CPU is an AMD Athlon X2 6000+. None of my equipment is overclocked at the moment, although I have run my GPU at ~700/1750 and my CPU at ~3100 in the past.

I've got an 8800GT (Palit, I think) with the stock cooler. Temperatures are 60-65 degrees Celsius while idle and 90-95 at full load, 97 in the FurMark stress test (666MHz GPU, 950MHz memory). These are the "normal" (if you can call them that) temperatures for this card. Nothing to be done about it, really. I've got no suggestions for CPU coolers; since I have an Intel C2Q and the stock fan is enough for me, I haven't searched for anything else.

Reread the OP. He's got an 8800GTS, probably with a dual-slot cooler. From what I know those cards should not be that hot, especially with a room temperature of around 20C. To the OP: have you tried cleaning the card's cooler (not just the fan)? If you have had it for two years, there might be a lot of dust in there. I suggest opening up the cooler to check. I saw you applied thermal paste to the CPU, but have you tried doing the same for the GPU?

Sometimes those coolers get a cake of dust stuck in them and you have to use a high-pressure air compressor to get it out, or remove the plastic shroud. It's literally a dust cake. I've seen it before. Removing the cooler and reapplying paste is also a good idea. Sometimes the paste is not applied well at the factory, and I've seen some cheap paste that dries out and really doesn't do its job well. Remove the card, remove the cooler, disassemble the cooler, remove the dust, reassemble, clean the GPU, use Arctic Silver 5 thermal compound, reassemble the card and finally test. As a result the temps should be lower by at least 10-15C. By the way, cleaning the fans alone isn't good enough.
reframing the trigger warnings debate

In two recent articles I've read about trigger warnings in academia, the concern has largely been about academic freedom, with James Turk being a mega drama queen and a group of faculty justifying why they won't use them with false assumptions and poor reasoning. I do understand that the current discussion was precipitated by Oberlin saying that triggering, but non-essential, content should be removed. I can see why people have a problem with this, but they rarely stop there. It is a general, dig-in-our-heels "no one can tell me what to do, you're not the boss of me" reaction.¹

There are really two distinct things happening here: one is a real-world event where faculty were actually asked to remove content, and one is a more general debate about the use of trigger warnings in academia. But these aren't the same discussions, and the conflation of the two by pretty much every critic I've seen significantly weakens their argument.

In the case of the professors' piece, their sections on the trigger warnings themselves and the relation to disability are pretty much either factually incorrect or depressingly uncaring and unwilling to accommodate. Triggers are not restricted to PTSD. Their point about PTSD triggers being unpredictable (and thus impossible to forewarn about) is a good one. And it would matter more if triggers were something that only impacted people with PTSD. This is true of neither the academic literature on triggers nor the current discourse about them. Triggers for people with anxiety, phobias, depression, etc. can, in actual fact, be predictable. This is why there are (informal) standards about what sorts of things are commonly warned for. Moreover, triggers do not only impact people with psychological/mood disabilities. Outside of higher ed, trigger warnings are commonly used on the internet to also warn people about things like moving or flashing gifs (for people with photosensitive disabilities like epilepsy). Saying outright "we will not put trigger warnings on anything" includes situations like this.

Perhaps the most laughable (but also heartbreaking) point the professors make is when they note that PTSD is a disability and should be handled in a systematic way by the campus' disability services. This might hold more weight if these services were better funded and better supported. And this includes faculty. I can't even count the number of stories I've heard from disabled students who do all the right things (get their 'certification'², accommodation plans, etc.) only to have the faculty member outright refuse to accommodate. And acting like the marginalized person here has any real recourse against the institution is counter to reality.

One of the things I find most interesting about the article is the way that the faculty so clearly distinguish themselves from the university as institution. This might be my 'unpopular opinion' moment, but I really want to be clear here: if you are a faculty member, you are the system. Particularly when it comes to something like disability accommodations.

The reason why I'm highlighting the accommodation/disability aspect of this is because it is the important and salient point being lost in the panic about academic freedom. Academic freedom isn't a human right. However, accommodations for disability are a human right. What you say, when you say academic freedom matters more than disability accommodations, is that your privilege³ matters more than someone else's human rights.
In so doing, you really are the embodiment of institutional oppression. Because this is the message that society at large tells disabled people.

And in the hand-waving over trigger warnings, people really seem to mistake their actual purpose. Warning people about certain content that might trigger an adverse response is part of it… but it isn't the only function. One thing they serve to do is allow people to engage in certain discussions and materials in an informed and consensual way. It is a way to add context to something that may not otherwise have it (e.g., a journal article in a reading list will usually just have the citation, and titles aren't always descriptive). Yes, in some cases trigger warnings serve as a necessary shelter for people who need them, but they also empower some of those same people to engage in the classroom in a healthy fashion. It isn't about avoidance. If a person sees that a particular day/reading will include something that might potentially trigger them, they can make an informed decision concerning their current state and whether or not they can engage without significantly damaging their health. And/or they are able to prepare in advance any extra support or self-care they might need to recover from the experience (i.e., booking an appointment with a campus counsellor, asking a friend to meet them afterwards, etc.).

Part of the problem with this is that most people (again) are only thinking of themselves. With adequate warnings, students might also be able to make arrangements for classes they have after the potentially triggering one. Without any systematic approach to this, how easily do you think a student could get a professor from a different class to excuse their absence if they, say, had a panic attack and needed to take medication rendering them unable to attend the rest of their classes that day? What if they have a test on the same day as the class, and may not be able to handle both a potentially triggering class and a test on the same day?

While trigger warnings do serve the important function of protecting disabled students, traumatized students, etc. from additional harm, they also empower those same students to make good, informed decisions about their education. It creates an environment where their agency is considered real and respected. It gives them an active role in their educational experiences. But I guess the real solution is maintaining a hostile and unsafe learning environment for disabled students because ~academic freedom~.

1. Although, for the article written by the professors, points 7, 8, and 9 are good and really worth considering. But only if the conversation remains in the realm of actually removing material from courses, as with the Oberlin decision. ↩
2. To be honest, the fact that they even mention certification lets me know that they are woefully out of touch with critical disability discourse. Particularly since they are presenting this as coming from marginalized people themselves (i.e., the professors are marginalized in some way). Getting 'certified' for a disability (particularly psychological ones) can be very, very difficult. Particularly for people of colour. Are we really trying to pretend, here, that the medical industry is outside of the same structurally oppressive context as the university? ↩
3. I'm speaking very literally here: academic freedom is a privilege. Like, not even in the theoretical sense. It is a privilege that academics have and no one else. ↩
Like many mail clients, Apple's Mail app purports to provide a rich text and graphics environment in its messages. You'd like to think that setting font size is part of that, but if you have used different sizes in messages, you may have noticed that they don't work properly.

The bug is easy to reproduce: in a new message, set a series of lines to use different font sizes (in any font, that is immaterial), such as 13, 15, and 17 points. In the draft message, these will be shown scaled correctly. Send yourself that message, and when you read it in Mail, some of its fonts will be scaled differently from those shown in the draft. Above, the same message is rendered in Mail 10.3, on the left in the main message viewer, and on the right as a draft being edited. These messages use HTML format. If you open the message as sent in Safari 10.1.2, as shown above, the lines are rendered in the correct sizes.

Looking at the HTML source generated by Mail reveals part of the problem: two different elements are used to set font size in the message, along the lines of:
<font size="3" class="">
<span style="font-size: 15px;" class="">
These use two different methods for determining font size: an HTML 'scale' from 1-7, where 3 is 'normal', and a length, which is here given in pixels. Although pixels and points are not really interchangeable, at a display resolution of 72 dpi they are effectively the same, as a point is 1/72nd of an inch. It happens that in Mail's main window, the HTML scale is rendered differently from the way it is in the draft editor and Safari.

There is also a more fundamental problem here, as the <font> element in HTML has been declared "obsolete", and its usage notes read:
Do not use this element! Though once normalized in HTML 3.2, it was deprecated in HTML 4.01, at the same time as all elements related to styling only, then obsoleted in HTML5.
Current advice is to use CSS Fonts, as in the other element, which specifies the size in pixels.

So the current version of Apple's Mail app actually has multiple issues here. First, it shouldn't be generating code using the obsolete <font> element. Continuing to do so ignores the very real danger that a recipient's mail client may now, perfectly correctly, ignore that element and not set the font size correctly. Apple cannot claim that this is being done for backward compatibility, because for some font sizes Mail uses CSS Fonts, which are current, standard in HTML5, and recommended.

Next, Mail shouldn't be mixing the two different schemes for setting font size, particularly as the old HTML 1-7 scale only relates to itself, and not to absolute font sizes. To mix them is asking for this sort of rendering issue.

Finally, Mail should be rendering its main message and draft editor windows using the same engine, preferably in common with Safari. The evidence here is that Mail is using two different rendering methods, one of which handles font size better than the other.

From what seems to be a simple bug emerges evidence of several quite significant issues in Mail's internals. From this glimpse, it looks quite a mess. Thanks to Bart Hanson for alerting me to this curious bug.
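To see the mismatch for yourself outside of Mail, a throwaway script can generate both kinds of markup in one document. This is my own illustration, not taken from the bug report; the pixel sizes and the output file name are arbitrary choices.

# Generate an HTML file mixing the obsolete <font size> scale with the
# recommended CSS font-size, for side-by-side comparison in Safari or
# any HTML renderer.
sizes_px = [13, 15, 17]

lines = [f'<div style="font-size: {px}px;">CSS font-size: {px}px</div>'
         for px in sizes_px]
lines.append('<font size="3">Obsolete font element, size="3"</font>')

html = "<html><body>\n" + "\n".join(lines) + "\n</body></html>\n"

with open("font-size-test.html", "w") as f:
    f.write(html)
print("Wrote font-size-test.html")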
Can React be used commercially?
Yes. Just like Angular is supported by Google, React.js is maintained by Facebook and a community of developers. Both are open source and free to use under the MIT license.

Which is more powerful, Angular or React?
In short, if you are looking for flexibility and simplicity, it is better to use React.js. However, if you want the most efficient way to organize and boost your application with a complete tool, AngularJS remains your best solution.

What are the limitations of React JS?
Disadvantages of ReactJS:
- The high pace of development. The high pace of development is both an advantage and a disadvantage.
- Poor documentation. This is another con that is common to constantly updating technologies.
- View part. ReactJS covers only the UI layer of the app and nothing else.
- JSX as a barrier.

Why is React JS better than other frameworks?
Compared to other frontend frameworks, React code is easier to maintain and is flexible due to its modular structure. This flexibility, in turn, saves a huge amount of time and cost for businesses.

When to use React JS vs AngularJS?
Application: Angular is most widely used for large applications, like a video streaming app or a music instrument app, because of its full-blown framework nature. On the other hand, ReactJS is just a library, so it's good for SPAs (single page applications) or apps that don't require much formatting.

Should I study Angular or React?
If you are going to build a project that will have a lot of modules, then Angular will be the better choice, as it will make maintaining the project very easy. The Angular code will be much easier to understand and edit as well. Angular is a bit slower than ReactJS, so keep this in mind too.

Is ReactJS more popular than Angular in 2020?
React is the most downloaded framework, with more than triple the downloads of Angular. When we compare ReactJS vs AngularJS popularity in 2020, Stack Overflow Trends also reveals that React receives the largest share, followed by Angular. Google Trends shows a similar picture: ReactJS is catching on faster than Angular.

Should I learn Angular or React for web development?
With Angular, the choice and use of the other necessary tools in its complex mix can be problematic. According to Google Trends, React is more popular among developers than Angular. One reason may be that it is easier for beginners to learn. React is mainly used for interactive user interfaces, while Angular is a complete framework.

What is the difference between Angular and React?
React lets you manage your app code as you like, whereas Angular consists of a series of ready-to-use elements. Even if it has priority collisions and namespaces, Angular is always prepared to employ more elements. So, regarding the reuse of code, Angular is much better.

Is it true that Facebook and Google built React and Angular?
There is no doubt that Facebook and Google did not build React and Angular just to back developers across the globe. They also developed (or supported) them to run their own applications and websites on these systems.
import os

from ..utils.importing import import_module_from_source


class Regressor(object):
    def __init__(self, workflow_element_names=['regressor']):
        self.element_names = workflow_element_names

    def train_submission(self, module_path, X_array, y_array, train_is=None,
                         prev_trained_model=None):
        # Default to training on the full array when no split is given.
        if train_is is None:
            train_is = slice(None, None, None)
        # Load the submitted regressor module from the workflow directory.
        regressor = import_module_from_source(
            os.path.join(module_path, self.element_names[0] + '.py'),
            self.element_names[0],
            sanitize=True
        )
        reg = regressor.Regressor()
        if prev_trained_model is None:
            reg.fit(X_array[train_is], y_array[train_is])
        else:
            # Warm-start from a previously trained model when one is provided.
            reg.fit(X_array[train_is], y_array[train_is], prev_trained_model)
        return reg

    def test_submission(self, trained_model, X_array):
        reg = trained_model
        y_pred = reg.predict(X_array)
        return y_pred
Before we get started, let’s take a look at this definition: “A language-neutral way of implementing objects that can be used in environments different from the one they were created in.” And this one: “A platform-independent, distributed, object-oriented system for creating binary software components that can interact.” Do you think these are describing a service mesh from the 21st century? Well, they are actually definitions of Microsoft DDE (1987) and COM (1993) technologies, respectively. It’s worth noting that a lot of concepts were invented a while ago and then simply reused under different names to solve different cases. The real world is merging desktops and servers, desktop applications, web applications, and APIs into one unified environment. It seems clear to me, and to a lot of other security experts, that we have already built a lot of technologies to solve similar problems, like interprocess communication, data sharing, and data analysis. This makes it reasonable to assume that the security layer should be the same as well. There is no difference between the attack surface on endpoints, servers, and clouds. It doesn’t matter what the subject is or what the object is during the data transmission process. It could be a Win32 application, a legacy web application, a microservice, a serverless application, or something else entirely; the security requirements will be the same. That follows from the fact that all security controls ultimately aim to protect data. This is true for other resources too, of course, but let me please keep folks who have dealt with unexpected crypto miners and DDoS attacks out of this talk. Let’s focus on data as the main thing. Whatever we are building in security, from old-fashioned VLANs and role models to modern pod security policies, east-west communications security, or container isolation strategies, we always use data as a starting point. And data nowadays is everywhere inside a company: in clouds, on servers, desktops, laptops, and mobile devices. To make business faster, we need to give widespread access to this data, and that’s not a trend but a survival requirement. For CISOs and risk-management folks, it makes no difference where user data was stolen from. Whether it’s a developer machine, a QA environment, or a database, it’s all the same from a data perspective. This idea is fairly similar to the zero trust concept introduced by Forrester analyst John Kindervag back in 2010. It requires verifying every single source (object or subject) in any communication process, at every single stage. According to a 2018 CSO article, “Zero Trust is a security concept centered on the belief that organizations should not automatically trust anything inside or outside its perimeters and instead must verify anything and everything trying to connect to its systems before granting access.” So the data is similar everywhere, as we discussed above. The last difference sometimes cited between desktops and servers is end-user behavior: people with that point of view claim that desktops should be protected in a completely different way just because they are machines for human operators, unlike servers, which host services for external usage. So at this point, we have figured out that desktops, laptops, servers, and serverless applications are all similar from the data sensitivity and usage perspectives. That’s why similar application security controls and policies are relevant in the modern world.
Application security always starts with the following points:
- Data profiling, categorization, prioritization, and risk mapping (data authentication).
- Attack surface definition and entry/input point inventory.
- Mapping applications to data and inputs.

As a result, we will have a map of data types: which application uses which types of data, and which inputs those applications accept. This map allows us to implement basic controls, policies, and other restrictions; a minimal sketch of such a map follows below. Again, this is all completely unified and agnostic to platform, application type, and runtime. Once we have a map, the first goal is to build an authentication and authorization strategy and implement it. Again, according to the zero trust concept mentioned above, we should not trust anything by default. And to prove they are not just “anyone,” applications should authenticate themselves.
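As a concrete, if simplified, illustration of that map (all names, categories, and controls here are invented for the example):

```python
# Hypothetical data/application/input map; a real inventory would be far
# richer, but the shape is platform-agnostic, as argued above.
DATA_TYPES = {
    "payment_card": {"sensitivity": "high"},
    "usage_metrics": {"sensitivity": "low"},
}

APPLICATIONS = {
    "checkout-service": {
        "data": ["payment_card"],
        "inputs": ["https-api", "message-queue"],
    },
    "analytics-batch": {
        "data": ["usage_metrics"],
        "inputs": ["object-storage"],
    },
}

def required_controls(app_name):
    """Derive a baseline policy from the map: high-sensitivity data
    implies mutual authentication and deny-by-default, per zero trust."""
    app = APPLICATIONS[app_name]
    if any(DATA_TYPES[d]["sensitivity"] == "high" for d in app["data"]):
        return {"mtls": True, "authz": "deny-by-default"}
    return {"mtls": True, "authz": "allow-listed"}
```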
Can you explain to me what an independent variable is?

- To display a Gantt chart, you typically need at least a start date and an end date. If one of your data dimensions is time (years, quarters, months, weeks, days), stacked bars are not good for comparison or relationship analysis.
- DATEDIFF: compute the difference between two dates in a given time unit. XDATE: extract a date component from a date variable. WKDAY: the day of the week (1 through 7, where 1 is Sunday, not Monday).
- In R, dates are represented as the number of days since 1970-01-01, with negative values for earlier dates. Format codes include %B (month name), %y (2-digit year, e.g. 07), and %Y (4-digit year, e.g. 2007).
- Spreadsheet date and time functions create formulas that return serial numbers, a specific date or time, or the difference between dates or times; for example, one function calculates the number of days, months, or years between two dates.
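Since several of these fragments describe the same operation, a short worked example may help; this sketch uses Python's standard library rather than the SPSS or R functions named above:

```python
from datetime import date

# Difference between two dates, in the units the notes above describe.
start = date(2016, 3, 1)
end = date(2016, 7, 15)

delta_days = (end - start).days                                           # 136
delta_months = (end.year - start.year) * 12 + (end.month - start.month)  # 4
# Day of week on a 1..7 scale where 1 = Sunday (isoweekday: Mon=1..Sun=7).
weekday = start.isoweekday() % 7 + 1                                      # 3 (Tuesday)

print(delta_days, delta_months, weekday)
```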
Figure C-3 depicts the data flow for the Define Commands and Command Procedures process. Data originates in the Command Requirements and Vehicle Command Word Structure data stores, which define the source information needed to generate the specific data items that support Commanding. This information would include formatting rules, contingency conditions, verification methods, and fault recovery methods. The generated products include the Constraint DB, Command Info DB, Command Eval DB, and Command Anomaly Resolution KB data stores. The Constraint DB data store contains the constraint rules for each SV command, i.e., the conditions that must be valid before the command can be released to the SV. The Command Info DB data store contains information about each command, including formatting rules for generating the command load, command decode information, etc. The Command Eval DB data store contains the rules or procedures for evaluating and verifying command execution. The Command Anomaly Resolution KB data store contains knowledge bases needed for automated detection and resolution of commanding-related anomalies. The Command Pool data store contains command elements, such as single commands, block commands, and memory uploads, which are represented as sequences of bits that can be executed by the SV. The Timeline data store contains a description of the overall pass plan and the command activities within the plan. This information is used by Generate Command Plan to link executable elements to command plan steps and store them in the Command Processing Product data store. The following describes the processes identified in Figure C-3.

Generate Command DBs - This function builds a number of databases used to support command plan construction, evaluation, and anomaly resolution. It inputs the vehicle-specific command word structure provided by the contractor, which will likely be in some standard database format. In addition, this function inputs vehicle-specific command requirements, such as the constraints that must be satisfied before release of a particular command, or the conditions that would indicate its successful execution. This function outputs the Constraint DB, the Command Info DB, the Command Evaluation DB, and the Command Anomaly Resolution KB. The command pool is a temporary storage area for load-ready command elements.

Generate Command Loads - This function generates command loads that can be executed by an SV. Processing is initiated by the Timeline data store, which also indicates the commands for which command loads are needed. This function inputs the command format definitions, which are the formatting rules for each command. It may also process satellite software and database memory uploads. The output of this function is the command load, called the command element, which goes to the command pool.

Generate Command Plan - This function generates the command plan, the sequence of commands issued during a pass to accomplish some task. It uses the pass plan and associated information from the Timeline data store to determine the sequence of command steps and conditional steps needed to achieve the goal of the pass plan. Generate Command Plan may optionally input a Recommended Corrective Action, which is a predefined command plan from Telemetry or from Determine Command Performance. It also inputs command elements, which are executable commands from the command pool.
The outputs of this function are the command plans, each a series of steps and decisions linked to the executable command elements.
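To make the relationships between these products concrete, here is a rough sketch with entirely hypothetical type names; the document itself does not prescribe an implementation:

```python
from dataclasses import dataclass, field

@dataclass
class CommandElement:
    """A load-ready item from the Command Pool: bits executable by the SV."""
    name: str
    bits: bytes

@dataclass
class PlanStep:
    """One step of a command plan, linked to its executable element."""
    element: CommandElement
    constraints: list[str] = field(default_factory=list)  # from the Constraint DB
    verification: str = ""                                # from the Command Eval DB

@dataclass
class CommandPlan:
    """Sequence of steps and decisions generated from the Timeline pass plan."""
    pass_id: str
    steps: list[PlanStep] = field(default_factory=list)
```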
Maximum duration 86400 in SessionInfo cannot support 'Extend Session Duration'

Problem description
QoD has introduced 'Extend Session Duration' since v0.10.0, which means that the total duration of a QoD session may exceed 86400. The responses of:
GET /sessions/{sessionId}
POST /sessions/{sessionId}/extend
shall return the total duration of the QoD session. However, in the datatype 'SessionInfo' used by the responses above, the max limit of 86400 remains.

Expected behavior
Remove the max limit of 86400 in datatype 'SessionInfo'.

Alternative solution
Or define another max limit larger than 86400 in datatype 'SessionInfo'.

Additional context
The documentation of the 'Extend Session Duration' operation: "Extend the duration of an active QoS session. If this operation is executed successfully, the new duration of the target session will be the original duration plus the additional duration. The remaining duration plus the requested additional duration shall not exceed the maximum limit. Otherwise, the new remaining duration will be the maximum limit."

@emil-cheung Currently the description you have cited does not allow extending beyond the "maximum limit", which is defined as 86400 seconds in SessionInfo: The remaining duration plus the requested additional duration shall not exceed the maximum limit. Otherwise, the new remaining duration will be the maximum limit. But there is another reason to remove the "86400 seconds" limit from SessionInfo, see #249. The maxDuration will be given by the QosProfile going forward (currently we have restricted maxDuration, with a comment in the description, also to 86400 seconds, to avoid inconsistencies). maxDuration will then be the upper limit for a session duration, and "extend session duration" shall not allow extending beyond this limit. Do you agree to close your issue and continue the discussion within #249?

@hdamker Probably a bit more discussion before closing the issue. Let's take one step back to clarify our understandings. Please allow me to describe an extreme example. Let's say a developer originally created a QoD session with a duration of 86,400 seconds (already reaching the max limit). Now, 80,000 seconds have passed and he wants to extend the duration by another 80,000 seconds. At this very moment:
Elapsed time = 80,000 (s)
Remaining duration = 86,400 - 80,000 = 6,400 (s)
Requested additional duration = 80,000 (s)
Remaining duration + requested additional duration = 6,400 + 80,000 = 86,400 (s), which is still within the max limit.
Total duration = 86,400 + 80,000 = 166,400 (s), which significantly exceeds the max limit of 86,400.
My understanding was that, as remaining duration + requested additional duration is 86,400 seconds, the duration extension request is acceptable. Your understanding is that, since the total duration would be 166,400 seconds, the duration extension request is not acceptable. I am fine with either understanding, but let's clarify which one is expected among the community. If my understanding is correct, then the max limit in 'SessionInfo' should be removed; if your understanding is correct, then the API documentation should be improved, e.g., by adding a statement that the total duration shall not exceed the max limit.

Hi @emil-cheung and @hdamker,
Upon thorough review, it appears that the following formulation may be ambiguous and thus requires adjustment:
... (1) the new duration of the target session will be the original duration plus the additional duration. (2) The remaining duration plus the requested additional duration shall not exceed the maximum limit. Otherwise, the new remaining duration will be the maximum limit.
Paragraph 1: in this section, the new duration of the target session is the sum of the original duration and the additional duration.
Paragraph 2: here, the new duration is the sum of the remaining duration of the session and the requested additional duration.
In the handling of session information (SessionInfo), parameters such as "startedAt" and "expiresAt" are crucial. To ensure that the duration of a session does not exceed one day, I propose integrating the logic from the first paragraph. An example of extending a session could be as follows:

```
var newDuration = oldDuration + additionalDuration;
var newExpiresAt = oldStartedAt + oldDuration + additionalDuration;
...
if (newDuration > SECONDS_PER_DAY) {
    newDuration = SECONDS_PER_DAY;
    newExpiresAt = oldStartedAt + SECONDS_PER_DAY;
}
```

The limitation of a session to one day would be maintained. The second paragraph appears to lack coherence, as it would imply extending the duration of a session beyond a day. I trust these adjustments better align with the basic requirements.

@dfischer-tech I agree, the second paragraph is ambiguous. We might have had another semantic in mind when writing it. There are two options:
(a) maxDuration (or the limit of 86400 seconds) is absolute. The extended session duration can't go beyond this limit either.
(b) maxDuration is relative to the current time. The maxDuration will be "restarted" by extendDuration (similar to deleting the current session and creating a new one). The requirement in this option would be that "expiresAt" is never more than maxDuration into the future.
We had defined the calculation of the new duration based on the original duration (the first sentence) to avoid unclarity about the new duration. Regarding the limit for the duration, we obviously described option (b): the new remaining duration shall not exceed maxDuration. But as @emil-cheung has rightfully shown, that would mean the resulting overall duration of the session could exceed "maxDuration" (and could even be extended again without limit).
If option (a), we need to correct the second paragraph, e.g. to: The resulting duration shall not exceed the maximum duration limit. Otherwise, the new session duration will be set to the maximum duration.
If option (b), we also need to make the second paragraph clearer: The *new* remaining duration shall not exceed the maximum duration limit. Otherwise, the new remaining duration will be set to the maximum limit.
The remaining duration is calculated as 'expiresAt' minus the current time of the extendDuration API call. We also need to add examples to make this clear. We have to decide first whether we want to follow option (a) or (b), and then patch the documentation. @emil-cheung @jlurien @eric-murray @SyeddR @RandyLevensalor - which option fits your understanding and should be followed?

> (b) maxDuration is relative to the current time. The maxDuration will be "restarted" by extendDuration (similar to deleting the current session and creating a new one). The requirement in this option would be that "expiresAt" is never more than maxDuration into the future.

@hdamker I'd lean towards option (b). By treating extend-duration as a new session, we can apply the same pending/approval flow as we would for a new session. We also need to ensure that we document the behavior if the session can't be extended. This also allows a service provider to implement a policy in their business logic that rejects an extension exceeding the maxDuration of the initial session, without limiting their options. Also, treating extend-session as a new session keeps the door open to allowing other QoD session attributes to be changed in the same manner, either through a patch or a specific modify operation, such as we are using with extend-session.

My understanding on the topic was also option (b), but I see the ambiguity in the statement. Option (b) allows more flexibility, and it's probably easier for developers to understand. We may discuss whether implementations could impose further limits on extension. It's somewhat related to #257.

Decision from the QoD call, March 22nd:
- Option (b) will be followed; the documentation will be updated as described in this issue.
- In addition, the limit on duration has to be eliminated in SessionInfo => the reference to "CreateSession" has to be expanded to allow that.
- The PR for this issue will be included within the v0.10.1 patch release.
Action: PR to be created - @emil-cheung
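For the record, a small sketch of option (b)'s semantics as decided above; the function is hypothetical and not taken from the API definition:

```typescript
// Option (b): maxDuration bounds the *remaining* time from "now", not the
// total lifetime of the session, so repeated extensions are possible.
function extendExpiresAt(
  now: number,          // current time (s)
  expiresAt: number,    // current expiry (s)
  additional: number,   // requested additional duration (s)
  maxDuration: number   // from the QosProfile (s)
): number {
  const newRemaining = Math.min(expiresAt - now + additional, maxDuration);
  return now + newRemaining; // new expiresAt; total elapsed time may exceed maxDuration
}
```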
What is a control system? A control system is a group of electronic, mechanical, pneumatic, hydraulic, etc. components used together to achieve a desired goal. To be considered a control system, it must have at least three essential elements: a controlled variable, an actuator, and a reference point (set-point). For example, to fill a 500-liter tank with water, we need a hydraulic pump, a water intake, and the electronic elements for turning the system on and off. In this case, the reference point is how full you want the tank.

What is an open loop system? An open loop control system is characterized by not receiving any information or feedback on the state of the variable. These systems are usually used when the variable is predictable and a wide margin of error is acceptable, since you can calculate in advance the time needed, or the number of times the cycle must be repeated, to complete the process. Example: open loop.

Parts of an open loop. There are three basic elements that make up an open loop control system:
- Controller: responsible for processing the input signals and making a decision to send to the correction element.
- Correction element: the element that produces a change in the process. Usually this block refers to the actuator, since it has the ability to make physical changes in the process.
- Process: also known as the plant; all the characteristics of the process, for example how long it takes to perform, or how many times the same procedure needs to be done, etc.

What is a closed loop system? This system is more complete, since it receives information on the states the variable is taking. This feedback is achieved by placing sensors that send information from key points in the process, so that the system can act autonomously. Example: closed loop.

Parts of a closed loop. This system has the main elements of the open loop (controller, correction element, and process) and includes two more:
- Comparator: receives feedback information on the changes the process is undergoing and generates an error signal describing the current state of the variable with respect to the reference point, which it sends back to the controller so it can make a new decision. The error signal tells us whether the variable has reached the reference point or not; in more complex systems it can also tell us how much is left to reach the goal.
- Measurement elements: usually sensors that measure system information and feed it back to the comparator.

Open loop control system. We will use the same example of filling a 500-liter tank with water. To solve this with an open loop, we need to know how many liters of water we fill per second; in this case we fill 1 liter every 5 seconds. So, based on that calculation, we need a control circuit that keeps a hydraulic pump active for 2,500 seconds (about 41.7 minutes) and deactivates it after that time. But as we can see, this system is not very robust, since we cannot know whether the tank actually filled: if the system undergoes a variation, such as a lower flow rate, it cannot be detected, and the tank will not fill.

Closed loop control system. Using the same example, in this system it does not matter how much time passes, since level sensors are installed to provide feedback, stopping the system when the tank is full or starting it when the level is detected to be below the minimum.
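A minimal code sketch of the closed-loop tank example may make the difference concrete; the function and threshold names are assumptions for illustration:

```python
# Closed loop: decisions come from the sensor, not from a precomputed time.
def closed_loop_fill(read_level, pump_on, pump_off, setpoint=500, minimum=450):
    level = read_level()      # feedback from the level sensor (liters)
    if level >= setpoint:
        pump_off()            # reference point reached: stop filling
    elif level < minimum:
        pump_on()             # below minimum: start refilling

# An open loop version would instead run the pump for a fixed 2,500 seconds
# and stop, with no way to notice a lower flow rate or an already-full tank.
```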
$0 to $10,000 dropshipping complete e-book: https://fvrr.co/3nlKxg2
Get started with a plug-and-play Shopify store: https://fvrr.co/38VKqiS
#money #business #hustle #entrepreneur #success #motivation

Walmart is focusing on sustainable sourcing and climate-conscious products with their 'Built for Better' program.

Join our Shopify master class and start earning anywhere in the world. Learn and Earn. Contact me now: +94 767 891 892
#shopify #shopifyonline #shopifymasterclass #earnfromhome #workfromhome #smallbizsolutions

For most, that's more than a month's salary! NexTech AR Solutions CEO Evan Gappelberg joined Steve Darling from Proactive to share news that the company has announced what they believe is the first true self-service AR-for-eCommerce SaaS platform. Gappelberg told Proactive he feels most retailers want try-before-you-buy, and this will give consumers that ability. In August, NexTech served 330,000+ AR experiences, which translates into an annual run rate of almost 4 million.

In this video, I show you one of my favorite methods of gaining new customers for free! Using this free traffic source allows me to gain exposure for all of my brands and make consistent sales, all without spending a penny on paid advertising. This should work for everyone, regardless of customer count, store niche, etc. If this video helps you out in any way, please like and subscribe, as there is PLENTY more to come! You can reach out to me at email@example.com with any business inquiries. Thank you so much for watching! Let's Connect!
Ecom Facebook Group: https://www.facebook.com/groups/673045979953882
My Instagram: https://www.instagram.com/alex.shenton/
Ecommerce Resources: https://www.instagram.com/ecom.advantage/

Trade Me warns users it will ban people for breaking lockdown rules.

Make Money by Selling eCommerce Websites: a step-by-step tutorial to create an online store.
Signup on Hostinger: https://bit.ly/3ecTJ14
7% Discount Code: LETSUNCOVER
Create and Sell a WordPress Website Online: https://youtu.be/Cw764wUA_ns
Fiverr Methodology 2.0: https://bit.ly/3hR5NWy
In this tutorial, I will explain how you can create an e-commerce website using WooCommerce and sell it online. There is huge demand for ecommerce websites online, and you can make money by learning this skill. We will use WordPress and WooCommerce to create an online store. What you will learn in this video:
1 - Hosting and domain setup
2 - Installing a theme on your website
3 - Install the WooCommerce plugin
4 - Astra theme setup
5 - How to add a new product to a WooCommerce store
6 - Import an ecommerce template
7 - Set up your online store
8 - Change your currency
9 - Set up your payment method in WooCommerce
10 - How to change the look of your website
11 - Editing your website through Elementor
This tutorial is in the Urdu/Hindi language.
Business Email: ➜ Letsuncover@yahoo.com
Website: ➜ https://letsuncover.net/
Instagram: ➜ https://www.instagram.com/theletsuncover/
Twitter: ➜ https://twitter.com/Lets_Uncover
Facebook: ➜ https://www.facebook.com/letsUncover
Facebook Group: ➜ https://www.facebook.com/groups/letsuncover
#onlinestore #ecommerce #makemoneyonline

Wellnex Life's George Karafotias introduces the Australian health and wellness firm to Proactive's Andrew Scott. The company recently signed a licensing agreement with Performance Inspired, a leading nutrition and supplement brand in the USA. Karafotias says its growth plans include opportunities in e-commerce and cannabis, as well as international expansion.
In this video I sit down with my accountant to discuss pensions… because I don't have one and I wanted to see if John Holiday can change my mind. Do you have a pension? Are you a business owner? Who needs a pension and who doesn't? Let me know your thoughts in the comments below.
Find out more from John here: https://www.pocknells.co.uk/
SUBSCRIBE: https://www.youtube.com/channel/UCTm2gK928YuBSEU0lvdFJoA?sub_confirmation=1
Try Entrepreneurs University 14-Day FREE Trial Here: https://jamessinclair.net/
Subscribe to our Podcast Channel here: https://www.youtube.com/channel/UCAnwjM8NoHx_Bz4fBpm8btQ
FOLLOW ME
Instagram: @_jamessinclair
LinkedIn: James Sinclair
Want to know more? Visit my website: https://jamessinclair.net/
In this debate-style video I discuss with my accountant why I don't have a pension. As a business owner and entrepreneur who likes to leverage my money and make it work as hard as possible, I just don't like the idea of putting the money into a pension. However, I do know the importance of pension planning; I would rather invest the money into other areas to plan for my future. My accountant shares some tax-saving tips with me and why these points should be considered. Should I listen and take these tips on board? Find out more in this video.

HempFusion Wellness Inc CEO Jason Mitchell took Proactive's Stephen Gunnion through second-quarter results, which show an 84% rise in revenue to $1.2 million for the three months to end-June from the same period a year earlier. Mitchell told Proactive that the company benefitted from strong growth in consumer e-commerce revenue and its leading regulatory preparedness, which position it well for the future.

Dropshipping in 2021 is not even close to over! In this video I go over a pretty decent month I had dropshipping on Shopify in 2021. I cover everything from the winning product to the Facebook ad strategy and metrics, with an in-depth look at the Shopify analytics. At the end we break down the cost of everything and how much profit I was left with. Also, stop working those 9-5s that you hate.
➜ Want To Apply For My Mentorship Program? stanwith.me/austinrabin
➜ Get My 6-Figure Dropshipping Course With The Discord Group 👇 stanwith.me/austinrabin
➜ Follow Me On My Socials For More Tips:
https://www.instagram.com/austin.rabin
https://tiktok.com/@austinrabin
➜ Follow Our Travel Vlog Channel: https://www.youtube.com/channel/UCwpm…
Don't forget to subscribe and turn on the bell notifications to join me on my e-commerce journey.
Introduction: 0:00-0:52
Giveaway Results: 0:53-1:42
Shopify Month Results: 1:43-5:40
Facebook Ad Results: 5:41-8:35
Facebook Ad Interest Targeting: 8:36-9:49
Product Reveal: 9:50-10:46
Profit And Margins: 10:47-13:20
Mentorship: 13:21-13:50
Outro: 13:51-14:12
"Save as" does not work for .ipynb Bug: Notebook Editor, Interactive Window, Editor cells Steps to cause the bug to occur Open existing .ipynb file Press ctrl + shift + s or access (save as) from command (ctrl + shift + p) Actual behavior Nothing happens Expected behavior Pop-up should open asking where to actually save the file Your Jupyter and/or Python environment Please provide as much info as you readily know Jupyter server running: Local Extension version: 2019.11.50794 VS Code version: 1.42.0-insider Setting python.jediEnabled: false Python and/or Anaconda version: 3.6.8 OS: Linux (xubuntu): Virtual environment: conda/venv Microsoft Data Science for VS Code Engineering Team: @rchiodo, @IanMatthewHuff, @DavidKutu, @DonJayamanne, @greazer Sorry but a lot of the commands that VS code supports just aren't currently supported in the notebook editor. See this issue for more information: https://github.com/microsoft/vscode-python/issues/7244 I'm not sure it has to do with shortcut only as even from the "commands" I can't access it. Thanks anyways, will look at the coming updates :) It's related to our not being a real 'TextDocument' so VS code does not know that it needs to activate the command. This API when it's done should alleviate at least Save As problems: https://github.com/microsoft/vscode/issues/77131. Once VS code ships that API, our window will be treated as something that should be saveable. not sure you want to wait for that based on the last discussion I had with @mjbvz here: https://github.com/microsoft/vscode/issues/86802#issuecomment-566773351 @greazer I see you closed this and all dups. Has this been implemented in the latest vscode release? It's not working on version 1.45 (April 2020) Not working on version 1.46. Version: 1.46.0 Commit: a5d1cc28bb5da32ec67e86cc50f84c67cc690321 Date: 2020-06-10T08:59:04.923Z Electron: 7.3.1 Chrome: 78.0.3904.130 Node.js: 12.8.1 V8: 7.8.279.23-electron.0 OS: Linux x64 4.15.0-58-generic @greazer @rchiodo should it be reoppened even though the solution does not appear to be responsability of your team ? I also cannot save my jupyter notebook files. It doesn't even autosave. (See screenshot with asterix above the .ipynb which shows that the file isn't saved.) I can't even push changes I made to a team repository on Azure Dev Ops. To address this issue, I have to copy and paste my entire notebook to a new window and save those changes as a new file. It's very onerous. @PCstumbleine you have to save from the save button on the window. Or by hitting CTRL+S. As stated above, VS code doesn't know about our notebook as being an editor (not yet anyway - when this API is ready it will https://github.com/microsoft/vscode-python/issues/10496) Clicking the save button doesn't do anything for me. @LLTTDAY are you talking about this button here? Is the button enabled when the file is dirty (asterisk shows up)? Yes, the button's there but file stays dirty after clicking it--just no apparent response to the click. In a different file, open at the same time, the button works fine. When I closed the file the most recent time I got a prompt to save, which I accepted and which apparently worked. I lost a bunch of changes last week after closing a file I wasn't able to save with the button. I don't remember if I was prompted on closing it. @LLTTDAY your bug sounds like this one: https://github.com/microsoft/vscode-python/issues/12562 Can you provide a full console log in that issue? Thanks. Yes that sounds right. 
I came here too because I tried Save As as a way around that issue. But I also have the same multiple-editors-opening problem mentioned there. I will dig up the log and add it in 12562. Thank you!

We are still having this issue; I'll add to 12562 as well.

We're still having the issue... Save As does nothing for .ipynb.

Save As is not supposed to work yet. Our custom webview is not understood by VS Code, so Save As doesn't understand it's a file. However, there are workarounds. We have two new ways to open a notebook. One of them is here: https://devblogs.microsoft.com/python/notebooks-are-getting-revamped/ The other one can be enabled with this setting (it still has some issues, though):

```json
"python.experiments.optInto": [
    "CustomEditorSupport - experiment",
],
```
<?php

namespace KiboIT\VATVerification;

class TestVATNumber extends VATNumber {}

class VATNumberTest extends \PHPUnit_Framework_TestCase
{
    public function testBuildFromConstructor()
    {
        $code = 'NL';
        $number = '803851595B01';
        $vat = new VATNumber($code, $number);
        $this->assertEquals($code, $vat->getCountryCode());
        $this->assertEquals($number, $vat->getNumber());
    }

    public function testBuildFromString()
    {
        $code = 'NL803851595B01';
        $vat = VATNumber::fromString($code);
        $this->assertEquals('NL', $vat->getCountryCode());
        $this->assertEquals('803851595B01', $vat->getNumber());
    }

    public function testExceptionOnBadCountryCode()
    {
        $this->expectException(\InvalidArgumentException::class);
        new VATNumber('N', '123');
    }

    public function testExceptionOnBadNumber()
    {
        $this->expectException(\InvalidArgumentException::class);
        new VATNumber('NL', '1');
    }

    public function testSanitizeAndBuildFromString()
    {
        $vat = VATNumber::sanitizeAndBuildFromString('NL 8038 51595,B01');
        $this->assertEquals('NL', $vat->getCountryCode());
        $this->assertEquals('803851595B01', $vat->getNumber());
    }

    public function testToString()
    {
        $vat = VATNumber::fromString('NL803851595B01');
        $this->assertEquals('NL803851595B01', $vat->toString());
        $this->assertEquals('NL803851595B01', (string)$vat);
    }

    public function testStaticFactoryMethods()
    {
        $this->assertInstanceOf(TestVATNumber::class, TestVATNumber::fromString('NL123456'));
        $this->assertInstanceOf(TestVATNumber::class, TestVATNumber::sanitizeAndBuildFromString('NL123234234'));
    }
}
Last Wednesday (08.27.2008), I attended a presentation given by local Ruby "hero" Joe O'Brien. He was presenting techniques for meta-programming in Ruby. Meta-programming is a general topic which applies to many languages; however, Ruby provides some very basic methods which result in powerful meta-programming tools for creating dynamic run-time functionality and for enhancing the base language. Joe started the presentation with a run-down of some of the meta-programming methods exposed by the base object in Ruby.
- send--By using the send method with a string value (method name) and a list of parameters, you can dynamically choose a method to call. This method is provided in conjunction with the message-passing backbone used by the Ruby interpreter.
- method_missing--Implementations of the method_missing message handler can be used to handle messages which are not currently understood by the receiving object. This is an expensive method to use, but it can be used to add new functionality in response to an unknown message. ActiveRecord in Rails uses this method to create finder methods when a search is performed.
- eval--Evaluate a string value as arbitrary Ruby code. User beware: since this method takes a string argument as the code to be evaluated, any code in that string is not checked by the interpreter until the eval statement is executed.
- define_method--Pass a string value (method name) and a block in order to define a new method whose body is the given block. This method takes a block as a parameter and is safer to use than the eval method.
- methods--Returns an array of the methods currently defined on an object. By calling methods on an object and subtracting the result of calling methods on that object's superclass, you can easily get a list of the methods defined only within that object.
- attr_reader--The attr_reader method is used to make a private field in an object readable by other objects.
- attr_writer--The attr_writer method is used to make a private field in an object settable by other objects.
- attr_accessor--The attr_accessor method is used to make a private field in an object both readable and settable by other objects.

Example: Declarative Tests and Home-Grown Behavior Driven Development
Joe, being a "test-first bigot" (his words, not mine), was uncomfortable with the idea that it took him nearly 10 lines of code to test one ActiveRecord has_many association. He also expressed his dislike for creating test methods whose names look like test_a_description_of_the_test. His iterative approach to creating a declarative version of this test method started with the original 10 lines of code in his test method. He extracted that method into a method on the Test::Unit::TestCase class, test_that, which took three arguments: the model under test, the expected has_many association class, and a description of the test as a string. These parameters were used in conjunction with the define_method method to create a new method at run-time which executed the original code. Using this technique, Joe stated that he was able to reduce the tests for many of the ActiveRecord capabilities to one-liners.
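Here is a minimal sketch of what such a test_that might look like; the names and assertions are my assumptions, not Joe's actual code:

```ruby
# Declarative has_many test via define_method, roughly as described above.
class Test::Unit::TestCase
  def self.test_that(model_class, association, description)
    # Generate a named test method at class-definition time.
    define_method("test_#{description.gsub(/\s+/, '_')}") do
      reflection = model_class.reflect_on_association(association)
      assert_not_nil reflection, "#{model_class} should declare #{association}"
      assert_equal :has_many, reflection.macro
    end
  end
end

# Usage: one line per association instead of ~10 lines of setup.
# test_that Order, :line_items, "an order has many line items"
```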
hardware: prettify config output on 6/8-core CPUs
[x] Have you followed the guidelines in our Contributing document?
[x] Have you checked to ensure there aren't other open Pull Requests for the same change?
[x] Have you added an explanation of what your changes do and why you'd like us to include them?
[ ] Have you written new tests for your changes?
[x] Have you successfully run brew tests with your changes locally?

Sadly, some of our users (and even maintainers) have to live with a less beautiful brew config output:
CPU: 8-core 64-bit skylake
Others (like me) can enjoy the beauty of transforming the number of cores into a word:
CPU: quad-core 64-bit ivybridge
This PR fixes this discrepancy for the majority of our users for the foreseeable future.

> Sadly ... 8-core 64-bit skylake

If it makes you 😢 I'll happily swap it for a worse CPU 😉

Inline note, but 👍 to the general idea.

Why do I feel the current output is a little misleading? brew config reports 8-core 64-bit haswell to me. But in reality, my CPU only has four cores, with eight hyperthreads.

> If it makes you 😢 I'll happily swap it for a worse CPU 😉

Note that I wasn't quoting from myself there. I'm still working with my 1st-generation rMBP, which I'd happily upgrade if I were offered a new one. Just came across that line in some recently closed issue. 😉

> Why do I feel the current output is a little misleading? brew config reports 8-core 64-bit haswell to me. But in reality, my CPU only has four cores, with eight hyperthreads.

Same situation with my quad-core 64-bit ivybridge. It's actually a dual-core with hyper-threading. Haven't looked into whether there's some easily accessible information that allows extracting this detail, so that we can report the number of physical cores. HOMEBREW_MAKE_JOBS="2" is how I cope with this discrepancy, as otherwise my system becomes completely unusable during a longer compile if Homebrew tries to use as many cores as it believes there to be.

> Same situation with my quad-core 64-bit ivybridge. It's actually a dual-core with hyper-threading. Haven't looked into whether there's some easily accessible information that allows extracting this detail, so that we can report the number of physical cores.

Apparently it is readily available:

```
$ sysctl machdep.cpu.core_count machdep.cpu.thread_count
machdep.cpu.core_count: 2
machdep.cpu.thread_count: 4
```

Haven't researched how far back (in terms of OS X releases) this information is available, but we can always fall back on hw.ncpu, which we're currently using, if the above two are unavailable.

I think we could make it two methods, Hardware::CPU.cores and Hardware::CPU.threads, and use cores to display config and threads to control jobs. Also, from the manpage:

```
hw.ncpu    The number of cpus. This attribute is deprecated and it is
           recommended that hw.logicalcpu, hw.logicalcpu_max,
           hw.physicalcpu, or hw.physicalcpu_max be used instead.
```

I don't think we should use machdep.cpu.core_count and machdep.cpu.thread_count. The reason, from the manpage:

```
CTL_MACHDEP    The set of variables defined is architecture dependent.
               The following variables are defined for the i386 architecture.
```

As someone likely using an 8-core 64-bit haswell for the next couple of years at least, unless I lose control over my enjoyment of shiny things, I applaud the stylistic consistency of this PR 😉. Though I agree it's not "really" an 8-core CPU, and Apple doesn't report it as such in the system report. :+1: on fixing the physical vs virtual cores display discrepancy.
Yeah, let's use hw.physicalcpu and hw.logicalcpu and separate cores/threads methods as xucheng suggests. And maybe call it "logical_cores" or "vcores" instead of "threads", to avoid confusion with OS or program execution threads?

> And use cores to display config, and threads to control jobs.

Maybe we should be using the physical core count instead of the thread count to set HOMEBREW_MAKE_JOBS. Most of the parallelizable steps in a build are going to be CPU- and memory-bandwidth-bound, and not spending much time in I/O or idle. In that scenario, leaning on hyperthreading may not help much, and may in fact make overall throughput worse, because you're incurring more context switches and CPU cache pressure. Hyperthreading is more appropriate when you have many threads which may sometimes be idle or blocked on I/O but want to swap in quickly to respond to events (like a web server or database), or when responsiveness is more important than throughput, like in a GUI. I don't have any benchmarks to support that, but it shouldn't be hard to make some. That should probably be in a separate PR, though, since it's a functional change and not just a display fix.

> And maybe call it "logical_cores" or "vcores" instead of "threads", to avoid confusion with OS or program execution threads?

Well, not "vcores", since that's terminology used in VM provisioning, like in VMware Fusion, so that term is overloaded too.

Thanks everyone for the discussion! The spin-off regarding logical vs. physical CPU cores was probably more valuable than the original contribution. Anyone, feel free to pick up on the discussed stuff. Otherwise I'll probably get to it at some point, but it doesn't feel like an urgent change at the moment.

Do we need to add deca, dodeca, hexadeca, ... for new Intel CPU series?

@Luavis When they ship in official Apple hardware: sure. Could you try and open a pull request? This document should help, and we're happy to walk you through anything else. Thanks!
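For concreteness, a sketch of the cores/threads split discussed above might look like this (hypothetical; not necessarily the code that eventually landed in Homebrew):

```ruby
# Physical vs logical core counts via the sysctl keys recommended above,
# with a conservative fallback if the key is unavailable.
module Hardware
  module CPU
    def self.cores
      `/usr/sbin/sysctl -n hw.physicalcpu`.to_i
    rescue
      1
    end

    def self.threads
      `/usr/sbin/sysctl -n hw.logicalcpu`.to_i
    rescue
      1
    end
  end
end

# brew config could then describe Hardware::CPU.cores, while make jobs
# could be derived from either count, per the separate-PR discussion.
```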
Issue with k8s.io/docs/tutorials/stateless-application/hello-minikube/
Cannot launch "minikube dashboard"; the error message is:
Could not find finalized endpoint being pointed to by kubernetes-dashboard: Error validating service: Error getting service kubernetes-dashboard: services "kubernetes-dashboard" not found
This is a...
[ ] Feature Request
[x] Bug Report
Problem:
Proposed Solution:
Page to Update: http://kubernetes.io/...

Just to make sure, I also tried the latest version of Minikube, but I can't launch the dashboard via the minikube dashboard command.

Installed the latest versions of minikube and kubectl; still can't get it to work.

I removed the team/katacoda tag, since this isn't related to the browser-based tutorial sets.

@jackymail - could you verify what version of minikube you're running locally (using the command minikube version), the output of minikube status, and the output of minikube addons list? I have a sneaking suspicion that the dashboard addon simply hasn't been added, but either way this will help us debug what's happening.

Thanks Dude, following are the outputs of the commands you mentioned.
1: minikube version
minikube version: v0.22.1
2: minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at <IP_ADDRESS>
3: minikube addons list
addon-manager: enabled
registry: disabled
registry-creds: disabled
dashboard: enabled
default-storageclass: enabled
kube-dns: enabled
heapster: disabled
ingress: disabled
Hope it helps.

Thanks @jackymail - so far, that's looking good. Can you go ahead and run a few commands via kubectl as well, please?
kubectl get services --namespace kube-system
kubectl get pods --namespace kube-system
The second should include the full name of the pod that's running the dashboard; once you have it:
kubectl logs pod $(kubectl get pods --namespace kube-system | grep dashboard | cut -d' ' -f1)
(sorry that last one is arcane - I'm sure there's a better way of doing this, but I wanted to get the log output from the pod that's running the dashboard)

Thanks!!! There is no running pod currently; following are the outputs:
1: kubectl get services --namespace kube-system
NAME                   CLUSTER-IP    EXTERNAL-IP  PORT(S)  AGE
kubernetes-dashboard   <IP_ADDRESS>               80/TCP   7d
2: kubectl get pods --namespace kube-system
NAME                                    READY  STATUS             RESTARTS  AGE
kube-addon-manager-minikube             0/1    ContainerCreating  0         7d
kubernetes-dashboard-3313488171-mtv6w   0/1    ContainerCreating  0         7d

The STATUS messages there lend a bit of a hint - unless you JUST started minikube in the past minute or something, they should both be well past ContainerCreating, so I'd guess Minikube is having some trouble getting those containers up and running. You can get some better details on what might be happening (or not happening) with the command:
kubectl --namespace kube-system describe pod kubernetes-dashboard-3313488171-mtv6w
Is your laptop running in a location where you may require a PROXY to get access to the internet to pull images? That might be one consideration here - either way, hopefully the command above will show more detail about why the dashboard container hasn't yet been fully created.

@jackymail 👋 Wanted to see if you were able to work through the issues with containers getting activated? Since this isn't an issue about the documentation (or at least doesn't seem to be), I'm going to go ahead and close this issue for now; feel free to re-open if there is a specific documentation issue that we can resolve.
Sorry, still not working:
kubectl --namespace kube-system describe pod kubernetes-dashboard-3313488171-mtv6w

```
Name:           kubernetes-dashboard-3313488171-mtv6w
Namespace:      kube-system
Node:           minikube/<IP_ADDRESS>
Start Time:     Tue, 12 Sep 2017 23:16:23 +0800
Labels:         k8s-app=kubernetes-dashboard
                pod-template-hash=3313488171
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kubernetes-dashboard-3313488171","uid":"567ae196-97cd-11e7-a...
Status:         Running
IP:             <IP_ADDRESS>
Created By:     ReplicaSet/kubernetes-dashboard-3313488171
Controlled By:  ReplicaSet/kubernetes-dashboard-3313488171
Containers:
  kubernetes-dashboard:
    Container ID:   docker://b022e1544a626458951137ea2326cb43f7e5b8c24a457541d8252b3dc4bc9950
    Image:          gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
    Image ID:       docker://sha256:691a82db1ecd12bf573b1b9992108a48e0d1a8640564c96d4f07e18e69dd83e6
    Port:           9090/TCP
    State:          Running
      Started:      Wed, 11 Oct 2017 19:22:22 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Wed, 20 Sep 2017 12:55:05 +0800
      Finished:     Wed, 20 Sep 2017 14:08:37 +0800
    Ready:          True
    Restart Count:  1
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-l801k (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  kubernetes-dashboard-token-l801k:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-l801k
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node-role.kubernetes.io/master:NoSchedule
Events:
  FirstSeen  LastSeen  Count  From               SubObjectPath                          Type    Reason                 Message
  2m         2m        1      kubelet, minikube                                         Normal  SuccessfulMountVolume  MountVolume.SetUp succeeded for volume "kubernetes-dashboard-token-l801k"
  2m         2m        1      kubelet, minikube                                         Normal  SandboxChanged         Pod sandbox changed, it will be killed and re-created.
  2m         2m        1      kubelet, minikube  spec.containers{kubernetes-dashboard}  Normal  Pulled                 Container image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3" already present on machine
  2m         2m        1      kubelet, minikube  spec.containers{kubernetes-dashboard}  Normal  Created                Created container
  2m         2m        1      kubelet, minikube  spec.containers{kubernetes-dashboard}  Normal  Started                Started container
```

Hi @jackymail, based on the extended output you just posted, it looks like the dashboard did eventually get created and appears to be running. If you invoke minikube dashboard now, are you still unable to access it? It could have taken quite a while to start up: when you start minikube, the instance of Kubernetes within that VM downloads images from gcr.io (Google's container registry), and if your bandwidth is constrained, or your machine is heavily taxed for other reasons, that can take quite a bit of time. Kubernetes will keep working until it gets things where it wants them to be. In the output above, there are also references to multiple liveness probe failures, which hints that the dashboard wasn't available and ready for some time, but as of 11 Oct it was up and running.
How do I handle silent push notifications when the application is in the inactive state on iOS 9?
I have implemented silent push notifications, so the didReceiveRemoteNotification method is called when the application is in the inactive state on iOS 9. There are several cases when the application is in the inactive state:
1. When the user taps on a particular notification.
2. When a call or message is received.
3. When Notification Center or Control Center is open.

```objc
- (void)application:(UIApplication *)application
    didReceiveRemoteNotification:(NSDictionary *)userInfo
          fetchCompletionHandler:(void (^)(UIBackgroundFetchResult))completionHandler
{
    if (application.applicationState == UIApplicationStateInactive) // Inactive state
    {
        [self RedirectScreenBasedOnNotification:self.userInfoDic]; // Screen redirection code
    }
}
```

So how can I handle a silent notification when the app is in the inactive state? The problem I face is that when Notification Center is open and a notification arrives, the redirection runs, but I want to stop that.
Notification payload:

```
aps = {
    alert = "Test Dev 5 startd following you ";
    "content-available" = 1;
    "link_url" = "https://raywenderlich.com";
    message = {
        friend = {
            email = <EMAIL_ADDRESS>
            name = "Test Dev 5";
            photo = "";
            "user_id" = 27;
        };
        id = 3;
        "is_business_sent" = 0;
        message = "Test Dev 5 startd following you ";
    };
    sound = default;
}
```

Thanks in advance.

Please update your question with a representative notification payload.

@quellish I have updated the question; please check it and help me.

Silent push notifications do not trigger user interactions. When a silent notification payload includes keys for user interaction, things go wrong: iOS can't reason about whether the intent is to present something to the user, or to keep the notification silent and handled without user interaction. Sometimes the silent notification may work; other times it may be presented like a normal notification with user interaction. It can be one or the other, not both.

If the silent push key content-available is present in the aps payload, the keys alert, sound, and badge should not be. You can use my Push Notification Payload Validation Tool to check the content of your notification. The payload you posted in your question has several problems: the aps key should only contain Apple keys defined in Generating Push Notifications. All of your custom keys and values should be outside the aps object.

application:didReceiveRemoteNotification:fetchCompletionHandler: will only be called for silent push notifications. If the notification payload contains both content-available and one or more of alert, sound, or badge, iOS will not know which method to call, and you may see inconsistent behavior.

If you are just trying to show a non-silent notification, you do not need to implement application:didReceiveRemoteNotification:fetchCompletionHandler:. Instead, implement application:didReceiveRemoteNotification: for iOS 9 and userNotificationCenter:willPresentNotification:withCompletionHandler: for iOS 10 and later.

As far as silent notifications and the inactive application state are concerned, there is nothing special to be done here. Silent notifications are intended to 'hint' to the application that it should refresh content. When a silent notification is received, the application is required to process the content update within 30 seconds and then call the fetch completion handler. When iOS executes the fetch completion handler, it takes a new restoration snapshot of the updated UI. This happens even when the application is inactive.
If the app is in the foreground and Control Center is opened, the application goes to the inactive state. During that time, if any notification arrives with the above payload, the didReceiveRemoteNotification method is called and executes the RedirectScreenBasedOnNotification method. But I want to stop that. How is that possible?

You can add your code inside this if condition:

```objc
if (UIApplication.sharedApplication.applicationState != UIApplicationStateInactive) {
    // Write your code here; it will only run when your app is not in the inactive state.
}
```
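Putting that suggestion together with the handler from the question, the guard might sit like this; this is a sketch reusing the question's method names, not verified against the asker's project:

```objc
- (void)application:(UIApplication *)application
    didReceiveRemoteNotification:(NSDictionary *)userInfo
          fetchCompletionHandler:(void (^)(UIBackgroundFetchResult))completionHandler
{
    // Skip redirection while inactive (e.g. Notification Center or
    // Control Center is open), per the condition suggested above.
    if (application.applicationState != UIApplicationStateInactive) {
        [self RedirectScreenBasedOnNotification:userInfo];
    }
    // Always call the completion handler within 30 seconds, as noted earlier.
    completionHandler(UIBackgroundFetchResultNoData);
}
```

Note that a tap on a notification also puts the app in the inactive state briefly, so this guard suppresses redirection in that case too; distinguishing the two scenarios reliably requires more than the application state alone.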
getContentPane method in JFrame class
What is the use of the getContentPane method in the JFrame class? I googled it but I can't find an appropriate answer.

```java
class MainFrame extends JFrame {
    public MainFrame(String title) {
        super(title);

        // Set layout manager
        setLayout(new BorderLayout());

        // Create Swing components
        JTextArea textArea = new JTextArea();
        JButton button = new JButton("Click me!");

        // Add Swing components to content pane
        Container c = getContentPane();
        c.add(textArea, BorderLayout.CENTER);
        c.add(button, BorderLayout.SOUTH);

        JButton button1 = new JButton("Click me again!");
        add(button1, BorderLayout.NORTH);

        button.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                textArea.append("Hello\n");
            }
        });

        button1.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                textArea.append("Hello\n");
            }
        });
    }
}
```

What is the use of the getContentPane method in the JFrame class? In this case, none. Since Java 1.5, add has automatically added components to the content pane, so

```java
c.add(textArea, BorderLayout.CENTER);
```

can be written as

```java
add(textArea, BorderLayout.CENTER); // similar to how button1 is handled
```

There are instances where it is still useful to get the content pane, such as

```java
setLayout(new BoxLayout(getContentPane(), BoxLayout.PAGE_AXIS));
```

Then why do we have such a method? What is its use? Please give me an appropriate example.

A container has several layers in it. You can think of a layer as a transparent film that overlays the container. In Java Swing, the layer that is used to hold objects is called the content pane. Objects are added to the content pane layer of the container. The getContentPane() method retrieves the content pane layer so that you can add an object to it. The content pane is an object created by the Java runtime environment; you do not have to know its name to use it. getContentPane() returns a container to hold objects. You can add objects to the returned container instead of adding them directly to the JFrame or JDialog.

I'm not getting it. Whether we add a button through the container or directly to the frame, we get the same result. Why?

Edited the answer. Hope you get it.

Because it returns the default content pane of the JFrame, which follows the BorderLayout. You could add another container, like a JPanel, to the frame and add objects to it instead. Adding to the container returned by getContentPane is the same as adding to the JFrame directly.

getContentPane() will return the content pane of the frame. The content pane is the place where all components are added.
Highlights of AI Village DefCon China 2018

At the DefCon 2018 conference held in China on May 12, hackers and data scientists held vivid discussions on cyberattacks involving the use and abuse of machine learning, and on possible solutions. It goes without saying that artificial intelligence is now actively used in most security technologies as well as in a wide range of attacks. Attack vectors have become more advanced and sophisticated. If you are curious, there is a remarkable series of posts related to AI and cybersecurity on Forbes, revealing how AI-driven systems can be hacked, detailing seven ways cybercriminals can use ML, and uncovering the truth about ML in defense.

Today cyberattackers are less interested in traditional platforms and instead target self-driving cars, human-voice-imitation, and image-recognition systems. It stands to reason that the release of a new technology product means the development of new attack techniques and the subsequent addition of concerns to an ever-growing list. This review provides brief descriptions of the DEFCON presentations on security issues that are closely connected with AI and ML, aiming to clue the world in on the latest use and abuse of artificial intelligence in cybersecurity. The talks cover topics ranging from vulnerabilities of machine learning tools to reports on malicious ML deployment.

The behavior of ML systems depends less on specific machine opcodes and more on weight vectors and bias parameters. This makes a huge difference in terms of possible threat models. Prior work is mostly focused on generating adversarial inputs to exploit machine learning classifiers (e.g., how to fool a face recognition system by wearing special sunglasses), and there were no attempts to modify the model itself. Researchers demonstrated proof-of-concept malware to hack neural networks on Windows 7, highlighting different training paradigms and mechanisms cyberattackers could use. The speakers provided two videos: the first revealed the Naive Attack, and the second the Trojan Attack. They displayed the devastating potential of a patched network and sparked a discussion around AI security at the systems level. The authors showed that, with selective retraining and backpropagation, networks are easily retrained, so an attacker looking to compromise an ML system could simply patch these values in live memory, thereby taking control of the system with minimal risk of system malfunctions.

Developers increasingly use machine learning in systems that manage sensitive data, and this fact presents more bait to cyber perpetrators. Even if your company implements out-of-the-box applications, attackers could get through security and access this important organizational information. To improve the privacy and security of these systems, it is recommended to apply techniques such as differential privacy and secure multi-party computation. The speaker argued that implementing a vanilla ML model API with no model hardening is a poor idea and talked about black-box access to neural-network-based ML models. Homomorphic encryption can perform computations on encrypted information so that an adversary can't read the data but the statistical structure is preserved. Fully homomorphic encryption schemes are incredibly slow. Secure multi-party computation means that multiple parties can jointly compute a function while keeping the function inputs private. Although it is cheaper than homomorphic encryption, it requires more interaction between parties.
As for differential privacy, adding or removing an element from the data doesn't significantly change the output distribution. It is slow, but it works even in scenarios where the adversary has full knowledge of the training mechanisms and access to the parameters. You can extend your knowledge of differential privacy by reading Dwork (2006) and Dwork and Roth (2015). The presentation identified and illustrated the threat models solved with these techniques.

- Model inversion and adversarial examples (Goodfellow et al, 2015; Papernot et al, 2016) – for a categorization model/API that provides confidence values and predictions, it's possible to recover information about the training data encoded in the model (Fredrikson et al, 2015; Xu et al, 2016).
- Memorization, where a known data format like a credit card number allows extracting the information by using a search algorithm on the model predictions (Carlini et al, 2018).
- Model theft – black-box access makes it possible to construct a new model that closely approximates the target (Tramèr et al, 2016).

The talk considered the modern ML pipeline and identified the threat models solved with these techniques. Furthermore, it evaluated the possible costs to accuracy and time complexity and presented tips for hands-on model hardening (a minimal sketch of two of these tips follows at the end of this section):

- Give users the bare minimum amount of information
- Add some noise to output predictions
- Restrict users from making too many prediction queries
- Consider using an ensemble of models and return aggregate predictions

The author gathered general observations: it is more practical to think about model hardening from the perspective of black-box access, although some techniques work as white-box augmentations. Most attacks try to extract information held in the model even if the data is encrypted. They rely on the preservation of statistical relationships within the data, which is not obfuscated by most cryptographic techniques.

In view of AI development, the black market in e-commerce has seen vast abuse of AI technologies. This evolving and lucrative black market attracts thousands of scalpers and hackers. It causes billions in reputational and financial losses to companies like Alibaba and Amazon. This presentation provided real examples of how hackers target large e-commerce companies. Traditionally, cyberattacks involved a lot of manual work and low tech; now they have become AI-based - take, as an example, an AI-based distributed CAPTCHA solver. A complete industrial chain consists of upstream (platforms doing verification on code, image, voice, text, etc.), midstream (various account-related services and exchange platforms, such as fake account registration and account pilfering), and downstream (gaining profits through scalping, fraud, theft, blackmail, etc.). The presentation uncovered the industrial chain of the black market and the detailed social division of labor, as well as various advanced tools. Notably, JD.com presented their approach to defending against attacks. For instance, they detect scalpers by applying NLP to IM messaging. Bot detection was another area where AI was necessary. Moreover, they included biometric features such as mouse movement and keyboard dynamics. If you look at the screenshot that depicts mouse movement, you can see that it is possible to use vanilla CNN networks to classify bot behavior. The use of machine learning by cybercriminals is an emerging strategy. So how do hackers put machine learning algorithms to work?
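Before moving on, here is the promised sketch of two of the hardening tips above: adding noise to output predictions and restricting query volume. This is a minimal illustration, not the speaker's implementation; the class and parameter names are hypothetical, and the wrapped model is assumed to expose a scikit-learn-style predict_proba:

import numpy as np

class HardenedModelAPI:
    """Hypothetical wrapper that hardens a trained classifier's API."""

    def __init__(self, model, noise_scale=0.01, max_queries=1000):
        self.model = model
        self.noise_scale = noise_scale
        self.max_queries = max_queries
        self.query_counts = {}

    def predict(self, user_id, x):
        # Restrict users from making too many prediction queries.
        count = self.query_counts.get(user_id, 0)
        if count >= self.max_queries:
            raise RuntimeError("query budget exceeded")
        self.query_counts[user_id] = count + 1

        probs = self.model.predict_proba(x)
        # Add some noise to the output probabilities, then renormalize.
        noisy = probs + np.random.laplace(0.0, self.noise_scale, probs.shape)
        noisy = np.clip(noisy, 1e-9, None)
        noisy /= noisy.sum(axis=1, keepdims=True)
        # Give users the bare minimum of information: only the top label,
        # never the raw confidence values a model-inversion attack needs.
        return noisy.argmax(axis=1)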
Pwning machine learning systems workshop gave insight into the world of adversarial machine learning. It focused on practical examples that help attendees start pwning ML-powered malware classifiers, intrusion detectors, and WAFs. Two types of attacks on machine learning and deep learning systems were covered: model poisoning and adversarial generation. A docker container is provided for you to play with the examples.

Machine Learning as a Tool for Societal Exploitation: A Summary on the Current and Future State of Affairs

You might notice the overlaps between cyberattack and defense. Like any tool, AI can serve both criminals and defenders on different ends of the spectrum, sometimes without much modification. The talk started with a brief analysis of the current state of ML-related security, ranging from location mapping through ambient sounds to Quantum Black's sports-related work and various endpoint detection systems in different stages of development. It evoked discussion on the adoption of machine learning to escalate cyber warfare and ended with the concept of 'fooling' ML software, providing a simple instance of the effect it can have on human profiling. If videos become available, it may be worth having a look.

Machine learning methods like decision trees and k-nearest neighbors can provide end users with an explanation of individual decisions and even let them analyze model strengths and weaknesses. Learning models like deep neural networks (DNNs) are opaque, and therefore they have not yet been widely adopted in cybersecurity, say, in defending against cyberattacks. Nonetheless, as a rule, they exhibit immense improvements in classification performance. It is an honor to see that ERPScan, which currently uses deep neural networks for threat detection, is one of the pioneers in this area. This AI Village talk introduced techniques that can yield explanations for the individual decisions of a machine learning model and help one scrutinize a model's overall strengths and weaknesses. The speakers demonstrated how these techniques could be utilized to examine and patch the weaknesses of machine learning products. Numerous current research papers leveraging deep learning for cybersecurity solutions were mentioned:

- Binary Analysis (USENIX 15, USENIX 17, CCS 17)
- Malware Classification (KDD 17)
- Network Intrusion Detection (WINCOM 16)

The number of papers that describe deep learning used to solve cybersecurity tasks is growing, especially in the field of malware analysis. The list can be extended at least with the following research papers:

- A novel approach for automatic acoustic novelty detection using a denoising autoencoder with bidirectional LSTM neural networks (2015)
- A survey of network anomaly detection techniques
- One Class Collective Anomaly Detection based on LSTM (2018)

Back to the presentation: they evoked one of the most important discussions, which is the interpretability of deep learning models. For image recognition, an explanation can show a group of important pixels; for sentiment analysis, keywords; for malware detection, the relevant parts of the program. Which parts of the program make DL identify this instruction as a function start? There are two general approaches to interpretability – white box and black box. White-box approaches can be effective and give amazing results. Nonetheless, they are adapted to common architectures like those used in image recognition. For security applications, white-box approaches are difficult to implement.
The hidden-layer representations cannot be understood the way image representations can, and the hidden representations of binary code cannot be interpreted. The existing black-box approaches are intuitive. Once again, a deep learning model is highly non-linear; a simple linear approximation is not a good choice when a very precise answer is needed in a cybersecurity setting. The researchers proposed their own approach – a Dirichlet process mixture regression model with multiple elastic nets. The mixture regression model provides a precise approximation of an arbitrary decision boundary, and the elastic nets enable the mixture model to deal with high-dimensional, highly correlated data. In addition, they select only the most valuable features. There is a fun outcome: the features that turn out to be central can actually be used to generate adversarial samples. In general, this AI Village talk is informative, and it is food for further reflection. (A baseline black-box interpretability sketch appears at the end of this review.)

Despite the fact that in-house software testing is an intensive process and developers do their job right, weaknesses inevitably remain in programs, resulting in crashes. Software analysts have to accomplish a long chain of time-consuming postmortem program analysis tasks in order to identify the root cause of a software crash. Since the effectiveness of postmortem program analysis depends on the capability of distinguishing memory aliases, alias detection is named a key challenge. The researchers introduced a recurrent neural network architecture to enhance the capability of memory alias analysis and concluded that their DEEPVSA network facilitated and improved postmortem program analysis with the help of deep learning. Automatic debugging basically requires three actions:

- Track down the root cause of a software crash at the binary level without source code
- Analyze a crash dump and identify the execution path leading to the crash
- Reversely execute an instruction trace (starting from the crash site)

The results are as follows:

- DEEPVSA implements a novel RNN architecture customized for VSA;
- DEEPVSA outperforms off-the-shelf recurrent network architectures in terms of memory region identification;
- DEEPVSA significantly improves VSA with respect to its capability in analyzing memory aliases;
- DEEPVSA will enhance the accuracy and efficiency of postmortem program analysis.

The recent DefCon China 2018 was a conference dedicated to AI in cybersecurity through practice. Compared to common events, which are mostly focused on academic papers solving adversarial issues, the AI Village had multiple long and brief practical workshop sessions that always come in handy.
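To make the black-box interpretability discussion above concrete, here is a generic LIME-style sketch: fit a simple local linear surrogate around one input by querying the model. This is emphatically not the authors' Dirichlet process mixture model (the talk's point is that simple linear approximation is too coarse for security tasks, which is what motivated their approach); it only illustrates the baseline black-box idea. Function names are hypothetical; numpy and scikit-learn are assumed:

import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(black_box_predict, x, n_samples=500, sigma=0.1):
    """Fit a local linear surrogate around a single 1-D sample x.

    black_box_predict: callable mapping an (n, d) array to class-1
    probabilities; only query access is needed, no model internals.
    Returns per-feature weights; large magnitudes mark the features
    that drive the decision near x.
    """
    # Perturb the input around x and query the black box.
    perturbed = x + np.random.normal(0.0, sigma, size=(n_samples, x.shape[0]))
    preds = black_box_predict(perturbed)
    # Weight samples by proximity to x so the explanation stays local.
    distances = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(distances ** 2) / (2 * sigma ** 2))
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_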
SMTP error after renewing my certificate (SSL3_GET_SERVER_CERTIFICATE:certificate verify failed)

I renewed my certificate and restarted Apache. Good, I have the lock, the site is back on https! But when I try to send an email with the plugin Easy WP SMTP, I obtain this with the full log:

SMTP::DEBUG_LOWLEVEL (4): Connection: opening to ssl://xxxxxxxxx:465, timeout=10, options=array ()
Connection: Failed to connect to server. Error number 2. "Error notice: stream_socket_client(): SSL operation failed with code 5. OpenSSL Error messages:
error:0200100D:system library:fopen:Permission denied
error:20074002:BIO routines:FILE_CTRL:system lib
error:0B06F002:x509 certificate routines:X509_load_cert_file:system lib
error:0200100D:system library:fopen:Permission denied
error:20074002:BIO routines:FILE_CTRL:system lib
error:0B06F002:x509 certificate routines:X509_load_cert_file:system lib
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
Connection: Failed to connect to server. Error number 2. "Error notice: stream_socket_client(): Failed to enable crypto
Connection: Failed to connect to server. Error number 2. "Error notice: stream_socket_client(): unable to connect to ssl://xxxxxxxxx:465 (Unknown error)
SMTP ERROR: Failed to connect to server: (0)
SMTP connect() failed. https://github.com/PHPMailer/PHPMailer/wiki/Troubleshooting

I tried everything in the wiki … nothing worked.

NOTE: my SMTP configuration is good, because when I use the insecure option I can send an email. The option says: "Allows insecure and self-signed SSL certificates on SMTP server. It's highly recommended to keep this option disabled."

Here is my configuration: /etc/apache2/sites-enabled/wordpress.conf

SSLCertificateFile /etc/ssl/certs/my_site_com.pem
SSLCertificateKeyFile /etc/ssl/certs/my_site_com.key
SSLCertificateChainFile /etc/ssl/certs/my_site_com.ca-bundle.crt

php -i | grep cafile
openssl.cafile =>

Before I renewed the certificate this value was empty, so it is supposed to work with an empty value. Well, my conclusion is that there is a copy of my old certificate somewhere on the server, but where? I tried to set openssl.cafile in the php.ini file to the pem and also to the ca-bundle … nothing worked.

Is the cert self-signed? If so you might want to enable "Allows insecure and self-signed SSL certificates on SMTP server. It's highly recommended to keep this option disabled." Failing that, ensure the path to the certificate is correct in the SMTP client.

Sending mail from some PHP code on your web server has nothing to do with the SSL configuration of the web server, so providing this config does not help. The errors shown regarding X509_load_cert_file indicate that there are permission problems with the CA file PHPMailer is supposed to use, so look there instead.

I tried chmod 777 and also many users (chown) for my ca-bundle and the pem, but nothing worked. Can I find out which file is causing the permission denied?
Gridding with a nearest-neighbors interpolator

Verde offers the verde.KNeighbors class for nearest-neighbor gridding. The interpolation looks at the data values of the k nearest neighbors of an interpolated point. If k is 1, then the data value of the closest neighbor is assigned to the point. If k is greater than 1, the average value of the closest k neighbors is assigned to the point.

The interpolation works on Cartesian data, so if we want to grid geographic data (like our Baja California bathymetry) we need to project them into a Cartesian system. We'll use pyproj to calculate a Mercator projection for the data. For convenience, Verde still allows us to make geographic grids by passing the projection argument to verde.KNeighbors.grid and the like. When doing so, the grid will be generated using geographic coordinates, which will be projected prior to interpolation.

Data region: (245.0, 254.705, 20.0, 29.99131)
Generated geographic grid:
<xarray.Dataset>
Dimensions:       (latitude: 600, longitude: 583)
Coordinates:
  * longitude     (longitude) float64 245.0 245.0 245.0 ... 254.7 254.7 254.7
  * latitude      (latitude) float64 20.0 20.02 20.03 ... 29.96 29.97 29.99
Data variables:
    bathymetry_m  (latitude, longitude) float64 -3.669e+03 -3.669e+03 ... -66.5
Attributes:
    metadata: Generated by KNeighbors(k=10, reduction=<function median at 0x...

import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import numpy as np
import pyproj

import verde as vd

# We'll test this on the Baja California shipborne bathymetry data
data = vd.datasets.fetch_baja_bathymetry()

# Data decimation using verde.BlockReduce is not necessary here since the
# averaging operation is already performed by the k nearest-neighbor
# interpolator.

# Project the data using pyproj so that we can use it as input for the gridder.
# We'll set the latitude of true scale to the mean latitude of the data.
projection = pyproj.Proj(proj="merc", lat_ts=data.latitude.mean())
proj_coordinates = projection(data.longitude, data.latitude)

# Now we can set up a gridder using the 10 nearest neighbors and averaging
# using a median instead of a mean (the default). The median is better in
# this case since our data are expected to have sharp changes at ridges and
# faults.
grd = vd.KNeighbors(k=10, reduction=np.median)
grd.fit(proj_coordinates, data.bathymetry_m)

# Get the grid region in geographic coordinates
region = vd.get_region((data.longitude, data.latitude))
print("Data region:", region)

# The 'grid' method can still make a geographic grid if we pass in a projection
# function that converts lon, lat into the easting, northing coordinates that
# we used in 'fit'. This can be any function that takes lon, lat and returns x,
# y. In our case, it'll be the 'projection' variable that we created above.
# We'll also set the names of the grid dimensions and the name of the data
# variable in our grid (the default would be 'scalars', which isn't very
# informative).
grid = grd.grid(
    region=region,
    spacing=1 / 60,
    projection=projection,
    dims=["latitude", "longitude"],
    data_names="bathymetry_m",
)
print("Generated geographic grid:")
print(grid)

# Cartopy requires setting the coordinate reference system (CRS) of the
# original data through the transform argument. Their docs say to use
# PlateCarree to represent geographic data.
crs = ccrs.PlateCarree()

plt.figure(figsize=(7, 6))
# Make a Mercator map of our gridded bathymetry
ax = plt.axes(projection=ccrs.Mercator())
# Plot the gridded bathymetry
pc = grid.bathymetry_m.plot.pcolormesh(
    ax=ax, transform=crs, vmax=0, zorder=-1, add_colorbar=False
)
plt.colorbar(pc).set_label("meters")
# Plot the locations of the data
ax.plot(data.longitude, data.latitude, ".k", markersize=0.1, transform=crs)
# Use a utility function to set up the tick labels and the land feature
vd.datasets.setup_baja_bathymetry_map(ax)
ax.set_title("Nearest-neighbor gridding of bathymetry")
plt.show()

Total running time of the script: (0 minutes 4.128 seconds)
Use of Framework Extension Bundles in GlassFish V3

When you run an application on a plain vanilla Java platform, your code can access publicly visible internal JDK classes. While I understand it is a bad thing, I also understand that sometimes developers have genuine reasons to access those internal APIs; e.g., our security module in GlassFish has to use some of the JDK security classes. When one moves those programs to an OSGi environment, the assumption that every public JDK class is always visible to applications goes for a toss. When we started with our OSGi effort in GlassFish V3, we immediately faced these problems. So, how does one manage the situation? There are basically two ways to manage it in OSGi, viz:

a) Parent delegation: Parent delegation is controlled by a system property called org.osgi.framework.bootdelegation. It contains a list of package names, which allows use of wildcards (*) to help keep the property value to a manageable size. When the OSGi framework tries to load a class for a bundle, if the class belongs to the java namespace, it immediately delegates to the parent classloader. If the class does not belong to the java namespace, it consults the aforementioned property. If it is mentioned there, it delegates to the parent class loader. Else it goes through a very well defined sequence to locate the class, failing which it throws ClassNotFoundException or NoClassDefFoundError or a similar exception.

b) System packages: Now, let's look at system packages. There is a special bundle in the OSGi runtime known by the Bundle-SymbolicName system.bundle. This bundle, like other OSGi bundles, can export packages. It is used to export packages, such as framework APIs and JDK APIs, available in the parent class loader. As a result, it allows the parent class loader to be treated as yet another bundle. Those exported packages can then take part in the normal OSGi package wiring process. The list of packages that can be exported is controlled by a system property called org.osgi.framework.system.packages. If I am not mistaken, in the upcoming R4.2 spec, CPEG is defining a new property called org.osgi.framework.system.packages.extra, but that's not important for this discussion. Please consult the OSGi spec for an accurate description of these features. The spec is very nicely written, so the description above is obviously not as clear as what you will find there.

Parent delegation is a necessary evil. It is there primarily to help avoid strong assumptions made by some JDK classes about class loading, but it is also helpful in solving class loading problems, as explained in one of my earlier blogs. It breaks modularity, so it should be used as a last resort. So, we decided to use the second option. With the second option, there is also a challenge: how do we know the list of all such internal packages? The list is a union of the internal packages needed by all the modules running in GlassFish. There lies the problem. The list of modules is not fixed. After all, GlassFish V3 has multiple profiles (or distributions), e.g., web profile, classic profile, etc. The problem is compounded by the fact that users can install new modules, and those may have new requirements for system packages. So, managing them via a single property does not scale. Fortunately, framework extension bundles come to the rescue.

Framework extension bundles

A framework extension bundle is a special type of bundle fragment. It can only attach to the system bundle.
So, it must have Fragment-Host set to either system.bundle (which portably identifies the system bundle on all OSGi platforms) or a framework-specific name for the system bundle, and it must have the extension attribute set to framework. e.g., the following header will do:

Fragment-Host: system.bundle; extension:=framework

There are restrictions on what headers it can use, but like any other fragment, it can use the Export-Package header to export additional packages. All those additional packages eventually get exported via the system bundle only. Now, imagine this: the system bundle is already backed by the parent class loader. It actually has access to all the classes loadable via the parent class loader, but for modularity reasons, it does not export all of them. If you want some packages to be exported by the system bundle, just come up with a framework extension bundle which contains no actual classes. It only contains some manifest headers like this:

Fragment-Host: system.bundle; extension:=framework
Export-Package: sun.security; uses:="foo, bar"; version=1.0

Isn't this a powerful technique? This is actually one of the very few good use cases for fragment bundles. This allows us to decentralize the information, which is so important in an extensible system like GlassFish V3. Moreover, system properties can't be controlled when a program is embedded in a host JVM.

GlassFish use case

What we do in GlassFish is define org.osgi.framework.system.packages to contain the standard Java SE platform API packages only. We then install one or more extension bundles that are used to make internal JDK classes available via the system bundle. Take a look at one written for EclipseLink's Oracle extensions.
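Putting the pieces together, a complete manifest for such a framework extension bundle might look like this (a sketch; the symbolic name is made up, and the exported package echoes the example above):

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.jdk.internal.exporter
Bundle-Version: 1.0.0
Fragment-Host: system.bundle; extension:=framework
Export-Package: sun.security; uses:="foo, bar"; version=1.0

Because the fragment contains only this manifest and no classes, installing it simply widens what the system bundle exports; the actual bytes still come from the parent class loader.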
Insights for product managers from an R&D Engineering Director How does an R&D Product Line Director lead the development of products and help to mentor product managers? That’s what I wanted to know when I talked with our guest, Shankar Achanta. He has had a number of engineering product roles at Schweitzer Engineering Laboratories, which designs and manufactures products for the power industry. Shankar shares several tools for getting ideas for new products along with practical tips for how product managers can frame their ideas and gain support from colleagues as well as leaders. Summary of some concepts discussed for product managers [1:30] What are your responsibilities as an R&D Engineering Director? I’m responsible for a large portfolio serving the global energy industry. My role includes vision and strategy for my portfolio projects, as well as executing the strategy by introducing the right products at the right time. I’m also involved in portfolio management. I lead product development teams and product management teams. [2:51] Where do you see ideas for new products coming from? Great ideas come from anywhere in the organization—sales, talking to customers, product development, etc. Recently, my team and I experimented with a three-month Innovation Framework. We brought together product managers and product development leaders to solve difficult problems our customers are having. We let them create self-forming teams, with a maximum of five people per team. After we provided the problem domains, we asked the product managers and product development leaders to read the problem domains and ask us questions in the first one to two weeks and then provide a one-page abstract with all the solutions each team came up with. We saw a lot of participation, and many teams came up with the one-page abstracts. [6:05] How did the product managers and product development managers come to have good insights into the problems that customers encounter? These insights are key for the Innovation Framework to work. The product managers and product development leaders engage with customers at conferences and in one-on-one meetings and get input from the sales organization. Once we have the ideas from this variety of avenues, we compile a list of problems for a particular segment of customers or enhancements to an existing product line. [6:52] What’s an example of the Innovation Framework in action? We had a couple of challenges with our sensors for power lines: They communicate wirelessly, so they need to have a line of sight between the transmitter and receiver, and they need to last for 20+ years. Using the Innovation Framework, one of our engineers solved these problems with a device that repeats the signals and doesn’t need batteries. Once the teams created their abstracts, we selected a few and allowed the team members to use 20% of their time every week to explore those ideas. We found that they spent additional time on their own to come up with solutions, and one team put together a prototype of the sensor. [10:37] How do you select which solutions to pursue? First, we consider how practical the solution is to commercialize. Second, we consider how it fits within the company’s strategy. Third, we consider the effort, technology, and time to create the solution. [15:19] Do you get customer feedback on the solutions being created? Once we have the early prototype, we engage with customers who give us feedback about the solutions. 
We didn’t engage with a large number of customers because the Innovation Framework was limited to three months, but we got early customer feedback on the ideas, and we had upfront research that we’d already done on the problem domain. [17:06] How can product managers share ideas and draw attention to them? I ask my product managers to think like scientists. You have a hypothesis that your idea solves Problem A by creating Solution B for the Customer Persona C. Fill in the blanks and write down your hypothesis. Then understand and document your assumptions. Answer questions: Is the problem I identified really a problem? Does my proposed solution solve the problem? Does the customer persona want this problem to be solved? If so, are they willing to pay for it? More importantly, are they willing to switch from their existing solution and pay any switching costs? Once you have the answers to these questions documented, show the strategic fit of your problem statement, which is extremely important to get the stakeholder buy-in from the executive level. [20:07] How do product managers frame their ideas to show strategic fit so they can draw attention to their ideas and get resources? We already have a pipeline of existing projects, so when a new idea comes up, we have to choose to either displace or slow down what we’re doing or put the new idea in the backlog. We compare new ideas to the company strategy and to a document that I write every year for my division that explains where we fit into the company strategy. We look for new ideas that are connected to those strategies and will take us forward. We’re not trying to be rigid, but we are very clear about our goal for the portfolio of products we’re responsible for. For mature product lines with low business risk, we can have bigger budgets and keep the product line going with improvements. For new products, we use a “pay as you go” approach. Feature by feature, we release the product and test the customer base and generate some revenue, then invest little by little. [24:00] How do you pay as you go when projects need more resources upfront? There’s definitely a critical mass to get a project going. When we evaluate ideas, we ask who the pilot customers are. We follow the 80/20 rule—80% of the customers use only 20% of the features. We need to know what the pilot customers need before we just put a lot of features into the first product. We want the minimum valuable product—the minimum product that creates value for the customer. [26:12] In your organization, what do product managers need to get support for their ideas? They need a mix of story and data. We’re a very engineering-centric organization, and we look at data, business cases, return on investment, etc., but I’ve been extremely happy to see great ideas come from anywhere and get approved fast. For example, we were building a power controller. We brought in product managers, product developers, engineers, and the developer who was building the product, and we were able to look at the entire system, not just the power controller product we created. We created a new sensor that worked with the power controller and built a rapid prototype in two weeks. We went all the way to our executive team with just the prototype—no PowerPoint presentation—and they approved it and gave support throughout the process until it became a product. Our customers really love the product, and it’s taking off.
We had some data, but we went beyond our “box,” looked at the whole solution, and created new value. Action Guide: Put the information Shankar shared into action now. Click here to download that Action Guide. - Connect with Shankar on LinkedIn - Learn about the company Shankar co-founded to help small businesses, Apex Specialist “Engage early (with your customers) and iterate often (your product or service based on your customer feedback)”. – Shankar Achanta Thank you for being an Everyday Innovator and learning with me from the successes and failures of product innovators, managers, and developers. If you enjoyed the discussion, help out a fellow product manager by sharing it using the social media buttons you see below.
SWT PrintDialog driver preferences / customize the PrintDialog

I would like to create a custom SWT PrintDialog. However, it seems this is not possible. In SWT PrintDialog one can click "Preferences" to open the native printer driver preferences dialog. Is it possible to open the native printer driver preferences dialog without using org.eclipse.swt.printing.PrintDialog and read the driver's preferences (PrinterData)?

PrintDialog is very platform specific. The Mac version, for example, does not have a Preferences option. The class contains a lot of undocumented low-level code interfacing with a particular platform. It is possible to use the low-level code in your own class, but this is not supported and you would need some experience with the platform API. Just to illustrate the difference, here are the first few lines of the open method on Windows:

public PrinterData open() {
    /* Get the owner HWND for the dialog */
    Control parent = getParent();
    int style = getStyle();
    long /*int*/ hwndOwner = parent.handle;
    long /*int*/ hwndParent = parent.handle;

and the Mac OS X code:

public PrinterData open() {
    PrinterData data = null;
    NSPrintPanel panel = NSPrintPanel.printPanel();
    NSPrintInfo printInfo = new NSPrintInfo(NSPrintInfo.sharedPrintInfo().copy());
    if (printerData.duplex != SWT.DEFAULT) {
        long /*int*/ settings = printInfo.PMPrintSettings();

and Linux:

public PrinterData open() {
    if (OS.GTK_VERSION < OS.VERSION (2, 10, 0)) {
        return Printer.getDefaultPrinterData();
    } else {
        byte [] titleBytes = Converter.wcsToMbcs (null, getText(), true);
        long /*int*/ topHandle = getParent().handle;
        while (topHandle != 0 && !OS.GTK_IS_WINDOW(topHandle)) {
            topHandle = OS.gtk_widget_get_parent(topHandle);
        }

When I look at http://www.docjar.com/html/api/org/eclipse/swt/printing/PrintDialog.java.html, lines 394-408, it seems that only OS.DM_ORIENTATION (landscape, portrait) is retrieved from the actual driver settings in a cross-platform way, but this can be set manually. If I understand correctly, the "PRINTDLG pd = new PRINTDLG();" at line 265 is the platform-specific part of the PrintDialog. OK, then I would have to know what to look for on each platform to keep my code cross-platform. Thank you for answering. I will try something else :)

The whole of PrintDialog is platform specific; the Mac version (in a different jar) is completely different.
How does monopolistic competition make a profit in the long run in reality?

Guys, I have this doubt: if the fast food industry, with chains such as KFC and Maccas, is an example of monopolistic competition, how are they still making a profit? Because, as per the model, in the long run monopolistically competitive firms won't make a profit as more competitors enter the industry.

First of all, KFC and similar large fast food chains are not a good example of monopolistic competition; at least in most countries they are more closely related to oligopoly. The reason is that, by assumption, there have to be so many monopolistically competitive firms that they cannot have any sort of meaningful (game-theoretic) strategic interaction. The number of large fast food chains is not that large, and they seem to strategically interact a lot.

Second, the model shows that monopolistically competitive firms do not earn economic profit, not that they do not earn profits. Even a firm with an accounting profit of billions could be earning zero economic profit. The profit that firms report in their quarterly reports and the profit numbers you hear in the news are not the actual economic profit. Those are all accounting profits. For example, if a monopolistically competitive firm supplies its own capital (machines, buildings, etc.), the company needs 1tn dollars of capital (let's say it's a KFC-type fast food), and the opportunity cost of capital is 5% (let's suppose that's what the owners of capital can earn if they invest the money in something else), then that company can report even a \$50bn accounting profit (the number you will find in quarterly reports, news, etc.) yet still have exactly 0 economic profit. To have positive economic profit, the company in the example has to earn more than \$50bn. If the company earned less than \$50bn accounting profit, it would actually have an economic loss. (The arithmetic is written out at the end of this thread.)

Thank you so much for your clear explanation. When we compare the fast food chains on a global scale, can we think of it as monopolistic competition, since, let's say, the barriers to entry to open a small fast food shop are low, right? Or is my understanding wrong? Thanks in advance.

@studenthere for chains, even globally, I don't believe you can say they are monopolistically competitive. For non-chain fast food restaurants (e.g. I now live in Utrecht and near my place there is a Dutch family-owned fast food place that is clearly different from other fast foods, like some local Indian family-owned fast foods) you could say they are in monopolistic competition. But companies such as McDonald's, KFC, etc. clearly strategically interact with each other.

@studenthere the thing is that the monopolistic competition model only works if we assume that firms do not strategically respond to what other firms are doing (like in perfect competition). If firms respond to each other strategically, then the model is not appropriate (though it can sometimes still be a good approximation). In cases where there is strategic interaction, you need either something like the oligopoly model that you will learn in an economics class or the more complex models that you will learn once you take an Industrial Organization class.

Also note there isn't such a thing as a single 'fast food' market where every fast food firm competes... big chains could be viewed as competing in a market for fast food chains, which I believe would be best described as an oligopolistic market. Even if there are no formal entry costs, there are implicit entry costs, as to have a chain you have to start with several restaurants, not just one, and that's not really easy.
Then small fast foods compete in a monopolistically competitive market.

Is this just language lawyering? Salop and monopolistic competition models have close to identical outcomes, and the former has strategic interaction.

@Giskard it's not just lawyering: 1. emphasis should be put on "close to"; 2. the OP is clearly learning about the monopolistic competition model, so it's important to learn all the important assumptions. A contested monopoly with no entry barriers also leads to exactly the same result as perfect competition, yet I would not call it lawyering to point out that for a market to be a monopoly there has to be a single firm, etc.

More than fast food chains, I think Walmart, Amazon, and, on a slightly more niche basis, Home Depot are better examples of monopolistic entities. Walmart, in the US anyway, dominates brick-and-mortar retail sales, to the point that various sections of the store correspond to entire stores that were dedicated to selling those items in the past (e.g. music and movies), and those stores no longer exist in markets where Walmart is established. Amazon is weird in that retail merchandise sales make up very little of its income but take up a large part of its organizational structure. Most of the actual income at Amazon comes from the sale of server space, but it is also ubiquitous and omnipresent in the online retail space with negligible competition.

In the classic economic model, monopolies manipulate price to maximize short-term accounting profit, while foregoing economic profit, because accounting profit is much more tangible, and we can see this with the disaster that was the California power deregulation and Enron. And, as can be expected in our economy, once things get out of control, some regulatory government agency investigates and files antitrust lawsuits, which break up the monopolies, or, in the case of power, which is a natural monopoly, reinstates the previous regulations and regulatory agencies to control the industry.

With this in mind, Walmart and Amazon have decided to act as if they are facing competition by keeping prices at a level consistent with heavy competition, and have decided to use their monopolistic power to lower their cost of goods sold by manipulating the price of labor and goods sold. This is much less noticeable to the public at large, or to politicians and regulatory agencies, and has allowed these companies to continue operating as obvious monopolies, unabated. I don't know how this affects economic profit, but it's what I would concentrate on, as opposed to price manipulation, when studying the topic.
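The arithmetic from the accepted answer's example, written out (illustrative figures from the answer, not real data):

\[
\text{economic profit} = \text{accounting profit} - \text{opportunity cost of capital}
\]
\[
= \$50\,\text{bn} - 0.05 \times \$1\,\text{tn} = \$50\,\text{bn} - \$50\,\text{bn} = \$0
\]

Any accounting profit above \$50bn would be a positive economic profit; anything below it would be an economic loss.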
unable to connect to the configured development web server

When I run my web application from Visual Studio I get the following message:

Unable to connect to the configured development web server

It then refuses to run. How do I resolve this issue?

There's far from enough information to help you with your problem here. Re-think your question and add some specific diagnostic information and you have a chance of receiving some assistance. Otherwise, your question's going to get closed.

I press the green start button (used to debug the application). The message then appears.

What sort of application are you writing? What is it supposed to do?

Running Visual Studio as an administrator fixes this for me.

Lots of fixes here: https://stackoverflow.com/questions/990033/unable-to-connect-to-asp-net-development-server-issue

Deleting .vs\applicationhost.config fixed it for me.

Not really sure why this was closed. There is an exact error message and how it occurred for the user. It's not vague at all.

Please follow these six steps:

1. Select the "Tools->External Tools" menu option in VS or Visual Web Developer. This will allow you to configure and add new menu items to your Tools menu.
2. Click the "Add" button to add a new external tool menu item. Name it "WebServer on Port 8010" (or anything else you want).
3. For the "Command" textbox setting enter this value: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\WebDev.WebServer.EXE (note: this points to the web server that VS usually runs automatically).
4. For the "Arguments" textbox setting enter this value: /port:8010 /path:$(ProjectDir) (or any port you like)
5. Select the "Use Output Window" checkbox (this will prevent the command-shell window from popping up).

Once you hit Apply and OK you will have a new menu item in your "Tools" menu called "WebServer on Port 8010". You can now select any web project in your solution and then choose this menu option to launch a web server that has a root site on port 8010 (or whatever other port you want) for the project. You can then connect to this site in a browser by simply saying http://localhost:8010/. All root-based references will work fine.

6. The last step is to configure your web project to automatically reference this web server when you run or debug a site instead of launching the built-in web server itself. To do this, select your web project in the solution explorer, right click and select "Property Pages". Select the "Start Options" setting on the left, and under Server change the radio button value from the default (which is to use the built-in web server) to "Use custom server". Then set the Base URL value to: http://localhost:8010/

Obviously I don't know if this is the problem you had, but it is definitely something similar; essentially the problem should be that the port used by your development server is not available because it is already in use by another web server.

You know http://tinyurl.com/3erwqkb ? :)

That's good, but I don't know this; can you update my answer with this URL?

Why was I voted down 4 times?

You have to properly write your question and provide details.

I tried the 6 steps. They don't work. Still the message appears.

Did you restart the machine?

I will restart the PC right now.

@Muhammad Akhtar, your link does not show a result when I click it.

Could you not just add the 6 steps to your answer and add a link to where you got them from? That would be helpful and give you upvotes. Thanks.

Please try after restarting your machine.

Thank you, it worked when I restarted Visual Studio.
No, a million is less than a billion:

1,000,000 = 1 million
1,000,000,000 = 1 billion

A billion is a thousand times larger than a million, so one thousand millions make a billion (1 million = 0.001 billion). An American billion is a thousand million, whereas the traditional British billion is a million million; on that older convention, someone worth about 4.4 billion pounds in the American sense is worth far less than one British billion. 100 billion minus 1 million is 99 billion 999 million, which in standard form is 99,999,000,000 (one hundred million less than 100 billion is expressed the same way).
Would also add my voice for this. We’re a non-profit currently demoing the paid hosting for use as a forum for all of our communities. Category-specific moderators are essential; we envisage that they would manage their specific, private category to host discussions with representatives from their region, and not be able to touch the categories for other regions, or the rest of the forum.

This is a really important feature. In my scenario I have different categories in competition, and I cannot allow moderators to share visibility of pending posts.

Dear @codinghorror, having the ability to isolate moderation among categories is essential in all scenarios where you’re running a competition. For example, in my case we’re evaluating Discourse to integrate it into a crowdfunding website. Every category is associated with a fundraising campaign and moderated by the campaign owners. We cannot allow the sharing of comments among possibly competing players. This is true for a lot of similar scenarios, and neglecting this feature means preventing the adoption of this really cool platform by a huge bank of growing initiatives.

I know this feature kind of died because it’s too hard to implement, but I still want to voice support for it in case the developers ever decide to come back to it. We’re trying to use Discourse for our hackerspace, and we’ve been creating categories for different interest groups within our community. It would be nice if each of those categories could choose their own moderators, since some of the interest groups are large enough to form their own community within our larger community. As it is, I think we can survive without this feature, but I just wanted to include one more possible use case for it.

I just wanted to say how you can work around this: you don’t give the rights to the user, but create an extra moderator account, which has only read and write rights for the specific category they should moderate. As we use SSO login, we plan to automate this switch from user to moderator and back, with just one button, so we will have a one-click moderator mode. This has the side effect that people can’t moderate right away but have to change into that mode. This might also not be such a bad idea, given the extensive rights that are not even marked as moderator rights. The only thing that’s then missing for us is the user administration. It would be awesome if you could have something in the preferences to take that right away from moderators. In our case the moderators will get group-specific user-administration rights in the SSO. They can only administrate users who are not yet in a group or are in their own group. Maybe these workarounds could also give some ideas for how you could implement category-specific moderators?

First of all, I’d like to express my most sincere gratitude. I’ve used many forum scripts thus far, and Discourse beats them all in most fields. I want to voice support for this functionality, though. We run a regions-driven community, and we need the ability to appoint category-specific moderators. As we’re a non-profit community, I can’t offer any financial support, I’m afraid, but as a software tester I can offer my professional services, if it helps, @codinghorror.

I started looking at this issue last night in the context of a larger project I’m working on. First, after looking at the relevant code for this, I would reiterate what the Discourse team has already said: implementing this specification would be a huge, hairy change.
From a ‘core’ perspective I think it makes complete sense to postpone it, and maybe not do it at all. For me at least, it seems that the issue is really one of triage. I need the ability to limit a moderator’s attention to a particular category. I want them to be able to focus on that category without getting distracted by notifications from other categories. Completely restricting access to moderation actions outside of an assigned category seems like more of an edge case. If someone is going to get moderation powers, and be told that they are assigned to a specific category, it is unlikely that: a) they will regularly exercise their powers outside of their assigned category; and b) if they do, it is unlikely that this exercise will conflict with the overall moderation goals of the site. In the unlikely event of both ‘a’ and ‘b’ being incorrect, then they probably shouldn’t be a moderator anyway. Indeed, there may well be situations in which moderators assigned to a particular category will provide useful support in categories they are not assigned to. So I’ve started a plugin I call “Category Moderator Lite” to implement this narrower ‘workflow-oriented’ version of category-specific moderation.

Nice, limiting attention vs a permission overhaul is a far simpler course of action. What I can support in core is a user setting for mods:

[x] notify me on all pending posts and flags ... if unticked... categories I would like to be notified about [feature, ux]

I am comfortable having something like this in core, just need to mock it up and figure out the right words.

Whether this should be a user setting or a category setting (or both) is worth considering. While notification levels are typically user settings, I think there are also arguments for this being a category list that entails user notification levels (i.e. the approach taken in the Category Moderator Lite plugin). There are instances in which it is useful to have the concept of ‘assignment’ to a particular category, even if that assignment does not entail permission restrictions. e.g. if you want to add a list of the ‘assigned’ moderators to the discovery category UI to tell users who’s ‘in charge’, e.g. something like [a list of assigned moderators displayed on the category page]. Not to say that this particular UI feature should be a part of a core update, but more that having a list of moderators related to a particular category could be useful. The need to ‘guide attention’ of some moderators to specific categories is both a user-level issue and a more global site-level ‘management’ issue. For my purposes, I don’t really want to pose moderation notification levels as a ‘choice’ for category-specific moderators. I more want to set it up as a workflow. That said, making this a user-level setting is simpler from a technical perspective. As far as I can tell there are no existing category settings that entail updates to specific users’ notification levels, so it would be a new type of relationship. And, if there were a user setting, I would still find that useful. It would simplify the Category Moderator Lite plugin a fair bit. I would hide it from the user settings UI and just use it on the server. As far as a core update is concerned, I’m not sure what form of category-specific moderation notification settings people would actually use. I know a category-level setting makes more sense for me, for the reasons mentioned above.
From the posts in this topic, I get some sense that the issue is also a ‘site-level’ management issue for other folks as well, which suggests that a category-level list which entails notification levels for specific users may be useful. Thinking from the POV of a first-time Admin who is trying to set this up (or modify it), I think it would be most intuitive to find category specific mod settings under the current Category security settings. I guess this can work, as long as “all moderators” get notifications by default unless something is filled in the box. Sorry for bumping this topic, but I was wondering if this is the right place to follow this feature or if there is another place where this is being advanced/tracked. Is this still being considered? If so, is it still blocked/not a priority? I believe we’re waiting on final mock ups and clarification of workflow as per: This is being worked on now, but as predicted, it was “months” of high impact, high risk work by @eviltrout and others Anyways, the good news is that we’re pretty close at this point Stellar news. Thanks, this will be very useful I’ve been playing with this in our new installation of Discourse (we’ll be launching live hopefully soon) and it seems to be working well, and should give me precisely what I need. I think this is an ideal solution, and very elegant (As well as providing expandable opportunities for mod-specific access categories) I’ve started looking into this for my discourse, and am struggling a bit to understand what features are provided now for category-specific moderators. Am I right that at the moment it’s simply a category moderators listing on the about page and access to Category Group Review/Moderation? In my testing I have not found any other handy functionality, like whisper access. Would it be possible for someone in the know to create a #howto topic on category-specific moderation and keep it up to date with the latest functionality provided? I’d be grateful. FYI this link in the OP seems to be broken. I know @jomaxro is writing up some review queue documentation soon. Perhaps he can tackle this feature as part of that work. Thanks! That would be awesome. Sure, adding it to my list.
const logger = require("./logger")

/**
 * Validate that a buffer is of the "loop" format.
 * Returns the offset of the "LOO" marker, or false if the buffer is invalid.
 * Note: 0 is a valid offset, so callers must compare the result against
 * false explicitly (=== false) rather than relying on truthiness.
 * @param {Buffer} inputBuffer
 * @returns {number|false}
 */
const validator = (inputBuffer) => {
    // Use -1 as the "not found" sentinel so that a match at offset 0
    // (which is falsy) is not mistaken for "not found".
    let offset = -1
    const inputBufferLength = inputBuffer.length

    if (inputBufferLength < 40) {
        logger.log("debug", `Buffer received too short - length is ${inputBufferLength}`, inputBuffer)
        return false
    }

    // Search for the ASCII marker "LOO" (76, 79, 79); thisOffset may go up
    // to inputBufferLength - 3 so the 3-byte slice stays in bounds.
    for (let thisOffset = 0; thisOffset <= inputBufferLength - 3; thisOffset++) {
        if (inputBuffer.slice(thisOffset, thisOffset + 3).toString("utf8") === "LOO") {
            offset = thisOffset
            break
        }
    }

    if (offset === -1) {
        logger.log("debug", "Buffer received doesn't contain LOO (76,79,79)", inputBuffer)
        return false
    }

    // Require at least 40 bytes from the marker to the end of the buffer.
    if (inputBufferLength - offset < 40) {
        logger.log("debug", `Buffer received too short - length is ${inputBufferLength} and offset is ${offset} so there is not 40 after the offset`, inputBuffer)
        return false
    }

    logger.log("debug", `This offset is ${offset}`)
    return offset
}

module.exports = validator
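A quick usage sketch (the module path mirrors the export above; the sample buffer and file name are made up for illustration):

// usage-example.js (hypothetical)
const validator = require("./validator")

// Build a 40-byte buffer that starts with the "LOO" marker.
const sample = Buffer.alloc(40)
sample.write("LOO", 0, "utf8")

const offset = validator(sample)
if (offset === false) {
    console.log("not a valid loop buffer")
} else {
    // Prints offset 0 here - the reason the strict === false check matters.
    console.log(`LOO marker found at offset ${offset}`)
}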
September 24, 2009, 11:28 am

Cadence, quality, and design were the core themes of Canonical founder Mark Shuttleworth's closing keynote talk at LinuxCon. Speaking before a combined session of LinuxCon and the co-located Linux Plumbers Conference, Shuttleworth drilled home the importance of these concepts in the Linux development ecosystem, particularly cadence.

Shuttleworth has long maintained that if free and open source software projects can begin to sync their development cycles with each other, then both upstream and downstream developers (and, ultimately, users) will benefit. This is a large part of the strategy behind Canonical's strict six-month release cycle for the Ubuntu distribution and the 18-month Ubuntu Long Term Support (LTS) cycles. It won't be easy, he told the crowd, but already quite a few projects are seeing the value of cadence (Shuttleworth cited recent moves in the KDE Project). Shuttleworth emphasized, as he has in the past, that it doesn't matter what pattern of cadence projects adopt, just so long as that pattern is predictable.

Quality is another core component of how development projects can improve. Shuttleworth described how Canonical continually applies bug tracking data to improve Ubuntu. This seemed to strike a chord with attendees--several of the post-talk questions dealt with a perceived lack of responsiveness from the Ubuntu bug reporting system. Shuttleworth replied that even though bug fixes weren't going to be immediate, the more people that report a given bug, the higher the priority that bug would gain.

On design, Shuttleworth emphasized how important user testing of interface and function can be. Canonical uses daily testing for Ubuntu and other open source projects--information that is fed directly back to the developer (sometimes with the developer in the room when testing occurs). "Developers always learn a lot from these tests," he said.

This was a strong day for Canonical, and it showed in Shuttleworth's delivery. Earlier, Dell and Intel made a joint announcement with Canonical at the Intel Developer Forum about the new Dell Inspiron 10v netbook, which will run Canonical's Moblin Netbook Remix. On the same day, IBM and Canonical introduced "a new, flexible personal computing software package for netbooks and other thin-client devices to help businesses in Africa bridge the digital divide by leapfrogging traditional PCs and proprietary software," according to a press release. Part of IBM's Smart Work Initiative, the new package targets the rising popularity of low-cost netbooks to make IBM's industrial-strength software affordable to new, mass audiences in Africa. This program appears to be the first major deployment of the Microsoft-Free PC technology both companies announced in December 2008.
Okay, we'll go with this quick and straight. When you first start diving into CSS, you do the usual things like changing colors, changing fonts, etc. Then you dive deep into media queries, cross-browser properties, and finally into variables.

Quick note on CSS variables 📝

Of course, some basics first. CSS Custom Properties, or CSS Variables, allow us to store a value in one place and then reference it in multiple other places. Sometimes specific values need to be reused throughout a document. A typical example is when you get a specific color palette from designers and need to add specific hex values for colors, font sizes, or even some responsive breakpoints. You assign these values to your custom-made CSS properties, called variables here. This is useful not only because the values can be used in multiple places and are easy to edit centrally, but also because the properties are easier to read when you refer to them later. For example: --headline-color is better to read than #000.

Usage and syntax

A custom CSS property is declared by putting a double hyphen (--) in front of the variable name; the property value is then written like any other CSS property. To use this custom property anywhere in your CSS file, call the var() function and pass the custom property in. So you don't need to write the lightgray value for background-color in every place it's needed: use var() with the custom property instead.

Time to start interacting with the web developer's favorite language. So, what do these new functions mean?
- document.documentElement: this returns the root element of your HTML document, usually the <html> element.
- style.setProperty(): this sets a new value for a property on a CSS style declaration object. setProperty() takes in the property name, its value, and optionally the priority.

Yeah, exactly what you're thinking right now. Just as with any other language, we have setters and getters here too. With setProperty we were setting a new value, and with getPropertyValue we get back a DOMString containing the value of the specified CSS property. As a practical example: reading --accent-color will return #663399 when the browser renders the webpage.

The removeProperty method will remove the provided property from a CSS style declaration object. So, if you want to dynamically remove an attached custom CSS property, the code looks much the same.

Using event listeners 👂

First, start off by declaring the CSS variables. By declaring them at :root, we're putting them on the root element of the DOM tree; typically, that's the <html> element. Next, use these variables in the styles of your <div>. With the initial position of the <div> set, you can then update the variables from an event listener and watch the element follow along. A consolidated sketch of all of the above follows at the end of this post.

More resources 🤩

Go ahead and learn more about custom CSS properties; MDN's guide on using CSS custom properties is a great place to start.
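Putting the pieces above together, a minimal sketch. The variable names --accent-color, --mouse-x, and --mouse-y follow the post's examples where given; the mouse-tracking div stands in for the post's embedded demos, so treat the specific values as assumptions.

// Assumes a stylesheet declared something like:
//   :root { --accent-color: #663399; --mouse-x: 0px; --mouse-y: 0px; }
//   div   { background-color: var(--accent-color);
//           transform: translate(var(--mouse-x), var(--mouse-y)); }
const root = document.documentElement;

// Setter: overwrite a variable for the whole document.
root.style.setProperty("--accent-color", "lightgray");

// Getter: read the computed value back (returns a DOMString).
const accent = getComputedStyle(root).getPropertyValue("--accent-color");
console.log(accent.trim()); // -> "lightgray"

// Remove the inline override, falling back to the stylesheet value.
root.style.removeProperty("--accent-color");

// Event listener: drive the position variables from mouse movement.
document.addEventListener("mousemove", (event) => {
  root.style.setProperty("--mouse-x", `${event.clientX}px`);
  root.style.setProperty("--mouse-y", `${event.clientY}px`);
});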
This is a presentation that I gave to my Second Marker, explaining what my project is about and my development progress. I also gave a demonstration of the Ember application running. I've included below my notes for the presentation, which was expected to be no longer than 10 minutes.

- I’ve worked at two agencies, seen problems
- What’s a digital agency?
- Small to medium sized businesses
- Web design, small mobile app development, SEO, marketing, branding
- Normally targeting small local businesses
- How do they develop software?
- Designer + developer. Small teams – normally developers work solo
- Always use a waterfall type of methodology
- One meeting to get requirements and choose a CMS
- Visual design made
- Implemented quickly
- Client approves implementation
- Website given to customer or hosted on the agency’s own web server
- Very hard to respond to change. Not budgeted. No time. Strict deadlines that are hard to meet.
- No slack time.
- Clear benefits in moving to agile: respond to change, less pressure
- Agile project management system
- Aid agency staff in time management, while making sure their interactions with the application are short and without lag or delays.
- Give agency staff flexibility between projects and support queries.
- Track project requirements using agile methods.
- Facilitate regular communication between agency staff and customers.
- User stories for requirement tracking
- Support tickets (pestering developers to deal with them) and sidetracking
- Calendar with sprints and deadlines, seeing how sidetracking affects deadlines
- [show wireframe]
- Development methodology:
- Using Scrum
- Weekly sprints, see supervisor on Thursdays for review and next sprint planning
- User stories for tracking requirements

Technical work and issues
- Single Page Application
- Using Ember
- Node.js to transpile
- Babel for ES2015
- Integrated automated testing suite, QUnit, etc.
- Mirage to simulate server requests (see the sketch after these notes)
- [explain diagram]
- Using RESTful JSON API
- De facto standard
- Libraries for both client and server
- Investigating in current sprint
- Node.js, Express, MongoDB (NoSQL)
- Decide on a JSON API library
- What I’d do differently
- Mirage is version 0.1 – wish I had implemented the server side earlier
- (discuss alternatives to Ember)
- A persistent storage system on the server side using MongoDB
- JSON API on the server side
- User account management
- Calendar of work schedule
- Creation of support tickets
- Allowing agency staff to pause their current work to deal with support tickets
- Live updates as information such as user stories and calendars are changed by other users
- Usability tests

Answers to potential questions
- Future work:
- An add-ons platform to enable digital agencies to add other features, such as an invoicing system.
- An API to use live data from SmallScrum. This would allow digital agencies to create bespoke information radiators, among other possibilities.
- Enable the application to work offline in certain situations, such as when creating user stories or viewing the user’s calendar.
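Since Mirage comes up twice in the notes above, here is a rough sketch of what its request stubbing looks like. The endpoints and model names are assumptions for illustration, not taken from the actual project, and the handler style follows later ember-cli-mirage conventions rather than the 0.1 API the notes mention.

// mirage/config.js -- hypothetical routes so the Ember app and its QUnit
// acceptance tests can run without the real Node.js/Express backend.
export default function () {
  this.namespace = "/api";

  // Shorthand handlers: Mirage answers from its in-memory database,
  // assuming matching models/fixtures are defined.
  this.get("/user-stories");
  this.post("/user-stories");
  this.get("/support-tickets");
}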
Validating your code can be a real shocker at times. Although we all try to write standard code, sometimes it just seems an impossible task. One of the growing needs in HTML today is for a quality HTML validator (syntax checker). While searching around for my own purposes, I was surprised how few options HTML authors have for validators today. You have your basic two choices: a high-quality, pricey commercial product, or the official standard freebie.

The Commercial Choice: The quality commercial package is the CSE HTML Validator [htmlvalidator.com]. At $129 USD it has a bit steeper price tag than most of us are used to paying for software these days, but it is the best thing going in HTML validators. Nothing compares to its features and function. From simple syntax checking to more advanced cascade-error-preventing tag trees, it does it all. There is a demo version (50 validations), and a free lite version. Even the lite version is better than most of the competition. If you are serious about HTML development, you need this tool.

The Freebie Choice: The W3C Validation Service [validator.w3.org] is straight from the body endowed with creating web standards. It is also free and web based. Simply put in the URL you wish to validate and let it run. Opera users have a leg up here: a simple right click and "validate" validates any page in the browser. You may also upload a file from your computer [validator.w3.org]. The biggest drawback to the W3C validator is that it is unforgiving. It is not only strict, it suffers terribly from cascade errors with mismatched tags. Often you'll have to look through hundreds of errors to find the real one that will fix the whole shebang. The biggest strength, though, is that when your page does validate, you are assured that it should be viewable in all modern browsers. It will also correctly deal with the newer languages like XML.

There is also a CSS Validator at the W3C. In a nice touch, you can cut & paste, upload, or validate by URL.

Bobby checks your page for accessibility features: [cast.org...]

Weblint is now becoming dated. It has not been updated in ages, but it still has some life left in it: [weblint.org...]

The Web Design Group's online validator is much in the same vein as the W3C's validator; it allows checking online by URL and supports a wider group of character encodings.

There is also A Real Validator [arealvalidator.com] from Liam Quinn, which uses an IE front end. It is good, but lacks the power features of CSE. Shareware.

Another entry is Dave Raggett's HTML Tidy. HTML Tidy actually changes -- or tries to change -- the source code of the page.

The home team: the Yahoo category for validators [dir.yahoo.com], and the Open Directory Project [dmoz.org] page on validators.

Anyone know of any worthwhile HTML checkers I have missed? Possibly for other platforms?
package de.thebotdev.rulesbot.commands.beta;

import de.thebotdev.rulesbot.util.commandlib.CommandDescription;
import de.thebotdev.rulesbot.util.commandlib.Context;
import de.thebotdev.rulesbot.util.commandlib.RBCommand;
import net.dv8tion.jda.core.EmbedBuilder;

@CommandDescription(
        name = "Beta",
        triggers = {"beta", "beta-info"},
        description = "Info on how to become a beta tester",
        usage = {"beta"},
        longDescription = "You want to test out some new functions that are not available to everyone yet? " +
                "Then you should check out this command to see how to become a beta tester and which commands you can use then ^^"
)
public class BetaCommand extends RBCommand {

    public void execute(Context ctx) {
        ctx.send(new EmbedBuilder()
                .setDescription("Hey,\nwe are glad that you are interested in our beta functions. There are two ways to get"
                        + " access to our beta functions. The first one is of course our"
                        + " [premium program](https://patreon.com/TheBotDev) and the other"
                        + " one is free of charge and can be used when you have voted us up on the"
                        + " [Discord Bot List (DBL)](https://discordbots.org/bot/rulesbot/vote)."
                        + " If you do this you will automatically get the beta role and full access to the features"
                        + " which are listed below. Please note that you have to be on our"
                        + " [support server](https://discord.gg/HD7x2vx), otherwise"
                        + " the bot will send you a message with instructions. You will only keep"
                        + " this access for 12 hours, but if you upvote the bot again you will get the role back.\n"
                        + "__**Functions in the beta:**__"
                        + "\n•report -> report users"
                        + "\n•setup_report -> set up a report channel where the bot sends a message if a user with more than 5"
                        + " reports joins your server and accepts the rules\n"
                        + "•setup_channel [mention the channels which should be changed] -> let the bot set up the right"
                        + " channel permissions for you")
                .build()
        ).queue();
    }
}
package de.hpi.petrinet.verification;

import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

import de.hpi.PTnet.verification.PTNetInterpreter;
import de.hpi.diagram.reachability.ReachabilityPath;
import de.hpi.petrinet.Marking;
import de.hpi.petrinet.PetriNet;
import de.hpi.petrinet.Place;
import de.hpi.petrinet.Transition;

public class PetriNetSoundnessChecker {

    PetriNet net;
    Set<Marking> deadLockMarkings;
    Set<Transition> deadTransitions;
    Set<Marking> improperTerminatingMarkings;
    Set<Transition> notParticipatingTransitions;
    PetriNetReachabilityGraph rg;
    Place outputPlace;

    public PetriNetSoundnessChecker(PetriNet net) {
        this.net = net;
    }

    /**
     * Must be called before any checks take place!
     */
    public void calculateRG() {
        PetriNetRGCalculator rgCalc = new PetriNetRGCalculator(net, new PTNetInterpreter());
        rg = rgCalc.calculate();
    }

    /**
     * Checks whether the given net ...
     * 1. ... is weak sound.
     * 2. ... has no dead transitions (each transition must participate in at
     *    least one firing sequence).
     */
    public boolean isSound() {
        calcDeadTransitions();
        return isWeakSound() && deadTransitions.size() == 0;
    }

    /**
     * Checks whether each transition of the given net participates in at least
     * one process instance that starts in the initial state and reaches the
     * final state.
     */
    public boolean isRelaxedSound() {
        calcNotParticipatingTransitions();
        return notParticipatingTransitions.size() == 0;
    }

    /**
     * Checks whether ...
     * 1. ... any process instance coming from the initial state will reach the final state
     * 2. ... the final state is the only terminal state
     */
    public boolean isWeakSound() {
        // 1. Each leaf must be the end marking
        calcDeadLockMarkings();
        // 2. No markings except end markings may have a token in end places
        calcImproperTerminatingMarkings();
        return deadLockMarkings.size() == 0 && improperTerminatingMarkings.size() == 0;
    }

    /**
     * Calculates all markings which are leaves and are deadlocks
     * (i.e. which aren't final markings).
     */
    public void calcDeadLockMarkings() {
        if (deadLockMarkings != null)
            return;
        deadLockMarkings = new HashSet<Marking>();
        for (Marking m : rg.getLeaves()) {
            if (m.isDeadlock()) {
                deadLockMarkings.add(m);
            }
        }
    }

    /**
     * Calculates dead transitions, i.e. those transitions which aren't on any
     * path from beginning to end.
     */
    public void calcDeadTransitions() {
        if (deadTransitions != null)
            return;
        // Assume that all transitions are dead ...
        deadTransitions = new HashSet<Transition>();
        deadTransitions.addAll(net.getTransitions());
        // ... then remove those transitions which appear in the reachability graph.
        deadTransitions.removeAll(rg.getFlowObjects());
    }

    /**
     * Calculates markings which have a token in the end place and in other places.
     */
    public void calcImproperTerminatingMarkings() {
        if (improperTerminatingMarkings != null)
            return;
        improperTerminatingMarkings = new HashSet<Marking>();
        for (Marking marking : rg.getMarkings()) {
            // if the end place has a token but doesn't hold all tokens of the net
            if (marking.getNumTokens(net.getFinalPlace()) > 0
                    && marking.getNumTokens(net.getFinalPlace()) != marking.getNumTokens()) {
                improperTerminatingMarkings.add(marking);
            }
        }
    }

    /**
     * Calculates the set of transitions needed for checking relaxed soundness:
     * 1. Get all leaves which are valid final markings.
     * 2. From each leaf, go back to the root and collect the transitions on the way.
     * 3. Compare the collected transitions to the total set.
     */
    public void calcNotParticipatingTransitions() {
        if (notParticipatingTransitions != null)
            return;
        notParticipatingTransitions = new HashSet<Transition>(net.getTransitions());
        for (Marking marking : rg.getLeaves()) {
            if (marking.isFinalMarking()) {
                for (ReachabilityPath<Transition, Marking> path : rg.getPathsFromRoot(marking)) {
                    notParticipatingTransitions.removeAll(path.getFlowObjects());
                }
            }
        }
    }

    public Set<Marking> getDeadLockMarkings() {
        return deadLockMarkings;
    }

    public JSONArray getDeadLocksAsJson() throws JSONException {
        return markingsToJsonWithPath(this.getDeadLockMarkings());
    }

    public JSONArray getImproperTerminatingsAsJson() throws JSONException {
        return markingsToJsonWithPath(this.getImproperTerminatingMarkings());
    }

    public JSONArray getDeadTransitionsAsJson() {
        JSONArray deadTransitions = new JSONArray();
        for (Transition trans : this.getDeadTransitions()) {
            deadTransitions.put(trans.getResourceId());
        }
        return deadTransitions;
    }

    public JSONArray getNotParticipatingTransitionsAsJson() {
        JSONArray notParticipatingTransitions = new JSONArray();
        for (Transition trans : this.getNotParticipatingTransitions()) {
            notParticipatingTransitions.put(trans.getResourceId());
        }
        return notParticipatingTransitions;
    }

    public Set<Transition> getDeadTransitions() {
        return deadTransitions;
    }

    public Set<Marking> getImproperTerminatingMarkings() {
        return improperTerminatingMarkings;
    }

    /**
     * Calculates the path of the given marking (how to get to the given marking
     * from the initial marking).
     * @param m Marking
     * @return JSON representation including the marking and the path
     * @throws JSONException
     */
    private JSONObject markingToJsonWithPath(Marking m) throws JSONException {
        JSONObject markingWithPath = new JSONObject();
        markingWithPath.put("marking", m.toJson());
        markingWithPath.put("path", rg.getPathFromRoot(m).toJson());
        return markingWithPath;
    }

    /**
     * Calls markingToJsonWithPath() on each marking of markings.
     * @param markings
     * @return [markingToJsonWithPath(marking1), markingToJsonWithPath(marking2), ...]
     * @throws JSONException
     */
    private JSONArray markingsToJsonWithPath(Collection<Marking> markings) throws JSONException {
        JSONArray markingsWithPath = new JSONArray();
        for (Marking marking : markings) {
            markingsWithPath.put(markingToJsonWithPath(marking));
        }
        return markingsWithPath;
    }

    public Set<Transition> getNotParticipatingTransitions() {
        return notParticipatingTransitions;
    }
}
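A sketch of how the checker is meant to be driven, based only on the class above (its comment says calculateRG() must run before any check). The PetriNet construction itself is elided, since its builder API isn't shown here.

// Hypothetical driver; assumes a de.hpi.petrinet.PetriNet `net` built elsewhere.
public class SoundnessReport {
    public static void report(de.hpi.petrinet.PetriNet net) throws org.json.JSONException {
        PetriNetSoundnessChecker checker = new PetriNetSoundnessChecker(net);
        checker.calculateRG(); // mandatory first step: builds the reachability graph

        System.out.println("sound:         " + checker.isSound());
        System.out.println("weak sound:    " + checker.isWeakSound());
        System.out.println("relaxed sound: " + checker.isRelaxedSound());

        // Diagnostics: each marking comes paired with a path from the initial marking.
        System.out.println(checker.getDeadLocksAsJson());
        System.out.println(checker.getImproperTerminatingsAsJson());
        System.out.println(checker.getDeadTransitionsAsJson());
    }
}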
Frequently Asked Questions about BthPS3

Got questions? Who can blame you 😅 we can provide some answers, though! Read on, traveler!

How to fix this setup message?

Your Bluetooth isn't working 🙂 If you're on a laptop, make sure you haven't disabled wireless via a physical switch or a key combination (depends on the device model). On a desktop, make sure you actually have a Bluetooth dongle plugged in 😉 If you had other solutions like ScpToolkit or AirBender installed, make sure they have been removed completely and you run stock drivers. If you don't see the little Bluetooth tray icon in your taskbar, chances are your Bluetooth isn't working or turned on. Fix it and setup will be happy 😘

What Bluetooth hosts are supported?

In short: all of them manufactured within the last decade and running proper stock drivers (meaning no ScpServer/ScpToolkit, no AirBender; stock, as the manufacturer intended). For details see this article.

There's a catch: only host radios attached via USB are supported! This includes the majority of external dongles and integrated cards (they use USB under the hood to connect to the rest of the system). So if your device is using something more exotic like I²C or UART, I'm afraid that's not gonna work 😔

What controllers are supported?

The genuine original Sony hardware; anything else is a nice-to-have that may or may not work ✨ This is unfortunately impossible to answer 100% correctly. These drivers have been designed with compromises in mind. They aim to support the original genuine Sony SIXAXIS/DualShock 3 (and Navigation, Move) controllers while operating within the realm of possibilities the Microsoft Bluetooth stack offers and allows. The DualShock 3 (or DS3 for short) has been a fairly popular piece of hardware and many clones have arisen over time, some coming close to the quality of the original, some... well, not quite as much. Aftermarket devices spoof (forge) both the hardware identification information that Windows sees and the labels and manufacturer notes on the housing itself. There simply is no rock-solid way to properly identify these devices and separate the good from the ugly. That's the inconvenient truth; any other statement would be a wild guess and not fact. For details see this article.

Can I use my wireless keyboard/mouse/headphones with this?

Yes, that's the whole purpose of this design 😉 BthPS3 extends the existing vanilla Bluetooth stack, it doesn't replace it (like ScpToolkit and the like did). This means it can never be as close to the original PlayStation Bluetooth stack as other solutions (we need to play by Microsoft's design rules, remember?), but the trade-off of keeping your stock wireless functionality should be worth it.

How many devices can I connect at the same time?

There is no definitive answer to that one, as it depends heavily on the Bluetooth host hardware (quality; antenna design, size and position) and the amount of "noise" in your environment (Bluetooth is a fairly "weak" protocol compared to all the other radio chatter that's constantly happening in a common household). Users have reported all sorts of working constellations, like up to 6 controllers connected and working concurrently without any human-noticeable delay. So it's up to you to figure this one out! 😁

Can it emulate another common controller, like Xbox One?

Controller emulation is not the job of these drivers; they provide the plumbing required to get the controllers connected to Windows (and stay connected and keep talking), nothing more, nothing less.
Other drivers (which you can find on this site) handle the controller-specific work required.

Is there any noticeable input lag over Bluetooth?

Another stellar question! With no definite answer 😅 The truthful answer would be: don't know, don't care, since it hasn't been measured with scientific equipment. The more down-to-earth answer comes from simple experience and interaction, human to machine: no. You might feel it working better or worse compared to USB, real or placebo. Those who ask this question usually just wanna hear "nope, it's all fine" so that they can move on. Well, there you have it, you can move on now 😘

Why is the DualShock 4 even supported?

Because I can 😜 literally. It wasn't much extra work to add DS4 compatibility, as under the hood it operates quite similarly to the DS3, without the unnecessary quirks. The DualShock 4 works natively without any custom drivers on Windows if paired in "PC mode" (PS and Share buttons pressed at the same time until the light bar flashes rapidly), but a little-known "secret" about this device is that by default it operates in "PS mode" (PlayStation Bluetooth compatible), which BthPS3 can emulate! For now this doesn't really have any real-world advantages, but it leaves a backdoor for experimentation if adventurous developers wanna talk to it the way the PlayStation originally does.

How do I uninstall this?

In case you don't want/need the software anymore, or you're getting this setup message: simply head over to Apps & features and uninstall from there. Follow the instructions of the uninstaller and you're all set! 👋
The main window¶

Using cragl -> connect or the shortcut Shift + A launches the connect main window, which looks like the following:

At the top you see icons for all installed tools. Clicking the cragl logo at the top right navigates you to our website. The connect window contains three tabs:
- all tools
- installed tools
- settings

The following describes all sections.

The first tab, all tools, shows you all available cragl tools.
1) Use this input to search for a specific tool.
2) This list shows you all available tools. To get more information on a tool, simply double click the small tool image or drag it to the right-hand side of the window.
3) This is the current tool to show more information on.
4) This section gives you more information about the tool.
5) In here you find the following buttons:
- buy: Clicking it opens the web browser and lets you buy one or multiple licenses.
- install: Once you have purchased a license, click this button to enter your install code or log in with your account. This will download the tool and install the license for you automatically.
- trial: Click this button to download a fully featured, time-restricted trial of the current cragl tool.
- more info: Click this button to browse to our website to get more information about the tool.

The second tab, installed tools, helps you manage your installed cragl tools.
1) The top table shows you information about your current connect version. In the update column you can update connect directly inside the connect window. Connect informs you automatically when there is an update available.
2) The bottom table shows you all installed cragl tools. If you don't want to load a cragl tool for the time being, you can exclude it from loading when you launch NUKE. Just un-check the load option of the specific tool. When you close connect you'll be asked to restart NUKE so that the changes apply. The next time you launch NUKE the tool won't be loaded anymore. To restore it, just tick the according "load" checkbox again. To uninstall a cragl tool, click the according "uninstall" button in the uninstall column.

The third tab, settings, shows you some additional settings to adjust connect to your needs.
1) This is the connect install location.
2) Here you can set a proxy in case your machine is using one.
3) Here you can set the port if your machine uses a proxy.
4) Click this button to test the proxy configuration.
5) This is the MAC address of your machine. Node-locked licenses are locked to this MAC address.
6) Enabling this checkbox will cache tool images and speed up the loading time of connect. We recommend keeping this checkbox enabled for better performance.
7) Click to flush the image cache.
8) Enabling this checkbox will show you a notification when an update for one of our tools is available. We recommend keeping this checkbox checked so that you are informed about our tool updates and always have access to the latest and greatest.
9) Keeping this checkbox checked ensures connect is connected to our server so that it can retrieve important tool information. In case your machine is offline and there is no need to connect to our server, you can turn this off. In some cases this might also speed up launching connect and other tools.
10) Click to show the connect log. Each tool logs information into the connect log.
11) Click this button to browse to our website.
12) Click this button to send us some feedback about our tools.
Radiology requires countless hours searching for tiny lesions, creating distance and contour annotations, and filling out checklists to determine stages of disease. These tasks are onerous and error-prone, resulting in high costs and frequent misdiagnoses. Thankfully, the global impact of deep learning is now improving this process for radiologists.

Using the latest deep learning technology in an intelligent cloud platform, Arterys, a startup focused on streamlining the practice of medical image interpretation and post-processing, is working to address these deficiencies. Using an AI-based contouring algorithm, the company is reducing the time required to calculate ventricular volumes from 30-60 minutes to just a handful of seconds.

Dan Golden, Director of Machine Learning at Arterys, will join us at the Deep Learning in Healthcare Summit to share the technology behind their software and how they strategised for proving its safety and efficacy to become the first technology ever cleared by the FDA that leverages cloud computing and deep learning in a clinical setting. I asked Dan a few questions ahead of the summit to learn more.

Can you tell us a bit more about your work?

I'm the Director of Machine Learning at Arterys, where we have an incredible team of deep learning researchers creating the next generation of clinical radiological decision support systems. We're a small team, so my responsibilities are quite broad; beyond coordinating team projects, I work on everything from data sourcing and cleaning, to model training, to scientific study design for regulatory clearance. Our team has done some really amazing work; we were really excited to get FDA clearance in January 2017 for our first deep learning-based product. That product, which is a web-based, zero-footprint, cardiac MRI post-processing suite, is the first technology combining cloud and deep learning to be cleared by the FDA.

How did you begin your work in the deep learning field?

I started working on medical machine learning while a postdoc in the Stanford radiology department in 2012. I worked on a few different projects, using MRI and CT images to predict outcomes for cancer patients. At the time, we were still making models using hand-engineered features, with the lofty goal of calculating p-values below 0.05. Back then, the prospect of creating a real clinical application seemed far away. Once I moved into industry in 2013, the need to create a real commercial product inspired me to focus on the latest and greatest technology which, at the time, was the nascent field of deep learning. I've never looked back; with hand-crafted features you can certainly publish a paper, but with deep learning you can go so much further and really create a transformative product.

What are the key factors that have enabled recent advancements in medical imaging?

At Arterys, we're confident that the future of medical imaging will be in the cloud. Cutting-edge deep learning research is certainly crucial, but equally important is how the recent advances in cloud infrastructure allow our products to scale along with the deep learning infrastructure that underlies them. GPU-enabled cloud instances and the proliferation of worldwide availability regions have been critical to the international success of our product.
Our application is 100% cloud-based, and the distributed architecture that allows us to process multi-gigabyte studies with real-time distributed rendering and deep learning inference in dozens of countries simultaneously would not have been possible even a few years ago.

Which areas of healthcare do you think deep learning will benefit the most, and why?

Advances in deep learning in the last few years have transformed the fields of both computer vision and natural language processing. It's no surprise that medicine can benefit from these advances, given the copious amount of images and free text in patient medical records. Previous attempts to automate the prediction of patient outcomes were crippled by the inability to efficiently process unstructured medical data, such as clinician-dictated free-text reports and radiological and histopathological images. With deep learning, these extremely important data sources can now be subsumed, which will allow automated systems to be accurate enough to really influence patient care.

What deep learning advancements in healthcare would you hope to see in the next 3 years?

A patient's electronic medical record is an incredibly complicated morass of disparate data types; it can include clinicians' free-text notes, confirmed or suspected symptoms and diagnoses, billing codes, radiological images and reports, lifestyle information, laboratory test results, and so much more. The human brain is well equipped to make sense of this multimodal information for individual patients, but any individual deep learning model is not. Recent work on deep learning-based image captioning and text-based image retrieval gives me hope that we'll soon be able to combine all these sources of data into one beautiful and efficient model; the most accurate predictive model will surely be the one that can understand the electronic medical record in its entirety.

What do you see in the future for Arterys?

Although we've worked hard to automate some of the most complicated parts of the cardiac post-processing workflow, we're not done yet. We expect to continue adding automated features to our cardiac product, while also expanding our product offerings to include efficient workflows and automation for other diseases. We've only scratched the surface of what deep learning can do in this space, and we're excited to keep making radiologists' jobs easier and more effective!

Confirmed speakers include Junshui Ma, Senior Principal Scientist, Merck; Nick Furlotte, Senior Scientist, 23andMe; Muyinatu Bell, Assistant Professor, Johns Hopkins University; Saman Parvaneh, Senior Research Scientist, Philips Research; David Plans, CEO, BioBeats; and Fabian Schmich, Data Scientist, Roche. View more details here. Tickets are now limited for this event. Book your place now. Can't make it to Boston? Join us at the Machine Intelligence in Healthcare Summit in Hong Kong on 9-10 November. View all upcoming events here.

Opinions expressed in this interview may not represent the views of RE•WORK. As a result some opinions may even go against the views of RE•WORK, but they are posted in order to encourage debate and well-rounded knowledge sharing, and to allow alternate views to be presented to our community.
Infinite Mana In The Apocalypse – Chapter 1023 – The Dao of Conquest!

The Star of Conquest began to spin brilliantly as it pulsed with a great white light that washed over Noah endlessly.

Seven Universes' worth of the Light of Conquest; where exactly would he put it? What could he Empower with the gathered Light of Conquest that nobody had actualized before?!

"Let the Light of Conquest flow into the Seven Deadly Sins."

So it was Noah alone who experienced the droves of the Light of Conquest surrounding him and sinking into his Origin, as the flash of light died down soon after. The light seemed extremely domineering, as none knew what it was just yet; but those living this deep within the Universal Core of the Dark Universe had long since learned to ignore the ridiculous and absurd events that occurred throughout it from time to time!

To better picture it, Noah stared at his extremely long Rank Panel where his many Absolute Abilities lay. Those under <> had begun to shine with a brilliant white luster in the blue rank panel, Noah sensing his Soul humming with a unique power as soon after, he saw an identical set of words appear from each Deadly Sin. Every single one of the powerful Seven Deadly Sins abilities was bathed with the Light of Conquest as they became empowered to do something they had never done before!

It was, of course, the Absolute Abilities of the Seven Deadly Sins that he passively used to strengthen himself, as they contributed to the crazy boosts he currently held. Out of the several choices he weighed as he stared at the vast number of abilities in his Rank Panel, this one shone with a grand golden luster, and he trusted himself enough to simply carry out the process.

Sin of Gluttony

A grand crimson light later, whatever had been called out came to fruition as a horrifying Manifestation of the Sin of Gluttony made its appearance for the first time. His call reverberated throughout the deathly chaotic void as in the next second, a hellish-looking demonic runic circle appeared in the void of space, spreading out to over a hundred meters in size as it shone with a grand red light!

A distorted humanoid shape with huge rolls and masses of flesh that looked like an outsized abomination, its over-100-meter-tall stature appearing extremely grotesque as deformed limbs shot out from several regions of its body... and one could see numerous sharp-toothed mouths twisting horribly across its frame.

This... was only a single manifested Sin. There were still six more!

A cackle came out of his skeletal skull, as this Tyrannical Lich Emperor would be the one to test the Absolute Abilities empowered by the Light of Conquest.

Noah's eyes shined brilliantly as he stared at it with his main body, his sight reflecting this Star of Conquest in several different universes as currently, his Primordial Ruination Clone in the Necrotic Universe was at the forefront of the Undead Legion as above it, a brilliant white Star was shining gorgeously!
Magento 2 is one of the leading eCommerce platforms in the world, offering everything from website hosting and custom development solutions to an intuitive administrative interface. Magento 2 developers are experienced professionals who can help clients create an online shopping experience tailored to their specific needs, set up a complete and secure shopping cart system, and customize their site as needed. These developers are knowledgeable in all aspects of the platform, and may even have special expertise in areas like product engineering, custom marketing solutions and search engine optimization.

Here's some projects that our expert Magento 2 Developers made real:
- Setting up Google product feeds for seamless cross-platform integration
- Designing unique logos and creative advertising materials
- Integrating user-friendly social media platforms like Facebook and Instagram
- Developing custom software modules for optimized functionality
- Troubleshooting existing issues with themes and payment gateways
- Migrating eCommerce stores from other providers like OpenCart and Shopify to Magento 2

Our team of developers have provided satisfactory solutions to all sorts of needs in the past. If you're looking for robust solutions tailored to your specific project goals, you can hire a Magento 2 Developer on Freelancer.com to make your vision a reality! From 59,696 reviews, clients rate our Magento 2 Developers 4.84 out of 5 stars.

Hire Magento 2 Developers

Responsibilities: Should have strong knowledge of Node.js and frameworks. Hands-on experience with JS, ES6, Node.js, MongoDB, building REST APIs and GraphQL, Restify. Strong experience with MongoDB. Experience in integration with data storage solutions [RDBMS, NoSQL DB]. Experience working with WebSockets is a plus. Experience working with Redis and its various applications like task queues etc. Should have experience and knowledge of scaling, data protection, and security considerations. Good understanding of revision control tools, such as GitHub, JIRA. Practical knowledge of Git/Bootstrap/Grunt/Babel/Webpack. In-depth understanding of the entire web development process (design, development, and deployment). Understanding of AWS services – EC2, S3, CloudFront, SES, code deployment. Exper...

***Please provide evidence of previous experience with Magento 2 development when applying for this project. If you do not have this then please do not apply***

I am looking for a skilled Magento 2/Adobe Commerce developer to assist with various tasks for my project. The ideal candidate should have experience in theme customization, module development, and performance optimization. Specific tasks for this project include:
- Update the Magento 2 version to the latest release (currently on 2.4.2-p1), as well as the hosting platform and associated databases.
- Our site has a light-to-moderate number of plugins and payment modules, as well as a small amount of existing custom work that will need updating/testing.
- Fix a variety of small bugs that currently exist, organized on a Trello board.
- Ex...

I am looking for a skilled freelancer who can help me with publishing a Figma file to Magento 2.

Size of the Figma file: Small (up to 10 artboards)
Specific requirements: No, but I want the implementation to look as close as possible to the design.
Timeline for the project: Urgent (1-2 weeks)

Ideal Skills and Experience:
- Proficiency in Figma and Magento 2
- Strong understanding of web development and design principles
- Experience in converting Figma designs to Magento 2
- Attention to detail and ability to replicate designs accurately

I have created 3 design Figma files for production on my Magento site. I am looking for a professional to review the attached files and publish them to the Magento site. Of course, the site needs to be responsive. Since there are 3 files, it will take 3...
In the newest Glyphs version, when you place a background image (1000 pt tall) and edit the sidebearings (the issue only occurs with the left one), the image jumps to a weird position far above where it should be. Another thing that sometimes happens: if you trace the background a bunch of times for certain characters, suddenly Glyphs just highlights the whole placed image, as if it would trace only the rectangle itself. Only a restart of Glyphs helps then.

Do you mean using the tracing tool? Or manual drawing over the image?

Tracing tool, yes. Do you also see that problem? For me it always happens; sometimes the background image even jumps so far that it is hard to access again. Another thing I have noticed: with cmd+shift+cursor you can still edit the RSB in steps of 10, with cmd+cursor in steps of 1, and with ctrl+shift+cursor the LSB in steps of 10, BUT with ctrl+cursor the expected step of 1 does nothing.

Another thing: when you place a bitmap TIFF that is 1000 pt in height, it fortunately already fits nicely in place at 100%. Unfortunately, though, it always lands bottom-aligned to the baseline. Given that the image is set up to fit exactly (as Glyphs expects it: 1000 pt = 1000 UPM), the image shouldn't do that, but rather line up with the bottom and top of the 1000 UPM rect. Otherwise every image needs manual shifting into place, which is very exhausting with hundreds of scans.

"Otherwise every image needs manual shifting into place, very exhausting with hundreds of scans." I'll bet Eric could provide a script to do that.

Ahem. There already is one. Or four of them, actually: https://github.com/mekkablue/Glyphs-Scripts/tree/master/Images

Oh, very nice! But still, why not make it fit by default, if it already "reads" the image size to fit? Thank you very much, mekkablue, I can work with that.

It does not read the image size to fit. It just reads the image's size and uses one point in the image for one font unit. So an image that is 300 pt high will be 300 units high in Glyphs. I will think about the default placement.

Ah, I understand. But no rush on that, scripting is also fine. The image-jumping bug still needs to be fixed though.

Can you try the latest update to see if the image jumping is fixed?

I did. It still jumps.

Just tried on OS 10.6.8 and this seems to be fixed over here (just spot-checking), thanks!! Will try later at home on 10.9. And a question: when I removed the background image in my folder structure (I know you shouldn't do that, but it can happen sometimes) and reopen a Glyphs file, it asks me to locate that image in the Finder, which is super. Unfortunately it does not tell me which filename it expects. Could this be integrated (like InDesign does it with missing links)?

There are scripts for that:

For the filename declaration: where can I see what file Glyphs should search for (apart from the red annotation in the glyph itself? For that I have to skip the search dialog). I think I am missing something, sorry for that. Mekkablue, thank you too.

What do you mean by file name declaration?
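For the bulk repositioning discussed above, a rough sketch of what such a script could look like in the Glyphs Macro window. The GSBackgroundImage API used here (.position in particular) is an assumption from memory, so verify against the documentation and prefer the linked mekkablue scripts for real work.

# Macro-window sketch: shift each selected layer's background image so a
# 1000 pt scan spans descender-to-ascender instead of sitting on the baseline.
# Assumes Glyphs' Python scripting API; treat attribute names as assumptions.
font = Glyphs.font
for layer in font.selectedLayers:
    image = layer.backgroundImage
    if image is None:
        continue
    # Move the image down by the descender depth (descender is negative).
    image.position = (image.position.x, layer.master.descender)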
How to create an extensible rope in Box2D?

Let's say I'm trying to create a ninja lowering himself down a rope, or pulling himself back up, all while he might be swinging from side to side or hit by objects. Basically like http://ninja.frozenfractal.com/ but with Box2D instead of hacky JavaScript.

Ideally I would like to use a rope joint in Box2D that allows me to change the length after construction. The standard Box2D RopeJoint doesn't offer that functionality. I've considered a PulleyJoint, connecting the other end of the "pulley" to an invisible kinematic body that I can control to change the length, but a PulleyJoint is more like a rod than a rope: it constrains the maximum length, but unlike a RopeJoint it constrains the minimum as well. Re-creating a RopeJoint every frame with a new length is rather inefficient, and I'm not even sure it would work properly in the simulation. I could create a "chain" of bodies connected by RevoluteJoints, but that is also less efficient, and less robust. I also wouldn't be able to change the length arbitrarily, but only by adding and removing a whole number of links, and it's not obvious how I would connect the remainder without violating existing joints.

This sounds like something that should be straightforward to do. Am I overlooking something?

Update: I don't care whether the rope is "deformable", i.e. whether it actually behaves like a rope, or whether it collides with other geometry. Just a straight rope will do. I can do graphical gimmicks while rendering; they don't need to exist inside the physics engine. I just want a RopeJoint whose length I can change at will.

Possible duplicate of: How do I make a rope from point A to B in Box2D?

I'm not so sure that rope in the demo you showed is a physics object. It looked like it wasn't colliding with the character or the cave walls. Could just use a spline. I use Box2D quite a bit, have no idea how to achieve this, and doubt that it's possible.

@Byte56: Not a dupe. The "extensible" bit is what is different. Also, I don't care if my rope is straight; I can add some fancy animation on top of that.

@Byte56: I wrote the rope in the demo -- it is true that it doesn't collide with anything. I'm using a spline to render it nicely, but internally it's just two points and a maximum distance. And I can change the length, which is what Box2D doesn't seem to let me do...

Do you want to make a "deformable" rope? I believe that's not possible in Box2D, as it's a rigid body physics engine; check soft body dynamics. I would suggest the Bullet library.

If you really want a "deformable" rope, I'll put my previous comment in an answer.

My "chain" would approximate a deformable rope, but this is not what I'm looking for -- see the update.

OK, I naively assumed that LibGDX wrapped all of Box2D, so this would be purely a Box2D problem. It turns out that vanilla Box2D, at least in trunk, has a function called b2RopeJoint::SetMaxLength. I've added it and got a pull request merged within minutes. It is now available (and working) in LibGDX nightlies.
If you want a "deformable" rope, that isn't possible in Box2D, as Box2D is a rigid body physics engine; check soft body dynamics. I would suggest the Bullet library.

A "simpler" solution using Box2D would be to "emulate" the variation in length by hiding some of the rope (of fixed length) outside the visible world / not drawing some of the rope. This would work like pulling the top extremity (b) of the rope (c) up or down. I would add some static objects (a) so that the ninja doesn't escape the visible world for too long...

Sorry, it seems my description still isn't clear. I don't care whether the rope is deformable; soft bodies are overkill, especially since I'm targeting a mobile platform. Your other idea is interesting, but I don't quite see how I would set this up using rigid bodies and joints. I could use a stick to simulate the rope, but it would constrain the distance between the ninja and the attachment point to be constant, whereas I just want to constrain it to a maximum.

I added the "deformable" only for completeness (you said you "didn't care", not that you "didn't want" :P)
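To close the loop on the asker's own fix, a sketch of how the patched joint can drive the climbing in libGDX (the framework the asker uses). getMaxLength()/setMaxLength() are the calls the accepted resolution names; the speed and clamp values are arbitrary assumptions for the example.

// Sketch: lengthen or shorten the rope joint a little each frame.
import com.badlogic.gdx.physics.box2d.joints.RopeJoint;

public class RopeClimb {
    private static final float CLIMB_SPEED = 2f; // metres per second, arbitrary

    /** Call once per frame; `delta` is the frame time in seconds. */
    public static void update(RopeJoint rope, float delta, boolean climbingUp) {
        float length = rope.getMaxLength();
        length += (climbingUp ? -CLIMB_SPEED : CLIMB_SPEED) * delta;
        // Clamp so the ninja neither reaches the anchor nor runs off-screen;
        // the limits here are assumptions for the sketch.
        length = Math.max(0.5f, Math.min(length, 15f));
        rope.setMaxLength(length);
    }
}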
<?php
#####################################################################################
#
# Functions for combining payloads into a single stream that the
# JS will unpack on the client-side, to reduce the number of HTTP requests.
#
#####################################################################################

#
# Takes an array of payloads and combines them into a single stream, which is then
# sent to the browser.
#
# Each item in the input array should contain the following keys:
#
#   data         - the image or text data. image data should be base64 encoded.
#   content_type - the mime type of the data
#
function mxhr_stream($payloads) {
    $stream = array();
    $version = 1;
    $sep = chr(1);      # control-char SOH/ASCII 1, separates fields within a record
    $newline = chr(3);  # control-char ETX/ASCII 3, separates records

    foreach ($payloads as $payload) {
        $stream[] = $payload['content_type'] . $sep .
                    (isset($payload['id']) ? $payload['id'] : '') . $sep .
                    $payload['data'];
    }

    echo $version . $newline . implode($newline, $stream) . $newline;
}

#
# Package image data into a payload
#
function mxhr_assemble_image_payload($image_data, $id = null, $mime = 'image/jpeg') {
    return array(
        'data' => base64_encode($image_data),
        'content_type' => $mime,
        'id' => $id
    );
}

#
# Package html text into a payload
#
function mxhr_assemble_html_payload($html_data, $id = null) {
    return array(
        'data' => $html_data,
        'content_type' => 'text/html',
        'id' => $id
    );
}

#
# Package javascript text into a payload
#
function mxhr_assemble_javascript_payload($js_data, $id = null) {
    return array(
        'data' => $js_data,
        'content_type' => 'text/javascript',
        'id' => $id
    );
}

#####################################################################################
#
# Send the multipart stream
#
if (!empty($_GET['send_stream'])) {
    $repetitions = 300;
    $payloads = array();

    #
    # JS files (currently commented out)
    #
    $js_data = 'var a = "JS execution worked"; console.log(a, ';
    for ($n = 0; $n < $repetitions; $n++) {
        # $payloads[] = mxhr_assemble_javascript_payload($js_data . $n . ', $n);');
    }

    #
    # HTML files (currently commented out)
    #
    $html_data = '<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"><html><head><title>Sample HTML Page</title></head><body></body></html>';
    for ($n = 0; $n < $repetitions; $n++) {
        # $payloads[] = mxhr_assemble_html_payload($html_data, $n);
    }

    #
    # Images
    #
    $image = 'icon_check.png';
    $image_fh = fopen($image, 'r');
    $image_data = fread($image_fh, filesize($image));
    fclose($image_fh);
    for ($n = 0; $n < $repetitions; $n++) {
        $payloads[] = mxhr_assemble_image_payload($image_data, $n, 'image/png');
    }

    #
    # Send off the multipart stream
    #
    mxhr_stream($payloads);
    exit;
}
?>
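The header comment says the JS unpacks this stream on the client side; that unpacker isn't shown, so here is a minimal sketch derived purely from the wire format mxhr_stream() defines (ETX \x03 between records, SOH \x01 between fields, first record is the protocol version).

// Client-side sketch: unpack the stream produced by mxhr_stream() above.
function parseMxhrStream(text) {
  const records = text.split("\x03").filter((r) => r.length > 0);
  const version = records.shift(); // "1"
  return records.map((record) => {
    const [contentType, id, data] = record.split("\x01");
    return { contentType, id, data }; // image data is still base64 here
  });
}

// Example: turn an image payload back into something the browser can render.
// const img = new Image();
// img.src = `data:${payload.contentType};base64,${payload.data}`;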
The MISQ Scholarly Development Academy

Background and Application Details

The purpose of this page is to outline the background to the MISQ Scholarly Development Academy and to provide details on how to apply. If you are interested in applying, read on!

The MISQ Scholarly Development Academy initiative follows in the footsteps of other MISQ initiatives that demonstrate our journal's role as a platform for engagement in our field (Rai 2017). As a platform for engagement, we have long moved beyond just processing manuscripts. We run workshops for authors and reviewers, give seminars around the world, engage in social media, publish research curations, and so on. The common thread through all of our activities is our mission of supporting IS scholars and scholarship. The motivation for the new initiative proposed here is that we need to do our part to help IS scholars who are systematically disadvantaged from producing the finest scholarship because they are suffering disproportionately from the emotional toll of an academic life, especially in the time of COVID. For further background on the Academy, please see the September 2021 Editor's Comments.

Purpose and Scope of the Scholarly Development Academy

Our goal is to identify segments of our scholarly field who are disadvantaged (both in general and also due to COVID-19) and offer a program to help them. Over time, we hope to address many deserving segments of the field, e.g., those who suffer from gender biases, racial or ethnic biases, physical disability biases, and so forth. In the first year, as a test case, we will focus on untenured female scholars. We are also welcoming of trans and non-binary scholars who partially or sometimes identify with the female gender and feel they would benefit from participating in a women-centred initiative.

Gender bias is a well-known scourge in society (Reskin 2000) and in science (Huang et al. 2020; Winslow and Davis 2016). Even when revealed, gender biases are often ignored, discounted, or unsupported (Cislak et al. 2018; Garcia-Gonzalez et al. 2019; Handley et al. 2015). We know that COVID-19 has exacerbated negative outcomes for women in society (Dang and Nguyen 2021) and in science (Deryugina et al. 2021; Myers et al. 2020; Pinho-Gomes et al. 2020). We also know that there are gender biases in the IS field, and we have seen calls to address them (Beath et al. 2021; Gupta et al. 2019; Windeler et al. 2020). In short, the evidence shows that we must do something.

In the second year, we plan to focus on scholars from the Global South (Dados and Connell 2012). Global South refers to both economically disadvantaged nation states as well as "peoples negatively impacted by contemporary capitalist globalization" (Mahler 2017). The term Global South "references an entire history of colonialism, neo-imperialism, and differential economic and social change through which large inequalities in living standards, life expectancy, and access to resources are maintained" (Dados and Connell 2012, p. 13). Thus, it may include scholars in disadvantaged regions of the world, but also indigenous or BAME scholars in high-income countries. In academia, the dominance of voices from the global north is a well-known issue and, even in collaborations aiming to reduce the inequalities, it is often the scholars of the north that set the agenda (Green 2019).

In all of these cases, we will also take an intersectional approach, that is, recognize that inequalities coexist, such as gender, racial, economic and social inequality.
For instance, while gender is our focus in the first test case, we will still account for other factors too (Britton and Logan 2008; Payton et al. 2021; Ryan and El Ayadi 2020). For each segment of the community we support, we will coordinate our activities with other relevant activities in the field. For instance, in the case of female IS academics, we will coordinate this initiative with the AIS Women’s Network (Loiacono et al. 2016) and other efforts to advance women in IT, such as the ImPACT IT project (Loiacono et al. 2020). In the case of IS academics of the global south, we will coordinate the initiative with the IFIP 9.4 working group. We are also leveraging our experience with related, successful initiatives. In particular, the IS field is very familiar with mentoring young academics through consortia (Gable et al. 2016). MISQ is also very familiar with running author-development workshops (Rai 2017). The initiative we have planned can be viewed as a combination of a junior faculty consortium and an author development workshop. Structure and Focus The MISQ Scholarly Development Academy will be an annual consortium with two foci: paper development, to help us overcome biases in publishing (Lundine et al. 2018), and career development, to help address biases in access to mentors and career support (Mummery et al. 2021). The two foci are aimed at supporting generativity (i.e., broadened thought-action repertoires and creativity in scholarship) and growth (gains in enduring personal and social resources in one’s academic career) (Frederickson and Losada 2005) needed for scholarly flourishing. Taking a strength-based approach, mentors in the academy will help participants to learn how to build on the strengths of their existing work to enhance its publishability, and build on their personal strengths as a scholar to enhance their research career. Overall, the goal is to support the flourishing of the next generation of IS scholars, help release some of their burden, and renew their passion for scholarship. Given the scholarly focus of MISQ, compared to the broader focus of other institutions (e.g., the Association for Information Systems, which also supports the teaching components of an academic career), this initiative will focus on scholarship and the (re)kindling of joy in scholarship. Much like other junior faculty consortia, applicants will be admitted to one and only one cohort of the MISQ Scholarly Development Academy (e.g., the 2022 cohort, the 2023 cohort, etc.). Potentially, events may later be held for cohorts from a given year, as occur for ICIS Doctoral Consortium reunions. Plenary Introduction Session - February 24, 2022 from 8:00am - 10:30am UTC - for Europe, Asia, Africa, India, and Australia - February 24, 2022 from 8:00pm - 10:30pm UTC - for North and South America Paper Development Session - March 7, 2022 from 8:00am - 11:00am UTC - for Europe, Asia, Africa, India, and Australia - March 7, 2022 from 8:00pm - 11:00pm UTC - for North and South America Career Development Session - October 7, 2022 from 8:00am - 11:00am UTC - for Europe, Asia, Africa, India, and Australia - October 7, 2022 from 8:00pm - 11:00pm UTC - for North and South America The MISQ Scholarly Development Academy will take place entirely online and begin with an introductory plenary session in mid-to-late February. The two foci of the Academy will be addressed through a paper development session in March, and a career development session in October. 
Any changes to the specific days/times will be posted to this website as more details are confirmed. To accommodate time-zone differences, each session will be run at two times of day – one that suits Region 1 attendees and one that suits Region 2-3 attendees.

We are proud to have a remarkable group of mentors for the inaugural Academy, listed below alphabetically. They represent a cross-section of current MISQ Editorial Board members, past members, and other leaders in the IS scholarly community. They also reflect substantial diversity across topics, methods, world regions, gender, and career experience. We will accept 90-100 mentees in the first cohort, with a ratio of roughly 3 mentees to 1 mentor. We will seek to allocate mentors to mentees based on potential fit (e.g., topic, method, region). We will make the best matches we can, but we cannot entertain specific requests.

Ritu Agarwal, University of Maryland College Park
Indranil Bardhan, The University of Texas at Austin
Kathy Chudoba, Utah State University
Debbie Compeau, Washington State University
Jens Dibbern, University of Bern
Amany Elbanna, Royal Holloway, University of London
Xiao Fang, University of Delaware
Peter Gray, The University of Virginia
Bin Gu, Boston University
Traci Hess, University of Massachusetts, Amherst
Weiyin Hong, HKUST
Dirk Hovorka, The University of Sydney
Carol Hsu, University of Sydney
Marta Indulska, University of Queensland
Tina Blegind Jensen, Copenhagen Business School
Mark Keil, Georgia State University
Thomas Kude, ESSEC Business School
Ting Li, Erasmus University
Kai Lim, City University of Hong Kong
Magnus Mähring, Stockholm School of Economics
Ann Majchrzak, University of Southern California
Mary Beth Watson-Manheim, University of Illinois Chicago
Eric Monteiro, Norwegian University of Science and Technology
Ning Nan, University of British Columbia
Shan Pan, University of New South Wales
Niki Panteli, Royal Holloway, University of London
Raghav Rao, University of Texas San Antonio
Jan Recker, University of Hamburg
Michael Rosemann, Queensland University of Technology
Sundeep Sahay, University of Oslo
Nilesh Saraf, Simon Fraser University
Saonee Sarker, Lund University
Susan Scott, London School of Economics
Priya Seetharaman, Indian Institute of Management, Calcutta
Maha Shaikh, King’s College London
Choon Ling Sia, City University of Hong Kong
Heshan Sun, University of Oklahoma
Chuan Hoo Tan, National University of Singapore
Monideepa Tarafdar, University of Massachusetts Amherst
Hock Hai Teo, National University of Singapore
Yu Tong, Zhejiang University
Lynn Wu, University of Pennsylvania
Xiaoquan (Michael) Zhang, Tsinghua University

Moderators/Co-Leaders of the Academy
Saonee Sarker, Lund University
Mari-Klara Stein, Copenhagen Business School
Andrew Burton-Jones, The University of Queensland
According to Backblaze, the company had 2,200 SSDs in service as of December 31. After several years of cautious use, it decided the fleet was finally large enough to report on, and some of the resulting data is striking.

Annual failure rate from 2019 to 2021

These figures need some explanation because they are easy to misinterpret. The key point is that the annualized failure rate (AFR) rises year over year, from 0.86% to 1.22%. The outlying values of 43.22% and 28.81% should be ignored; they reflect a curious effect that repeats itself in both SSDs and HDDs: the greatest number of failures occurs at the beginning of a product's useful life. The AFR is calculated using the following formula:

    AFR = (disk failures / (disk days / 365)) × 100

Knowing this makes the next section easier to understand (a short worked example appears at the end of this article).

Annual SSD failure rate in 2021 alone

This table is particularly interesting because it shows the failures for the most recent disks as well as for the oldest ones. The Crucial SSD has such a high AFR because only 80 drives are in service, with less than a month of use, and 2 of them failed. The Seagate case is similar: it has only 3 units, though after 33 months of use just one has failed. The important thing is to look at the reliability values together with their confidence intervals: Backblaze considers anything under 2% acceptable, and under 1% good. The number of units matters here, because the fewer drives there are, the wider (and less meaningful) the interval.

Quarterly vs. Cumulative

Here the same AFR data is presented in two ways. Quarterly figures reveal very steep peaks and show when the most units failed; cumulative figures are more stable over time, as they reflect longer-lasting and equally interesting changes. Comparing the two makes it easy to spot failure spikes at particular moments relative to the total period, and in both views the value always stays below the 2% threshold mentioned above.

How do old SSDs behave?

This data mirrors something we commented on before: SSDs that fail typically do so close to first use, and from there the fleet stabilizes. In other words, those that fail do so early, and those that last run without major problems throughout their useful life. The interesting thing about this graph is seeing how the AFR fluctuates over time as units are added to servers: failures peak within roughly the first month to the first year of service and then gradually stabilize. The values tend to sit at or below 1%, so we are really talking about very high reliability in almost all cases.
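As a quick sanity check of the AFR formula quoted above, here is a minimal sketch in Python. The Crucial-style numbers come from the article (80 drives, roughly a month of use, 2 failures); the fleet-wide figures are illustrative placeholders, not Backblaze's actual data.

    def afr(disk_failures: int, disk_days: float) -> float:
        """AFR = (disk failures / (disk days / 365)) x 100"""
        return disk_failures / (disk_days / 365) * 100

    # A tiny, young fleet: 2 failures across 80 drives in ~30 days.
    # Early-life failures on so few drive-days inflate the AFR enormously.
    print(f"Small-fleet outlier: {afr(2, 80 * 30):.2f}%")      # ~30.42%

    # A large, mature fleet: 25 failures across 2,200 drives over a full year
    # (failure count is a made-up example).
    print(f"Fleet-wide example:  {afr(25, 2200 * 365):.2f}%")  # ~1.14%

This is exactly why the article says to ignore the 43.22% and 28.81% outliers: dividing a couple of early failures by a handful of drive-days produces alarming percentages that say little about long-run reliability.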
BESTPRED is run in standalone mode by running the program bestpred (bestpred.exe on MS Windows) from a command line. When the program runs, the parameter file is read, the data file is read, calculations are performed, and output is written. Multiple animal records can be processed from the same file; during development of BESTPRED, a file of 150,000 records was routinely processed with no difficulty.

When running in standalone mode, data are read from an input file based on the value of the source parameter (Table 4.1). Records from sources other than 10 (AIPL Format 4) are converted into Format 4 records (see http://aipl.arsusda.gov/formats/fmt4.html) and passed downstream for processing. When using Source 10, only some of the fields from the complete Format 4 record are required (Table 4.2). Empty fields must be blank-filled so that column assignments correspond to the format. The required fields are the 17-byte cow ID, herd code, birth date, fresh (calving) date, parity, lactation length, and the number of test day segments. For each test day segment the following fields must be included: test day DIM, number of milkings weighed, number of milkings sampled, the DHI supervision code, milk-recorded days, milk yield, fat and protein percentages, and SCS. Previous days open is optional.

Table 4.1: Description of input data sources (columns: Source, Filename, Contents).

Up to 20 test day segments may be provided on a Format 4 record. Each segment is 23 bytes long: the first segment begins at column 251, the second at column 274, the third at column 297, and so on. Records may end with the final segment in a lactation and do not need to be padded to 710 columns. (A sketch of reading these segments appears at the end of this section.)

Table 4.2: Fields required for a minimal Format 4 record (columns: Byte Position(s), Num Bytes, Field Format [a], Data Type [b], Field Description).

When a complete Format 4 record is used, herd averages are calculated by subtracting the appropriate yield deviation (columns 201-216) from the standardized lactation yield (columns 188-200). If a lactation average is not provided, as in a minimal Format 4, a breed average value, specified in the bestpred.f90 file, is used.

Source 15 is identical to Source 10 except that herd average 305-d ME yields for milk, fat, protein, and SCS are read from the file format4.means, which should contain a record corresponding to each lactation in format4.dat. Both files should be sorted in the same order. The cow IDs and calving dates from the two files are checked against one another, and BESTPRED will halt if there is a mismatch.

If you want to account for days open in the previous lactation (0 to 999 d), write the value into columns 246 through 248 of your Format 4 file. The value will be passed downstream to the bestpred_fmt4 and bestpred subroutines. When a value of 0 is encountered, no adjustment is made.

[a] Field format codes: 0 = zero filler; A = alphanumeric data possible; P = packed decimal; X = numeric data only (use left zero fill).
[b] Data type codes: CH = character; CSL = signed number with sign in leading (first) separate position (zero filled).
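To make the segment layout concrete, here is a minimal sketch in Python of walking the test-day segments of a Format 4 record. The column arithmetic (first segment at column 251, 23 bytes per segment, up to 20 segments, records allowed to end at their last segment) comes from the description above; the file name format4.dat is taken from the Source 15 discussion, and everything else is illustrative rather than part of BESTPRED itself.

    def test_day_segments(record: str, max_segments: int = 20):
        """Yield the raw 23-byte test-day segments of one Format 4 record."""
        for i in range(max_segments):
            start = 250 + 23 * i           # column 251 is index 250 (0-based)
            segment = record[start:start + 23]
            if not segment.strip():        # records may end at the final segment
                break
            yield segment

    with open("format4.dat") as f:
        for line in f:
            for seg in test_day_segments(line.rstrip("\n")):
                print(repr(seg))

Decoding the individual fields inside each segment (DIM, milkings weighed and sampled, supervision code, yields, SCS) would follow the byte positions of Table 4.2 in the same fixed-width fashion.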
need help in tcl command usage for regsub

I am a new learner of Tcl. I have an issue when using regsub. Consider the following scenario:

set test1 [list prefix_abc_3 abc_1 abc_2 AAA_0]
set test2 abc
regsub -all ${test2}_[1-9] $test1 [list] test1

I expected $test1 to end up as [prefix_abc_3 AAA_0]. However, regsub has also removed the partially matched string, prefix_abc_3. Does anyone here have any idea how to make regsub act on exact words only in a list? I tried to find a solution on the net but could not get any clues/hints. I'd appreciate it if someone here can help me.

Use backslashes before the [1-9] (yielding \[1-9\]), because the interpreter will otherwise see 1-9 as a command to execute. \m and \M in regexps match the beginning and end of a word respectively. But you don't have a string of words in test1, you have a list of elements, and sometimes there's a difference, so don't mix the two. regsub only handles strings, while lsearch works with lists:

set test1 [list prefix_abc_3 abc_1 abc_2 AAA_0]
set test2 abc
set test1 [lsearch -all -inline -not -regexp $test1 "^${test2}_\[1-9\]\$"]

If the pattern is that simple, you can use the -glob option (the default) instead of -regexp and maybe save some processor time.

You have solved my doubt. Thanks for enlightening me, Potrzebie. Appreciated a lot :)

In 8.6: lmap element $test1 {regsub $test2 $element ""}

What exactly did you execute? When I type the commands above into tclsh, it displays an error –

% set test1 [list prefix_abc_3 abc_1 abc_2 AAA_0]
prefix_abc_3 abc_1 abc_2 AAA_0
% set test2 abc
abc
% regsub -all ${test2}_[1-9] [list] test1
invalid command name "1-9"

I'm unsure what you are trying to do. You start by initialising test1 as a list. You then treat it as a string by passing it to regsub. This is a completely legal thing to do, but it may indicate that you are confused by something. Are you trying to test your substitution by applying it four times, to each of prefix_abc_3, abc_1, abc_2 and AAA_0? You can certainly do that the way you are, but a more natural way would be to do

foreach test $test1 {
    regsub $pattern $test [list] testResult
    puts stdout $testResult
}

Then again, what are you trying to achieve with your substitution? It looks as though you are trying to replace the string abc with a null string, i.e. remove it altogether. Passing [list] as a null string is perfectly valid, but again may indicate confusion between lists and strings. To achieve the result you want, all you need to do is add a leading space to your pattern, pass a space as the substitution string and escape the square brackets, i.e.

regsub -all " ${test2}_\[1-9\]" $test1 " " test1

but I suspect that this is a made-up example and you're really trying to do something slightly different.

Edit: To obtain a list that contains just those list entries that don't exactly match your pattern, I suggest

proc removeExactMatches {input} {
    set result [list];   # Initialise the result list
    foreach inputElement $input {
        if {![regexp {^abc_[0-9]$} $inputElement]} {
            lappend result $inputElement
        }
    }
    return $result
}
set test1 [removeExactMatches [list prefix_abc_3 abc_1 abc_2 AAA_0]]

Notes:
i) I don't use regsub at all.
ii) Although it's safe and legal to switch around between lists and strings, it all takes time and it obscures what I'm trying to do, so I avoid it wherever possible. You seem to have a list of strings and you want to remove some of them, so that's what I use in my suggested solution. The regular expression commands in Tcl handle strings, so I pass them strings.
iii) To ensure that the list elements match exactly, I anchor the pattern to the start and end of the string that I'm matching against using ^ and $.
iv) To prevent the interpreter from recognising the [1-9] in the regular expression pattern and trying to execute a (non-existent) command 1-9, I enclose the whole pattern string within curly brackets.
v) For greater generality, I might want to pass the pattern to the proc as well as the input list (of strings). In that case, I'd do

proc removeExactMatches {inputPattern input} {
    . . .
    set pattern "^"
    append pattern $inputPattern
    append pattern "\$"
    . . .
    if {![regexp $pattern $inputElement]} {
        . . .
    }
}
set test1 [removeExactMatches {abc_[1-9]} {prefix_abc_3 abc_1 abc_2 AAA_0}]

passing the pattern inside curly brackets to minimise the number of characters that have to be escaped. (Actually I probably wouldn't use the quotation marks for the start and end anchors within the proc – they aren't really needed and I'm a lazy typist!) Looking at your original question, it seems that you might want to vary only the abc part of the pattern, in which case you might want to pass just that to your proc and append the _[0-9] as well as the anchors within it – don't forget to escape the square brackets, or use curly brackets, if you go down this route.

Hi Nurdglaw, thanks for looking into my doubt. Actually, you missed out the $test1 argument in your regsub command; that's the reason it flagged the error. Your trial command:

% regsub -all ${test2}_[1-9] <missed $test1 here> [list] test1

Your foreach example is one way to do the replacement, but it might not produce the output I'm looking for, because it replaces any partial match of abc_ with null. What I'm trying to do is remove only the elements that exactly match the abc_<digit> pattern. Below is what I expect to get:

before: test1 is the list [prefix_abc_1 abc_1 abc_0 abc_6 def] with llength = 5
after: test1 is the list [prefix_abc_1 def] with llength = 2

Hope this makes my question clear. Thanks :)
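As a cross-language illustration of the point running through this thread – anchor the pattern and filter list elements, rather than substituting over the flattened string – here is the same exact-match filter sketched in Python. The [0-9] range follows the removeExactMatches proc above (which is what the asker's expected output, with abc_0 removed, actually requires); the function name is illustrative.

    import re

    def remove_exact_matches(items, stem="abc"):
        # Anchored pattern: ^...$ means only whole elements match,
        # so "prefix_abc_3" survives while "abc_1" is removed.
        pattern = re.compile(rf"^{re.escape(stem)}_[0-9]$")
        return [item for item in items if not pattern.match(item)]

    print(remove_exact_matches(["prefix_abc_3", "abc_1", "abc_2", "AAA_0"]))
    # ['prefix_abc_3', 'AAA_0']

The design point is identical to the lsearch answer: treat the input as a collection of elements and test each one against an anchored pattern, instead of rewriting a single string in place.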
The last three weeks of the program were intense and stressful but fun. In fact, I have been feeling "under the weather" for the past few days, most likely because as stress levels went up my immune system went down. Unfortunate, but a little rest will do the trick. My goal was to be immersed in software development and meet interesting people in this industry. After nine weeks of Summer Academy I am a much better (and more confident) developer and I have met some amazing people. I have a lot to learn and I am eager to learn. There are many little things that expert programmers know; things that will be learned and applied in time. HackerYou's extensive network of instructors, mentors and students will be an asset as we move forward. My goal is to keep in touch with the friends that I have made during Summer Academy and make some new friends as my journey continues.

Building Real Products

Week 7 was spent putting the finishing touches on the art submission project that Paula and I brought with us to the program. It was exciting to see the project take form and become a reality. The Voix Visuelle Submission App is now on Heroku. The images and documents that artists submit are uploaded to the "cloud" (Amazon S3), and we have a Mandrill account for the email notifications that are sent. This app will be used by the artist centre for the first time in a few weeks.

Weeks 8 and 9 were spent building our final project, an education app; I pitched the original idea for the app to several students and a group formed from the ensuing discussion. It was a pleasure to work with Alexander Miloff, Nachiket Kumar and my wife Paula Franzini. We spent nearly a week discussing, planning, interviewing people, validating the idea, drawing sketches and so on. The experience was surreal, to be honest. The second week was spent developing the app and testing it live during one of the HackerYou lessons. It's hard to put into words the amount of learning, excitement, and stress that we went through. I tried to keep expectations low and not get too far ahead of myself; I'm not sure if I succeeded.

The result was Curri, an app to help promote student-teacher communication. The app allows instructors to build a clear and detailed curriculum map with "checkpoints" – each of which contains a learning expectation and success criteria – that can be consulted before, during, and after the learning experience. At the same time, it allows students to quickly and easily give feedback on how they feel about every checkpoint. The end result is that students know what they are learning and teachers know who is struggling and with which specific part of the curriculum. Curri is not just a project for HackerYou; we have decided collectively that we would like to keep working on it. Nachiket, Paula, Alexander and I are collaborating to bring this app to market. In the upcoming months we will improve the app, and we would like to partner with educators to bring it into real classrooms.

Part of The HackerYou Team

HackerYou courses have all of the qualities that make for exceptional learning; they are hands-on, collaborative, authentic, and led by experienced and passionate people. Registration is open now for the front-end bootcamp; the course begins January 27th, 2014 and ends March 28th, 2014. What is really exciting (for me at least) is that I will be part of the HackerYou team that makes the course possible. My goal is to contribute as much as I can to help build a great student experience.
This is a unique opportunity; I get to combine two passions (web development and education) and work with some great people. While in Toronto for the next few months, I am looking forward to mentoring in HackerYou's part-time programs and will try to be present at tech meetups and events. See you there?
Problem running JavaFX in maven plugins

daniel.armbrust.list at gmail.com
Fri Jan 4 06:28:41 UTC 2019

As a followup, the easiest workaround I found is this silly hack:

// Bump a counter so that each classloader gets a distinct value; guard
// against the property not being set yet on the first call.
String s = System.getProperty("javafxHack");
int i = (s == null) ? 0 : Integer.parseInt(s) + 1;
System.setProperty("javafxHack", Integer.toString(i));
// A unique javafx.version makes JavaFX cache the native libraries under a
// different path for each classloader.
System.setProperty("javafx.version", "mavenHack" + i);

which I run before calling PlatformImpl.startup(). This way, each classloader that launches JavaFX gets its own copy of the dll. It would be nice if the JavaFX code itself handled the dlls better to avoid this issue – this is a regression from JDK 8. But I do realize it's probably a rare use case... I run into it because I have multiple instantiations of a plugin that runs during a maven lifecycle that utilizes the tasks API, among other things. But this will also break for anyone that tries to use parts of JavaFX in a server like Tomcat, where classloaders are isolated. That might not be a common issue either, unless the very old feature request about JavaFX not supporting running headless gets fixed. I currently use our own hacked implementation of a Toolkit to inject when running headless, in order to make the tasks API work.

On 12/31/2018 09:54 PM, Dan Armbrust wrote:
> I'm trying to migrate a codebase to JDK 11 / OpenJFX, and have run into an issue I
> didn't have under JDK 8.
> We have a complex maven build process - parts of which include building our own maven
> plugins, and then executing those plugins in a different portion of the build.
> Maven uses isolated Plugin Classloaders for each plugin execution:
> When my plugin executes, it's going down a path that needs to start up the JavaFX
> subsystem - mostly to get support for tasks and such (we are actually building headless,
> with a hack of a HeadlessToolkit shimmed in to make JavaFX actually work headless) but -
> it would appear that because some other classloader in my build process already hit the
> JavaFX startup once... I fail on a native library load:
> Loading library glass from resource failed: java.lang.UnsatisfiedLinkError: Native
> Library /home/darmbrust/.openjfx/cache/11/libglass.so already loaded in another classloader
> java.lang.UnsatisfiedLinkError: Native Library
> /home/darmbrust/.openjfx/cache/11/libglass.so already loaded in another classloader
> at java.base/java.lang.ClassLoader$NativeLibrary.loadLibrary(ClassLoader.java:2456)
> at java.base/java.lang.ClassLoader.loadLibrary0(ClassLoader.java:2684)
> at java.base/java.lang.ClassLoader.loadLibrary(ClassLoader.java:2617)
> at java.base/java.lang.Runtime.load0(Runtime.java:767)
> at java.base/java.lang.System.load(System.java:1831)
> at com.sun.glass.utils.NativeLibLoader.loadLibraryInternal(NativeLibLoader.java:157)
> at com.sun.glass.utils.NativeLibLoader.loadLibrary(NativeLibLoader.java:52)
> at com.sun.glass.ui.Application.loadNativeLibrary(Application.java:110)
> at com.sun.glass.ui.Application.loadNativeLibrary(Application.java:118)
> at com.sun.glass.ui.gtk.GtkApplication.lambda$static$2(GtkApplication.java:109)
> at java.base/java.security.AccessController.doPrivileged(Native Method)
> at com.sun.glass.ui.gtk.GtkApplication.<clinit>(GtkApplication.java:108)
> at com.sun.glass.ui.Application.run(Application.java:144)
> at com.sun.javafx.tk.quantum.QuantumToolkit.startup(QuantumToolkit.java:258)
> at com.sun.javafx.application.PlatformImpl.startup(PlatformImpl.java:269)
> Any suggestions on how to deal with this?
> I'm running on linux with:
> openjdk 11.0.1 2018-10-16
> OpenJDK Runtime Environment 18.9 (build 11.0.1+13)
> OpenJDK 64-Bit Server VM 18.9 (build 11.0.1+13, mixed mode)
> I'm going to try this hack and see if it works:
> But it would be nice to have a proper solution for this.
> Also, on a completely unrelated note, where the heck is the JavaFX bug tracker these
> days? There seems to be no end of confusing information out there about where the bug
> tracker is; multiple GitHub mirrors have trackers, and the place that should clarify
> this says nothing: https://openjfx.io/
> Multiple Oracle pages still point to the Jira, many other pages point to
> bugreport.java.com, and others point to https://bugs.openjdk.java.net/, but that shows no
> JavaFX project.
> Is this one official now? https://github.com/javafxports/openjdk-jfx/issues

More information about the openjfx-dev mailing list
On a project I’m working on there is an integration with the payment provider Klarna. I was having trouble getting it to work in my local development environment. Every time a purchase was made, the visitor (me) was supposed to get redirected back to the dev environment. I was… but to the wrong URL: https://local-dev-env-url/+CSCOE+/wrong_url.html

In my case this was connected to the validation_uri setting. What is the validation_uri? “This checkout function will allow you to validate the information provided by the consumer in the Klarna Checkout iframe before the purchase is completed.” – Klarna

This is an example of what is sent to Klarna in my local environment, and you can see the validation_uri at the bottom. Since I’m running this code locally, the Klarna API can’t reach my validation URI. That was what was causing my problem. For debugging purposes you can skip sending the validation_uri, since it is an optional setting according to the documentation. To also debug the validation logic, you need to make sure the Klarna API can reach that URL (see the endpoint sketch at the end of this page). Hopefully this helps someone else in the same situation I was in, though I have a feeling you can encounter the same error with a different underlying cause. A big thanks to the Klarna support for helping out with this!

Mounting a BitLocker-encrypted drive with dislocker on macOS

1. Install dislocker

When I ran the last command to install dislocker, this error showed up: by unlinking ruby, as explained by jricks92 on GitHub, the install went through with no errors.

2. Drive identifier

Now we need to know the identifier of the BitLocker-encrypted disk. In the terminal we run the command diskutil list (on macOS). The identifier I’m interested in here is called disk2s1.

3. Decrypt with dislocker

First we need to create a folder where a virtual NTFS partition called dislocker-file will be created. I’ll call mine externalhdd and create it in the mnt folder. Now it’s time to use dislocker to decrypt the disk:
- -V /dev/disk2s1 tells dislocker which disk to decrypt.
- -u tells dislocker to ask the user for the password the disk is encrypted with.
- -- /mnt/externalhdd passes the path to the folder we created to store the virtual NTFS partition.

4. Create a block device

Now we need to create a block device before mounting the disk. From the hdiutil man pages:
- hdiutil – manipulate disk images (attach, verify, create, etc.)
- attach – attach a disk image as a device
- imagekey – specify a key/value pair for the disk image recognition system (I can’t find information on what the diskimageclass=creatdiskimage value means in the man pages of hdiutil)
- nomount – indicate whether filesystems in the image should be mounted or not

After running this command I got the line /dev/disk3 printed in the console. Now we’ll use that to mount the drive. Start by creating a folder where the drive will be mounted, then run the mount command (read-only). By now, if you haven’t encountered any errors, you should see the disk in Finder. There is more to dislocker than this post shows; take a look at the man pages for more info.

Building a Debian package for an Electron app

1. Install the installer

Start by installing the electron-installer-debian package.

2. Create config file

Then create a new file called debian.json that will contain the settings we need to create the package. With dest we set where the .deb package will be saved. The icon option points to an icon the app will get. categories sets the category where the application will be shown in menus; I’ve chosen Utility for this app since there is no really good fit. You can take a look at the available categories for your app.
lintianOverrides is used to quieten Lintian, a Debian package checker. There are a lot of other options you can set; you might want to check how to set dependencies, for example. But this will be enough for the Electron tutorial app.

3. Package the app

Now we need to make sure we have a packaged app. First I need to make an update to the packager script, since I had missed a setting there. Thank you Felipe Castillo for the help! The package script called package-linux in package.json needs to be updated with a setting called appname. With that in place, we can run the package script.

4. Creating the Debian package

When electron-packager is finished, we can run electron-installer-debian to create a .deb package:
- src points to the folder where the packager saved the app.
- arch tells electron-installer-debian which architecture to build for.
- config points to the file containing the settings we defined in step 2.

5. Adding a script to package.json

So that we don’t need to remember the electron-installer-debian command every time, we can add it to package.json below create-installer-win. This is what the package installer looks like in Mint Linux when opening release-builds/electron-tutorial-app_0.1.0_amd64.deb.

React Native: No bundle URL present

After doing that step I was happy and tired and didn’t add that key into info.plist again before going on vacation. Bad move. I spent an hour figuring out why my app wouldn’t start when running react-native run-ios; instead I got the error "No bundle URL present". By adding the key back into info.plist, it started to work again. However, this is not the only thing that can produce this error. Check out this GitHub issue if the solution above does not help.
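Returning to the Klarna validation_uri issue from the first post on this page: below is a minimal sketch of what a validation endpoint might look like. The framework choice (Flask), route path, and helper logic are illustrative assumptions, not Klarna's official sample; the essential facts from the post are that Klarna POSTs the order to validation_uri before the purchase completes and that the endpoint must be reachable from Klarna's servers, which is exactly what a purely local dev environment lacks.

    # Hypothetical Klarna Checkout validation endpoint (a sketch, assuming
    # Flask). Consult Klarna's documentation for the exact request payload
    # and for how to signal a rejection.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/klarna/validate", methods=["POST"])
    def validate_order():
        order = request.get_json(force=True)
        # Run whatever business checks you need here (stock, totals, address).
        print("validating order:", order.get("order_id", "<unknown>"))
        return "", 200  # a 2xx response lets the purchase proceed

    if __name__ == "__main__":
        # Klarna's servers must be able to reach this endpoint; exposing a
        # local environment through a tunnel (e.g., ngrok) is a common
        # workaround during development.
        app.run(port=5000)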