import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
import com.qualcomm.robotcore.util.RobotLog;

@Autonomous(name = "Auto Red")
public class Test_Auto extends SkyStoneAutonomousMethods {

    public void runOpMode() {
        /*
         * Initialize the drive system variables.
         * The init() method of the hardware class does all the work here.
         */
        telemetry.addLine("DO NOT RUN CODE ROBOT STILL INITING");
        telemetry.update();
        Init();            // init robot hardware
        init_vuforia_2();  // init camera
        telemetry.addLine("Ready to jump to hyperspace");

        // Wait for the game to start (driver presses PLAY)
        telemetry.update();
        waitForStart();

        VuforiaStuff.skystonePos StoneRember = vuforiaStuff.vuforiascan(false, true); // look for skystone
        telemetry.addData("Init Arm Tilt Encoder", robot.LiftMotor.getCurrentPosition());
        telemetry.addData("Init Extension ticks", robot.ExtendMotor.getCurrentPosition());
        telemetry.update();
        //sleep(5000);
        armTiltWithEncoder(-1000, 0.50); // rotate the arm up, but don't wait for it to finish moving
        armExtNonBlockling(2893, 0.8);   // extend the arm, but don't wait for it to finish moving
        //sleep(5000);
        /*
        while (robot.ExtendMotor.isBusy() || robot.LiftMotor.isBusy()) {
            telemetry.addData("Arm Tilt Encoder", robot.LiftMotor.getCurrentPosition());
            telemetry.addData("Extension ticks", robot.ExtendMotor.getCurrentPosition());
            telemetry.update();
        }
        */
        // sleep(3000);

        switch (StoneRember) { // distance 11 and 10 too far forward
            case LEFT: // SkyStone is on the left
                frontgap(15, 1, 57, sensorSide.LEFT, sensorFront.DEATHSTAR); // needs to be fixed
                telemetry.addLine("LEFT");
                break;
            case CENTER: // SkyStone is in the center
                telemetry.addLine("CENTER");
                frontgap(15, 1, 75, sensorSide.LEFT, sensorFront.DEATHSTAR); // fixed
                break;
            case RIGHT: // SkyStone is on the right
                //strafe(25, -0.5); // Strafe over to the skystone
                telemetry.addLine("RIGHT");
                frontgap(15, 1, 90, sensorSide.LEFT, sensorFront.DEATHSTAR); // fixed; 11 too far forward
                //strafe(60, -1);
                break;
        }

        RobotLog.d("8620WGW Test_Auto Before 1st GrabBlock. Arm Angle=" + robot.LiftMotor.getCurrentPosition()
                + " Arm Extension=" + robot.ExtendMotor.getCurrentPosition());
        grabBlock();
        armExt(2000, 1);
        drive(15, -1);
        rotate(85, 1);

        switch (StoneRember) { // KEY 1: drive under bridge with gap. TODO: test the distances of the gap drive
            case LEFT:
                gap(140, 1, 53, sensorSide.RIGHT); // 1
                break;
            case CENTER:
                gap(130, 1, 53, sensorSide.RIGHT); // 1
                break;
            case RIGHT:
                gap(120, 1, 53, sensorSide.RIGHT); // 1
                break;
        }

        //frontgap(110, 1, 69, sensorSide.RIGHT, sensorFront.WOOKIE); // drive to waffle
        armExtNonBlockling(2800, 1);     // TODO: strafed too far to the left
        armTiltWithEncoder(-1000, 0.50); // rotate the arm up, but don't wait for it to finish moving
        strafe(60, 1, 100, sensorFront.WOOKIE); // lines up on block 100
        //drive(10, 1); // drive to place block; decrease this drive, was 30
        robot.OpenServo.setPosition(0); // set power zero and wait half a second
        armTilt(.98, 0.8); // tilt to clear skystone
        rotate(170, 1);    // rotates to align the waffle grabbers on the waffle
        armExtNonBlockling(1500, 1);
        armTiltWithEncoder(900, 0.25);
        //drive(10, 1); // TODO: get closer to waffle
        robot.RightWaffle.setPosition(0.5); // moves the waffle grabber to push out the skystone if it is in the way
        robot.LeftWaffle.setPosition(0.5);
        strafe(15, 1, 0, sensorFront.NOSENSOR); // strafes onto waffle, was 20
        robot.RightWaffle.setPosition(0); // grabs waffle
        robot.LeftWaffle.setPosition(1);
        sleep(400); // TODO: can this be shortened or taken out?
        rotate(255, 1); // rotates to align the waffle with the building zone; needs to be 240 to 250, was 225
        robot.RightWaffle.setPosition(1); // grabs waffle
        robot.LeftWaffle.setPosition(0);
        strafe(80, 1, 0, sensorFront.NOSENSOR); // strafe waffle into build site
        drive(70, 1);
        telemetry.update();
    }
}
If you're unfamiliar with the term, I would define self-hosting as the practice of hosting your own server and services, either in your home or place of business. People do this for an array of reasons (which we'll get into in this post). It's become increasingly popular over the last few years for enthusiasts, developers, and nerds in general to set up "homelabs" in their basements, under their desks, and in their TV cabinets. I started self-hosting around 2 years ago after I got a touch of FOMO from friends with servers set up at home. To be honest, I expected to be "over it" a few months after I started. To my surprise, I'm still self-hosting now, and having more fun with it than ever. It's been a fun way for me to learn in a safe/forgiving environment, experiment with new tech, and save some money on online services like Google Drive. This post is going to outline a few of the reasons I love self-hosting, and why you might love it too.

Reuse of otherwise unused hardware

My self-hosted setup is mostly secondhand hardware. The only things I've purchased new are things you typically don't want to buy secondhand (e.g. hard drives). By reusing older hardware, I saved quite a bit of money and got up and running almost instantly. I think I only spent around $350 to get a server up and running, with all of that cost being IronWolf 4TB hard drives and a compact case for the server. I started with a full "desktop-grade" server, but you can start with something as simple as an old laptop, or a Raspberry Pi you got at a conference that has been collecting dust for a few years.

Learning in a safe environment

When I started self-hosting I had heard of Docker but had not used it. Now, I use Docker containers at work daily, and they're a big part of how I do development. I would never have learned as much, as fast, had I not been experimenting with my self-hosted services on the weekend.
I've also learned a lot about:
- Network management/structure
- Proxy software (NGINX in particular)
- Monitoring and alerts
- SSL certificates
- Virtual machines
- Virtual private networks (VPNs)
- I'm sure much more

I'd read a lot about all of these things in school and had even dipped my toes in them previously, but being able to sit down and implement them has solidified the concepts in my head. I definitely wouldn't say I'm an expert in these topics, but I'd be confident enough to propose solutions using this tech now that I've used it. Another benefit of learning in a self-hosted environment, as opposed to on the job, is that you don't have to worry about taking down production for a whole company, only your personal stuff.

Privacy and ownership

"If you're not paying for the product, you and your data are the product" - someone from the internet

Privacy was one of the biggest reasons I got into self-hosting. I was able to migrate lots of my data off of Amazon, Facebook, and Google's servers by spinning up self-hosted alternatives, for example:
- Google Drive / Google Photos: replaced by Nextcloud + a nightly encrypted backup to Backblaze B2
- Amazon S3 buckets: replaced by MinIO
- Notion / Google Keep: replaced by Trilium Notes
- Google Analytics: Umami + PostgreSQL
- Cloud databases: local PostgreSQL, MySQL, MariaDB, and Redis containers
- Google Home: Home Assistant virtual machine
- NordVPN: WireGuard as a personal VPN on my phone and laptops
- a whole bunch of dev stuff

Don't get me wrong, I still use Gmail for personal email, and I still keep all my recent photos on Google Photos. That said, I was able to move a lot of my data back into my control, and that feels good!

Easier development environments

Having a server under your desk with every open-source database installed, and the ability to spin up any niche tool you might need in a few seconds, can make creating development or staging environments easy.
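To give a sense of what one of those "spin up a database in seconds" moments looks like, here's a hedged sketch of a single-service Docker Compose file for a local PostgreSQL container (the service name, credentials, and version are placeholders for illustration, not my actual config):

```yaml
# Hypothetical compose file for a throwaway PostgreSQL dev database.
services:
  dev-db:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: change-me   # placeholder; use a real secret
      POSTGRES_DB: dev
    ports:
      - "5432:5432"                  # reachable at localhost:5432
    volumes:
      - dev-db-data:/var/lib/postgresql/data
volumes:
  dev-db-data:
```

Running `docker compose up -d` brings the database up in the background; `docker compose down` tears it down without losing the data volume.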
For example, when working on freelance projects I typically create a new database for the app or site I'm working on and use that as a development database. Then, when I want to leave an app running for testing, I'll just spin up a Docker container on my server and send access over to the client. All for effectively $0. Before I was self-hosting, a separate test instance (app + database + Redis) could run anywhere from $10-20/month.

The Unraid operating system

I use Unraid as the operating system on my main server. It allows you to easily set up and manage almost anything you'd want for self-hosting through a local web interface, and even a terminal when I need a bit more access:
- Network drives with parity (via Samba / NFS)
- Docker containers
- Virtual machines
- Unraid-specific plugins
- a whole bunch more

All you need to do to get rolling with Unraid is download the OS, flash it to a USB drive, and boot your machine off it. To be honest, I don't think I would have got into self-hosting as much as I did if Unraid wasn't so dang easy to get going with.

In general, the community is welcoming and willing to help out. I've asked a few questions online about my setup, and have always gotten honest and helpful communication back. This could be a bit of selection bias, but as a programmer/developer by trade I was really surprised, given how toxic Stack Overflow, Linux, and developer help forums can be. Here's a list of some of my favourite self-hosted spots online: Even if you don't plan on getting into self-hosting now but are generally interested in it, I'd suggest checking these out and seeing all the cool stuff people are doing.

While it's probably pretty obvious that I'm a fan of self-hosting, there are a few notable downsides to messing around with it. Some of them only apply if you're going to be exposing services to the internet, but I'll make that clear.
Given that you're running the hardware yourself, there's always the risk that a hard drive could die, RAM could go bad, or a CPU could just give up. When this happens, your services will likely be down (unless you're running something highly available; I'm planning to do this with some Raspberry Pis, so stay tuned for that). Besides hardware, you're going to want to keep your software up to date and check in regularly. There are monitoring tools that can help with this (they monitor uptime, usage, and logins, and alert you if anything looks off), but you're still going to want to check in manually at least once a week, in my opinion.

When you're exposing your services, virtual machines, and apps to the internet, there are some inherent risks. If you don't properly test and secure your system, someone will find a way into your network, your machines, and your data. Be careful with how you're doing things; some common advice would be:
- Stick to the principle of least privilege: Give out very limited access by default, and regularly audit what you're making available outside of your network.
- Stay up to date: Make sure to monitor the available updates for your containers and virtual machines, and patch whenever possible. While this isn't a catch-all for security (like most things on this list), it helps to always be on the newest version.
- Authenticate everything: This is more of a personal belief, but if you're going to be putting something on the internet, make sure there's a well-tested authentication scheme, preferably with 2FA, protecting the service (e.g. Nextcloud with strict passwords and 2FA).
- Only open the ports you need: The only ports I have open on my router are 80 (HTTP), 443 (HTTPS), and 51820 (WireGuard VPN). This is a pretty easy one to manage, and while it doesn't mean as much as it used to, it's still a good rule of thumb.
- Use VPN access for really sensitive stuff: For sensitive data and services, straight up don't expose them to the internet. Just don't risk it. Set up a VPN on your network, connect via the VPN, and then connect as you would locally. I use WireGuard, which is trivial to set up on Unraid.

As I mentioned at the top of this post, self-hosting is a significant time investment: you'll get out what you put in. If you don't have the time to sit down and mess around on a semi-regular basis (at least until you've got things configured and set up), it's going to be more frustrating than it's worth.

This one doesn't have much to do with self-hosting itself, but depending on the control your internet service provider (ISP) has over your router, you might not be able to port-forward, or even use enough data to make self-hosting worth it beyond local services. Before getting into it, I would recommend making sure your router allows port forwarding and your ISP doesn't have restrictive data caps (upload/download).

If after reading all of this you still want to give it a try, I say go for it! Plug in that laptop, Raspberry Pi, or old desktop and get tinkering. I've spent countless hours tinkering with my setup (probably too many, to be honest). If you have any questions about self-hosting or want to share your own setup, feel free to comment below or reach out on the contact page!
Specific values for the scale parameter can break dims

🐛 Bug

Haven't dug deeper yet, but reposting here from https://forum.image.sc/t/image-layer-in-napari-showing-the-wrong-dimension-size-one-plane-is-missing/69939/10

d = np.random.rand(5, 170, 240)
viewer = napari.view_image(d, scale=[3.533, 1.0, 1.0])

The slider only has 4 positions (not 5) ... whereas a slightly different first scale value works fine.

Found the problem. In short:

(17.665 - 0.0) // 3.533 == 4.0  # what we do in `Dims.nsteps`
(17.665 - 0.0) / 3.533 == 5.0   # what apparently we should do

Some floating point error that's so small it's not even displayed, I suppose. The result of the above expression is passed to int(), so it's floored anyway... so I suppose we don't need the floor division. But this makes me wonder if the opposite case can happen as well (i.e. we "spill" into the next integer)?

Hello, we would like to reopen this issue as we believe the problem is still present. Please see the discussion in https://github.com/NEUBIAS/training-resources/issues/563. For certain images and scaling settings, napari does not show some Z-slices (typically the last one). There is example code in https://neubias.github.io/training-resources/multidimensional_image_basics/index.html#xyzc (drop-down menu > skimage napari). The loading function OpenIJTiff can be found here: https://neubias.github.io/training-resources/functions/OpenIJTIFF.py. The image has 41 slices; when loaded with certain scale values, the slider only shows slices 0-39. The last slice is just not displayed. We observed the issue on several installations.
These are the Conda napari versions installed:

napari                       0.4.17   pyh275ddea_0_pyqt  conda-forge
napari-console               0.0.7    pyhd8ed1ab_0       conda-forge
napari-plot-profile          0.2.2    pypi_0             pypi
napari-plugin-engine         0.2.0    pyhd8ed1ab_2       conda-forge
napari-skimage-regionprops   0.10.0   pypi_0             pypi
napari-svg                   0.1.6    pyhd8ed1ab_1       conda-forge
napari-tools-menu            0.1.19   pypi_0             pypi
napari-workflows             0.2.8    pypi_0             pypi

@manerotoni Could you please check the version in main? There have actually been some changes to the way we compute extents in the last few weeks. Also, if you have a specific stack we can test with, that would be very useful!

@manerotoni Hi, thanks for reporting. I tried:

import numpy as np
import napari

dummy = np.random.rand(41, 2, 297, 284)
viewer = napari.view_image(dummy, scale=[0.1000000, 1, 0.0222057, 0.0222057])

On 0.4.17 I get a slider from 0 to 39, so it reproduces the bug. However, in my dev env with main napari, I get 0-40, so it appears that it has been fixed, or the behavior is somehow different. I'll look into it more. Also, can you provide the full napari --info? As it's related to floats, it could be platform dependent. I'm on macOS arm64.

Ok, git bisect points to this PR as fixing the issue: https://github.com/napari/napari/pull/5751

@brisvag @Czaki that PR is tagged for 0.5.0, but this seems like something people can easily and frequently hit in the wild: with the above snippet it seems to occur anytime the number of slices is odd. Should we reconsider?

Just for clarity, this is actually a different bug (before it was a floating point math precision issue, now it's an "excluded vs included upper limit" issue). #5751 is marked as 0.5.0 because it's actually a follow-up on the (quite significant) changes to dims from #5522, which was also marked as 0.5.0. I'm surprised though that this bug exists after #4889 and before #5522...
@brisvag I'm surprised though that this bug exists after https://github.com/napari/napari/pull/4889 and before https://github.com/napari/napari/pull/5522... We actually discussed this in #5522 (link). I said: I'm super curious/concerned about off by one / off by a rounding error here. I'm getting vibes of https://github.com/napari/napari/issues/1686 (and I think other similar bugs, though I can't find the other references), because generally np.arange(rng.start, rng.stop, rng.step) will have either nsteps or nsteps+1 elements depending on floating point rounding error here. Do you have a good grip on those issues here? To which you replied: I don't have a good grip... But this issue is kind of expected since we're converting from a continuous space to integers. As for the floating point precision, doing it this way (simple division) should be ok based on previous issues. 😂 The fix is to use something more like linspace rather than arange, as you noted in https://github.com/napari/napari/pull/5751/files#r1178387420. But I'm not surprised that #5751 got us either all the way there or part of the way there. Good job past me I guess xD Hopefully then #5751 is indeed the final solution. So the question is: do we feel this is important enough that we should get all the #5522 + #5751 stuff in for v0.4.18? Thanks for the quick response. Looking forward to the update. Closing, since the issues described have been fixed by the PRs and are now merged. These fixes will be available in the next v0.5.0 release. Anyone who wants it sooner than that can install napari from main with pip install git+https://github.com/napari/napari.git pyqt5
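The arithmetic at the heart of the original report is reproducible in plain Python. A minimal sketch (the `nsteps` framing is paraphrased from the thread, not copied from napari's source):

```python
# The two expressions from the bug report: mathematically the quotient is
# exactly 5, but float floor division computes a remainder first and lands
# one step short due to rounding in that remainder.
floor_div = (17.665 - 0.0) // 3.533  # what `Dims.nsteps` effectively did
true_div = (17.665 - 0.0) / 3.533    # plain division rounds to exactly 5.0

print(floor_div)  # 4.0
print(true_div)   # 5.0

# The quotient was passed through int() anyway, which truncates toward zero
# for positive values, so dividing first and truncating afterwards avoids
# the extra rounding error introduced by //.
print(int(true_div))  # 5
```

This is why the slider lost exactly one position: the floored quotient undercounts by one whenever the remainder rounds just below the step size.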
Can you confirm if this works for you? I've managed now to reference class library projects within an ASP.NET 5 solution: community.dynamics.com/crm/b/develop1…

Had to double check, but it looks like if you check under Solution Items > global.json, it shows the wrap as a project. I've added a GitHub link to a sample solution where I've tried to check all cases. Let me know if I can clarify anything, or whether it works 🎉

pranavanmaru commented Feb 14, 2016
Using VS2015 to build an MVC6 site, and I'm not facing this issue. Note you might not be able to do this with existing libraries. I'm not sure what has changed, but the binary is not created. Restoring packages and rebuilding the solution (which should not be necessary for me) produces the MSB3274 warning: "The primary reference ... could not be resolved." I could not find a solution.

"string could not be resolved" (Eclipse C++ forum, Dec 4, 2011): I downloaded Eclipse C++ and the Cygwin toolchain, selected New > Create new source file, named it Main.cpp, and entered some basic code in it. I am still getting an unresolved inclusion error on the include files ("Unresolved inclusion: iostream", "Symbol 'cout' could not be resolved"). One answer: install the C++ SDK via Help > Install New Software, pointing "Work with" at the CDT update site. The errors may not disappear instantly; you may need to refresh/build your project. F3 does open the correct header file, which suggests the indexer rather than the toolchain is at fault.

rclabo referenced this issue in xunit/xunit, Oct 27, 2016: "dotnet test in IDE mode fails during discovery of net451 tests and only attempts to run for the first defined target."

danpantry commented Jun 13, 2016 (edited)
@abpiskunov Unfortunately, no, as it is closed source; however, the solution is quite literally just that project file (with a binary of your choosing).

If I add a project reference in the web app to the class library, I get the NU1001 error "The dependency ChartCapture.Common >= 1.0.0-* could not be resolved", and I cannot build. I added the existing (non-packaged) class library project to the solution, added it to project.json via right-click > Add > Reference, restored packages, and confirmed that the class library appears in Solution Explorer. That didn't change anything. It's really annoying because it affects xUnit test projects, and when you try to work test-driven it gets in the way.

abpiskunov commented Jun 13, 2016
Well, if it were just that, we could repro it :). Could you create a small sample that reproduces your problem and share that?

Member sayedihashimi commented Dec 17, 2015
Sorry for the delay here.

balachir closed this May 17, 2016: RC2 now has better support for referencing .NET Framework class libraries from an ASP.NET Core project running on .NET Framework. (In RC2 you can reference assemblies directly, without going through wraps.)

On the CRM side: the CRM SDK 2015 assemblies were compiled against .NET Framework 4.5.2, but the CRM Developer Toolkit was compiled against .NET Framework 4.5. For plugins and workflows, update the references to the new assemblies and also the target .NET Framework in the project properties.

CS0246 "The type or namespace 'ClassLibrary1' could not be found": try the fully qualified name, e.g. ClassLibrary1.Class1 c;
Builder pattern build() method output

After reading the Design Patterns book and looking at Builder pattern examples online, I have noticed that there are two different ways of returning the object in the final build() method. I was wondering what the difference is between:

Simply returning the maze we have been building:

public class MazeBuilderImpl implements MazeBuilder {
    private Maze maze;

    @Override
    public MazeBuilder builder() {
        maze = new Maze();
        return this;
    }

    // adding rooms, doors etc.

    @Override
    public Maze build() {
        return maze;
    }
}

Passing the maze as a parameter to a copy constructor and returning that:

public class MazeBuilderImpl implements MazeBuilder {
    private Maze maze;

    @Override
    public MazeBuilder builder() {
        maze = new Maze();
        return this;
    }

    // adding rooms, doors etc.

    @Override
    public Maze build() {
        return new Maze(maze);
    }
}

The first one seems worse. The original builder can subsequently be used to modify the state of the built object, since it still holds a reference to it:

MazeBuilder builder = new MazeBuilderImpl();
Maze maze = builder.rooms(1).build();
builder.rooms(2); // modifies the state of maze

The copy in the second one prevents that from happening, and allows the builder to be reused. Both are bad though, since Maze is implied to be mutable, and builders are best for building immutable objects. That requires re-declaring the fields of Maze as fields of MazeBuilder, i.e. if Maze has a String id field, so will MazeBuilder. Lombok can help with reducing that boilerplate.

Cheers Michael, that makes perfect sense! I've been using Lombok's @Builder very often but tried to actually recreate the Builder to understand how it works under the covers.

@Kris If you're using IntelliJ, somewhere under Code > Refactor there is an option to "delombok" Lombok-annotated code, which will convert your source to match exactly what it's generating. Can be useful to see what's going on. Not sure about other IDEs.
@Kris, note there are several different "Builder" pattern implementations. Lombok generates something like the Builder pattern from Josh Bloch's Effective Java. The GoF Builder pattern is more complex (and IMO less useful). Mark Seemann has blogged about some of the different Builders.

@jaco0646 I always skipped Builder in the GoF book before, because I never felt like I needed to read it, but after reading your comment I did, and I'd go one further than "complex": it's actively bad, and not something a beginner should be trying to emulate.

Thanks guys, I think the tip for delomboking the code worked well and I am happy with my understanding of Builder from that.

If you return the same object held by the builder, as follows:

@Override
public Maze build() {
    return maze;
}

then you will be referencing the same object that is present in the builder, and it can still be mutated (changed). So your object could still be changed from somewhere else: it is one object that you can affect directly or through the builder. If you return a different object, you don't have to worry about the builder affecting it after calling build(). And if your object can be immutable, then it's common to keep copies of the object's fields in the builder and construct the object in the build() method:

public class MazeBuilderImpl implements MazeBuilder {
    private Room roomA;
    private Room roomB;
    private Door doorA;
    private Door doorB;
    // ...

    // adding rooms, doors etc.

    @Override
    public Maze build() {
        return new Maze(roomA, roomB, doorA, doorB /* , ... */);
    }
}

Thanks for writing out the example kubacech! I'll give an upvote but tick Michael's answer because he was first.
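To make the recommendation concrete, here's a minimal, self-contained sketch of the Effective Java-style builder discussed above. The Maze fields here (id and rooms) are invented for illustration; the point is that the built object is immutable and build() assembles it from the builder's own fields, so later mutation of the builder cannot touch an already-built Maze.

```java
// Hypothetical immutable Maze with a nested builder (fields are illustrative).
public final class Maze {
    private final String id;
    private final int rooms;

    private Maze(Builder builder) {
        this.id = builder.id;
        this.rooms = builder.rooms;
    }

    public String getId() { return id; }
    public int getRooms() { return rooms; }

    public static final class Builder {
        private String id;
        private int rooms;

        public Builder id(String id) { this.id = id; return this; }
        public Builder rooms(int rooms) { this.rooms = rooms; return this; }

        // build() copies from the builder's own fields, so the returned
        // Maze shares no mutable state with this builder.
        public Maze build() { return new Maze(this); }
    }
}
```

Calling builder.rooms(2) after build() leaves the first Maze untouched; each build() call produces an independent object, which is exactly the property the second variant in the question was reaching for.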
Using tools is essential for your success as an SEO. Tools not only save countless hours of manual research, but can also give you access to data you would never uncover on your own. There are different tools available for various areas of SEO. For instance, the tools used for link building are different from keyword research tools. In this article, we will have a look at some of the best keyword research tools.

What is Keyword Research?

Keyword research is one of the fundamental steps in your journey of creating a successful website. Get this step wrong, and you have mostly wasted all your efforts. Keyword research is basically the process of finding the topics for your content. The topics need to have a high amount of traffic on Google, but at the same time must have low competition. For instance: "gaming PC," "best treadmills," and "best double wall oven" are examples of keywords.

Levels of Keyword Research

Generally, there are three levels of keyword research. Here is a brief look at them:

Niche Research

This is the step in which you decide on the topic of the website. For instance, running shoes, skincare products, or digital cameras. This basically decides the theme of the website. This step is relatively simple and does not require any tool; you just need to consider your interests.

Primary Keyword Research

This is the keyword research step that lets you choose the primary keywords for the site. A site can have many primary keywords. Most of your pillar content is based on these keywords. If you are making an affiliate website, then your primary keywords must have commercial intent. A primary keyword must strictly follow the guidelines for a "good keyword."

Long Tail Keyword Research

These are keywords that are very easy to rank for. They do not have a high search volume individually, but having hundreds of them can make a huge difference.

What is a Good Keyword?

A good keyword is one that is not dominated by high Domain Authority (DA) sites.
These are keywords that have a search volume of at least 2,000 and high commercial intent.

Top 4 Keyword Research Tools

Now that you have the basic idea about keyword research, let's take a look at some of the tools that can take the manual work out of the equation. The first three are paid software; however, they can really expedite your keyword research process. Most of these tools are very well known and actively used by most bloggers and affiliate marketers.

1. SEMrush

Why go through the pain of doing the keyword research yourself when you can get a list of keywords from your competitors? You will find keywords you would never have imagined even existed. Other than keyword research, SEMrush offers plenty of other tools that you can explore. This is paid software; however, for the functionality it offers, the price is entirely justified.

2. Long Tail Pro (LTP)

This is also one of the most sought-after tools for keyword research, and also paid software. Unlike SEMrush, this tool does not let you spy on your competitors. However, there is one feature that sets it apart from the rest: the Keyword Competitiveness (KC) score. This is a proprietary feature in LTP that lets users measure the toughness of a keyword; the lower the KC, the easier it is to rank. Many SEOs use this software just for that one feature. Other than that, it can help you generate thousands of keywords.

3. KWFinder

This is similar to LTP; however, I personally prefer KWFinder over LTP. It has a much cleaner interface. Like LTP, it has a proprietary feature that shows keyword difficulty. It is also very quick and easy to navigate. One great feature of the KWFinder keyword research tool is the suggestions it provides; I find the ideas it generates to be much better than LTP's.

4. Keywordtool.io

This is a tool that works best in conjunction with either LTP or KWFinder.com. It is basically a keyword suggestion tool.
To understand this software, you need to know what Google Suggest is. It is a feature you must have seen or used many times without knowing that it is also an awesome keyword tool: when you type something into the Google search bar and it shows you a list of suggestions underneath, that is Google Suggest. Sometimes the keywords suggested in that drop-down list hold a lot of potential. Keywordtool.io basically helps you grab that list from a seed keyword that you enter in its search bar. Once generated, you can import the list into either KWFinder.com or LTP to check the keywords' search volume and competitiveness. I hope this has given you some insight into keyword research in general and some of the tools that you can use. Keyword research is an important step; therefore, using the right tools is essential.
package uk.smarc.android.opengl;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

import android.opengl.GLES20;
import android.util.Log;

/**
 * Represents a drawable triangle.
 */
public class Triangle implements Drawable {

    private static final String TAG = "Triangle";

    private static final int BYTES_PER_FLOAT = 4;

    /**
     * Buffer to hold the vertices of the triangle.
     */
    FloatBuffer mVertices;

    /**
     * Number of coordinates per vertex in mCoords,
     * i.e. the dimension of the bundle space spanned by all the coordinates of a
     * vertex, e.g. spatial, colour, texture...
     */
    private final int COORDS_PER_VERTEX = 3;

    /**
     * Coordinates of this triangle. Specified in the usual counterclockwise
     * winding order.
     */
    private float[] mCoords;

    // OpenGL stuff

    /**
     * We need a vertex shader...
     */
    private final String mVertexShaderCode;

    /**
     * ...and a fragment shader.
     */
    private final String mFragmentShaderCode;

    /**
     * We also need a program that'll run both shaders.
     */
    private int mProgram;

    /**
     * Handle to the position of the vertices in the vertex shader.
     */
    private int mPositionHandle;

    /**
     * Handle to the position of the colour in the fragment shader.
     */
    private int mColorHandle;

    /**
     * Colour that we'll draw the triangle as.
     */
    private float[] mColor;

    /**
     * Number of vertices we have. In general, this is
     * mCoords.length / COORDS_PER_VERTEX; in practice, this is a triangle, so we have 3.
     */
    private int mVertexCount;

    private int mVertexStride;

    private int err;

    public Triangle() {
        Log.d(TAG, "Creating triangle");
        mCoords = new float[] {
                0.0f, 0.622008459f, 0.0f,   // top
                -0.5f, -0.311004243f, 0.0f, // bottom left
                0.5f, -0.311004243f, 0.0f   // bottom right
        };
        mColor = new float[] { 0.63671875f, 0.76953125f, 0.22265625f, 1.0f };
        mVertexCount = mCoords.length / COORDS_PER_VERTEX;
        mVertexStride = COORDS_PER_VERTEX * BYTES_PER_FLOAT;

        // Initialise a ByteBuffer for storing the vertices.
        ByteBuffer vb = ByteBuffer.allocateDirect(
                // Need space for the number of coordinates, but in bytes.
                mCoords.length * BYTES_PER_FLOAT);
        // Use the device hardware's native byte order, so that no extra
        // conversions are necessary.
        vb.order(ByteOrder.nativeOrder());

        // Turn the ByteBuffer into a FloatBuffer, and add the coordinates.
        mVertices = vb.asFloatBuffer();
        mVertices.put(mCoords);
        mVertices.position(0);

        // ** We now set up the OpenGL shaders, and compile them (since they're
        // already known).
        mVertexShaderCode =
                "attribute vec4 vPosition;" +
                "void main() {" +
                "  gl_Position = vPosition;" +
                "}";
        mFragmentShaderCode =
                "precision mediump float;" +
                "uniform vec4 vColor;" +
                "void main() {" +
                "  gl_FragColor = vColor;" +
                "}";

        // Compile and load the shaders.
        // This is expensive, so we do it here once only.
        int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, mVertexShaderCode);
        int fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, mFragmentShaderCode);

        // Create a program to run the shaders.
        mProgram = GLES20.glCreateProgram();
        GLES20.glAttachShader(mProgram, vertexShader);
        GLES20.glAttachShader(mProgram, fragmentShader);

        // Create the OpenGL program executables.
        GLES20.glLinkProgram(mProgram);

        err = GLES20.glGetError();
        if (0 != err) {
            // Log the saved code: glGetError() clears the error flag, so
            // calling it a second time here would just return GL_NO_ERROR.
            Log.d(TAG, "glGetError: " + err);
        }
    }

    @Override
    public void draw() {
        // Log.d(TAG, "Drawing triangle");

        // So, we're going to need the program we created in the constructor.
        GLES20.glUseProgram(mProgram);

        // We'll need a handle to the vPosition member of the vertex shader.
        mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");

        // Next we tell OpenGL that's where we're going to store some vertices.
        GLES20.glEnableVertexAttribArray(mPositionHandle);

        // Use that to get a handle to the triangle's vertices.
        GLES20.glVertexAttribPointer(mPositionHandle,
                COORDS_PER_VERTEX, // size: components per vertex, not the vertex count
                GLES20.GL_FLOAT, false,
                mVertexStride, /* stride could also be 0 here, since the vertices are
                                  tightly packed (we don't have texture coordinates or
                                  anything else in the same array) */
                mVertices);

        // Get a handle to the fragment shader's vColor member:
        mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");

        // Use that to set the colour for drawing the triangle.
        GLES20.glUniform4fv(mColorHandle, 1, mColor, 0);

        // Finally, draw the triangle.
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, mVertexCount);

        // Disable the vertex array now that we no longer need it.
        GLES20.glDisableVertexAttribArray(mPositionHandle);

        err = GLES20.glGetError();
        if (0 != err) {
            Log.d(TAG, "glGetError: " + err);
        }
    }

    /**
     * Compile the shader code given.
     * @param type e.g. GLES20.GL_VERTEX_SHADER or GLES20.GL_FRAGMENT_SHADER.
     */
    private static int loadShader(int type, String shaderCode) {
        int shader = GLES20.glCreateShader(type);
        GLES20.glShaderSource(shader, shaderCode);
        GLES20.glCompileShader(shader);
        return shader;
    }
}
So I've been using iDisguise recently, but it just won't work properly. Can someone help me?

Code: language.yml
reload-complete: §6[iDisguise] Reload complete.
no-permission: §cYou are not allowed to do this.
console-use-other-command: §cUse /odisguise from the server console.
cannot-find-player: §cCannot find player %player%.
wrong-usage-no-name: '§cWrong usage: account name required'
wrong-usage-two-disguise-types: '§cWrong usage: two disguise types given'
wrong-usage-unknown-arguments: '§cWrong usage: unknown arguments §o%arguments%'
invalid-name: §cThe given account name is invalid.
event-cancelled: §cAnother plugin prohibits you to do that.
disguise-player-success-self: §6You disguised as a %type% called %name%.
disguise-player-success-other: §6%player% disguised as a %type% called %name%.
disguise-success-self: §6You disguised as a %type%.
disguise-success-other: §6%player% disguised as a %type%.
status-player-self: §6You are disguised as a %type% called %name%.
status-player-other: §6%player% is disguised as a %type% called %name%.
status-self: §6You are disguised as a %type%.
status-other: §6%player% is disguised as a %type%.
status-subtypes: §7(%subtypes%)
status-not-disguised-self: §6You are not disguised.
status-not-disguised-other: §6%player% is not disguised.
outdated-server: §cYour Minecraft version does not support the given disguise type.
undisguise-console: §cYou are not a player so you cannot undisguise.
undisguise-not-disguised-self: §cYou are not disguised.
undisguise-not-disguised-other: §c%player% is not disguised.
undisguise-success-self: §6You undisguised.
undisguise-success-other: §6%player% undisguised.
undisguise-success-all: §6%share% out of %total% disguised players undisguised.
undisguise-success-all-ignore: §6Undisguised every disguised player ignoring other plugins.
help-info: §a%name% %version% - Help
help-base: §6§o %command% §6- %description%
help-types: '§7Types: %types%'
help-types-available: '%type%'
help-types-not-supported: §m%type%
help-types-no-permission: §m%type%
help-help: Shows this message
help-player-self: Disguise yourself as a player
help-player-other: Disguise a player as a player
help-random-self: Disguise yourself as a randomly chosen mob
help-random-other: Disguise a player as a randomly chosen mob
help-reload: Reload config and language file
help-status-self: Shows your disguise status
help-status-other: Shows a player's disguise status
help-undisguise-self: Undisguise yourself
help-undisguise-all: Undisguise everyone
help-undisguise-other: Undisguise a player
help-disguise-self: Disguise yourself as a mob with optional subtypes
help-disguise-other: Disguise a player as a mob with optional subtypes
help-subtype: Apply one (or multiple) subtypes
join-disguised: §6You are still disguised. Use §o/disguise status§r§6 to get more information.
move-as-shulker: §cYou must not move while you are disguised as a shulker.
update-available: '§6[iDisguise] An update is available: %version%'
update-already-downloaded: §6[iDisguise] Update already downloaded. (Restart server to apply update)
update-downloading: §6[iDisguise] Downloading update...
update-download-succeeded: §6[iDisguise] Download succeeded. (Restart server to apply update)
update-download-failed: §c[iDisguise] Download failed.
update-option: §6[iDisguise] You can enable automatic updates in the config file.
easter-egg-birthday: §eYAAAY!!! Today is my birthday! I'm %age% years old now.
Transcript from the "Printing The API Response" Lesson

>> We were playing brave here, because we haven't tested this. But now, we can go to main.go, that is still completely empty, and try to use this. So what do I wanna try first? Well, I will try to go and call my API. So what's the name of the package, api? [00:00:18] Should I use cex? I don't know how to pronounce cex. It sounds complicated. Anyway, for cryptocurrencies at least, but anyway, cex, let's say. Should I start writing cex.? >> api., that's the name of the package, correct. And then I have the GetRate. It's going to import API, and then I can pass in BTC and try to receive BTC, it's Bitcoin. [00:00:50] So I can try to receive the answer that is going to be a rate or a possible error. And for that, I will just print both. It was just a quick print for debugging purposes. I'm not going to use this as the final version. So let's try this, so go run. [00:01:14] First, look at this. This is Lulu. Who is Lulu? Lulu is a firewall that I have on my Mac that I'm going to disable right now, you will see why. Because it's detecting a new app on my Mac trying to get to the network, and let's say it's okay, this is new. [00:01:36] Will we allow that? I can say Allow, okay? That means that something's happening, it's going onto the network, okay, and something will happen also. There is a timeout. So at one point, if the firewall blocks it, and there we have it. We have something that we are receiving. Probably, I don't know how to read that, okay? [00:01:56] That's probably my print that I have here. So it went to the network and we received something. If I try again, if Lulu is again giving me an alert, I say Allow, okay? Now, I try again. Lulu gave me an alert. Why? Lulu, my firewall, is giving me an alert, I'm saying allow, and that should stay forever. [00:02:24] And did I change the rule? Why is Lulu giving me the same alert every time I run the app? Does anyone know why? Go builds a binary, and that's actually what we are running.
But if you remember, go run is not actually like running the same app. It's building a new app every time, in a temporary directory that I'm not seeing, and running it. [00:02:52] Every time I'm executing run, it's building a new app. So from an OS point of view, it's a new app that is trying to get to the network, okay? And also now, we got the JSON. Why wasn't the JSON there the first time? It was because Lulu was blocking it for a while and then it goes to a timeout. [00:03:12] But we see the JSON as a string, so it's working. And what we see here, the rate and the error. And yeah, they're pretty bad, okay, because the rate is actually a memory address. It's a pointer, so we will need to do something better with that, okay? [00:03:31] But in fact, I just tried to show the price of the rate and I will explain something interesting. I will delete Lulu from memory for now, okay, and this is my output, my rate. It's not probably the notation that I expect, okay, but we'll make that better. So what's going on here? [00:03:56] Rate, what's the type of rate? So if I write it right here, what's the type? It's not my structure. It's a pointer to my structure, okay? But then I used dot and it worked, so I didn't need to do something like this, get into the object and then call price or something like that, okay? [00:04:21] Well, that is because the dot operator actually works with pointers and it goes to the object, it goes to the instance, to make our life easier. So if we have the structure instance or a pointer to the structure instance, how you call methods or how you access properties is the same, just the dot, it's like a shortcut, okay? [00:04:51] Make sense? Cool, so far I have my code working in terms of I'm receiving a JSON from the server, but we need one more step, right? So we need to parse the JSON, because right now, it's just a string.
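The shortcut the instructor describes can be sketched in a few lines of Go. The `Rate` struct and its `Price` field here are stand-ins for whatever the course's `api` package actually defines:

```go
package main

import "fmt"

// Rate is a stand-in for the struct the course's api package returns;
// the Price field is assumed for illustration.
type Rate struct {
	Price float64
}

func main() {
	r := &Rate{Price: 9123.45} // a *Rate, i.e. a pointer, like GetRate returns

	// The dot operator auto-dereferences pointers to structs:
	// r.Price is shorthand for (*r).Price, so both lines print the same value.
	fmt.Println(r.Price)
	fmt.Println((*r).Price)
}
```

The same shortcut applies to method calls: whether you hold the struct or a pointer to it, access goes through the same dot syntax.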
Flatten tool in Blender 2.8 is blurry

So I'm still working on the anvil tutorial from Blender Guru and another problem has occurred. Now in sculpting mode I'm supposed to use the flatten tool on the edges of the anvil. In the video, when he flattens, it's perfect, crisp indents. When I try, the flatten tool makes it look like it's extruding and not recessing into itself. This is what I want: And this is what I have: It looks super blurry. I've tried to set the location of the item to 0. I've tried to scale the plane offset to a negative value. I've tried different refine methods inside Dyntopo. Please help. Thank you.

Setting the location to 0 or scaling the plane offset should not influence the sculpting. Regarding what could help, I hope my answer does the trick. I know that the tutorial says to use the flatten brush, but he is using an older version of Blender, and the newer versions got a lot of updates, especially regarding sculpting. I tried the flatten brush myself, using the new 2.82.7 version, and could not get the same results as shown in the tutorial. BUT I managed to find a brush that surprisingly resulted in the same look: the scrape brush. Here is a little visual help: Sorry about the dithering, the matcap is not very GIF-friendly. Also, regarding the blurriness, I would guess it comes from either too low detail (which actually means too big a value for the Dyntopo detail size) and maybe even a non-smoothed surface, which seems to be a bit buggy. With fine geometry and a non-smoothed surface you get a blurry effect. It would help if you showed the Dyntopo setup that resulted in the blurry effect. Something that really seems to trip up the sculpting is if your object has a UV map or some modifiers; Dyntopo tends to bother you about it, as it did show in the tutorial as well. And undo can then easily deactivate Dyntopo. If you then sculpt without Dyntopo on, the result may also add to the blurry factor.
Just a possibility, so if you try again and it becomes blurry, check whether Dyntopo is active. Here is a little visual to prove my point regarding the older version of Blender (here 2.79b) and the sculpting brushes: As should be visible, the flatten and scrape brushes did the same thing in 2.79b, while in the newer version 2.82.7 the flatten brush sure does its own thing, so scrape it is.

I agree with @Xylvier. And the official documentation says much the same: Flatten: the vertices are pulled towards a plane which is at the average height above/below the vertices within the brush area. Scrape: works like the Flatten brush, but only brings vertices above the plane downwards.
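The difference between the two brushes can be illustrated with plain numbers. This is only a sketch of the two rules quoted above, applied to vertex heights measured relative to the brush plane, not Blender's actual implementation:

```python
def flatten(heights, strength=1.0):
    """Pull every vertex toward the average plane, from above and below."""
    plane = sum(heights) / len(heights)
    return [h + strength * (plane - h) for h in heights]

def scrape(heights, strength=1.0):
    """Like flatten, but only vertices above the plane are brought down."""
    plane = sum(heights) / len(heights)
    return [h + strength * (plane - h) if h > plane else h for h in heights]

bumps = [0.2, -0.1, 0.4, -0.3]  # vertex heights around the brush plane
print(flatten(bumps))  # every vertex lands on the average plane
print(scrape(bumps))   # only the 0.2 and 0.4 bumps are scraped down
```

Note that a full-strength flatten also pulls below-plane dents up toward the plane, which may be the "extruding instead of recessing" look described in the question; scrape leaves those vertices alone.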
As of early November 2019, support for IE 11 has been dropped from SE's official compatibility matrix. Edge is still supported, though I guess in 6-8 weeks I might ask about Edge (EdgeHTML) support in the era of everything being Blink-based... if I'm feeling cheeky.

I vote to leave Internet Explorer 11 supported in the current form. The first reason is, I occasionally use Windows RT, on which the latest browser is IE11, and there's no way to install any other browser on that platform. (I still feel that my purchase of a first-generation Surface in 2012 is a great investment, and it's lighter than even my 2016 ...

To state it: IE 11 is still supported by the manufacturer until 2025. The EOL is in 2025, the same date that Windows 10 support will be dropped by Microsoft. I don't contest the support drop by SE, but any argument that it's because it's no longer supported by the manufacturer does not match the facts.

What you have a problem with is opening this image. It is a 4908x4408 px plot GIF image with 1,203.9 KB of data. Your browser either runs out of memory due to a memory leak (which we cannot help; report a bug) or your system is quite tight on resources (which we cannot help either).

If you're looking for an alternative browser, then Software Recommendations is probably what you want, as noted by @Robert Longson. Be aware that SoftwareRecs.SE is pretty strict with questions, so be sure to read its help center and question quality guidelines before you post your question there. If you want a general way of lowering CPU usage on a browser,...

Normally accepted answers are shown on top. If the question asker accepts his own answer, this doesn't apply. Such an answer doesn't get pushed to the top. It is sorted just the same way as if it weren't accepted, which is randomly among those answers with the same score. This is not a bug but by design. If it really behaves differently in Chrome, then ...
Rob's comment (above) triggered "more thinking", and I quickly found the culprit: I use an 'add-on' in Firefox called AdBlockPlus. I've used it for years, and never had a problem with Stack Exchange sites. However, when I added stackexchange.com to its whitelist, the 'banner warning' disappeared. So - question in an answer: What changed? What switch ... Are you trying to post a question here, on SO? If so, keep in mind that you should tell us what you're trying to achieve (with possibly some background information), a piece of code (don't paste ALL of your code, just a piece of it) which is relevant to your question. After explaining and showing your code you could add a question if you haven't done so ... I don't use the tab title, so I'm not bothered by your examples. What does bother me though is that tags often chew up valuable space in Google results instead of showing more of the title: "proper nouns - Why are the United States often referred to as ..." "legislative process - How would the United States of America grant ..." "marvel cinematic universe - ... Assuming you mean questions like bugs happening while using a browser that turns out to be not supported according to the official FAQ, I usually won't downvote or close as off topic as it still about Stack Exchange. First of all, I'll post a comment saying the browser is not supported, linking to the FAQ. If there's value to others, I might close as ... Firefox for Mobile is not officially supported and never has been on our network. It has always been in a state of "it works but may break" in our list of supported browsers on Meta (which in fact still mentions version 10.0.3 from back in 2012). This is probably in the hands of the browser and out of the hands of the SE team. The browser has not accessed that page before. It is not in the browser's history. Therefore, as far as the browser cares, that link has not been visited by it. 
You sure did click on it, but the browser didn't handle it, and as far as it's aware you've never been to that page ...

An OpenID identifier is a URL that represents an identity on the Internet. OpenID was invented by LiveJournal as a way for users to prove their control over a particular blog. It has since come to be used by Google, AOL, Yahoo!, Stack Exchange, and other web sites as a decentralized authentication platform to let someone with an account on one site (the "...

Unfortunately, we can't really support this browser. Actually, navigating to the CSS site directly (https://cdn.sstatic.net/stackoverflow/all.css?v=62fd31659efc), and manually accepting the certificate, seemed to fix my problem. Thanks for your help everyone!

While recommendation questions are off-topic on most Stack Exchange sites, there is one that might be suitable if you're looking specifically for a browser add-on: Software Recommendations. It has pretty strict requirements for questions to be on topic, so be sure to read its help centre and question quality guidelines carefully. Outside of Stack Exchange ...

While HTTPS works (most of the time) it's not officially supported as yet. There are issues with getting certificates for the child meta sites (the current naming scheme means that SE needs a certificate per site, rather than one certificate that covers a range of sites). Therefore, any problems you get are likely to be transient.

Which browsers are officially supported is covered in "Which browsers are officially supported? And what else do I need?". It shows that all of your browsers should be supported. CSS files are downloaded from https://cdn.sstatic.net. Check that you have marked its SSL certificate as trusted.
How should I make a musical performance into a skill check that is interesting and dynamic?

So in the past I have run 3.5 DnD while one of the characters was a bard who was in a band with other bard NPCs, and occasionally they hosted concerts in inns. The way I did it back then was a series of Perform (instrument) checks, for Intro, Bridge 1, Bridge 2, Chorus, Bridge 1, Bridge 2, Chorus, Outro. Then I took the average of those rolls, and if it met a DC I had set in my head at the beginning, the concert was a success. I never had to present too many of them as skill challenges, so we only did it twice, and I never got around to developing a better system. I was thinking maybe each part (Chorus, Intro, etc.) would have a different value for a "meter" called "Hype", and if they failed different checks they'd get different penalties for the duration of the song. What is the best way to incorporate something like this, do you think? (I know the way I ask this makes the question seem chatty and like a discussion, but I am actually asking for something specific, I think)

Seems specific enough to me ... Just in case you were considering it, I have brought a guitar to game and tried to roleplay my bardic performances. Never found a way to do it without completely hijacking the game. @valadil I have no musical talent of any kind :D @OddCore, me neither, but that doesn't stop me from trying ;-)

To be honest, I wouldn't abstract this with skills. Rolling Perform eight times over seems more tedious than interesting. I had an idea for my last game but never got to try it. If the bard ever got into a musical duel, we were going to play Encore as a minigame. The bard would be able to use his musical skills to get hints from the GM. The other PCs could give him hints as well, but they'd have to use Bluff or some other skill to pass him hints without getting caught.
Since a musical performance is a combination of skill, improvisation, and appeal to a crowd, don't just roll some Perform checks, let your players roleplay their songs. Give them bonuses for good descriptions like "I fingertap the hell out of the solo on our latest song: Blood and Ice" or "I powerstance and headbang to the tune of The Astral Saga". Circumstantial bonuses could also apply from a previously good roll, roleplayed as reactions from the crowd. Maybe someone knows the song and sings along, inspiring others to sing the chorus with him, or people start to dance, inn wenches show their mammary glands (yes, I have a rock'n'roll or metal concert in mind). Also, static bonuses/penalties may apply, especially if they open a concert with The Ogre Slayer while playing for a bunch of ogres (Captain Obvious strikes again).
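For the mechanic in the question (average of eight Perform checks against a DC, plus a running "Hype" meter), a quick simulation helps with tuning the DC before bringing it to the table. This is just a prototype of the asker's house rule; all numbers are placeholders, not from any rulebook:

```python
import random

SECTIONS = ["Intro", "Bridge 1", "Bridge 2", "Chorus",
            "Bridge 1", "Bridge 2", "Chorus", "Outro"]

def perform_concert(perform_bonus, dc, rng=random):
    """Roll one Perform check per song section; Hype feeds back into later rolls."""
    hype = 0
    rolls = []
    for section in SECTIONS:
        roll = rng.randint(1, 20) + perform_bonus + hype
        rolls.append(roll)
        hype += 1 if roll >= dc else -1  # the crowd reacts section by section
    # The concert succeeds if the average check meets the DC.
    return sum(rolls) / len(rolls) >= dc

print(perform_concert(perform_bonus=8, dc=15))
```

Running this a few thousand times for a given Perform bonus gives a feel for how punishing a particular DC actually is.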
What are the rules surrounding a hyphen following an abbreviation? For instance, if something is owned by Apple Inc., does that make the compound phrasal adjective 'Apple Inc.-owned'? Or would I omit the period?

You can only omit the period if you expand the word to its full form (Apple Incorporated). But even with that, the hyphen would look wrong. Rephrase it. Owned by Apple Inc.

Why Inc? Apple-owned should be evident from the context.

This is a question of style, and different style manuals may give different recommendations. I will be following the Chicago Manual of Style (CMOS).

If you can, rephrase

The best recommendation in any unusual case is to rephrase. In your case, you could do it along the lines suggested by either Jason Bassford or jimm101. However, if you really can't or would prefer not to rephrase, read on.

Don't remove the period

First of all, CMOS says that abbreviations which end with a lowercase letter should contain a period after that letter (with some rare exceptions that don't apply here). Second, for Apple Inc., the period is part of the proper name, and CMOS says that respecting the form of the proper name takes precedence over whatever other rules there may be (see e.g. here). So on both counts, CMOS would frown upon removing the period after Inc.

A hyphen may follow a period

CMOS does allow for things like U.S.-oriented, as documented in this entry from their Q&A (here):

Q. I'm interested in how you would treat the following issue of double punctuation: U.S.-oriented. I decided to omit the hyphen, which I would have otherwise used, because I didn't like the way it looked following an abbreviation period.

A. It may look a little odd, but the hyphen is conventional there, because omitting it could cause readers to mistake "oriented" for a verb. If your publication's style permits, you can follow CMOS 16 in omitting the periods: US-oriented.
Replace the hyphen by an en dash

There is another detail here worth mentioning, coming from the fact that Apple Inc. is a compound. With open compounds CMOS suggests, though doesn't exactly demand, replacing the hyphen with an en dash. In your case, that would be like this: Apple Inc.–owned (en dash) as opposed to Apple Inc.-owned (hyphen). Here's the relevant passage from CMOS:

(begin quote)

6.80: En dashes with compound adjectives

The en dash can be used in place of a hyphen in a compound adjective when one of its elements consists of an open compound or when both elements consist of hyphenated compounds (see 7.82). Whereas a hyphen joins exactly two words, the en dash is intended to signal a link across more than two. Because this editorial nicety will almost certainly go unnoticed by the majority of readers, it should be used sparingly, when a more elegant solution is unavailable. As the first two examples illustrate, the distinction is most helpful with proper compounds, whose limits are made clear within the larger context by capitalization. The relationship in the third example depends to some small degree on an en dash that many readers will perceive as a hyphen connecting music and influenced. The relationships in the fourth example are less awkwardly conveyed with a comma.

the post–World War II years
Chuck Berry–style lyrics
country music–influenced lyrics (or lyrics influenced by country music)
a quasi-public–quasi-judicial body (or, better, a quasi-public, quasi-judicial body)

A single word or prefix should be joined to a hyphenated compound by another hyphen rather than an en dash; if the result is awkward, reword.

non-English-speaking peoples
a two-thirds-full cup (or, better, a cup that is two-thirds full)

An abbreviated compound is treated as a single word, so a hyphen, not an en dash, is used in such phrases as "US-Canadian relations" (Chicago's sense of the en dash does not extend to between).
(end quote)

To avoid possible mistakes it should be pointed out that the rule about full stops (periods) is different in modern British English. You should not put a full stop in if the last letter of the abbreviation is the last letter of the word, so Dr Smith but Prof. Smith.

Not in my (British) university! The Profs have to manage without their full stops.

But @liguisticum got it right in the first sentence: this is style, not grammar, and style guides vary.

+1 for the thorough coverage. In particular, if the poster wants to attach "-owned" to "Apple, Inc," an en-dash, not a hyphen, is the appropriate connector—at least according to Chicago (and some other U.S.) punctuation conventions.
This tutorial shows how to prepare your local machine for Node.js development, including developing Node.js applications that run on Google Cloud Platform. Follow this tutorial to install Node.js and relevant tools. Read Node.js and Google Cloud Platform to get an overview of Node.js itself and learn ways to run Node.js apps on Google Cloud Platform.

- Install Node Version Manager (NVM)
- Install Node.js and npm (Node Package Manager)
- Install an editor
- Install the Google Cloud SDK
- Install the Google Cloud Client Library for Node.js
- Install other useful tools

Install Node Version Manager (NVM)

Node Version Manager (NVM) is a simple bash script for managing installations of Node.js and npm. NVM does not support Windows; check out nvm-windows for managing your Node.js installation on Windows. Installing NVM is simple; check out the installation instructions for details on installing NVM on your platform.

Install Node.js and npm (Node Package Manager)

Once NVM is installed you can install Node.js and npm. To install the latest stable version of Node.js, run:

nvm install stable

To make it the default version, run the following:

nvm alias default stable

You can check what version of Node.js you're running with:

node --version

npm is the Node Package Manager for Node.js and should have been installed alongside Node.js. You use npm to install Node.js packages from the npm repository, for example:

npm install --save express

For additional reading, see Run Express.js on Google Cloud Platform.

Install an editor

Popular editors (in no particular order) used to develop Node.js applications include, but are not limited to:

- Sublime Text by Jon Skinner
- Atom by GitHub
- Visual Studio Code by Microsoft
- IntelliJ IDEA and/or WebStorm by JetBrains

These editors (sometimes with the help of plugins) give you everything from syntax highlighting, IntelliSense, and code completion to fully integrated debugging capabilities, maximizing your Node.js development efficacy.
Install the Google Cloud SDK

The Google Cloud SDK is a set of tools for Google Cloud Platform. It includes tools such as gcloud and bq, which you can use to access Google Compute Engine, Google Cloud Storage, Google BigQuery, and other products and services from the command line. You can run these tools interactively or in your scripts.

As an example, here is a simple command that will deploy any Node.js web application to the Google App Engine flexible environment (after deployment, App Engine will attempt to start the application):

gcloud app deploy

Install the Google Cloud Client Library for Node.js

The Google Cloud Client Library for Node.js is the idiomatic way for Node.js developers to integrate with Google Cloud Platform services, like Cloud Datastore and Cloud Storage. You can install the package for an individual API, like Cloud Storage for example:

npm install --save @google-cloud/storage

To use this client library, you must first authenticate. Complete the steps at getting started with authentication.

Install other useful tools

For a comprehensive list of amazing Node.js tools and libraries, check out the curated Awesome Node.js list.
Business analysis and software design for the German SyNergy Research Cluster SyNergy, the "Munich Cluster for Systems Neurology", is a collaboration between Ludwig-Maximilians University Munich (LMU), Technical University of Munich (TUM), Helmholtz Munich, DZNE Munich and the Max-Planck-Gesellschaft. SyNergy promotes integrative research into a broad range of neurological diseases, with the aim to better understand the underlying mechanisms of these diseases and eventually improve therapeutic options. The central focus is to foster close collaboration between the SyNergy members across the boundaries of the traditional medical fields of neurodegenerative, inflammatory and vascular diseases. Setting up collaboration between the research groups is complicated because the data is siloed in the various research labs of the participating institutes. There is no overview of what data is available in the cluster and the data is often difficult to find. Therefore, SyNergy has asked The Hyve to advise them on a solution that would best fit their needs. How we solved it First, we investigated the state of data management within the cluster. After collecting basic information via a questionnaire, we conducted interviews with leaders of the research labs to understand all relevant details related to their business, the data and metadata used and generated, data standards used, data flows, people and roles involved, et cetera. We combined and analyzed the data management information collected and proposed a direction for a solution on how to best make the data from all participating research institutions findable. We presented and discussed the solution direction in a validation workshop to be sure we did not overlook any important needs regarding data management and to achieve consensus between all stakeholders on the direction to take. 
Finally, we created a use case diagram, proposed a data model and specified all functional requirements related to metadata upload, search, permissions and data FAIRness, as well as the non-functional requirements, mainly related to security. We investigated several existing, potentially suitable open-source research data management tools to verify whether there was already a turn-key solution that could be applied to SyNergy’s requirements. The outcome of this investigation is described in the section below.

The outcome of the business analysis

Our investigation showed that SyNergy did not need a new data storage solution or a solution to exchange the actual research files, since this functionality will be provided by one of the academic partners of the cluster. The main need of SyNergy is to make the data generated in the various research labs findable. They would like to be able to see the location of all existing files via a link or text. Therefore, we proposed the use of a metadata catalog. We investigated the suitability of the open-source metadata catalogs COLID, CEDAR, Gen3 and Bento. In the cases of COLID, CEDAR, and Bento, the data model turned out not to be flexible enough to meet all the needs of the SyNergy consortium. In the case of Gen3, the model is more flexible, but part of it is still fixed. There is also no easy metadata submission using user interface forms (metadata templates are the main way of metadata input), and access management seems overly complex. After ruling out the investigated open-source RDM tools, we compared the effort of building a new custom tool with the effort of adjusting Fairspace, our in-house developed data management tool, to meet the SyNergy requirements. Fairspace was built by The Hyve for Institut Curie and FNS-Cloud.
It is an open-source, mature, and production-ready tool with the advantage of having a flexible metadata model that can be fully customized, an intuitive search and browse interface, and metadata storage that ensures FAIR data management. The figure below shows the key features of Fairspace. Adapting Fairspace to the SyNergy needs requires less effort than building a custom tool from scratch. Therefore, we proposed a modified version of Fairspace for the SyNergy cluster. The result of the project was an architectural design of the Fairspace solution. We proposed a single (meta)data model to fit the SyNergy research data from the various research labs and provided mocks to show how the metadata search and browse user interface would look when applied to the SyNergy data model. The metadata submission module will be adapted to enable uploading metadata via a user interface form or file. A vocabulary page will be introduced to manage the metadata dictionaries. Finally, we proposed to make a change to the authorization module to support the user roles and permissions as required by SyNergy. In conclusion, business analysis and software design projects like the one executed for SyNergy bring several benefits to the organizations needing to make decisions on software solutions: It provides the organization with realistic requirements linked to the context of their business and provides detailed and timely documentation. The organization saves valuable time and resources by outsourcing a requirements gathering project and receives a future-proof solution proposal. The organization can plan and scope the implementation project more efficiently when it is tuned to their stakeholders’ priorities and budget availability. The organization engages with The Hyve on a smaller scale and cost-effective project. 
More specifically, for the solution proposed in the current project, the added value of Fairspace for Synergy is to allow for the findability of the data generated in the various research laboratories. Some Fairspace features need to be disabled and others will be added or customized. Selecting the right configuration for an organization is something The Hyve can guide with a business analysis and software design project.
OPCFW_CODE
- “pay (CS) teachers more”: ain’t going to happen with the current Union/Local Authority cabal; witness Mr Swinney’s travails. If it needs to happen for our economy, it should. Could we create a new grade of teacher? The current system is failing, so it needs to be fixed, and the fix should not be blocked just because a union said no. Our kids are likely to face the most difficult economic environment for many decades, and we need to make sure they are fully supported in making the best of the opportunities that schools provide them. If we did not have enough Maths or English teachers, we would do something about it, so why not in Computer Science? Let's show all our kids the possibilities of this information age.

- ditto “Get politicians to define high-level KPIs”. As Max Tegmark said on Sunday Brunch yesterday, there is very little informed public discussion on topics such as AI (the new CS), apart from doom-mongering. Parents are tax-payers, and if they feel strongly about supporting their kids and building a strong economy, they can push politicians to make changes. If a school drops Computer Science, then parents should have some say in this. Other countries are investing heavily in creating a coding generation, but we sit back and watch our kids being switched off before they even get to the point of enjoying creating things with technology.

- “Cloud”, “Crypto”, “Blockchain” etc. That’s a lot of tech for school teachers to master. Have you seen the maths in the next big thing, Differential Privacy, as advertised by Apple and used by the likes of Google in Federated Learning? Such things are too advanced. It doesn’t have to be complex. Drop the complexity, in fact, and make it simple. Cloud is simple. With a few lines of Python I can create a face recognition system, and two lines gives me a machine learning method. In another two lines I can create amazing charts in any form I like. If we use Python, it will just work no matter which system we use.
In the Cloud, things are much simpler than they used to be, and in a few lines of code we can achieve things that would have taken thousands of lines of code. Blockchain is also not a difficult concept to understand, but it will bring massive change to our society, so why not tell kids about it and let them debate the best way forward? For cryptography, just teach them the basics of Bob and Alice, and public key encryption, as it will be a fundamental part of the world they will be entering. The more complex things can come later; we just need to engage our kids with the opportunities (and the flaws) of this new world. Teach them why they see the green padlock, and why it is so important (but also how others can trick you with it).
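The Bob-and-Alice basics really do fit in a handful of lines. Here is a toy RSA round trip with deliberately tiny textbook primes, a teaching sketch only (real systems need large keys and padding, which is exactly the kind of nuance that can come later):

```python
# Toy RSA for teaching Bob and Alice -- never for real security
# (tiny primes, no padding).
p, q = 61, 53
n = p * q                    # public modulus (3233)
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e

message = 42                 # any number below n
ciphertext = pow(message, e, n)    # Alice encrypts with Bob's public key (e, n)
decrypted = pow(ciphertext, d, n)  # Bob decrypts with his private key d

print(decrypted == message)  # True
```

Two `pow` calls carry the whole idea: anyone can lock a message with the public pair (e, n), but only the holder of d can unlock it.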
OPCFW_CODE
The derivation of the demand function for an input, when there are multiple inputs, is, as before, based on the criterion for profit maximisation derived in chapter 4; the following criteria hold for profit maximisation (see eqs.).

Appendix I: derivation of household demand functions. The first-order condition (2c) derived from the 'household utility maximization model' in section 22 can be rewritten as follows.

Demand: much of the preceding material in the consumer theory section is focused on the relationship between a consumer's preferences and a utility function that represents these preferences.

Read this article to learn about the technique of deriving the demand curve from the price-consumption curve. The price-consumption curve (PCC) indicates the various amounts of a commodity bought by a consumer when its price changes; the Marshallian demand curve also shows the different amounts of a good.

View notes - deriving_demand_functions_examples from Economics 101 at the University of Toronto. Deriving demand functions - examples: what follows are some examples of different preference relations.

Using the aggregate demand curve y = 2(m/p) and the aggregate supply curve y = 3,000, I set the two equations equal to derive p: since y = 2(m/p) and y = 3000, we have 3000 = 2(m/p).

Claim 2: if the demand function is q = 3m p (m is the income, p is the price), then the absolute value of the price elasticity of demand decreases as price increases.

Lecture notes on the elasticity of substitution: express the responsiveness of demand for a good to its … elasticity of demand, so revenue is an increasing function of …

What is the demand function for quasilinear preferences? How do you derive the demand function for a Stone-Geary utility function?

…and use a straight line to connect them; thus we derive the demand curve for x. To derive a budget line, we need to use the budget constraint function: px·x + py·y = I.

The demand curve shows the amount of goods consumers are willing to buy at each market price. A linear demand curve can be plotted using the following equation: Qd = a − b(P), where Qd is quantity demanded, a captures all factors affecting demand other than price (e.g. income, fashion), b is the slope of the demand curve, and P is the price.

Monotone comparative statics; finite data and GARP. Econ 2100, Fall 2018, Lecture 7, September 19. … is the demand (price) function; c(·) is the cost function.

Suppose that the demand and price for strawberries are related by a linear demand function of the form p = D(x), where p is the price (in dollars) and x is the demand in hundreds of quarts.

Functional forms in consumer theory. 1. Cobb-Douglas utility and log-linear demand systems: consider a utility function given by u = v(x) = ∏ᵢ₌₁ⁿ xᵢ^αᵢ = x₁^α₁ x₂^α₂ x₃^α₃ (1).

Cost functions: the economic cost of an input is the minimum payment required to keep the input in its present employment. … Can we derive a demand curve?

The most important point elasticity for managerial economics is the point price elasticity of demand. This value is used to calculate marginal revenue, one of the two critical components in profit maximization (the other critical component is marginal cost); profits are always maximized when marginal revenue equals marginal cost.

Derive the demand functions for each of the three firms. So far I have been able to derive the demand function for firm 1 by first solving for xᵢ for the consumer indifferent between firm 1 and firm 2.

More generally, what is a demand function? It is the optimal consumer choice of a good (or service); Marshallian demand is homogeneous of degree zero in money income and prices.

Factor demands (and, more generally, deriving a cost function). Deriving a cost function: outputs are produced from inputs; q = f(x1, x2) is a production function, which gives the quantity of output as a function of the quantities of inputs.

Derive the equation for the consumer's demand function for clothing. Is this correct? This is my first attempt at deriving demand functions. Also, is the utility …
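Several of the fragments above ask how to derive a demand function from a utility function. For the Cobb-Douglas form u = x^a · y^(1−a), the Marshallian demand has the well-known closed form x* = a·m/pₓ (the consumer spends the fraction a of income m on x). A brute-force check along the budget line confirms this; the parameter values below are hypothetical, chosen only for illustration:

```python
# Numerical check of Cobb-Douglas demand: maximize u = x^a * y^(1-a)
# subject to px*x + py*y = m, and compare with the closed form x* = a*m/px.
a, px, py, m = 0.3, 2.0, 5.0, 100.0   # hypothetical preferences, prices, income

best_x, best_u = 0.0, -1.0
steps = 100_000
for i in range(1, steps):
    x = (m / px) * i / steps          # candidate x along the budget line
    y = (m - px * x) / py             # spend the remaining income on y
    u = x ** a * y ** (1 - a)         # Cobb-Douglas utility
    if u > best_u:
        best_x, best_u = x, u

closed_form = a * m / px              # = 0.3 * 100 / 2 = 15.0
print(abs(best_x - closed_form) < 0.01)   # True: the grid optimum matches a*m/px
```

The same brute-force template works for the quasilinear and Stone-Geary forms mentioned above; only the utility line changes.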
OPCFW_CODE
On SLES12 SP3, automount is on by default. When I insert a DVD containing the HP ProLiant Service Pack, the DVD is auto-mounted; I can change to the /run/media/root/SPP directory and execute launch_sum.sh, since its permissions are 744. In fact, all files on the DVD have permissions 744 and all directories are 777. The same ISO (HP Service Pack) on a USB stick (written via the HP USB Drive Key Utility) will also automount, but the file permissions for launch_sum.sh are 644 and it will not execute (permission denied). All files on the USB device have permissions 644, except for launch_sum.bat, which is 755 (this is the start script for use on Windows systems). The mount command shows the USB stick is mounted with fmask=0022 and dmask=0077, but those values seem to be ignored. If I unmount the USB device, then manually mount it (mount /dev/sdc /mnt), the permissions are as expected and launch_sum.sh can be executed. I’ve been trying to use /etc/auto.master and /etc/auto.misc to control autofs, but have had no success. I removed the mount entry for the CD in auto.misc, but it seemed to have no effect and I suspect it is not even being used. The Admin Guide has lots of information on autofs, but nothing dealing specifically with USB devices. Has anyone tried changing USB automount behavior and succeeded?

It appears that in the past few days you have not received a response to your posting. That concerns us, and has triggered this automated reply. These forums are peer-to-peer, best effort, and volunteer run; if your issue is urgent or not getting a response, you might try one of the following options:
- Visit http://www.suse.com/support and search the knowledgebase and/or check all the other support options available.
- Open a service request: https://www.suse.com/support
- You could also try posting your message again. Make sure it is posted in the correct newsgroup.
(http://forums.suse.com) Be sure to read the forum FAQ about what to expect in the way of responses. If this is a reply to a duplicate posting or otherwise posted in error, please ignore and accept our apologies, and rest assured we will issue a stern reprimand to our posting bot… Your SUSE Forums Team

Getting closer. I found that by creating 99-udisk2.rules in /etc/udev/rules.d/ I can alter the location of the USB automount from /run/media// to somewhere else, i.e. /media/. I just have to figure out how to change the file permissions of this automount; MODE=0777 has no effect.

Have you checked if the “noexec” option is set for those auto-mounted file systems?

Yes, noexec is on for the automounts. I can’t find any way to disable it or affect the mount parameters in any way.
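Until the udisks/autofs behaviour is tamed, a manual remount with explicit VFAT options is a reliable workaround, as the original poster's `mount /dev/sdc /mnt` already hinted. A sketch (device name, mountpoint, and the automount path are examples; check `lsblk` for your stick):

```shell
# Unmount wherever udisks put the stick (path under /run/media/ is an example).
umount /run/media/root/SPP

# Remount with explicit FAT permission masks and exec enabled:
#   fmask=0022 -> files become 755, dmask=0022 -> directories become 755.
mount -o fmask=0022,dmask=0022,exec /dev/sdc1 /mnt

# The service-pack launcher should now run.
/mnt/launch_sum.sh
```

This does not fix the automount policy itself, but it makes the stick usable without rewriting udev rules.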
OPCFW_CODE
Manual Reference Pages - UNZOO (1)

NAME
unzoo - zoo archive extractor

SYNOPSIS
unzoo [-l] [-v] <archive>[.zoo] [<file> ..]
unzoo -x [-abnpo] [-j <prefix>] <archive>[.zoo] [<file> ..]

DESCRIPTION
This manual page documents briefly the unzoo command. This manual page was written for the Debian distribution because the original program does not have a manual page.

unzoo is a program that lists or extracts the members of a zoo archive. A zoo archive is a file that contains several files, called its members, usually in compressed form to save space. unzoo can list all or selected members, or extract all or selected members, i.e., uncompress them and write them to files. It cannot add new members or delete members. For this you need the zoo archiver, called zoo, written by Rahul Dhesi.

If you call unzoo with no arguments, it will first print a summary of the commands and then prompt for command lines interactively, until you enter an empty line.

Usually unzoo will only list or extract the latest generation of each member. But if you append ;<nr> to a path name pattern, the generation with the number <nr> is listed or extracted. <nr> itself can contain the wildcard characters ? and *, so appending ;* to a path name pattern causes all generations to be listed or extracted.

OPTIONS
A summary of options is included below.

-l
List the members in the archive <archive>. For each member unzoo prints the size that the extracted file would have, the compression factor, the size that the member occupies in the archive (not counting the space needed to store the attributes such as the path name of the file), the date and time when the file was last modified, and finally the path name itself. Finally unzoo prints a grand total for the file sizes, the compression factor, and the member sizes.

<file>
List only files matching at least one pattern; ? matches any char, * matches any string.

-v
List also the generation numbers and the comments, where higher numbers mean later generations. Members for which generations are disabled are listed.

-x
Extract the members from the archive <archive>. Members are stored with a full path name in the archive and, if the operating system supports this, they will be extracted into appropriate subdirectories, which will be created on demand.

-a
Extract all members as text files (not only those with !TEXT! comments).

-b
Extract all members as binary files (even those with !TEXT! comments).

-n
Extract no members, only test the integrity. For each member the name is printed, followed by -- tested if the member is intact, or by -- error, CRC failed if it is not.

-p
Extract to stdout.

-o
Extract over existing files without asking for confirmation. The default is to ask for confirmation. unzoo will never overwrite existing read-only files.

-j <prefix>
Prepend the string <prefix> to all path names for the members before they are extracted. So for example, if an archive contains absolute path names under UNIX, -j ./ can be used to convert them to relative path names. Note that the directory <prefix> must exist; unzoo will not create it.

AUTHOR
This manual page was written by Thomas Schoepf <email@example.com>, for the Debian GNU/Linux system (but may be used by others).

UNZOO (1)                August 23, 2002

Visit the GSP FreeBSD Man Page Interface. Output converted with manServer 1.07.
OPCFW_CODE
AwesomeWM: root API Functions

[ ] buttons
[ ] keys
[ ] cursor
[ ] fake_input
[ ] drawins
[ ] wallpaper
[ ] size
[ ] size_mm
[ ] tags

A few questions regarding some of these: What are your plans for the wallpaper function? Is it going to call way-cooler-bg? Keys (most likely going to be a table similar to awesome's?)

> What are your plans for the wallpaper function? Is it going to call way-cooler-bg?

way-cooler-bg will probably be deprecated, since there's no need to have an external program for this if you can do it in Lua. Instead, Way Cooler itself will handle the wallpaper, and any transformations needed for the background image can be done by Lua / Cairo.

> Keys (most likely going to be a table similar to awesome's?)

Yes, if it is exposed via the Awesome API it should have the same layout as what Awesome returns (otherwise there would be too much breakage...). The keys should be pretty simple, as I assume it returns info directly from XKB. In the future we can attempt to do fancy things, e.g. with libinput.

Assuming you are directly porting the API, you want something similar to this for the wallpaper? https://awesomewm.org/doc/api/libraries/gears.wallpaper.html

Yep, exactly like that

Ok, I'll look into working on the wallpaper stuff... and gonna start learning some Rust ;)

Cool! If you need any help, just ping the gitter channel and I should be able to help

I tried to implement root.tags() on top of #508. However, I failed. My failure resulted in #507. The problem is that I somehow keep a reference to all tags with .activated = true. I tried to do this with a table that is saved in the registry. Since in awesome the result of root.tags() preserves its order, I wanted to use a sequence (i.e. integer keys refer to tags). However, I failed to remove tags from this list. My attempt follows below. The problem is that the assert!(found); in set_activated() fails.
diff --git a/src/awesome/root.rs b/src/awesome/root.rs index 56c7ca4..7ab66a2 100644 --- a/src/awesome/root.rs +++ b/src/awesome/root.rs @@ -84,6 +84,57 @@ fn set_wallpaper<'lua>(_: &'lua Lua, _pattern: *mut cairo_pattern_t) -> rlua::Re fn tags<'lua>(lua: &'lua Lua, _: ()) -> rlua::Result<Table<'lua>> { let table = lua.create_table()?; - // TODO FIXME Get tags + let activated_tags = lua.named_registry_value::<Table>(super::tag::TAG_LIST)?; + for pair in activated_tags.clone().pairs::<Value, Value>() { + let (key, value) = pair?; + table.set(key, value)?; + } Ok(table) } + +#[cfg(test)] +mod test { + use rlua::Lua; + use super::super::root; + use super::super::tag; + + #[test] + fn tags_none() { + let lua = Lua::new(); + tag::init(&lua).unwrap(); + root::init(&lua).unwrap(); + lua.eval(r#" +local t = root.tags() +assert(type(t) == "table") +assert(type(next(t)) == "nil") +"#, None).unwrap() + } + + #[test] + fn tags_does_not_copy() { + let lua = Lua::new(); + tag::init(&lua).unwrap(); + root::init(&lua).unwrap(); + lua.eval(r#" +local t = tag{ activated = true } +local t2 = root.tags()[1] +assert(t == t2) +t2.name = "Foo" +assert(t.name == "Foo") +"#, None).unwrap() + } + + #[test] + fn tags_some() { + let lua = Lua::new(); + tag::init(&lua).unwrap(); + root::init(&lua).unwrap(); + lua.eval(r#" +local first = tag{ activated = true } +local second = tag{ activated = true } +local t = root.tags() +assert(t[1] == first) +assert(t[2] == second) +"#, None).unwrap() + } +} diff --git a/src/awesome/tag.rs b/src/awesome/tag.rs index 849ab20..86a4d21 100644 --- a/src/awesome/tag.rs +++ b/src/awesome/tag.rs @@ -2,12 +2,14 @@ use std::fmt::{self, Display, Formatter}; use std::default::Default; -use rlua::{self, Table, Lua, UserData, ToLua, Value, UserDataMethods, AnyUserData}; +use rlua::{self, Table, Lua, UserData, ToLua, Value, UserDataMethods, AnyUserData, Integer}; use super::object::{self, Object, Objectable}; use super::class::{self, Class, ClassBuilder}; use 
super::property::Property; use super::signal; +pub const TAG_LIST: &'static str = "__tag_list"; + #[derive(Clone, Debug)] pub struct TagState { name: Option<String>, @@ -55,6 +57,7 @@ impl UserData for TagState { } pub fn init(lua: &Lua) -> rlua::Result<Class> { + lua.set_named_registry_value(TAG_LIST, lua.create_table()?)?; method_setup(lua, Class::builder(lua, "tag", None)?)? .save_class("tag")? .build() @@ -132,7 +135,28 @@ fn set_activated<'lua>(lua: &'lua Lua, (obj, val): (AnyUserData<'lua>, bool)) } tag.activated = val; } - if !val { + let activated_tags = lua.named_registry_value::<Table>(TAG_LIST)?; + let activated_tags_count = activated_tags.len()?; + if val { + let index = activated_tags_count + 1; + println!("Setting tag {:?} at index {}", &tag.get_object()? as *const _, index); + activated_tags.set(index, obj.clone())?; + } else { + let tag_ref = &tag.get_object()? as *const _; + let mut found = false; + for pair in activated_tags.clone().pairs::<Integer, AnyUserData>() { + let (key, value) = pair?; + if tag_ref == &Tag::cast(value.into())?.get_object()? as *const _ { + found = true; + /* Now remove this... */ + for index in key .. activated_tags_count-1 { + activated_tags.set(index, + activated_tags.get::<_, Value>(index + 1)?)?; + } + break; + } + } + assert!(found); set_selected(lua, (obj.clone(), false))?; } signal::emit_object_signal(lua, My next best idea would be to create a counter AtomicUsize in a lazy_static! and use .fetch_add(1, Ordering::Relax) to give each tag a unique identifier that can be used for comparisons. However: Ewwwww. Another idea would be to just ignore the order of tags and use the keys of the table to reference them (i.e. __tag_list[tag] = true). However, for this I would first like to have an okay that I may ditch the order here. (In theory it should not be needed, awful.tag implements its own scheme for establishing an order of tags)
GITHUB_ARCHIVE
What was the function of the ring-shaped module in 22nd century Vulcan ships?

All 22nd century Vulcan ships had a ring-shaped module. Not just starships; small shuttlepods also had this module. As Vulcans do everything logically, there must be some purpose to those modules. A repeating structure in all ships can't be a coincidence. Over half a decade has passed since I last watched the Star Trek: Enterprise TV series, so I'm unable to recall their function, if it was ever mentioned in the show. Do you have those things in mind? Other sources are warmly welcome. Here's a small shuttlepod:

That ring is more obviously a drive than anything else I've seen that didn't spit fire out the back.

In an article for Star Trek: The Magazine ("Designing the Ti'Mur"), Doug Drexler, Senior Production Artist for ST: Enterprise, described the circular hoop feature as warp nacelles:

'Ah,' I thought, as I mulled over the Vulcan ship design question for Enterprise. 'This is the perfect place to fit the hoop ship.' "The script stated that Trip would be ga-ga over this Surak-class starship. After laying eyes on it, there was no question in my mind that he [Matt Jeffries] went to bed that night puzzling out the exotic shape. The other change involved eliminating any physical connection between the main body of the ship and the hoop, so they are actually separate elements. "We liked the defiance of conventional structural support," Drexler explains. "It makes the Vulcans look like they control powers beyond human ken. This was true of the original TV Enterprise. Those struts that support the nacelles defy what we understand today. It says that these people are masters of technologies that we don't yet understand. It speaks volumes for the technology at play."
- JULY 2002 ISSUE 39 STAR TREK: THE MAGAZINE Rick Sternbach; Senior Production Illustrator for ST: Enterprise described the circular hoop feature as an annular warp ring: Abandoning the preliminary design lines which echoed the design of the long range shuttle, Sternbach arrived at a final version in September 1991 and his notes on the final design read, "Vulcan Ship V Variant of Annular; No windows or other details; basic body shape." Later he recalled, "The commandeered Vulcan ships in "Unification" followed a pretty familiar approvals flow of initial idea, producer changes, and final concept to go to the model maker, in this case Greg Jein. Since we hadn't seen much in the way of Vulcan ship technology, beyond the motion picture shuttle, it was a bit daunting to home in on a true Vulcan style, and I can't say I'm terribly happy with the final result. Hindsight always invokes a desire for more design time, which might have helped. Perhaps different proportions on the annular warp ring, more curves, and more positive-negative surface detailing." - Star Trek: The Magazine Volume 3, Issue 8, page 104 Michael Okuda, Art Supervisor for ST: Enterprise has offered an in-depth treknobabble explanation of how the rings create a warp field: One of the most radical experiments in early Earth starship design was the Enterprise XCV. Unlike the traditional nacelle-and-saucer configuration, the XCV uses an annular propulsion system, based on Vulcan vehicle designs. This ship however, employed cyclotron accelerators to create a high-energy proton flux. The protons circled through the massive outer rings of verterium gallenide segments, generating a symmetrical subspace field. Each of the two coleopter ring structures contained two counter-rotating cyclotrons. The cyclotrons in each ring operated slightly out of phase with each other, generating the propulsive field imbalance that carried the ship through subspace at warp speeds. 
@Keen -As in "annular confinement beam" (a beam that goes around the person) @user1027 true. I'll try to see if we can fix it after I complete my CPR heart and lung recovery course and get caught up on all my DHS national defense paperwork. The rings are warp nacelles. Not everyone needs the whole phallic symbolism of Kirk's ship :-) What's Warp Nacelles doing in Shuttle Pod?? @SachinShekhar most starfleet shuttlecraft had warp, though slower and shorter-range than their motherships @Jeff, it could be argued that the Vulcan ships (particularly the lowest one, above) show even greater, er, symbolism. If Starfleet nacelles are phallic, then the Vulcan nacelles would be their, er, counterpart? The Vulcan warp rings look similar to the proposed Alcubierre drive, and may have been influenced by that concept. It was first proposed in 1994, and Enterprise premiered in 2001. Due to work by NASA's Dr. Harold "Sonny" White, a ring configuration came to be seen as more efficient than Alcubierre's original geometry; however, White's design ideas were not published until 2003 (White, H., “A Discussion on space-time metric engineering,” Gen. Rel. Grav. 35, 2025-2033) and did not become widely known until 2011 (Warp Field Mechanics 101), so the timing seems a bit problematic. http://www.regeeken.com/wp-content/uploads/2014/05/warpdrivediagramedited.jpg
STACK_EXCHANGE
T-Rex Terra Accents Sheet Moss (10). Other Pythons: we guarantee our animals to be alive and healthy, to your satisfaction, when you receive them and for three days after! I do, for the most part, know how to take care of him, but I would love any additional information that might be out there.

Reptiles - carpet pythons: find out more about Australia Zoo's amazing animals! It is named for its hooked snout, which it uses to dig burrows, and for its reddish-brown back scales.

Jan 09, 5:33 pm · Red Spotted-Beaked Snake (Rhamphiophis rubropunctatus), by Sand Boa Man » Tue Oct 14: I was just wondering if anyone here has experience keeping these snakes and could give me any tips on their care. I have a rufous beaked snake and I would like to know more about how to care for him. …such as the Beaked Sea Snake, having venom up to ten times more powerful than a…

Aspidites melanocephalus. Colour: tan to cream base colour with regular transverse bands of rusty red/orange/brown, with dorsal smudgings of black sometimes forming a dorsal stripe. He is somewhere between 4…

Sedge plants have the characteristic strappy leaves similar to many grasses and, just like grass, they reproduce from seed and rhizomes.

Red beaked snake care sheet. Jen from LLLReptile Menifee goes over how to set up a Mangrove Snake!

Growing Sedge Plants: sedge crowds out other invasive species and comes in many hues and heights.

Milk snakes are docile and make great pets, especially for first-time snake owners. While they're lower maintenance than other pets, they still require proper care.

Please read our full animal guarantee for further details. …0m but usually 2.… Zoo Med Aspen Snake Bedding (101). Rhamphiophis oxyrhynchus: Red Beaked Snakes are mildly venomous snakes endemic to East Africa.

The name "Corn Snake" is a holdover from the days when southern farmers stored harvested ears of corn in a wood frame or log building called a crib. Zilla Beaked Moss Reptile Bedding (15). 10 Unusual and Amazing Snakes. The Corn Snake (Pantherophis guttatus), or Red Rat Snake, is a North American species of rat snake that subdues its small prey by constriction. To check on availability of mangroves, and for supplies and care sheets. How to Care for a Milk Snake.

Ease of Care: properly housed, these pythons… Pinesnakes, Bullsnakes, Rear Fanged & Other Snakes: we guarantee our animals to be alive and healthy, to your satisfaction, when you receive them and for three days after! Head and neck is jet black. Max. …0m but usually 2.4m. Temperament: a docile natured snake if kept well fed. It hunts small animals during the day with the help of its venomous bite.

Underground Reptiles supplies some of the best tortoises for sale in the world! We have one of the greatest selections you will find, including sulcatas, marginateds, russians, leopard tortoises, yellowfoots, redfoots and more. It is an evergreen plant that does much of its growing in the cooler seasons and may go dormant in hot temperatures.

View a list of Animal Care Manuals today. The Association of Zoos & Aquariums offers Animal Care Manuals (ACMs) created by leading biologists, nutritionists, reproduction physiologists, behaviorists, veterinarians, aquarium employees and researchers to equip zoo volunteers with comprehensive care guides for various species.

The rufous beaked snake (Rhamphiophis oxyrhynchus) is a species of mildly venomous snake endemic to East Africa. The snake's venom, one component of which is a neurotoxin called rufoxin, causes hypotension and circulatory shock in small mammals, but is not dangerous to humans.

The last species to have been studied is the red-eared slider, which also breathes during locomotion, but takes smaller breaths during locomotion than during small pauses between locomotor bouts, indicating that there may be mechanical interference between the limb movements and the breathing apparatus.

Your Favorite Kind Of Snakes? Including the rufous beaked snake! Got a soft spot for the red tails too, not to mention some of the morphs that have come about.

The beaked sea snake, long thought to be a single species, is actually two distinct species, according to a paper published in the journal Science Direct. The snakes, which inhabit estuaries and lagoons throughout the Indo-Pacific, are known to be aggressive, with venom that is more potent than that of a cobra or rattlesnake.

I have both rufous beaked and red-headed beaked snakes. Any other basic care tips that might not be found on a care sheet? Rufous Beaked Snake? Pictures gallery of Sumatran blood python Snake.
A small, but cathartic, rant on what’s irritating me today.

I’ve been working on an SQL statement. It’s a join over several tables, with some additional filter conditions. Here’s what it looks like, approximately and obfuscatedly:

    SELECT DISTINCT context_vertex_name, context_form_value AS port, ...
    FROM context_info i, context c, context_vertex_form vf, context_form f, ...
    WHERE i.context_id = c.context_id
      AND i.context_name = "FOO"
      AND c.context_vertex_name = "BAR"
      AND vf.context_vertex_id = c.context_vertex_id
      AND vf.context_form_id = f.context_form_id
      AND ...
    GROUP BY ...

There are several annoyances in that. The first is that it uses the non-standard double quote delimiter for strings. I often see this in the Sybase-dominated work environment. Double quote is supposed to delimit identifiers, not strings! It’s useful when you want to use certain characters in a name, for example SELECT EXTRACT(year, NOW() - date_of_birth) AS "Age in Years", ..., or when some helpful DBA has given you table and column names with certain characters in them that you need to safely reference. It also controls the case-folding policy in SQL (when the DBA has helpfully used mixed-case identifiers).

Next is an old one that is unfortunately allowed by the standard: the “AS” keywords are omitted in the FROM-clause. The table names and their aliases are separated only by a single space character, which is only slightly distinguishable from the underscores in table names. Even syntax highlighting won’t help here, because both tokens are of the same type in the grammar and have the same format. It’s not ambiguous, but it’s a pain to read, especially in a big query. Whose idea was it to make AS optional? (It’s not optional for column aliases in the SELECT-clause; the standards committee must have been firing on all cylinders the day they wrote that part.)
It’s rather uncommon to see constructions of the form “name name” in formal languages; the only exception I can think of is lambda calculus and other functional languages (such as Haskell) in which a function call is the application of two terms, e.g. “f x” or “first 5 primes” (where currying allows us to treat all functions as taking one argument). Finally, and worst of all, the joins are all implicit! That means, there is a simple list of tables, then additional criteria in the WHERE-clause to filter their cartesian product. There’s not much difference performance-wise between implicit and explicit joins. The optimiser will normally decompose the ON-clauses of explicit joins into the WHERE-clause, and then re-extract useful join criteria from the WHERE-clause when planning how to scan the tables. But there is useful semantic information in distinguishing between join criteria and filter criteria — useful to a human reader who’s trying to understand the query. When you have dozens of criteria, dividing them into those that preserve relational meaningfulness, and those that merely control the subset of results returned, really helps to both comprehend what the intent of the query is, and to alter it to achieve that. Having those criteria mixed in together in one giant WHERE-clause does not. I blame these persistent habits largely on the DBMS we use. It’s so old and nonstandard that people will use whatever works to get the job done. And it’s so ubiquitous in the environment — in cahoots with its barely-evolved offspring, MS SQL Server; don’t get me started on the evils of T-SQL! — that people don’t realise there is a large, established world of (more-or-less-)standards-compliant, well-factored, SQL practice out there.
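To make the contrast concrete, here’s a runnable toy version of the complaint (Python’s sqlite3, with two invented tables standing in for the obfuscated schema): the same query written once as an implicit join with everything dumped into the WHERE-clause, and once with explicit JOIN … ON and the AS keywords spelled out. The join criteria live in ON, the filter criteria in WHERE.

```python
import sqlite3

# Toy two-table stand-in for the obfuscated schema in the post.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE context (context_id INTEGER, context_vertex_name TEXT);
    CREATE TABLE context_info (context_id INTEGER, context_name TEXT);
    INSERT INTO context VALUES (1, 'BAR'), (2, 'BAZ');
    INSERT INTO context_info VALUES (1, 'FOO'), (2, 'FOO');
""")

# Implicit join: a list of tables whose cartesian product is filtered
# in the WHERE-clause; join and filter criteria are mixed together.
implicit = conn.execute("""
    SELECT c.context_vertex_name
    FROM context_info i, context c
    WHERE i.context_id = c.context_id
      AND i.context_name = 'FOO'
      AND c.context_vertex_name = 'BAR'
""").fetchall()

# Explicit join: AS spelled out, join criteria in ON, filters in WHERE.
explicit = conn.execute("""
    SELECT c.context_vertex_name
    FROM context_info AS i
    JOIN context AS c ON i.context_id = c.context_id
    WHERE i.context_name = 'FOO'
      AND c.context_vertex_name = 'BAR'
""").fetchall()

assert implicit == explicit  # same result set, far clearer intent
print(explicit)  # [('BAR',)]
```

Both forms produce identical rows (and usually identical query plans); the explicit form just separates the relational structure from the filtering for the human reader.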
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from tqdm import tqdm
import gc

# Do a linear regression of fish sizes
LABELS = ['species_fourspot', 'species_grey sole', 'species_other',
          'species_plaice', 'species_summer', 'species_windowpane',
          'species_winter']


def change_coords(x1, y1, x2, y2, f_len):
    """Turn a bounding box into a square of side f_len centred on the box,
    clipped to the 1280x720 frame."""
    max_x = 1280
    max_y = 720
    x_av = (x2 + x1) / 2
    y_av = (y2 + y1) / 2

    if (x_av - f_len / 2) > max_x:
        x1_new = max_x
    elif (x_av - f_len / 2) < 0:
        x1_new = 0
    else:
        x1_new = x_av - f_len / 2

    if (x_av + f_len / 2) > max_x:
        x2_new = max_x
    else:
        x2_new = x_av + f_len / 2

    if (y_av - f_len / 2) > max_y:
        y1_new = max_y
    elif (y_av - f_len / 2) < 0:
        y1_new = 0
    else:
        y1_new = y_av - f_len / 2

    if (y_av + f_len / 2) > max_y:
        y2_new = max_y
    else:
        y2_new = y_av + f_len / 2

    return x1_new, y1_new, x2_new, y2_new


# Fit a linear regression: box width/height -> fish length.
df = pd.read_csv('../fish-video/train.csv')
df['_'] = df.apply(lambda row: change_coords(row['x1'], row['y1'],
                                             row['x2'], row['y2'],
                                             row['length']), axis=1)
df[['x1_new', 'y1_new', 'x2_new', 'y2_new']] = df['_'].apply(pd.Series)
del df['_']
df['x_d'] = df['x2_new'] - df['x1_new']
df['y_d'] = df['y2_new'] - df['y1_new']

X = df[df['x_d'] > 0][['x_d', 'y_d']].values
y = df[df['x_d'] > 0]['length'].values

clr = LinearRegression()
clr.fit(X, y)

del df, X, y

THRESHOLDs = [0.95]
box_list = ['xmin', 'xmax', 'ymin', 'ymax']

for THRESHOLD in THRESHOLDs:
    df_sub = pd.read_csv('fish_ssd_v2.csv')
    # leave only one video
    # df_sub = df_sub[df_sub.video_id.isin(['09WWcMSr5nbKk0lb', '01rFQwp0fqXLHg33'])]
    del df_sub['Unnamed: 0']
    # Zero out detections below the confidence threshold.
    df_sub.loc[df_sub[LABELS].max(axis=1) < THRESHOLD,
               LABELS + ['xmin', 'ymin', 'xmax', 'ymax']] = 0
    # df_sub = df_sub[df_sub.video_id.isin(['01rFQwp0fqXLHg33', '09WWcMSr5nbKk0lb'])]

    current_idx = 0
    block_start_idx = -1
    block_end_idx = 0
    is_block = 0
    fish_number = 0
    previous_vid = 'dfsdfsdfsdfsdfsddfs'
    current_vid = 'asfsadfsdafdsafasdfsdfa'

    def update_block(df, idx_start, idx_end,
                     best_class, best_class_prob,
                     xmin, xmax, ymin, ymax,
                     fish_number):
        # do some tweaking here
        df.loc[idx_start:idx_end - 1, LABELS + box_list] = 0
        # df.loc[idx_start:idx_end - 1, best_class] = best_class_prob * 0.95
        df.loc[idx_start:idx_end - 1, box_list] = xmin, xmax, ymin, ymax
        # df.loc[idx_start:idx_end - 1, second_best_class] = second_best_class_prob * 0.2
        df.loc[idx_start:idx_end - 1, 'fish_number'] = fish_number

    def get_block(df, idx_start, idx_end):
        return df[LABELS + box_list][idx_start:idx_end]

    def process_block(block):
        # Pick the class with the largest summed probability over the block,
        # and average its boxes.
        best_class = block[LABELS].sum(axis=0).sort_values(ascending=False).index[0]
        best_class_block = block[block[best_class] > 0][box_list].mean(axis=0)
        best_class_prob = block[block[best_class] > 0][best_class].max(axis=0)
        return (best_class, best_class_prob,
                best_class_block['xmin'], best_class_block['xmax'],
                best_class_block['ymin'], best_class_block['ymax'])

    with tqdm(total=df_sub.shape[0]) as pbar:
        for index, row in df_sub.iterrows():
            # print(index)
            current_vid = row['video_id']

            if row['xmin'] > 0:
                is_block = 1
                # print('is block triggered')
            else:
                is_block = 0

            if (is_block == 1) & (block_start_idx == -1):
                block_start_idx = current_idx
                # print('block_start_idx triggered')

            # End the block after 5 consecutive empty frames in the same video.
            if ((block_start_idx > -1) & (is_block == 0)
                    & (df_sub.loc[current_idx:current_idx + 5, 'xmin'].sum() == 0)
                    & (current_vid == previous_vid)):
                block_end_idx = current_idx
                # print('block_end_idx triggered')

            # Also end the block when the video changes.
            if ((block_start_idx > -1) & (is_block == 1)
                    & (current_vid != previous_vid)):
                block_end_idx = current_idx

            if block_end_idx > 0:
                block = get_block(df_sub, block_start_idx, block_end_idx)
                best_class, best_class_prob, xmin, xmax, ymin, ymax = process_block(block)
                fish_number += 1
                del block
                update_block(df_sub, block_start_idx, block_end_idx,
                             best_class, best_class_prob,
                             xmin, xmax, ymin, ymax,
                             int(fish_number))
                gc.collect()

                if current_vid != previous_vid:
                    fish_number = 0

                if current_vid != previous_vid:
                    block_start_idx = current_idx
                else:
                    block_start_idx = -1
                block_end_idx = 0
                # print('block update triggered')

            current_idx += 1
            previous_vid = row['video_id']
            # print(current_vid, previous_vid, fish_number)
            pbar.update(1)

    # Predict fish length from the merged box sizes.
    df_sub['x_d'] = df_sub['xmax'] - df_sub['xmin']
    df_sub['y_d'] = df_sub['ymax'] - df_sub['ymin']
    preds = clr.predict(df_sub[df_sub['x_d'] > 0][['x_d', 'y_d']])
    df_sub.loc[df_sub['x_d'] > 0, 'length'] = preds
    df_sub.to_csv('ssd_fish_thres_max_all_class_{}.csv'.format(THRESHOLD))

    del df_sub['xmin'], df_sub['xmax'], df_sub['ymin'], df_sub['ymax'], df_sub['x_d'], df_sub['y_d']
    df_sub = df_sub.set_index('row_id')
    df_sub = df_sub.fillna(value=0)
    df_sub.to_csv('ssd_fish_thres_max_all_class_{}.csv'.format(THRESHOLD))
It seems that soya milk curdles in coffee because the acid from the coffee curdles the proteins and/or fats in the (soya or other) milk, catalysed by the heat of the coffee; so that's what needs to be mitigated.

There are a couple of related questions on Cooking.SE, including this one that suggests adding salt to the coffee will help a little (table salt, or even bicarb / baking soda). I have tried adding table salt to the coffee first, then adding the coffee to the soya milk, and it seems to help (a little, sometimes; YMMV). Another Cooking.SE question also suggests waiting for the coffee to cool. The linked TheKitchn article (linked from the aforementioned previous question and @PythonMaster's answer) suggests that cooling the coffee, or pouring coffee into the soya milk (rather than the other way around), helps via similar mechanisms.

In my experiments in messing around with this, I have not found the order of adding ingredients makes much difference (i.e., putting coffee into the soya milk). It absolutely depends on the type of coffee, also, because different coffees will have different degrees of acidity. See also this table about relative acidity levels. However, it is certainly the case that adding coffee into the milk will be helpful in general (compare also to tempering of eggs, or adding acid into water rather than the other way around). In this order, these allow temperature and acid (respectively) to more slowly come into balance between the two substances as they are being combined.

IMHO, I have found the brand (i.e., the ingredients and production methods) of the soya milk to be the biggest factor. I (personally, with not-entirely-scientific -- yet! -- trials) find that soya milk brands that contain emulsifiers (or other additives) are less likely to curdle in coffee.
My only guesses are that:

- these compounds (emulsifiers, thickeners, or stabilisers, such as carrageenan or xanthan gum) interfere with the coagulation, or
- something else in the processing (perhaps, e.g., pasteurisation) has denatured the proteins or otherwise chemically changed the nutrients such that they are less likely to curdle (cf. UHT or ultra-pasteurised milk).

I generally prefer to avoid the additives, so this is an unfortunate conclusion for me personally.

Freshness of the milk is also an issue; milk of all types will spoil and slowly start to curdle due to natural processes (fermentation, bacteria, other contamination). There's another question about storing soya milk in the freezer, where we reached similar conclusions.

At the risk of making this an even longer answer, here are some other things to try:

- Add cold coffee into soya milk. Does it curdle? If so, it seems temperature is not your only enemy.
- Check your label; does it have carrageenan or similar? If so, additives might not be the metaphorical silver bullet.

I'm interested (fascinated, even; obsessed, maybe? :) ) in this, so I'd really like to hear feedback. Of the dozens of articles I've read on this, I've still not found a proper solution. Happy experimenting; comments welcomed.
[Xorp-cvs] XORP cvs commit: xorp/policy

pavlin at icir.org
Thu May 11 19:21:39 PDT 2006

Module name: xorp
Changes by: pavlin at xorpc.icir.org 2006-05-12 02:21:39 UTC

XORP CVS repository
policy: configuration.cc policy_statement.cc policy_statement.hh term.cc term.hh

* When the done_global_policy_conf XRL is received, call the appropriate methods to mark the end of each policy, the end of each term in a policy, and the end of each block in a term. This allows us to perform some tidying when (re)configuring.

* Print a warning at the end of each policy if there are out-of-order terms that won't be used.

* If there are out-of-order nodes in a term block, then use best effort to add all nodes at the end of the term: if no more out-of-order nodes can be added in-order, add the first out-of-order node to the end of the set of in-order nodes. Repeat recursively until there are no more out-of-order nodes left.

This fixes a potential issue if the policy template file is written such that it contains a leaf node that has no corresponding %create or %set XRL. Such a node may create a hole in the sequence of nodes received by the policy manager, hence we need the above heuristic.

Note that the above heuristic may result in some node misordering if we perform 2+ configuration changes. E.g., if node (H) is a leaf node without an XRL, and we try to commit the following two configurations one after the other:

  "A B C (H) E F G"
  "A B C (H) D E F G"

Then the result will be: "A B C E F G D" (the correct one should be "A B C D E F G"). Strictly speaking we need node ordering only for rtrmgr multi-value nodes (such as policy terms), and we don't really rely on the node ordering inside the policy terms, hence the above misordering should be harmless.
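The best-effort reordering can be sketched as follows. This is only an illustrative Python sketch of the heuristic (the actual implementation lives in term.cc); representing nodes as (sequence number, name) pairs is an invented model for the example:

```python
def finish_term(in_order, out_of_order):
    """Best-effort ordering of nodes at the end of a term block.

    in_order:     list of (seq, node) pairs already accepted in sequence
    out_of_order: list of (seq, node) pairs in arrival order
    """
    out_of_order = list(out_of_order)
    while out_of_order:
        # the sequence number that would continue the in-order run
        nxt = in_order[-1][0] + 1 if in_order else 0
        for i, (seq, node) in enumerate(out_of_order):
            if seq == nxt:
                # this node can still be added in-order
                in_order.append(out_of_order.pop(i))
                break
        else:
            # nothing continues the sequence (e.g. a leaf node with no
            # %create/%set XRL left a hole): append the first stashed
            # node and keep going
            in_order.append(out_of_order.pop(0))
    return [node for _, node in in_order]

# A hole at seq 3 (node H has no XRL): E, F, G are appended best-effort.
print(finish_term([(0, 'A'), (1, 'B'), (2, 'C')],
                  [(4, 'E'), (5, 'F'), (6, 'G')]))
# ['A', 'B', 'C', 'E', 'F', 'G']
```

The sketch terminates because every iteration removes one node from the out-of-order set, matching the "repeat recursively until there are no more out-of-order nodes left" wording.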
Revision  Changes  Path
1.12   +4 -1;  commitid: 22784463ec987ea6;  xorp/policy/configuration.cc
1.9    +32 -1; commitid: 22784463ec987ea6;  xorp/policy/policy_statement.cc
1.8    +6 -1;  commitid: 22784463ec987ea6;  xorp/policy/policy_statement.hh
1.19   +86 -1; commitid: 22784463ec987ea6;  xorp/policy/term.cc
1.12   +14 -1; commitid: 22784463ec987ea6;  xorp/policy/term.hh

More information about the Xorp-cvs
//
//  SwiftyPhotos.swift
//  SwiftyPhotos
//
//  Created by Chris Hu on 2017/12/6.
//  Copyright © 2017年 com.icetime. All rights reserved.
//

import UIKit
import Photos

public typealias ResultHandlerOfPhotoOperation = (Bool, Error?) -> Void
public typealias ResultHandlerOfPhotoAuthorization = (Bool) -> Void

fileprivate let AlbumsOfIOS = ["Selfies", "Screenshots", "Favorites", "Panoramas", "Recently Added"]

public class SwiftyPhotos: NSObject {
    public var isPhotoAuthorized = false

    /// all albums
    public var allAlbums = [PhotoAlbumModel]()

    /// Album for All Photos
    public var allPhotosAlbum: PhotoAlbumModel? {
        return allAlbums.first
    }

    /// Photos for All Photos Album
    public var allPhotosAssets: [PhotoAssetModel] {
        if let allPhotosAlbum = allPhotosAlbum {
            return allPhotosAlbum.photoAssets
        }
        return [PhotoAssetModel]()
    }

    private static let sharedInstance = SwiftyPhotos()
    public class var shared: SwiftyPhotos {
        PHPhotoLibrary.shared().register(sharedInstance)
        return sharedInstance
    }

    deinit {
        PHPhotoLibrary.shared().unregisterChangeObserver(self)
    }
}

// MARK: - Authorization

public extension SwiftyPhotos {
    func requestAuthorization(resultHandler: @escaping ResultHandlerOfPhotoAuthorization) {
        let authorizationStatus = PHPhotoLibrary.authorizationStatus()
        switch authorizationStatus {
        case .notDetermined:
            PHPhotoLibrary.requestAuthorization { (authorizationStatus) in
                if authorizationStatus == .authorized {
                    resultHandler(true)
                } else {
                    resultHandler(false)
                }
            }
        case .restricted, .denied:
            print(">>>SwiftyPhotos : authorizationStatus denied")
            resultHandler(false)
        case .authorized:
            print(">>>SwiftyPhotos : authorizationStatus already authorized")
            resultHandler(true)
        default:
            print(">>>SwiftyPhotos : authorizationStatus unknown")
        }
    }
}

// MARK: - reload

public extension SwiftyPhotos {
    func reloadAll(resultHandler: @escaping ResultHandlerOfPhotoAuthorization) {
        requestAuthorization { (isPhotoAuthorized) in
            if isPhotoAuthorized {
                self.p_reloadAll()
            }
            resultHandler(isPhotoAuthorized)
        }
    }

    fileprivate func p_reloadAll() {
        let handleAssetCollection = { (assetCollection: PHAssetCollection) in
            let photoAlbum = PhotoAlbumModel(assetCollection)
            self.allAlbums.append(photoAlbum)
        }

        // All Photos
        let allPhotosAlbum = PHAssetCollection.fetchAssetCollections(with: .smartAlbum, subtype: .smartAlbumUserLibrary, options: nil)
        allPhotosAlbum.enumerateObjects { (assetCollection, idx, stop) in
            handleAssetCollection(assetCollection)
        }

        let smartAlbums = PHAssetCollection.fetchAssetCollections(with: .smartAlbum, subtype: .albumRegular, options: nil)
        smartAlbums.enumerateObjects { (assetCollection, idx, stop) in
            guard let albumName = assetCollection.localizedTitle else {
                print(">>>SwiftyPhotos : failed to fetch albumName of assetCollection")
                return
            }
            if AlbumsOfIOS.contains(albumName) {
                handleAssetCollection(assetCollection)
            }
        }

        let albums = PHAssetCollection.fetchAssetCollections(with: .album, subtype: .albumRegular, options: nil)
        albums.enumerateObjects { (assetCollection, idx, stop) in
            handleAssetCollection(assetCollection)
        }
    }
}

// MARK: - Album

public extension SwiftyPhotos {
    func isAlbumExisting(albumName: String) -> Bool {
        if let _ = photoAlbumWithName(albumName) {
            return true
        }
        return false
    }

    func photoAlbumWithName(_ albumName: String) -> PhotoAlbumModel? {
        return allAlbums.filter { (photoAlbum) -> Bool in
            albumName == photoAlbum.name
        }.first
    }

    @discardableResult
    func createAlbum(_ albumName: String) -> Bool {
        if let _ = photoAlbumWithName(albumName) {
            print(">>>SwiftyPhotos : album \(albumName) is already existing")
            return false
        }

        var isAlbumCreated = false
        let semaphore = DispatchSemaphore(value: 0)
        PHPhotoLibrary.shared().performChanges({
            PHAssetCollectionChangeRequest.creationRequestForAssetCollection(withTitle: albumName)
        }) { (isSuccess, error) in
            if isSuccess == true {
                print(">>>SwiftyPhotos : succeed to create album : \(albumName)")
                isAlbumCreated = true
            } else {
                print(">>>SwiftyPhotos : failed to create album : \(albumName). \(String(describing: error))")
            }
            semaphore.signal()
        }
        _ = semaphore.wait(timeout: .distantFuture)

        allAlbums.removeAll()
        p_reloadAll()

        return isAlbumCreated
    }
}

// MARK: - Photo

public extension SwiftyPhotos {
    func saveImage(_ image: UIImage, intoAlbum albumName: String, withLocation location: CLLocation?, resultHandler: @escaping ResultHandlerOfPhotoOperation) -> Bool {
        createAlbum(albumName) // ensure the album exists; no-op if it already does
        guard let photoAlbum = photoAlbumWithName(albumName) else {
            return false
        }

        var isImageSaved = false
        let semaphore = DispatchSemaphore(value: 0)
        PHPhotoLibrary.shared().performChanges({
            // create an asset change request
            let assetChangeRequest = PHAssetChangeRequest.creationRequestForAsset(from: image)
            if let location = location {
                assetChangeRequest.location = location
            }
            // create a placeholder for asset, and add into assetCollectionChangeRequest
            let assetPlaceholder = assetChangeRequest.placeholderForCreatedAsset
            // create an assetCollection change request
            let assetCollectionChangeRequest = PHAssetCollectionChangeRequest(for: photoAlbum.assetCollection)
            let fastEnumerate: NSArray = [assetPlaceholder!]
            assetCollectionChangeRequest?.addAssets(fastEnumerate)
        }) { (isSuccess, error) in
            if isSuccess == true {
                print(">>>SwiftyPhotos : succeed to save image to album : \(albumName)")
                isImageSaved = true
            } else {
                print(">>>SwiftyPhotos : failed to save image to album : \(albumName). \(String(describing: error))")
            }
            semaphore.signal()
        }
        _ = semaphore.wait(timeout: .distantFuture)

        resultHandler(isImageSaved, nil)
        return isImageSaved
    }

    func deleteAsset(_ photoAsset: PhotoAssetModel, resultHandler: @escaping ResultHandlerOfPhotoOperation) -> Bool {
        var isAssetDeleted = false
        let semaphore = DispatchSemaphore(value: 0)
        PHPhotoLibrary.shared().performChanges({
            let fastEnumerate: NSArray = [photoAsset.asset]
            PHAssetChangeRequest.deleteAssets(fastEnumerate)
        }) { (isSuccess, error) in
            if isSuccess == true {
                print(">>>SwiftyPhotos : succeed to delete asset : \(photoAsset.name)")
                isAssetDeleted = true
            } else {
                print(">>>SwiftyPhotos : failed to delete asset : \(photoAsset.name). \(String(describing: error))")
            }
            semaphore.signal()
        }
        _ = semaphore.wait(timeout: .distantFuture)

        resultHandler(isAssetDeleted, nil)
        return isAssetDeleted
    }
}

// MARK: - PHPhotoLibraryChangeObserver

extension SwiftyPhotos: PHPhotoLibraryChangeObserver {
    public func photoLibraryDidChange(_ changeInstance: PHChange) {
        for (_, photoAlbum) in allAlbums.enumerated() {
            if let changeDetails = changeInstance.changeDetails(for: photoAlbum.fetchResult) {
                photoAlbum.changeWithDetails(changeDetails)
            }
        }
    }
}
Unexpected Error: #<Errno::EINVAL: Invalid argument - tcsetattr>

I have a problem when trying 06_drivers_gpio_uart. I got the following error:

$ DEV_SERIAL=/dev/tty.usbserial-1420 make miniterm
Miniterm 1.0

[MT] ⚡ Unexpected Error: #<Errno::EINVAL: Invalid argument - tcsetattr>
[MT] Bye 👋

I think the cause of the error is the baud rate. The baud rate depends on the platform according to the documentation. I'm using macOS.

"Integer from 50 to 256000, depending on platform"
https://rubydoc.info/gems/serialport/SerialPort:set_modem_params

The current value is 576000:
https://github.com/rust-embedded/rust-raspberrypi-OS-tutorials/blob/ff382c3fafa6257e37c208015cbea3aa3d1b2515/utils/miniterm.rb#L44

Reproduce process:
1. Use macOS.
2. Compile source.
3. Write binary to SD card.
4. Power on.
5. Run miniterm.

Ah, I didn’t know macOS was more restrictive. That’s a pity. Does Miniterm start if you set it to 256000? (You wouldn’t see sane output though, because the kernel binary is hardcoded to 576000.)

I read up a bit on it. Seems macOS usually doesn’t allow more than 230400 when opening the device, but it seems there is a way to circumvent it using stty. Can you do a test for me?

1. Change the baudrate in miniterm.rb back to 230400.
2. Start the Miniterm.
3. In a different console, execute stty -f /dev/USB_SERIAL_NAME 576000
4. Only now power the RPi and check if you can see the console output.

@andre-richter

> Does Miniterm start if you set it to 256000?

Yes. It works fine when I revert ee52e8e288ef6f00416eed94921c2b0e1acfab73

❯ DEV_SERIAL=/dev/tty.usbserial-1420 make miniterm
Miniterm 1.0

[MT] ✅ Serial connected
[0] Booting on: Raspberry Pi 4
[1] Drivers loaded:
      1. BCM GPIO
      2. BCM PL011 UART
[2] Chars written: 93
[3] Echoing input now
kjkdsafdsa

[MT] Bye 👋

I'll try the test using stty.

@andre-richter It seems 576000 is an invalid value on macOS:

stty -f /dev/tty.usbserial-1420 576000
stty: tcsetattr: Invalid argument

The value 460800 which is mentioned on Stack Overflow was valid:

$ stty -f /dev/tty.usbserial-1420 460800
~ $ echo $?
0

Thanks! Then I might need to make the whole tutorials use 460800 to solve this issue and adapt miniterm.rb to spawn the stty when on macOS.

@andre-richter I've created a PR to fix this problem. I've checked that it works on my macOS. Please review & merge it if it looks good. #96

Hi @sachaos, could you please try out https://github.com/rust-embedded/rust-raspberrypi-OS-tutorials/pull/97 on your Mac and tell me if it works? I bumped to 921_600 baud + some more fixes. I added you as co-author of this new PR to credit your findings.

@andre-richter Thank you! I'll try the PR.
How to temporarily disable duplicate detection with flows? Salesforce is gradually pushing us to replace workflows and processes with flows. But: workflows have a special property that processes and flows don't have. I refer to Salesforce's Order of Execution documentation, step 11: If there are workflow field updates: [...] Custom validation rules, flows, duplicate rules, processes, and escalation rules aren’t run again. Processes and flows do not have this property: running without being followed up by duplicate detection. So it can happen that processes and flows fail with a DUPLICATES_DETECTED exception. Here's the idea on IdeaExchange to fix that. It was posted 7 years ago and it still hasn't been followed up by Salesforce. Somewhere in the reactions to that idea a workaround is offered that involves using a checkbox, which is then used in the duplicates rules to circumvent them. And yes, that workaround works precisely because it uses a workflow, which has that property of not being followed up by duplicates detection. The idea was originally about processes and the workaround works for processes. But it does not work for flows! Because if you read the Order of Execution documentation very carefully (and you should), pay special attention to: Executes the following Salesforce Flow automations, but not in a guaranteed order. Processes Flows launched by processes Flows launched by workflow rules (flow trigger workflow actions pilot) When a process or flow executes a DML operation, the affected record goes through the save procedure. Executes record-triggered flows that are configured to run after the record is saved. If you replace a process with a flow, you will notice that after a DML operation by the process, workflows will run. But after a DML operation of the flow, workflows will not run. Hold on, you say, step 13 mentions flows. Yes, it does, but only flows that are triggered by something else. 
If you have a 1-to-1 flow replacement for a process, no workflow will run after the flow. And so, the workaround for evading duplicates detection will no longer work. Which prevents us from porting processes to flows for Account, Contact and Lead objects. In summary: processes and flows may fail because of duplicates detection, where a workflow will succeed. A workaround for this will work for processes, but not for flows. Salesforce wants us to use only flows. I ask you: how are we going to do that? It seems that the Order of Execution will prevent a successful transition for those orgs that use duplicates management. One thing to keep in your back pocket - The create/update element in my flows has a fault path where if the fault is a duplicate detected, I then delegate to apex to upsert the object with DuplicateRuleHeader enabled to bypass dup alerts. @cropredy Good suggestion. Next challenge: the Update-Record element - that is the root of the fault path - specifies which values are assigned to which fields. So not only the record-to-be-updated, but also the set of fields and values has to be passed on to Apex. Any recommendations for how to do that easily? Move the field assignments out of the Update element into an immediately preceding Assignment element (which assigns to a record variable). Update element's Fault path supplies the record variable to the invocable apex What I ended up with: Apex batch jobs that reset the checkboxes that are used to temporarily disable duplicates checks on accounts, contacts and leads. The Apex code uses the ability to perform a save operation with duplicates checking disabled (DuplicateRuleHeader.AllowSave = true). So, not the most elegant solution, but it works and I have finally gotten rid of the last workflows in my org.
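The fault-path delegation described above can be sketched in Apex roughly as follows. This is a minimal sketch, not the actual implementation: the class and method names are hypothetical, it uses `Database.update` with `DMLOptions` (the original comment mentions upsert), and generic `SObject` invocable inputs require a reasonably recent API version:

```apex
public with sharing class DuplicateBypassSave {
    // Invoked from the flow's fault path, receiving the record variable
    // that was assigned just before the Update-Record element.
    @InvocableMethod(label='Save record, bypassing duplicate alerts')
    public static void save(List<SObject> records) {
        Database.DMLOptions dml = new Database.DMLOptions();
        // Save even when a duplicate rule would raise an alert.
        dml.DuplicateRuleHeader.allowSave = true;
        dml.DuplicateRuleHeader.runAsCurrentUser = true;
        // Database.update honours the DMLOptions header per call.
        List<Database.SaveResult> results = Database.update(records, dml);
    }
}
```

Note that `allowSave` only bypasses duplicate rules whose action is Alert; rules set to Block will still prevent the save.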
import Foundation
import ReactiveSwift

private var lifetimeKey: UInt8 = 0
private var lifetimeTokenKey: UInt8 = 0

extension Reactive where Base: NSObject {
    /// Returns a lifetime that ends when the object is deallocated.
    @nonobjc public var lifetime: Lifetime {
        return base.synchronized {
            if let lifetime = objc_getAssociatedObject(base, &lifetimeKey) as! Lifetime? {
                return lifetime
            }

            let token = Lifetime.Token()
            let lifetime = Lifetime(token)

            objc_setAssociatedObject(base, &lifetimeTokenKey, token, .OBJC_ASSOCIATION_RETAIN_NONATOMIC)
            objc_setAssociatedObject(base, &lifetimeKey, lifetime, .OBJC_ASSOCIATION_RETAIN_NONATOMIC)

            return lifetime
        }
    }
}
A couple of years ago, I wrote two short books for Microsoft Press called Optimizing and Troubleshooting Hyper-V Networking and Optimizing and Troubleshooting Hyper-V Storage. Both these books were written in collaboration with experts from the Windows Server team at Microsoft and consisted of short real-world scenarios of various issues involving Hyper-V hosts and virtual machines with explanations of how the problems were identified and resolved. The books were not meant as exhaustive technical guides for Hyper-V troubleshooting but merely to indicate the general approach and steps involved in fixing different issues customers of Microsoft had experienced. Since the time of writing these two books I’ve continued to take an interest in Hyper-V troubleshooting, though more from a distance than actually being involved hands-on with such issues. This present article and some possible future ones I may write are intended to build upon my earlier publications with more examples taken from real life of problems customers have experienced using Windows Server Hyper-V and how the consultants and support persons involved in these situations have helped resolve the customer issues. Because all of these examples are drawn from actual customer stories communicated privately to me by colleagues, I’ve changed some of the details of these stories for the sake of customer privacy and also to generalize the problems and solutions so the stories can be viewed more as lessons learned or best practices advice than support cases. If any readers of this article have similar Hyper-V troubleshooting stories of their own that they’d like to share with our readers, you can either post them here as a comment to this article or email them directly to me at [email protected]. VM Connect won’t connect to Gen2 VM Bob is running Windows Server 2012 R2 Hyper-V on his host system that is a node in a two-node host cluster. 
He has created a new Generation 2 virtual machine with a fixed-size disk on the host and has not yet installed a guest operating system on the virtual machine. He starts the new virtual machine and opens the VM Connect tool on the Hyper-V host to connect with the virtual machine. The VM Connect window displays the message “Trying to connect to VM” and then it freezes, preventing him from turning off or shutting down the virtual machine or even closing the VM Connect tool. In fact to close the VM Connect tool Bob has to open Task Manager and kill it. Bob tries to investigate what’s going on by seeing if any other issues might be occurring with the Hyper-V host. He tries creating a second virtual machine, but the Hyper-V Manager console is unresponsive. He opens the Services.msc snap-in in the MMC console and tries to stop the Hyper-V Virtual Machine Management Service (VMMS) and it displays “Stopping” but never finishes. He tries doing the same using the SC.exe command and once again the service doesn’t stop. He then decides to reboot the Hyper-V host itself and discovers that it won’t shut down! So he ends up having to physically turn off the host system. He then starts the system in Safe Mode and everything seems OK. So he starts it again in Normal Mode and is able to open the Hyper-V Manager console. Thinking that maybe he should have mounted an ISO for a guest operating system on the VM before trying to start it, he does this and starts the VM again and tries once again to connect with it using the VM Connect tool, but the same problem occurs. Bob performs a hard restart once again on the host and checks for any new updates that should have been applied from Windows Update. The Windows Server installation is reported as being fully up to date with patches. He tries creating a Generation 1 virtual machine on the host and, lo and behold, he is able to connect to it using the VM Connect tool. 
So it seems that only Generation 2 virtual machines exhibit this strange behavior. Bob finally consults an expert who works with Hyper-V customers and asks for advice on his situation. Working together, Bob and the expert investigate further and try several things, but find no resolution. Finally, Bob starts examining the state of the other hardware on the host system, including the motherboard, storage hardware, and onboard network card. He discovers that newer firmware is available for the motherboard and an updated driver is available for the 10GbE network adapter. After updating both of these, the problem involving Generation 2 virtual machines is resolved.

Hyper-V troubleshooting: Lessons learned

It's easy to forget when you are maintaining a Windows Server system that it's not just the operating system that needs to be kept up to date. Each hardware component of the system that has firmware or a device driver associated with it should also be checked periodically to ensure its associated software is current. With client PCs it's easy to get by without flashing new BIOS updates or upgrading to newer device drivers when everything is still working properly and the user of the computer hasn't complained about any problems. But with high-end server systems such as rack or blade servers used for running Hyper-V host clusters, it's not uncommon for the manufacturer to release several firmware and driver updates in the months following the release of a new system onto the market. Maybe this is just another symptom of the push in today's society to get products out the door as fast as possible and then move on to the next version of the product, or maybe it's just the limited nature of human attention span.
Either way, as part of your Hyper-V troubleshooting (or any troubleshooting), make sure you keep track of any new firmware or driver updates that come out for your high-end server systems because, as this article illustrates, some strange things can happen if a hardware component is not working exactly as designed. As a final aside, there's another issue with Generation 2 virtual machines on Windows Server 2012 Hyper-V hosts that can manifest itself when you try to use pass-through disks as the storage for your virtual machines. This issue is documented in Microsoft Knowledge Base article KB3102354 and fortunately there's a hotfix available for it, as the article describes. Photo credit: Wikimedia
I got a message the other day on LinkedIn from a guy who was just getting started in front-end development. He was happy that I had accepted him into my network, and he wrote me this: "Thanks for accepting. Someday I hope to be a professional like you."

As I read his message, I thought about answering him with some advice: things that no one told me when I was starting out, and that I had to learn over time, with practice and especially with mistakes. Advice I wish someone had given me when I started. As I thought about what to say, I came up with seven pieces of advice, and I've written them down below.

Always study. Even if only for a little while, study every day. Make studying part of your day, your routine. Learning a single thing a day may seem like little in the short term, but in the medium and long term it will have a GIANT impact on who you are and will be, what you can and will do, and what you have reached and will reach.

Do not give up on a problem, but be aware of when it is necessary to stop, take a walk, have a coffee. Sometimes we simply get stuck, and even with years of experience in the field, this will continue to happen.

Research before asking for help. Finding the solution on your own brings a very good feeling. But if you can't find the solution, don't hesitate to ask for help, and never feel ashamed of it. This is not weakness; it is knowing how to recognize your limits. And if you have done the research beforehand, you will be able to give much better context to whoever is helping you.

Help whenever you can. All of us, regardless of level or experience, need help at some point. If you are always available to help someone, people will always be available to help you.

Always want to improve and be better! Seriously, always! Do not settle for what you already know, thinking it will be enough forever. It won't.

You are responsible for your own evolution. You will not always have someone looking at your code and suggesting improvements, so you have to push yourself. It is your responsibility to evolve and become a better and more qualified professional. Do not outsource this.

Do the best you can, in the place and condition you are in, with the knowledge you have. By this I mean that the circumstances, the company, or the client should not interfere with the quality of the work you do. Always be excellent, both for your uncle's project and for a multinational. The quality of your work is directly related to your character, and it should not change because of the size of the project or client.

Bonus tip: revisit some of your old code. This will help you see how you've evolved, sometimes in a very short time.

Finally, I want to say that this text is not intended to be the "Beginner Developer's Handbook". There is definitely lots of other useful advice that I am not quoting here, and even more that I still don't know myself. I hope in some way to have helped those who are just starting out, and to have contributed to a development community that has helped me many times and continues to do so. I wish you all a lot of success!

This text was originally written in Portuguese and translated into English with the intention of improving my writing in English. If you have noticed any errors or have any suggestions, please write them in the comments. I will certainly be grateful.
The monocrystal is currently the most valuable resource in Star Conflict; you need it for almost everything: weapons, modules, ships… But for quite a long time now there has been no place to get that resource except dailies.

- The old daily gave us 2 monos on each side of the conflict.
- Now it's only 1, but we have SCL, a thing that not every player has access to. Even those who can play it sometimes can't manage to, since creating a team can take long, quickly becomes boring, and people are too lazy to play.

That makes 4 monos per day. You can add 1 or 2 if you get a lucky drop. What I suggest is to add a REAL way to get more monos every day, like a way to farm them. How could it be farmed? There are many ways it could be introduced:

[-] Open Space, which I think is the best way so far. Here we'd be able to get it in loot, and the chance of getting it would be higher if aliens are invading the sector.

- Blue marker missions in OS. Yes, those missions! There are plenty of them all over the map and almost no one does them, because the reward, we have to admit, is low. Make us want to do them! Easy: just add 1 mono as a reward once completed!

- DEFILER. Yes, the defiler! It's in OS, it's big, and we can take him down! The loot could drop monos and iridium!

[-] Hangar contract. I was thinking of a special contract that would appear randomly multiple times every day (like 5 times a day or more). The objective of this contract could be a huge bounty hunt! The target would be a really tough guy to beat, but only ONE player would be able to get the reward! Once the contract is available, every SC player would see it, and all they'd have to do is get to the location and kill the target (which could be an alien, a pirate, or a cyber). The player who gets the killing hit (or deals the most damage) would be rewarded with 10 to 15 monos depending on the difficulty of the target.

[-] PvP-PvE. Yes, there too! We could simply get a higher drop rate, or, just like with the Mrs. Summer missions, a special icon would appear on a random player and whoever takes them down would be rewarded with 5 monos, but only if their team won the match!

If anyone has any ideas about how we could introduce a better, more fun, or just good way to get more monos, feel free to post your idea! Please take this into consideration, devs!
Nanomancer Reborn – I've Become A Snow Girl? – Chapter 851: Priscera

Nodding their heads, they were about to leave the city when they sensed a group of powerful mana signals heading their way.

"Oh boy, she is livid." Madison whistled, watching the swift destruction that she brought down upon the candidates. Previously, even if she was surrounded like this, she might have played around for a little while before letting them kill the contenders.

"I'll deal with this." Shiro said, not leaving any room for argument.

"True. Urg… I can still remember watching her rip the head off a dragon and hit another dragon with it." Madison shivered, recalling the things she had to endure during her training with Chelsea.

While they were walking, Priscera suddenly stopped mid-step as her face paled. Her lips quivered before she coughed up some blood.

Hearing no answer from the woman, Shiro only narrowed her eyes as she released some nanobots from her palm. Summoning her sword, she immediately blocked their first strike before grabbing the throat of the first one. Now that she had found the controller, she had no more use for the parasite.

Narrowing her eyes, Shiro left the area without saying anything. Since Shiro was going to take care of the controller, the two made their way back to the event for the final assault on the northern border so that Madison could claim her place as the rightful queen. Though the competition would only begin when the new day started, if there were no other candidates, she would become the default winner.

Looking at her hand, she could see that it was still shaking from fear as she took a deep breath and calmed herself.

"Mn, I would say I pity the demon who pissed her off, but not really. I think they deserve it after stripping the candidates of their will." Lyrica sighed as Madison agreed with her.

Overcome with fear, Priscera wasn't able to respond at all while Asphil's whole body tensed up. Her senses told her that a single move would mean death, and the best she could do right now was stay still and hope that Shiro would leave without killing her. She wanted to cry out in pain, but Shiro's fingers only gripped her throat harder.

Tracking the signal down, she found the controller returning to the northern territory with another high-ranking candidate.

All of a sudden, Priscera's skin began to peel back as the nanobots started slicing her up layer by layer. Shiro didn't care about cowards who ran away, but forcing someone back like this against their will was disgusting to her.

Recognizing them as the candidates that had run away, Shiro wondered what gave them so much confidence to come back. But once she saw them, she immediately lost her grin and her expression turned cold.

Seeing their lifeless eyes and slow-moving bodies, Shiro understood that it was probably something like a parasite that had taken over them and was forcing them to return. Activating her analysis skill, she found the parasites writhing within them at a single glance. She noticed that the parasites attached themselves to the host's mana, so taking them out meant killing them, while leaving them in would subject the host to the parasite's will, or rather, the controller's will.

"Poor things." She sighed. She would probably have offered them a deal and a chance to live, but now she was essentially forced to kill them. Crushing it in her hands, she looked at the other candidates who were preparing spells to fight her while two others tried to engage in melee combat.

"Depends." Without saying anything else, Shiro sealed the soul into the lantern before dismissing it again.

After a few moments, Asphil collapsed on the ground, her back drenched in sweat. She had never seen Priscera acting so panicked before, so seeing her like that was quite alarming.
How can I get another report to be the last page in Reporting Services?

I have 2 reports that need to be printed together. The first report has a header and footer that will repeat (if necessary) on data overflow. The last page is a form to be sent back. I basically need a way to print the last page without the repeated header/footers from the first report, and to pass in parameters to be used in the form. Currently I have a rectangle that forces a page break, placed before my footer. Inside the rectangle I have my second report (subreport). I have the header and footer unchecked for print on last page. I can get it down to 3 pages (upper-left, upper-right, and lower-right minus the header/footer).

There is a property on the header and footer called PrintOnLastPage. If you set that to False, it will not print the header and footer on the very last page of the report.

I've tried this, but it didn't help much. The problem is getting the second report to be its own page. Currently what I have set up is a subreport within a rectangle that forces a page break, which is placed before the footer. I have both footer and header set not to print on the last page. I've also removed the borders on the subreport. With this setup, I get the first page correctly, but 4 pages for the second report.

Why is the second report over 4 pages? Is there whitespace above or below? You may need to generate two separate reports in this case, in two separate ReportViewer controls. I'm trying to imagine what is happening. Maybe you could provide a screenshot of the report render?

The reason is that, for some reason, the header/footer are still used, even though the subreport on its own only makes 1 page. It looks like it might be doing a page break at the wrong time?

Having done this recently, here is how to do it. This process is marred by SSRS severely lacking in features. First, you cannot insert a page break into a footer. Second, in order to print your last page, it cannot be in a subreport, since subreports do not have headers or footers. So the solution I found in the end is to put the last page (in my case it was a terms and conditions sheet) in your main report's detail section, along with your header and footer (in their respective sections). Then add the report that should come up on the first page as a subreport inside the detail section, above the last page and separated from it with a page break, keeping in mind that any header or footer in this report will not show up (but it shouldn't be too much of a problem to copy/paste the header and footer into your new "wrapper" report).

As for any extra white pages, check your margins. If your report is wider than the printer can support (including margins), a blank page will come out after every page. What I usually do is set the Top and Left margins, leave the Right and Bottom margins at 0, and adjust from there to get it centered.

Microsoft should really bite the bullet and do some updating to SSRS... Vanilla SSRS has for years been a sub-par solution compared to any of the many third-party reporting packages out there, for no good reason. Just managing report files via the browser is a total pain, with no batch-based tools...
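For reference, the PrintOnLastPage setting discussed in this thread also lives directly in the report's RDL, so you can toggle it outside the designer. A minimal sketch of the relevant fragment (element names follow the RDL 2008 schema; the height value is illustrative):

```xml
<Page>
  <PageFooter>
    <Height>0.5in</Height>
    <PrintOnFirstPage>true</PrintOnFirstPage>
    <!-- Suppress the footer on the final rendered page -->
    <PrintOnLastPage>false</PrintOnLastPage>
  </PageFooter>
</Page>
```

The same pair of elements exists under PageHeader, which is what the "print on last page" checkbox in the designer writes to.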
HackerRank vs TopCoder

HackerRank and TopCoder are two of the most popular programming contest platforms in the world. Both offer cash prizes, but which one is better?

Why choose one platform over the other?

HackerRank and TopCoder offer different kinds of programming challenges. HackerRank provides a qualitative measure of an individual programmer's skills across multiple programming languages, while TopCoder emphasizes competitive coding speed and offers challenges and competitions for teams of developers. Each platform has its own strengths and weaknesses, and ultimately the decision of which platform to use depends on the programmer's needs and preferences.

The pros and cons of each platform

HackerRank is a platform that helps developers measure their code quality and skills through a wide variety of programming challenges and awards. TopCoder is a platform that helps software engineers and developers solve coding challenges through its own set of contests. HackerRank has been widely recognized as a quality-assessment platform, helping developers measure their code quality and skills; however, its programming challenges are generally more difficult and demanding than those offered by TopCoder. TopCoder, on the other hand, is widely recognized as a platform for solving coding challenges, though its challenges are generally less difficult than HackerRank's. Additionally, TopCoder does not provide a quality-assessment platform in the way HackerRank does. Ultimately, both platforms have their own strengths and weaknesses, and it is important to consider which one is more appropriate for the particular task at hand.

What kind of challenges can you expect to find on each platform?

On HackerRank, you can find a range of challenges, including coding challenges, data science challenges, and algorithm challenges. On TopCoder, you can find challenges in a range of areas, including algorithms, data science, development, and product design.

How do the ranking systems work on each platform?

HackerRank and TopCoder both rank submissions on a number of factors. HackerRank looks at how well the software meets certain criteria, while TopCoder looks at how well the software is coded. One of the most important things both platforms look at is the quality of the code: they want to make sure the software is well written and error-free. This is often called code quality. Another important factor is the difficulty of the project: they want to make sure the software is challenging enough for developers to complete. This is often called project difficulty. They also look at how well the software is presented: they want to make sure it is easy to find and well reviewed. This is often called marketing quality. Finally, they look at the team behind the software: they want to make sure the team is experienced and skilled in software development. This is often called team quality.

What are the differences in the user interfaces of the two platforms?

The two platforms offer different user interfaces that can be tailored to fit different users' needs. HackerRank has a more straightforward interface that is better suited for beginners, while TopCoder has a more sophisticated interface that is better suited for experienced developers.

HackerRank and TopCoder are popular online platforms that allow users to compare their code against that of other coders. While both platforms offer valuable insights, HackerRank seems to offer a more comprehensive view of a coder's overall talent.
<aside> 🏰 I'm already convinced! Show me the open roles 👇

DAOs are revolutionizing the way that people coordinate to achieve a common goal. In order for DAOs to execute on their missions, they need to secure their financial future. Many have learned that current DeFi solutions are meant for consumers, not institutions. That's where we come in. At Castle Finance, we are building the next generation of financial tools and investment products designed specifically for DAOs. We let DAOs focus on their mission while we take care of their finances. We're backed by top VCs in the industry, and our team has worked at some of the best web2 and web3 companies in the world. We move fast, build in the open, and are eager to bring on new team members who share our vision of a decentralized future.

Governments and other powerful establishments want to stop web3. They want to keep centralized control, keep collecting rent, and keep themselves fat and happy. We can't let them win. The future depends on it; it depends on us to build open, fair, efficient governance and financial systems that everyone can benefit from. This is our mission. We jump out of bed every day excited to take another step toward a better future. We believe building effective decentralized systems is the biggest problem in the world today. We'd rather fail working on this than succeed at something less important. We'll never pivot to building YAP (yet-another-ponzi); we'd rather die. We always seek to collaborate, not compete. We win by disrupting TradFi, not by tearing down other DeFi projects. We've partnered with some of the best teams in the space. More information soon 👀.

🧑🔧 Customer Success
We win by helping our customers win, and we go above and beyond product scope to serve them. We believe these long-term relationships will pay dividends for years to come.

🎨 Craftsmanship & Ownership
We are craftspeople at heart and take the opportunity to express ourselves through our work. We each take ownership of critical pieces of the company and embody the mandate to make them as beautiful as possible.

⚜️ Authenticity & Transparency
Building open and fair systems requires us to exhibit the same traits ourselves. Plus, we believe everyone performs best when they bring their authentic selves to work. Good or bad, we will always be transparent with you.

Adam is a product person at heart, having previously worked at Microsoft on Azure build systems and at Commonwealth building community tools for DAOs. Adam also runs a top-selling DTC brand. He loves surfing, house music, and astrophotography 🌌.

Charlie is a quant trader and software engineer. Before founding Castle, he built machine learning systems at Amazon and Workday. Outside of work, he plays high-stakes poker and rock climbs 🧗♂️. DnB-maxi 🎶

Senior Software Engineer

<aside> 🧑🔬 Send us an email ([email protected]) explaining why you are the best candidate for the role. Include whatever evidence you want (resume, GitHub, website, LinkedIn, etc.).
What is the load-on-startup element in web.xml?

The load-on-startup element of web-app loads the servlet at the time of deployment or server start if its value is positive. This is also known as pre-initialization of the servlet. You can pass a positive or negative value for the servlet.

How do you make sure a servlet is loaded at application startup?

The load-on-startup element indicates that the servlet should be loaded on the startup of the web application. The optional content of this element must be a positive integer that specifies the order in which the servlet should be loaded. Servlets with lower values are loaded before servlets with higher values.

What is the load-on-startup configuration?

The element load-on-startup indicates that this servlet should be loaded (instantiated and have its init() called) on the startup of the web application. The content of this element must be an integer indicating the order in which the servlet should be loaded.

What is the default servlet in web.xml?

The Default servlet (or DefaultServlet) is a special servlet provided with Tomcat, which is called when no other suitable page is found in a particular folder. Its purpose is to display a directory listing, which may be enabled or disabled by modifying the "listings" parameter.

What is a URL pattern in web.xml?

The servlet-mapping element specifies a URL pattern and the name of a declared servlet to use for requests whose URL matches the pattern. The URL pattern can use an asterisk ( * ) at the beginning or end of the pattern to indicate zero or more of any character.

What is Tomcat's web.xml?

The web.xml file is derived from the Servlet specification and contains information used to deploy and configure the components of your web applications. When configuring Tomcat for the first time, this is where you can define servlet mappings for central components such as JSP.

What is the use of servlet mapping in web.xml?

Servlet mapping specifies which Java servlet in the web container should be invoked for a URL given by a client. It maps URL patterns to servlets. When there is a request from a client, the servlet container decides which application it should forward it to; then the context path of the URL is matched for mapping servlets.

Which mapping is called first in web.xml?

The web.xml file is located in the WEB-INF directory of your web application. The first entry, under the root servlet element in web.xml, defines a name for the servlet and specifies the compiled class that executes the servlet.

How do I deploy a web application to a Tomcat server?

It is possible to deploy web applications to a running Tomcat server. If the Host autoDeploy attribute is "true", the Host will attempt to deploy and update web applications dynamically, as needed, for example if a new .WAR is dropped into the appBase.

What happens to an exploded web application when Tomcat is stopped?

Note that if an exploded web application has an associated .WAR file in the appBase, Tomcat will not detect whether the associated .WAR was updated while Tomcat was stopped, and will deploy the exploded web application as is. The exploded web application will not be removed and replaced with the contents of the updated .WAR file.

How do I enable logging in Tomcat?

The java.util.logging implementation is enabled by providing certain system properties when starting Java. The Apache Tomcat startup scripts do this for you, but if you are using different tools to run Tomcat (such as jsvc, or running Tomcat from within an IDE), you should take care of them yourself.

What is servlet load-on-startup in web.xml?

The load-on-startup element of web-app loads the servlet at the time of deployment or server start if its value is positive. This is also known as pre-initialization of the servlet. You can pass a positive or negative value for the servlet. As you know, by default a servlet is otherwise loaded at the first request.
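To make the elements above concrete, here is a minimal sketch of a web.xml fragment that pre-initializes a servlet at startup and maps it to a URL pattern. The servlet and class names are made up for illustration:

```xml
<web-app>
  <servlet>
    <servlet-name>ReportServlet</servlet-name>
    <servlet-class>com.example.ReportServlet</servlet-class>
    <!-- Positive value: instantiate and call init() at startup;
         servlets with lower values load first -->
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>ReportServlet</servlet-name>
    <!-- An asterisk at the start or end of the pattern
         matches zero or more characters -->
    <url-pattern>/reports/*</url-pattern>
  </servlet-mapping>
</web-app>
```

With a negative load-on-startup value (or the element omitted), the container is free to defer loading the servlet until the first matching request arrives.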
Back in 2021, we announced a plan to retire legacy Platform API versions on a yearly basis, so that our engineering teams could focus their development efforts on enhancing the latest API versions to improve the overall Salesforce experience when building custom functionality via applications. In this post, we'll share an important update to the legacy API retirement plan, some tips on how to identify legacy API usage, and how to update those API requests.

Important update to the legacy API retirement plan

The latest phase of the legacy API retirement plan was announced in early 2022 and took effect during the Summer '22 release. With this release, we deprecated SOAP, REST, and Bulk API versions ranging from 21.0 to 30.0. As part of our original plan, these API versions would no longer be supported, but would remain available until we retired them in the Summer '23 release. Following consultation with the community and our partners, we decided to delay the upcoming API retirement to the Summer '25 release to ensure a smooth transition (see knowledge article). Due to this extension, API versions 21.0 to 30.0 are still unsupported, but they will remain available until the Summer '25 release. Since Summer '21, whenever you issue a call to a legacy API version with the REST or Bulk API, you'll notice a Warning header in the response with a message describing the issue. Once legacy API versions are retired in Summer '25, requests to those versions will fail with an API-specific HTTP error status for REST, SOAP, and Bulk API alike. Now that you know about the latest plan, let's look at how you can identify if, and how, you're impacted.

Identify legacy API usage

You can check for legacy API calls yourself at any time, and there are several ways to do it. All Salesforce API transactions are recorded in Event Monitoring logs.
Event Monitoring normally requires a specific license, but we've exposed the API Total Usage (ApiTotalUsage) event to all customers for free, so that you can monitor legacy API consumption and identify clients and integrations that need to be upgraded. API-enabled organizations have free access to the API Total Usage event log files with 1-day data retention. With Event Monitoring enabled, you can access this and all other event log file types with 30-day data retention. Logs contain key fields which guide your investigations:

- CLIENT_NAME is optionally provided by clients, but it is especially helpful to pinpoint apps and integrations that are performing API calls that require investigation and adjustments. We'll share more on this field in the last section of this post.
- CONNECTED_APP_ID tells you which connected app is at the origin of API calls.
- USER_ID and CLIENT_IP are helpful to identify the source of legacy API calls, but there's a chance that these values are shared between several apps if a technical user/system account (shared user ID) is in use or calls are being made from a physical office location (shared IP address). We'll share how to use the new Integration User license to address shared-user issues in the last section of this post.
- The HTTP_METHOD and URI fields give you precious clues about the type of operations that are performed by API clients.

We shared several tools for accessing logs in our previous post, and you can also use third-party tools to automate this task. We'll share an additional option that may be relevant for users concerned about running third-party code with API access to their org.

Using Postman to identify legacy API usage

You can use the Salesforce Platform APIs Postman collection to inspect your Salesforce logs with these steps:

- Set up the Postman collection and authenticate to your org.
- List the log files that track API access:
  - Select the REST > Query request.
  - Replace the value of the q query parameter with the following SOQL query:

    SELECT LogFile, EventType, CreatedDate FROM EventLogFile WHERE EventType IN ('API', 'RestApi', 'BulkApi', 'ApiTotalUsage')

  - Click Send.
  - Retrieve the log file IDs from the response.
- For every log file returned in the previous step, retrieve and scan the log content:
  - Select the REST > Logs > Get event log file request.
  - Set the log file ID in the id path variable value.
  - Click Send.
  - Read the log file’s content in the response. You can either move to the Raw response tab and save it as a CSV file or use the Visualize tab to preview the content directly in Postman.
  - Look at the URI or API_VERSION column and check for legacy API versions (versions 30.0 and below).

After identifying calls to legacy API versions, the next step is to update those dependencies.

Update dependencies to legacy API versions

The upgrade procedure depends on the type of API that you’re using, but here’s a short overview of what’s required:

- For SOAP-based API calls, generate a new WSDL and incorporate it into the impacted integration.
- For REST endpoints, update the version number in the URI to the current major release.
- Though you can similarly update URIs for /async endpoints in the case of Bulk API, the most rewarding way to upgrade legacy calls is to adopt Bulk API 2.0 and enjoy the simpler workflow and improved limits!

Keep in mind that applications (e.g., Data Loader) and packages may also be making legacy API calls because of outdated libraries (Web Services connectors, AJAX Toolkit, SForceOfficeToolkit COM interface, or Force.com Toolkit for PHP, just to name a few). Make sure to upgrade those as well. And remember: no matter the change, be sure to perform regression tests to ensure everything is working as expected.

Preparing for the future

Whether you were impacted by the legacy retirement plan or not, you should plan for future API version retirement.
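The log-scanning step described earlier (checking the API_VERSION column of a saved CSV log file) is easy to automate. A minimal Python sketch; the helper name `find_legacy_calls` and the sample rows are illustrative, while API_VERSION and CLIENT_IP are real log columns:

```python
import csv
import io

LEGACY_MAX_VERSION = 30.0  # versions 21.0 through 30.0 are deprecated

def find_legacy_calls(log_csv):
    """Scan an event log file CSV export and return rows that used a legacy API version."""
    hits = []
    for row in csv.DictReader(io.StringIO(log_csv)):
        raw = row.get("API_VERSION") or ""
        try:
            version = float(raw)
        except ValueError:
            continue  # skip rows without a parseable version
        if version <= LEGACY_MAX_VERSION:
            hits.append(row)
    return hits

# Illustrative sample rows, not real log data:
sample = "API_VERSION,CLIENT_IP\n28.0,203.0.113.7\n58.0,203.0.113.9\n"
print(len(find_legacy_calls(sample)))  # 1
```

Running a script like this over each downloaded log file lets you build a list of offending client IPs or names to chase down.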
We’ll leave you with some best practices for API governance.

Leverage Salesforce Integration user licenses

In Spring ’23, we introduced a new license type dedicated to integrations: the Salesforce Integration user license. This new license is based on the principle of least privilege and allows you to create API-only users for system-to-system integrations with specific access rights. The main advantage of this new license type is security, but it also allows for better tracking of integration actions, since you’ll be able to assign distinct API-only users to integrations and relate integration API calls to specific user IDs in your logs. Five Salesforce Integration user licenses are included in each Enterprise, Unlimited, and Performance Edition org. You can also reach out to your Account Executive should you need more.

Specify a client name when building REST API integrations

When building new integrations with the REST API, make sure to specify a client name using the Sforce-Call-Options request header, for example: Sforce-Call-Options: client=MyIntegrationName (where the client value is whatever name you choose for your integration). The client name that you specify in the header will be visible in the logs under the CLIENT_NAME field. This helps you to debug and audit integration API calls.

We hope that this additional delay to our legacy API retirement plan will allow for a smooth transition, and we encourage you to get started today regardless of the new deadline. Migrating to newer API versions is always a safe bet for getting access to new capabilities and improved performance and security.

About the author

Philippe Ozil is a Principal Developer Advocate at Salesforce where he focuses on the Salesforce Platform. He writes technical content and speaks frequently at conferences. He is a full-stack developer and enjoys working on DevOps, robotics, and VR projects. Follow him on Twitter @PhilippeOzil or check his GitHub projects @pozil.
Update Vetur type inference for SFCs

The TypeScript guide says that exporting Vue.extend({...}) will no longer be necessary in TypeScript-based SFCs due to Vetur having correct syntax highlighting and type inference. Despite the fact that Vetur deals with this correctly without wrapping, the TypeScript compiler will produce an error when the default export isn't wrapped, presumably because it does not make the same assumptions as Vetur. The official TypeScript Vue starter guide references this as well, stating that Vue.extend({...}) is still required even with Vetur. This change removes that snippet from the TypeScript guide so that users aren't confused when their TypeScript compilers error out despite Vetur working correctly.

Happy to merge with approval from @octref.

I wonder under what situation TS would throw errors, since the typing definition we provide already accepts plain ComponentOptions.

@HerringtonDarkholme I guess he meant TS won't compile for export default { ... }

Is it true that there is no way to configure TS to support it? I haven't used much TS + Vue in combination yet, and when I was testing I was always using Vue.extend in TS blocks.

@HerringtonDarkholme the two cases I found that created the error are:

Exporting a default object and later using that as a Vue constructor:

// App.vue
<script lang="ts">
export default {
  // ...
}
</script>

// index.ts
import App from './App.vue'
new App({ el: '#app' })

gives an error: TS2351: Cannot use 'new' with an expression whose type lacks a call or construct signature.

Exporting a default object for a component that references this and then later importing that component:

// Message.vue
<script lang="ts">
export default {
  data() {
    return { message: 'Hello' }
  },
  methods: {
    clearMessage() {
      this.message = '' // problematic
    }
  }
}
</script>

// App.vue
<script lang="ts">
import Message from './Message.vue'
// ...
</script>

gives an error: TS2339: Property 'message' does not exist on type '{ clearMessage(): void; }'.

In general it looks consistent that exporting a default plain object leads TypeScript to not actually assume (even from the 'vue-shim.d.ts' ambient module declaration) that the object itself is a Vue type. This is my module declaration FYI:

declare module '*.vue' {
  import Vue from 'vue'
  export default Vue
}

@ferdaber Yes, using this will break ts-loader compilation, because Vetur's wrapping only affects the editor, not ts-loader. Vue.extend seems to be required for ts-loader users for type checking. Only JS users or transpile-only TS users can benefit from Vetur's editing. Maybe we can just remove the Vetur section? So TS users (I guess most of them do type checking in ts-loader) will not be confused. Is removing OK? @octref @chrisvfritz

I agree with removing. On the Vetur side we should separate TS mode from JS mode and stop the default wrapping in TS blocks, so that people wouldn't be "fooled" by Vetur. That default wrapping is only meant for JS.

Changed the file to remove the entirety of the Vetur section. This is probably not the forum for this, but I'm still curious as to why the TSC will throw a compile-time error when importing a .vue file and using it as a Vue constructor (see the App example above). I had thought that using the module shim allows TS to cast whatever is imported with a .vue extension as a Vue constructor?

Probably because ts-loader will read the script content in the .vue file rather than reading from the module declaration. This is ts-loader's feature. If you encounter this without webpack and ts-loader, please let me know how you do that.

Awesome, thanks for the clarification!
Doesn't a cache miss time overrule the time taken to directly access the data from RAM?

Suppose there is a computer with three caches, L1 and L2 (inside the processor) and L3 (external to the CPU), and a RAM. Now suppose the CPU needs to access data "ABC" for some operation, which is definitely available in the RAM, but due to the architecture: the CPU will first check for "ABC" inside the L1 cache, where for instance it is not found (hence a cache miss -> some time wasted checking), then the L2 cache is checked and again the data is not found (again a cache miss -> some time wasted), similarly L3 is checked and the data is not found (cache miss -> some time wasted again), and now finally the RAM is checked, where the data is found and used.

Now isn't much time wasted here just in checking the various memories? Wouldn't it be much more efficient to directly access the RAM for such operations without checking any cache memory, keeping in mind that as the CPU progresses towards L2 from L1, L3 from L2, and RAM from L3, the physical distance of these memory units from the CPU increases, and hence the access time?

"L3 (external to the cpu)" - not likely! External caches aren't really used nowadays.

No problem, we can consider the L3 also internal to the cpu :)

If you had a CPU with caches that were that slow to check (unlikely), you'd design it to probe multiple levels of cache in parallel with sending the request to the memory controller. And maybe design a mechanism to squash those requests if a hit arrives before you get to RAM. But the #1 rule of caching is that caches work! Hits are common. In a typical Skylake-desktop CPU, the extra latency for a cache miss that has to go all the way to DRAM is maybe an extra 20 cycles spent just checking caches on the way down the memory hierarchy, out of >= 250 cycles for DRAM. Hits make it worth it!

Think of it in reverse: if the data is found in L1, then it saves comparatively more time than going to RAM, and the same is the case with L2 and L3. As Harold pointed out, L3 is not external to the CPU nowadays.
In terms of size, L1 < L2 < L3 <<< RAM. This shows that searching in RAM will take much more time than searching in the L* caches. Caches were introduced for exactly this purpose, i.e., to save search time. As you said, sometimes yes, if the data is not present in L1, L2, or L3, then you need to access RAM with some cache-checking penalty, but that data will then be saved in the cache for future access. Thus the advantage of cache hits outweighs this penalty. Generally the cache hit ratio should be (and is) around 90%; if it is not, then you need to tune your caching policy. I hope this was helpful.

Yes it was. Also I'd like to know: does physical distance really matter in terms of data transfer, since electrical signal speed is comparable to the speed of light?

Glad it was helpful. For your question the answer is no. That should be very small (negligible). The important thing here is that you save all those instructions (search in L1, search in L2, search in L3, search in RAM) if you find the data as fast as possible. Complexity increases from L1 to RAM. I would appreciate it if you accept the answer if it was helpful :)

Surely :) Thanks a lot :)

@Brut3Forc3 DRAM is slow because DRAM cells are inherently slow and the interface isn't that fast either. The "distance" people speak of usually means "distance in time" - though it also corresponds to physical distance, it is not primarily caused by it.

@harold what is "distance in time"?

What he meant by "distance in time" is: the time required to access one cell/unit of DRAM/L1/L2/L3 is considered as distance (which is proportional to physical distance, speed of DRAM, etc.). @harold correct me if you meant something else.

Sorry but I am still unable to understand it. To avoid a long tail of comments, please join the chat room named 'Q34585816'.
@harold, you are also invited to join. "Further from the CPU" is jargon for "takes more time to access"; the physical distance does not contribute much to that time but correlates with it anyway (you wouldn't put a hard disk in your CPU, and it wouldn't help if you did). How do I join this chat room?
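The hit/miss trade-off in the answers above can be made concrete with a back-of-the-envelope average memory access time (AMAT) calculation. A minimal Python sketch; the hit rates and cycle counts are illustrative numbers, not measurements of a real CPU:

```python
def amat(hit_rates, access_times, memory_time):
    """Average memory access time (cycles) for a multi-level cache hierarchy.

    hit_rates[i] is the probability of a hit at cache level i given that
    the access missed every level above it; access_times[i] is that level's
    lookup cost; memory_time is the DRAM cost after missing every cache.
    """
    expected = 0.0
    reach_prob = 1.0  # probability that an access reaches this level
    for hit_rate, lookup_cost in zip(hit_rates, access_times):
        expected += reach_prob * lookup_cost  # every access reaching this level pays its lookup
        reach_prob *= 1.0 - hit_rate          # only the misses continue downward
    expected += reach_prob * memory_time      # survivors pay the full DRAM latency
    return expected

# Illustrative numbers: L1/L2/L3 hit rates of 90%/80%/80%, lookup costs of
# 4/12/40 cycles, and 250 cycles for DRAM.
print(round(amat([0.90, 0.80, 0.80], [4, 12, 40], 250), 2))  # 7.0
```

With these made-up numbers, the average access costs about 7 cycles versus 250 if every access skipped the caches and went straight to DRAM, which is why checking the caches first pays off even though misses add latency.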
A few days ago Microsoft released Exchange 2010 SP1. When you install Exchange 2010 SP1 you need to install some hotfixes. The Exchange Team has made a nice overview of which hotfixes you need for each OS (Windows Server 2008, Windows Server 2008 R2, and Windows 7 & Windows Vista):

- A .NET Framework 2.0-based Multi-AppDomain application stops responding when you run the application (or via Microsoft Connect)
  - Windows Server 2008: Windows6.0-KB979744-x64.msu (CBS: Vista/Win2K8)
  - Windows Server 2008 R2: Windows6.1-KB979744-x64.msu (CBS: Win7/Win2K8 R2)
- An ASP.NET 2.0 hotfix rollup package is available for Windows 7 and for Windows Server 2008 R2
  - Request from CSS
- AD RMS clients do not authenticate federated identity providers in Windows Server 2008 or in Windows Vista. Without this update, Active Directory Rights Management Services (AD RMS) features may stop working
  - Request from CSS using the “View and request hotfix downloads” link in the KB article (US-English); select the download for Windows Vista for the x64 platform
- Two issues occur when you deploy an ASP.NET 2.0-based application on a server that is running IIS 7.0 or IIS 7.5 in Integrated mode
  - Request from CSS using the Hotfix Request Web Submission Form or by phone (no charge)
- FIX: ArgumentNullException exception error message when a .NET Framework 2.0 SP2-based application tries to process a response with zero-length content to an asynchronous ASP.NET Web service request: “Value cannot be null”
- RPC over HTTP clients cannot connect to Windows Server 2008 RPC over HTTP servers that have RPC load balancing enabled
  - Request from CSS; select the download for Windows Vista (x64)
- An update is available to remove the application manifest expiry feature from AD RMS clients
- WCF services that are hosted by computers together with a NLB fail in .NET Framework 3.5 SP1
  - x86: Windows6.1-KB982867-v2-x86.msu (Win7)
  - x64: Windows6.1-KB982867-v2-x64.msu (Win7)
- FIX: An application that is based on the Microsoft .NET Framework 2.0 Service Pack 2 and that invokes a Web service call asynchronously throws an exception on a computer that is running Windows 7

Some of the hotfixes will have been rolled up in a Windows update or service pack. Given that the Exchange team released SP1 earlier than what was planned and announced, it did not align with some of the work on the Windows platform. As a result, some hotfixes are available from MSDN/Connect, and some require that you request them online using the links in the corresponding KBs. All these updates may become available on the Download Center, and also through Windows Update. These hotfixes have been tested extensively as part of Exchange 2010 SP1 deployments within Microsoft and by our TAP customers. They are fully supported by Microsoft.

You can install the operating system prerequisites on a Windows 2008 R2 server via PowerShell. You only have to run the following PowerShell command first:

Import-Module ServerManager

Installed Exchange 2010 SP1 on a Windows 2008 R2 server with problems. It feels like the MMC is faster. Tomorrow I'm upgrading a DAG/NLB cluster to Exchange 2010 SP1.
Xvimagesink waiting endlessly for PLAYING state
trevorgryffits at gmail.com
Mon Apr 11 20:09:30 UTC 2016

I use GStreamer 1.4.1 to develop a custom player/converter application. I cannot update the GStreamer version right now. I have some problems with the playing functionality. As a sink I use xvimagesink (let's call it renderer) located in a custom bin (let's call it rendererbin). This bin is the last one in the pipeline and this sink is the only one. The whole pipeline is rather long (several dozen elements grouped in various bins) and contains over a dozen custom elements, so I am not posting it here. I have tried to simplify the pipeline, but it seems that the described bug occurs more often for the

The problem is that the player stops after the first/second frame of video. It happens rather often (over 10% of all cases). Xvimagesink is stuck in PREROLLing. However, the queue before the renderer is full and its task on the srcpad is waiting (fruitlessly) in gst_base_sink_wait_preroll for the PLAYING state. So why is this state change not happening (AFAIU)?

Well, that is not entirely true. The renderer enters the PLAYING state without problems in response to the proper "signal" from rendererbin. However, right after this change comes a FLUSH_START event resulting in a call to gst_element_lost_state. The renderer goes back to the PAUSED state asynchronously and signals this with the proper message. This message goes to rendererbin (causing the same state change) and further - to the pipeline. And now, there are two possibilities:

- the pipeline managed to complete the state change before receiving the ASYNC_START message. The pipeline changes state to PAUSED and, after receiving the ASYNC_DONE message, performs the state change to PLAYING - everything works fine.
- the pipeline is still changing to PLAYING. In this case, the ASYNC_START message is ignored, because the "element is busy" (bin_handle_async_start), and the pending state is not updated to PAUSED. Shortly after, the pipeline enters PLAYING, and pending is set to VOID_PENDING.
In this way, after receiving ASYNC_DONE, the pipeline is not changing states any further. The result? The whole pipeline is in PLAYING, except the renderer and rendererbin, doing nothing in PAUSED.

It's quite possible that I've misunderstood this whole state/preroll thing. If this is the case, please let me know. I know that I can work around this problem by setting xvimagesink.async to false. However, it seems that this alters synchronization, which is crucial in this application, because it is combining video frames with timestamped data coming from other sources. I am wondering if there is any better way to fix this. Thanks in advance.
use std::fs;
use std::io::{self, Write};
use std::os::unix::fs::OpenOptionsExt;
use std::str;

use termion::color;
use termion::style;

use crate::clipboard;
use crate::consts::{
    PASSWORD_STORE_CHARACTER_SET, PASSWORD_STORE_CHARACTER_SET_NO_SYMBOLS,
    PASSWORD_STORE_CLIP_TIME, PASSWORD_STORE_GENERATED_LENGTH, PASSWORD_STORE_UMASK,
};
use crate::util;
use crate::util::EditMode;
use crate::{Flags, PassrsError, Result};

pub(crate) fn generate(secret_name: String, length: Option<usize>, flags: Flags) -> Result<()> {
    let clip = flags.clip;
    let force = flags.force;
    let in_place = flags.in_place;
    let no_symbols = flags.no_symbols;
    let path = util::canonicalize_path(&secret_name)?;

    util::create_dirs_to_file(&path)?;

    // If the entry already exists, ask before clobbering it (unless forced or in-place).
    if !force && !in_place && util::path_exists(&path)? {
        let prompt = format!("An entry exists for {}. Overwrite it?", secret_name);

        if util::prompt_yesno(prompt)? {
            fs::OpenOptions::new()
                .mode(0o666 - (0o666 & *PASSWORD_STORE_UMASK))
                .write(true)
                .truncate(!in_place)
                .open(&path)?;
        } else {
            return Err(PassrsError::UserAbort.into());
        }
    }

    // NOTE: default character sets defined in consts.rs
    let set = if no_symbols {
        &*PASSWORD_STORE_CHARACTER_SET_NO_SYMBOLS
    } else {
        &*PASSWORD_STORE_CHARACTER_SET
    };
    let len = if let Some(length) = length {
        length
    } else {
        *PASSWORD_STORE_GENERATED_LENGTH
    };
    let secret_bytes = util::generate_chars_from_set(set, len)?;
    let secret = str::from_utf8(&secret_bytes)?.to_owned();

    if clip {
        clipboard::clip(&secret, force)?;
        writeln!(
            io::stdout(),
            "Copied {yellow}{}{reset} to the clipboard, which will clear in {} seconds.",
            &secret_name,
            *PASSWORD_STORE_CLIP_TIME,
            yellow = color::Fg(color::Yellow),
            reset = style::Reset,
        )?;
    }

    if in_place {
        // Only replace the first line (the secret itself), keeping any extra metadata lines.
        let mut existing = util::decrypt_file_into_strings(&path)?;
        existing[0] = secret.clone();
        let existing = existing.join("\n");
        let existing = existing.as_bytes();

        util::encrypt_bytes_into_file(existing, &path, EditMode::Clobber)?;
        util::commit(
            Some([&path]),
            format!("Replace generated secret for {}", secret_name),
        )?;

        if !clip {
            writeln!(
                io::stdout(),
                "{bold}The generated secret for {underline}{}{reset}{bold} is:\n{yellow}{bold}{}{reset}",
                secret_name,
                secret,
                bold = style::Bold,
                underline = style::Underline,
                reset = style::Reset,
                yellow = color::Fg(color::Yellow),
            )?;
        }
    } else {
        util::encrypt_bytes_into_file(&secret_bytes, &path, EditMode::Clobber)?;
        util::commit(
            Some([&path]),
            format!("Save generated secret for {}", secret_name),
        )?;

        if !clip {
            writeln!(
                io::stdout(),
                "{bold}The generated secret for {underline}{}{reset}{bold} is:\n{yellow}{bold}{}{reset}",
                secret_name,
                secret,
                bold = style::Bold,
                underline = style::Underline,
                reset = style::Reset,
                yellow = color::Fg(color::Yellow),
            )?;
        }
    }

    Ok(())
}
Running Multiple Versions of .NET It can be done on a Windows 2003 server, as long as you remember which version is which. Whether you're a network administrator or an application developer who has been working with Microsoft technologies for a while, you may have experienced the pain of managing development environments on the Windows platform. In the past, you didn’t have the luxury of running multiple environments on the same computer. The latest version of the libraries would always overwrite previous versions, making it difficult for system administrators to provide developers the flexibility to work with various development environments to efficiently do their job. Luckily, the .NET architecture no longer restricts you to such limitations. You can install multiple versions of components on a single server and benefit from their peaceful coexistence by running them simultaneously. Microsoft refers to this as side-by-side versioning. Here's an example: If your developers created an application with .NET Framework 1.0, you can run it under version 1.0 or using .NET Framework 1.1. The default behavior of the client-side applications is a bit different compared to server-side applications. By default, client-side apps use the version of the .NET Framework with which they were built, even if the client is running a later version. The server-side applications built with ASP.NET, on the other hand, use the latest version of the .NET Framework installed on the server by default. (I cover this later, but note that you can specify a specific version that the application should use with the aspnet_regiis.exe command-line tool.) Any computer that will be running .NET applications requires a common language runtime, which is part of the .NET Framework. Therefore, you need to ensure that you have deployed the redistributable package of .NET Framework on the computers. 
The latest version, .NET Framework 2.0, supports these OS versions:

- Windows 98
- Windows 98 Second Edition
- Windows ME
- Windows 2000 Service Pack 3
- Windows XP Service Pack 2
- Windows Server 2003

You can download .NET Framework 2.0 (x86) redistributable packages here. You will also need Internet Explorer 5.01 or newer for all versions of .NET Framework. In addition, Windows 98 and Windows ME also require Windows Installer 2.0 or newer. All other systems require Windows Installer 3.0, although Microsoft recommends using Windows Installer 3.1 or newer. On x86 computers you will need about 280MB of disk space, while x64 systems will require about 610MB. The download page lists some additional server requirements.

Microsoft faces a tough challenge because they are responsible for ensuring that multiple versions of the .NET Framework run smoothly and that they won’t cause any compatibility issues on their OSes. This becomes difficult when developers don’t follow best practices; everyone is ready to blame Microsoft when they run into problems. This is not to say that Microsoft always writes perfect code. Microsoft has to come up with a solution that won’t break legacy applications just because there were some changes made to the libraries. Microsoft came up with a solution to allow ASP.NET applications to be configured to use the version on which they were developed, and for the client-side applications to use the original libraries with which they were built. This design avoids dependency on the shared libraries and gives administrators a smoother path to upgrading the individual applications.

To make side-by-side versioning work on a single system, each version of .NET Framework is placed in a separate folder. The fully qualified path to the folder includes the version number. For example, the computer running version 2.0.50727 of the .NET Framework is installed at C:\%windir%\Microsoft.Net\Framework\v2.0.50727 (see Figure 1).
Figure 1. Multiple versions of .NET Framework placed in their own folders.

The application contains information about the version number so that the common language runtime can determine which version of .NET Framework it should use. Keep in mind that the client-side applications will not use an updated version unless you explicitly tell them to do so. The server-side applications built with ASP.NET will use the latest version of the .NET Framework, unless you use the /noaspupgrade switch with the .NET Framework installer dotnetfx.exe. Here’s the syntax for preventing ASP.NET applications from being upgraded so they won’t use the latest version of .NET Framework:

dotnetfx.exe /q:a /c:"install /noaspupgrade /l /q"

You can install the .NET Framework silently by using the following syntax, but if you don’t use the /noaspupgrade switch, the ASP.NET applications will be upgraded to use the latest version, as mentioned earlier:

dotnetfx.exe /q:a /c:"install /l /q"

As an administrator, when installing the .NET Framework you can first use the /noaspupgrade switch and then use the command-line aspnet_regiis.exe tool to specify the exact version of the .NET Framework that you want an ASP.NET application to use. This will give you better control over the applications and help avoid compatibility issues. You should use the matching version of the tool that comes with that specific version of the framework. For example, you should use the version of aspnet_regiis.exe that comes with .NET Framework 2.0 if you want an application to use .NET Framework 2.0. Similarly, use the version that comes with .NET Framework 1.1 if you want an application to use .NET Framework 1.1. Windows Server 2003 comes with .NET Framework 1.1, but you can install the latest version 2.0 of .NET Framework, or for backward compatibility you can install version 1.0 of the .NET Framework.
Remember to enable each unique version of ASP.NET in the security extensions dialog box of the Internet Information Services (IIS) administration console. Furthermore, each unique version of ASP.NET must run in a separate application pool process under IIS 6.0. Thanks to side-by-side versioning, managing multiple versions of development environments is much easier now. With Windows Server 2003, IIS 6.0 and the .NET Framework 2.0, you are all set to go. Zubair Alexander, MCSE, MCT, MCSA and Microsoft MVP is the founder of SeattlePro Enterprises, an IT training and consulting business. His experience covers a wide range of spectrum: trainer, consultant, systems administrator, security architect, network engineer, author, technical editor, college instructor and public speaker. Zubair holds more than 25 technical certifications and Bachelor of Science degrees in Aeronautics & Astronautics Engineering, Mathematics and Computer Information Systems. His Web site, www.techgalaxy.net, is dedicated to technical resources for IT professionals. Zubair may be reached at email@example.com.
I'm a physics Ph.D. candidate at Columbia University in Szabolcs Marka's group working on multi-messenger astrophysics (MMA) using gravitational waves (GW) and high energy neutrinos. I wrote and maintain LLAMA, a cutting-edge MMA search pipeline and software framework implementing the world's first GW+neutrino online search pipeline. I have also worked extensively on LIGO's timing system hardware and diagnostic software as well as education and outreach. See the research tab for more information on my GW/MMA research contributions.

LATE PHD RESEARCH: GW/NEUTRINO MULTI-MESSENGER PIPELINE

In the late Initial LIGO and early (O1) Advanced LIGO eras, my team spearheaded the first GW/neutrino offline searches. During aLIGO's second observing run (O2), I took their method online by creating a software framework, LLAMA (Low-Latency Algorithm for Multi-messenger Astrophysics), for conducting this GW/neutrino search and disseminating results in low-latency. LLAMA was the world's first fully-automated low-latency GW/neutrino search pipeline and the first automated GW follow-up search to include spatial priors in its calculations. It proved to be highly reliable and was regularly the fastest GW follow-up search to run during O2. Follow-up with IceCube high-energy neutrinos was performed for all LIGO/Virgo O2 triggers, including the historic kilonova GW170817; sub-threshold joint triggers overlapping with the original 1-detector GW skymap were successfully distributed and followed up by some EM partners, but improved GW skymap localizations showed a non-coincidence. The LLAMA framework was also used during O2 to run critical diagnostic checks of LIGO's timing system. It processed public triggers and demonstrated the ability to process LIGO sub-threshold triggers as well (though actual significance and plotting steps could not be performed on sub-threshold triggers without code review).
I used an advanced architecture, cloud-hosting solutions, and other advanced devops practices (which are only now becoming standard practice in MMA efforts) to achieve its exceptional performance. For O3, my team and I developed an improved Bayesian significance calculation for our pipeline that uses other astrophysical source data to boost sensitivity. This search method relies on a general Bayesian odds ratio which can be readily extended to other messenger types and 3+ detectors. During this time, I also added extensive reliability and performance improvements and wrote a highly performant multi-resolution HEALPix math library to ensure that LLAMA could run quickly and reliably with arbitrary spatial-prior-motivated core significance calculations. This was done to ensure that the new Bayesian method would work for GW/neutrino searches as well as any extensions added during O3. LLAMA has been running with the new code during O3, regularly finishing in less than a minute from GW trigger generation time (though IceCube result dissemination time has been longer due to humans-in-the-loop). Throughout this time, I gained extensive expertise in software development/operations methods and astrophysics infrastructure by creating software tools in support of my entire team's scientific goals (many of these tools can be found in these three repositories). I developed advanced software for data retrieval, caching, validating, cleaning, and formatting, all highly non-trivial tasks due to LIGO's large data volumes and limited infrastructure. This also required developing virtual machines, cloud-hosted images, provisioning scripts, and extensive documentation to cope with the tremendous operational complexity of using constantly-changing experimental (in all senses of the word) scientific libraries with limited support and documentation.
My teammate Rainer Corley and I extensively documented the photodiode layout of the Interferometer Sensing and Control (ISC) system (a crucial subsystem used for scientific data collection and interferometer control) and created an interactive installation map showing the layout and operation of the whole system. I made many other contributions to detector operations, like making useful LIGO tools available on MacPorts and adding control interfaces to LIGO's EPICS/MEDM industrial control system. I also used my technical expertise to make modest non-astrophysics contributions to my group's public health efforts. (Much of my publicly-available work is linked above; some of the rest is spread across DCC, arXiv, BitBucket, and my GitHub.)

UNDERGRAD & EARLY PHD RESEARCH: LIGO TIMING SYSTEM

I started working with the Marka group as an undergraduate at Columbia. During this time I contributed to the development of Advanced LIGO's timing system, which my group was building from scratch to address weaknesses in Initial LIGO's commercially-built timing system. I debugged FPGA timing code; developed documentation and workflows for assembly, firmware-flashing, and testing; and personally assembled, tested, and shipped most of Advanced LIGO and KAGRA's timing system. At the beginning of my Ph.D., I resumed my undergraduate work on LIGO's timing system. I developed diagnostic systems and software to ensure the timing system's reliability. This involved developing remote-work tools, devops methods (like virtual-machine images), installation documentation, maintenance procedures, and documentation that enabled timing system supervision from NYC. Thanks in large part to these efforts, my group has maintained the timing systems at both LIGO sites (this is no small accomplishment; the Columbia group is the only group that manages and maintains mission-critical LIGO hardware from off-site).
The timing system has performed flawlessly throughout the Advanced LIGO era, enabling precise event reconstruction and source localization in support of LIGO's historic, Nobel-prize-winning detections. Japan's KAGRA GW detector will also rely on our proven timing system.

Low-Latency Algorithm for Multi-messenger Astrophysics (LLAMA) with Gravitational-Wave and High-Energy Neutrino Candidates (2019), Countryman, S., A. Keivani, I. Bartos, et al., eprint arXiv:1901.05486

GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo during the First and Second Observing Runs (2018), The LIGO Scientific Collaboration, the Virgo Collaboration, B. P. Abbott, et al., eprint arXiv:1811.12907

Bayesian Multi-Messenger Search Method for Common Sources of Gravitational Waves and High-Energy Neutrinos (2018), Bartos, I., D. Veske, A. Keivani, et al., eprint arXiv:1810.11467

Search for High-energy Neutrinos from Binary Neutron Star Merger GW170817 with ANTARES, IceCube, and the Pierre Auger Observatory (2017), Albert, A., M. André, M. Anghinolfi, et al., The Astrophysical Journal Letters, Volume 850, Issue 2, article id. L35, 18 pp.

GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral (2017), Abbott, B. P., R. Abbott, T. D. Abbott, et al., Physical Review Letters, Volume 119, Issue 16, id. 161101

Observation of Gravitational Waves from a Binary Black Hole Merger (2016), Abbott, B. P., R. Abbott, T. D. Abbott, et al., Physical Review Letters, Volume 116, Issue 6, id. 061102
[En-Nut-Discussion] Off-topic - Building Nut/OS fast...very fast
harald.kipp at egnite.de
Tue Jul 19 17:21:37 CEST 2011

On 7/19/2011 1:45 PM, Thiago A. Corrêa wrote:
> Probably the problem lies with building Nut/OS for each platform.
> Maybe it's obvious, but make -j NUMBER_CORES does speed up the build
> quite a lot. I used that in qnutconf and got build down to half the

Yes, I tried this as well, and now the CPU cores are sometimes up to 90%. Though, I used -j instead of -j8; the latter is still a bit slow.

A minor problem is that I had to split "make clean all install" into "make clean", "make all" and "make install". The big problem is that failures are much harder to find: if one process fails, the others are still producing more output. I observed the same problem with qnutconf. (Btw., sorry for adding the ugly controls "verbose" and "clear" in the qnutconf main window. I'll fix this next time.)

FYI, I'm running nut/tools/packaging/distcheck.lua. It finally produces a file errors-x.y.z.log, which becomes unreadable with option -j.

> time compared to nutconf. We could also consider tweaking the
> makefiles to support ccache.

I don't know about this, but will try it.

> Harald, are you building for release or build tests? Maybe we could
> consider a buildbot?

Most of the time it is used to check that everything is still building fine. Of course the same script finally produces a distributable package, but that's required seldom.

You mean a build bot that automatically tries to build all targets after each commit? The ETH Zurich provided this service some time ago, but stopped after BTnut was no longer actively developed. It often produced cryptic results, but in general it had been quite useful.

Right now I'm starting the script manually. If it runs through, no problem. If it detects a problem, then I have a problem. In most cases more than one fix is required, requiring several runs until everything is in shape again. Sometimes this takes many hours.
A build bot which would email a complaint to the committing developer would definitely help. Does SourceForge offer this service, and if so, can anyone recommend it, or are there better alternatives?
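The split-phases workaround described in the post can be demonstrated with a toy Makefile (the targets here are hypothetical, GNU make assumed). Under -j, a single "make clean all install" may run "clean" concurrently with the build and interleaves the output of all jobs; separate invocations keep each phase's failures attributable:

```shell
# Toy demonstration of splitting "make clean all install" into phases
# when building in parallel. The Makefile and its targets are made up.
printf 'all: a b\na:\n\ttouch a\nb:\n\ttouch b\nclean:\n\trm -f a b\n.PHONY: all clean\n' > Makefile
make clean      # phase 1: serial, nothing races with the build
make -j2 all    # phase 2: independent targets a and b build in parallel
ls a b          # both targets were produced
```

Each phase returns its own exit status, so a failing step is immediately identifiable even when -j interleaves job output within a phase.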
I’ve just released a new Math CAPTCHA Library for CodeIgniter, which can use plain-text English words for numbers and random question phrases. It also supports multiple languages (as it uses the core language library) and both addition and multiplication. It’s still in the early stages so it needs to be put through its paces, but hopefully the CodeIgniter community will find this a nice alternative to the regular image CAPTCHA or simple math CAPTCHA. The library comes with 5 English language phrases and English numerals, but can easily be set up to use any other language by replicating and translating the language file. Users are also encouraged to enter their own phrases (as many as you like) in order to make the CAPTCHA more random. The phrases are randomly selected:

What do you get if you add eight to five?

Or if you’d prefer to have numbers in the phrase:

What is 7 plus 6?

Or mix it up:

Add 10 to ten, what do you get?

Answers can be enforced to either number only, word only, or either. Where can I find it? Head over to GitHub to view/download the latest development version and start testing. Any comments and suggestions welcome :)

The folder structure should match the CodeIgniter folder structure.
- Copy the
- Copy the
- If you would like to use a language other than English you will need to duplicate the language file and translate the numbers and phrases respectively
- Initialise the math CAPTCHA library and include it in your controller. Example below:
- Add a callback for the math CAPTCHA form validation if you are using the CodeIgniter Form Validation library. Example below:
- Print the $mathcaptcha_question on your form somewhere. Example below:
- And that’s it!

There are some configuration options which you can pass to the library in an associative array when you
- language: This should be set to the language that you want to use. It will default to the language set in the Codeigniter
- operation: The type of math question to use; addition or multiplication.
This will default to addition if not specified.
- question_format: The type of number to include in the question; random. This will default to word if not specified.
- question_max_number_size: The maximum number size to use in the question. The default is 10, which is also the maximum allowed given the limitations of the language file.
- answer_format: The type of answer that is allowed; word means the user must answer in a word, numeric means the user must enter the number, or either for, well, either.

In order to make your installation of math CAPTCHA more unique you can try changing/adding more phrases to the language file. If you add more than 5, adjust the MATHCAPTCHA_NUM_MULTIPLICATION_PHRASES constants in the library file appropriately.

Photo credit: Robot by Sebastian Lund on Flickr: http://www.flickr.com/photos/96khz/3127953038/
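To make the behavior of these options concrete, here is a small illustrative sketch (in Python, not the library's actual PHP code; all names here are hypothetical) of how a question/answer pair can be generated from an operation, a question format, and a maximum number size:

```python
import random

# Hypothetical sketch of worded math-question generation, mirroring the
# operation / question_format / question_max_number_size options above.
NUMBER_WORDS = ["zero", "one", "two", "three", "four", "five",
                "six", "seven", "eight", "nine", "ten"]

def make_question(operation="addition", question_format="word", max_number=10):
    """Return (question_text, numeric_answer). max_number is capped at 10
    because NUMBER_WORDS only covers zero through ten."""
    max_number = min(max_number, 10)
    a, b = random.randint(1, max_number), random.randint(1, max_number)
    if operation == "addition":
        answer = a + b
        template = "What do you get if you add {x} to {y}?"
    else:  # multiplication
        answer = a * b
        template = "What is {x} times {y}?"
    # word format spells the operands out; numeric format prints digits
    fmt = (lambda n: NUMBER_WORDS[n]) if question_format == "word" else str
    return template.format(x=fmt(a), y=fmt(b)), answer
```

The real library draws its phrases and number words from CodeIgniter language files, so swapping languages only means translating that file.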
Issue 132 2018-11-08

Welcome to another issue of Haskell Weekly! Haskell is a safe, purely functional programming language with a fast, concurrent runtime. This is a weekly summary of what’s going on in its community.

The second annual state of Haskell survey started last week on the 1st and continues until the 15th. More than 2,800 people filled out the survey already. If you already filled it out: Thank you! Please share it so we can get a good picture of the Haskell community. If you have not filled it out yet: We want to hear from you! Please take a few minutes to fill it out.

GHC 8.6.2 released by Ben Gamari
The GHC team is very happy to announce the availability of GHC 8.6.2, a bugfix release to GHC 8.6.1. The 8.6.2 release fixes several regressions present in 8.6.1.

Hakyll part 1: Setup & initial customization by Robert Pearce
First post in a series on making & customizing a static site with Hakyll.

Exceptionally monadic error handling by Jan Malakhovski
We notice that the type of the catch :: c a -> (e -> c a) -> c a operator is a special case of the monadic bind operator (>>=) :: m a -> (a -> m b) -> m b, and the semantics (surprisingly) match.

Lambda the Ultimate pattern factory by Thomas Mahler
Recently, while re-reading through the Typeclassopedia I thought it would be a good exercise to map the structure of software design patterns to the concepts found in the Haskell type class library and in functional programming in general.

Haskell at FINN.no by Sjur Millidahl
Haskell is a purely functional programming language, with a powerful type system. The ability to express intent using types brings correctness, and the composition of a large program as small, independent building blocks makes it easy to reason about the code.

Haskell by example: Utopian tree by Jan van Brügge
In this series we solve coding challenges from Hackerrank in Haskell in a proper, functional way. A Utopian Tree has two growth spurts every year, one in spring and one in summer.
My experience upgrading GHC, build tools, and dev tools by Matt Renaud
I went through the process of setting up my environment again and wanted to document my process and the pain points I ran into.

Signal processing in Haskell by Rinat Stryungis
Today I would like to tell you about my work in the laboratory of the Physics Department of Moscow State University, where I study for a Master’s degree.

The trouble with typed errors by Matt Parsons
What we really want is: order independence, no boilerplate, easy composition, and easy decomposition.

Waargonaut the JSONer by Sean Chalmers
Waargonaut is a Haskell library for encoding/decoding/manipulating JSON. The design and development of which has been driven by a dissatisfaction with the current status quo of JSON libraries in Haskell.

Mercury is building a bank for businesses. We are currently 8 people and have raised $6m. We are close to alpha launch and are looking to grow our team.

We are creating the next generation AI chip. Our software team is looking for exceptional compiler experts to help us create the software on the chip.

We’re looking for a talented colleague to join our small language and data tools team. The ontology team at CrowdStrike researches, develops, and maintains tooling central to the data model used throughout our engineering department, including custom languages and compiler environments.

Work as part of the globally distributed engineering team, together with the product and design teams, to define, develop and deliver on products.

You are passionate about build systems that can manage a large-scale, multi-language codebase. You are interested in building tools that can prevent complex bugs and keep our code clean.
- A tale on semirings
- An answer to “The Trouble with Typed Errors”
- Applicative validation
- Benchmarks for Haskell serialization libraries
- Carnap.io: A formal logic framework for Haskell
- Darcs Hub future
- Eliminating run time errors in OCaml and Haskell
- Elm part 3: Adding effects
- Hacktoberfest 2018 wrap-up
- Haskell implementations archive
- Haskell vs. Go vs. OCaml vs. …
- Hasktorch v0.0.1
- How can I become comfortable with laziness in Haskell?
- Journal of Functional Programming: Call for PhD abstracts
- Moving towards dialogue
- Pandoc donation from Handshake
- Pandoc for Italy, exploratory post
- Proposal: Stack code of conduct
- Proving monoids with Idris

Package of the week
This week’s package of the week is QuickCheck, a library for random testing of program properties.

Call for participation
- Amazonka and Gogol project status update
- hs-web3: Hashable instances for Solidity primitive types
- stack: Simple commands should work without ghc installed

- 2018-11-08 in Raleigh, NC, USA by Raleigh Haskell Meetup: (hack . yack)
- 2018-11-08 in Durham, NC, USA by Durham Haskell Meetup: Morning Haskell (and Rust!) coding session
- 2018-11-09 in Austin, TX, USA by Austin Types, Theorems, and Programming Languages: Going through Software Foundations by Benjamin Pierce et al
- 2018-11-10 in Chilliwack, BC, Canada by ChilliHask Haskell User Group: Weekly Haskell Coding Meetup
- 2018-11-10 in Norcross, GA, USA by Norcross Haskathon: Norcross Haskathon
- 2018-11-10 in Boston, MA, USA by Weekly Functional Programming Meetup: Hang out, chat FP, work on some code
- 2018-11-12 in Irvine, CA, USA by Orange Combinator - Functional Programming In OC: Combinating - The Weekly Function
- 2018-11-12 in Pittsburgh, PA, USA by Pittsburgh Functional Programming Meetup: Type Providers - F#
- 2018-11-13 in Vancouver, BC, Canada by Vancouver Functional Programmers: Haskell Lunch Study Group • Fall ’18 Cohort
- 2018-11-13 in Santa Monica, CA, USA by Santa Monica Haskell Users Group: Haskell Study Group: Applicatives
- 2018-11-14 in Portland, OR, USA by Portland Functional Programming Study Group: PDX Func Theory Track - Logic and Proof
- 2018-11-14 in Houston, TX, USA by Houston Functional Programmers: Symbolic Boolean Computation and Set-Theoretic Empirical Research in OCaml
- 2018-11-08 in Graz, Austria by Functional Programming Graz: Functional Programming Meetup
- 2018-11-08 in London, United Kingdom by Hoodlums: Hoodlums Meetup
- 2018-11-08 in Gdańsk, Poland by Functional Tricity: Haskell&Rust! Functional Tricity #14
- 2018-11-08 in Warszawa, Poland by warsaw.ex: warsaw.ex meetup #1
- 2018-11-12 in Karlsruhe, Germany by Karlsruhe Haskell Meetup: Haskell Monday
- 2018-11-13 in Bristol, United Kingdom by CodeHub Bristol: Hack Night + Haskell Study Group
- 2018-11-14 in Stuttgart, Germany by Lambda Stuttgart: Lambda Stuttgart Meetup #10
- 2018-11-14 in Berlin, Germany by Berlin Haskell Users Group: Haskell Wednesday
- 2018-11-14 in Amsterdam, Netherlands by FP AMS: CT study group: Representable Functors
- 2018-11-14 in Prague, Czech Republic by Prague Lambda Meetup: Clojure Wednesday
- 2018-11-08 in Taipei, Taiwan by Functional Thursday: Functional Thursday #69 (time changed: 11/8)
- 2018-11-11 in Bangalore, India by Bangalore Functional Programmers Meetup: Parser Combinators in Haskell
- 2018-11-13 in Brisbane, Australia by Brisbane Functional Programming Group (BFPG): BFPG Monthly Meetup (last one of 2018!)
- 2018-11-12 in Sandton, South Africa by Lambda Luminaries: Functional Aspects of Kotlin in Android Development by Chester Cobus
using NUnit.Framework;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Medallion.Collections.Tests
{
    internal static class NodeValidator<TNode, TKey>
        where TNode : Node<TKey, TNode>
    {
        public static void Validate(TNode node, IComparer<TKey> comparer = null)
        {
            if (node == null) { return; }

            // the leftmost and rightmost keys bound every key in the tree
            var lowerBound = Traverse.Along(node, n => n.Left).Last().Key;
            var upperBound = Traverse.Along(node, n => n.Right).Last().Key;
            InternalValidate(node, comparer ?? Comparer<TKey>.Default, lowerBound, upperBound);
        }

        private static void InternalValidate(TNode node, IComparer<TKey> comparer, TKey lowerBound, TKey upperBound)
        {
            var expectedCount = Node<TNode>.ComputeCount(node.Left, node.Right);
            if (node.Count != expectedCount)
            {
                Assert.Fail($"Expected count {expectedCount}. Was: {node.Count}");
            }
            if (comparer.Compare(node.Key, lowerBound) < 0)
            {
                Assert.Fail($"Found value '{node.Key}' below lower bound '{lowerBound}'");
            }
            if (comparer.Compare(node.Key, upperBound) > 0)
            {
                Assert.Fail($"Found value '{node.Key}' above upper bound '{upperBound}'");
            }
            if (node.Left != null)
            {
                InternalValidate(node.Left, comparer, lowerBound, node.Key);
            }
            if (node.Right != null)
            {
                InternalValidate(node.Right, comparer, node.Key, upperBound);
            }
        }
    }
}
As you probably should know by now there was a big cleanup with OpenGL 3.1, when the functions deprecated with OpenGL 3.0 were removed. This means that the immediate mode is gone, as well as all fixed-function options, which results in a more flexible and object-oriented API. But this also means that programs written with OpenGL 3.1+ are fundamentally different than those written with OpenGL 2 and before. So if you develop a new application it makes sense to stick to the OpenGL 3 model. The problem is that the current Linux open source drivers are stuck at OpenGL 1.5 support at most, due to missing GLSL support. But what if you want to develop an OpenGL 3 compatible renderer today? This post will cover how to create a renderer based on the OpenGL 3 model, but only using OpenGL 1.5 constructs. If you want to know more about the OpenGL 3 model, read here.

The core of OpenGL 3 probably is that you feed arbitrary data into vertex/fragment shaders which do something useful with it, instead of specifying directly what to do. These shaders are written in GLSL, which is a sort of a C dialect. But as I said there is no GLSL with OpenGL 1.5, so we will have to use vertex/fragment programs. They do basically the same as shaders, but are written in assembly. Luckily one can compile GLSL shaders to vertex/fragment programs using Nvidia's Cg, so we obviously need the nvidia-cg-toolkit.

Vertex/fragment programs are also loaded differently, so while for a vertex shader you do

unsigned int prog = glCreateProgram();
unsigned int id = glCreateShader(GL_VERTEX_SHADER);
const char* src = progStr.c_str(); // glShaderSource needs an lvalue array of pointers
glShaderSource(id, 1, &src, NULL);
glCompileShader(id);
glAttachShader(prog, id);
glLinkProgram(prog);

you do for a vertex program

unsigned int id;
glGenProgramsARB(1, &id);
glBindProgramARB(GL_VERTEX_PROGRAM_ARB, id);
glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, progStr.length(), progStr.c_str());

The pushing of data into the shader is also handled differently.
For a GLSL shader:

// we bind the uniform variable "matPMV" from the vertex shader
int loc = glGetUniformLocation(prog, "matPMV");
glUniformMatrix4fv(loc, 1, GL_FALSE, matPMV);

// bind the attribute "vertexPos" in the shader
#define VERTEX_ATTR 0
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBindAttribLocation(prog, VERTEX_ATTR, "vertexPos");
glEnableVertexAttribArrayARB(VERTEX_ATTR);
glVertexAttribPointerARB(VERTEX_ATTR, 3, GL_FLOAT, false, 0, 0);

For a vertex program:

// we have no uniforms here yet
// this binds C0..C3 to the model view matrix
glBindProgramARB(GL_VERTEX_PROGRAM_ARB, id);
glProgramLocalParameters4fvEXT(GL_VERTEX_PROGRAM_ARB, 0, 4, matPMV);

// attributes work almost the same
#define VERTEX_ATTR 0
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArrayARB(VERTEX_ATTR);
glVertexAttribPointerARB(VERTEX_ATTR, 3, GL_FLOAT, false, 0, 0);

But since we cannot specify the variable name in the C code, we have to do it in the shader:

uniform mat4 matPMV : C0;
attribute vec4 vertexPos : ATTR0;

The ordinary GLSL vertex shader would just read:

uniform mat4 matPMV;
attribute vec4 vertexPos;

But wait, I said there is no shader support in OpenGL 1.5. Right – this is where cgc comes into play. You have to compile your shader manually with it into a vertex program, which you finally load. The command for this is:

cgc -oglsl -profile arbvp1 vertexShader.cg -o vertexProg.vp

As you can see, the differences are pretty small using this technique, so you can start developing using the open source drivers now 🙂
Fraud Bots Cost Advertisers $6 Billion

This kind of argument is very annoying. Whenever somebody tries to charge for content, somebody else will copy it and distribute it for free. So, it's almost impossible, in the long run, to charge for content and continue to make a profit. All that's left is creating a better "wrapper" for the consumers. It takes time and energy to do that, and people don't want to enter a credit card to experience a site, so there really aren't a lot of options left.

Philae's Batteries Have Drained; Comet Lander Sleeps

A bit more detail: nuclear batteries used to power probes like Voyager used plutonium-238, which is available via the US and Russia. Bottom line, the ESA would need to rely on its supply of americium-241 to create the next generation of batteries. The conversation about using the stockpiles of americium-241 to create batteries really started in earnest (media coverage-wise, at least) in 2012, which was after this probe was deployed.

33 Months In Prison For Recording a Movie In a Theater

I read something a few months back that really struck me. I don't recall the source, so I'll try to paraphrase to the best of my ability. The basic tenet is that punishing a crime with the intent to get back at the offender is nothing more than revenge and is not the intent of the rule of law. The rule of law is to 1) remove violent and disruptive individuals from society, 2) discourage others from perpetrating the same crime. In cases with violent and disruptive components, such as assault and drug dealing, it's very clear that incarceration is the best option. For non-violent crimes, such as IP theft, money laundering, etc., it's not really so clear.
Since the intent this time wasn't to remove the individual from society (which I think we can all agree wasn't necessary in this case), that means that the judge somehow A) determined the value of the stolen film, B) decided that 33 months was the amount of incarceration that would discourage others from stealing the same "value" of property. The judge ruled out public service, ruled out probation, and ruled out fines as an acceptable deterrent to future offenders. While it's easy not to agree with the ruling, it takes a very good understanding of the human psyche to know when a penalty is enough to discourage OTHERS from committing the same crime.

Ask Slashdot: What Would You Do With Half a Rack of Server Space?

If this were for my company, I'd want to do two things with the hardware. First, use it to back up the cloud environment. Maybe not the applications, but definitely the data. Disaster recovery is always paramount in the corporate world. Second, I'd want the hardware used to try out some new software, techniques, file systems, media servers, etc. It's never too late to learn new skills, and what better to learn on than servers you don't mind wiping if they get messed up. Using them to mine bitcoins is far less valuable (in a corporate environment) in the long run than using them to learn new skills and gain exposure to new software.

Google's Experimental Newsroom Avoids Negative Headlines

I was thinking the same thing... this is what Facebook did as a social experiment in a way. Personally, I'm supportive of Facebook's experiment as it added to the scientific body of work about social manipulation. In my opinion there's no expectation of equal "news" coverage on a social site, website, blog, TV station, or anywhere. As long as there are other options available, I say that "news" services can run their service without editorial oversight by the Government.
Mayors of Atlanta & New Orleans: Uber Will Knock-Out Taxi Industry

You mean, kinda like a limo service does now? In other words, there's already a "private club" service that lets the wealthier and frequent fliers get whisked efficiently to where they are going.

Valve's Steam Machines Delayed, Won't Be Coming In 2014

Mod up parent, please.

Apple's Revenge: iMessage Might Eat Your Texts If You Switch To Android

The email Verizon sends an Android upgrader includes a link labeled "Prepare and Activate". The page clearly explains how to deal with this. This ENTIRE ARTICLE is about somebody who didn't RTFM and got bit in the butt.

'The Door Problem' of Game Design

Which is the author's point. A programmer, not just a person who programs, has a special way of looking at the world and its systems. The conversation she's having with people is designed to separate those two kinds of people. Systems are generally more complex than they appear on first glance--and a real programmer is very able to visualize, define, and describe the system to whatever level of complexity is required. That being said, a GOOD programmer (and his manager) is able to keep feature creep in check by not getting distracted by out-of-spec parameters.

MtGox Finds 200,000 Bitcoins In Old Wallet

In that same spirit, here's the ' you missed out of "it's"

Report: Space Elevators Are Feasible

So turn it sideways so that one slightly non-tiny object can destroy the entire ribbon?

MIT Develops Inexpensive Transparent Display Using Nanoparticles

To be useful for windshields, I think it would be necessary to allow light in from the outside (into the car) regardless of wavelength. I watched the video but it wasn't clear to me that they could make the reflection occur on only one side of the surface.
Researchers Develop "Narrative Authentication" System

Narrative authentication has been used by the military for years to authenticate the identity of soldiers found in the battlefield who are able to communicate but don't have any form of identification.

Come Try Out Slashdot's New Design (In Beta)

In the new design, the bullets in the articles don't have bullets! This makes for some weird looking posts. For example, check out this same article in Beta.

Ask Slashdot: How Do I Request Someone To Send Me a Public Key?

Your first paragraph is already implemented in something called SPF. It already works using the existing DNS infrastructure. The problem is that creating SPF records is effectively voluntary, so operators of mail servers are only able to use the existence of the records as a way to increase trust, and not use the absence of the records as a way to decrease trust. Until everybody is on board with it, unfortunately, its usefulness will be limited. And, just for clarity, a POP3 "server" doesn't accept mail. POP3 is a protocol for retrieving mail from a mail server that likely received the mail from another mail server via SMTP. SMTP is the problem, not POP3. And no, it won't solve the NSA problem, or the Google problem. They'll just build bigger and faster computers to decrypt the emails.

Red Hat Ditches MySQL, Switches To MariaDB

So, I paid a couple thousand dollars for my SQL Server license, but I get a more feature-complete, more stable product that does exactly what I need it to do. I'm a bit glad I didn't adopt the apparently unstable MySQL. As a business person, and not as a developer, MySQL (and its forks) seems to be turning into a train wreck that is best to avoid. (about a year and a half ago)

USA Calling For the Extradition of Snowden

I agree a defense fund should be started.
Not because I think he's innocent, but because spending more time in the courts on the broader subject of privacy and the limiting of the government's grasp is important. He fell on the sword--he's a brave, wrong man. (about a year and a half ago)

Saudi Arabia Blocks Viber Messaging Service

Does this article suggest that all other messaging services that are operational in Saudi Arabia are being monitored? Would something like Facebook chat, if it's transported over SSL, be considered encrypted? If it's operating in SA (not sure if it is... just asking) does that mean that the SA government has been given the "keys to the castle", so to speak? (about a year and a half ago)

Narrowing Down When Humans Began Hurling Spears

Perhaps they mean "Hurtling"? (about a year and a half ago)

UW Imposes 20-Tweet Limit On Live Events

"Random fan" probably doesn't have nearly as many followers as the media tweeter. So, no problem.
how to hide static pages? i was given 3 static pages e.g proposal.test.com/seo proposal.test.com/ppc proposal.test.com/design I checked those directories in the server and there's no dynamic about their indexes, all plain htm file. the instruction given to me was, hide those url from anyone that doesn't match a random url from database..meaning e.g if user typed proposal.test.com/seo ,it shouldn't display the page, if the user typed something like e.g proposal.test.com/seo/a13sdfa and a13sdfa matched a key from a databased, that's the only time the proposal.test.com/seo page will be displayed so how am I gonna do this in PHP ? because all 3 directories are made up of pure static pages.. i have done the creating of keys already, i just wanna know how to hide these pages by appending a given random key and checking if it does or don't exists in database. Pardon the question, why this? I never heard of this before... Is that homework? What have you tried, where did you fail? What web server are you using? sorry this is not a home work, it's a job related task, i just don't have any idea how to append a random key to a static page url and check it via php :( @sasori out of personal curiosity, can I ask you what's the purpose of this system? they want the e.g proposal.test.com/seo to be hidden to public, and only if there is a random key appended to the url that exist in database shall show the page, like e.g proposal.test.com/seo/asd1323 ..the purpose is, allow the boss/whatever to give urls via email sent manually I have added an example of htacess rewrites in my post, see below. Since the pages are never considered PHP, you can not block the access using PHP. You can block access by configuring your web server, for example by using a .htaccess file. If you blocked access the normal way, you can use PHP to allow access to the files on certain conditions.. 
How about this: move the static files outside the public folder, so they cannot be accessed directly; redirect all requests to a PHP file (you can use the rewrite engine with Apache) which will look in the database for the accessed url/key and return the file_get_contents of the corresponding file. Here's an example of how the .htaccess file could look:

RewriteEngine on
RewriteCond %{SCRIPT_FILENAME} !-f
RewriteCond %{SCRIPT_FILENAME} !-d
RewriteRule ^(.*)$ /index.php

What this does is the following: if the requested file doesn't exist on the disk (as a file or a directory), it will redirect to /index.php. There you should have the logic to render what page you want. If you don't know in which variable the server will put the slug, just do a print_r($_SERVER) from inside index.php to find it.

You should use mod_rewrite (in case of the Apache web server) and set up a rewriting of /a13sdfa into something like ?key=a13sdfa. Also you should include some PHP code in all static files in order to check the key validity.

can you give me a sample of what to place in the htaccess file in order to rewrite e.g. /a23sadf into e.g. ?key=123sadf, i guess it's the only way to check via $_GET['key'] at each static index page

There are 2 ways you could solve this problem. 1) (my preferred) Use .htaccess to only display the page if it matches the regex given in the .htaccess. 2) In PHP (your actual question): get the slug from the URL, query it against the database and if you get a result display it. Otherwise, send a 404 header from PHP. Assuming the following: you have an Apache webserver with mod_rewrite enabled (check phpinfo if you aren't sure), and your virtual host allows overriding (AllowOverride All).

.htaccess:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([^/]+) index.php?check=$1 [QSA,L]

If the file or directory exists on the server it will display the page. So it would display seo, design etc.
Otherwise it redirects to index.php and gives its slug as a parameter named $check. With this variable, query the database, check the result and redirect to the desired page.

you mean to say, if i'm gonna type e.g. proposals.test.com/seo, it would get redirected to index.php?check ? but i just tried that and nothing happened, and then i added a fake random key, e.g. proposals.test.com/seo/asd23, and i got redirected to a page-not-found error
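The lookup logic that index.php needs can be sketched as follows (the thread's actual context is PHP; this sketch is in Python purely to illustrate the control flow, and the page/key names are made up): take the slug handed over by the rewrite rule, split it into section and key, and serve the hidden page only when the key exists in the database.

```python
# Illustrative sketch (Python; the real implementation would be index.php).
# HIDDEN_PAGES maps URL sections to static files stored outside the public
# folder; VALID_KEYS stands in for the database table of random keys.
HIDDEN_PAGES = {"seo": "seo/index.htm", "ppc": "ppc/index.htm", "design": "design/index.htm"}
VALID_KEYS = {"a13sdfa", "asd23"}

def resolve(path):
    """Return the static file to serve for /<section>/<key>, or None for a 404."""
    parts = path.strip("/").split("/")
    if len(parts) != 2:
        return None                    # bare /seo etc. stays hidden
    section, key = parts
    if section in HIDDEN_PAGES and key in VALID_KEYS:
        return HIDDEN_PAGES[section]   # PHP would echo file_get_contents() here
    return None
```

Because the static files live outside the web root, there is no URL that bypasses this check; every request funnels through the front controller.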
Security certificate for forums.linuxmint.com

On forums.linuxmint.com there seem to be on-and-off problems with the security certificate. One minute the security certificate seemed to be acceptable and 5 minutes later it wasn't. For me, it has been on and off like this for the last 24 hours. I have never run into anything like this before. There is no issue with my CMOS battery (as far as my computer clock being off). Is there anything else that I should look into? I don't notice any issue with other sites.

I can confirm the certificate for the website is indeed invalid (Edge, Firefox, Chrome, and Chromium Edge). The problem has nothing to do with clocks. The certificate is fully valid on its own, but browsers refuse to accept it because it was issued for a different domain name than the one you're trying to visit – it's meant for the main Linux Mint website, not for the forums. Chrome actually tells you so, as does Firefox.

Note that the rejected certificate is quite new (issued yesterday and valid until Dec 2020), meanwhile the actual www.linuxmint.com site has a certificate expiring in Jan 2020. So the most likely conclusion is that the sysadmins ordered a new certificate for the main website, then accidentally deployed it to the wrong web server.

The certificate that I am told is for forums.linuxmint.com is the same certificate as for linuxmint.com, which means they mistakenly deployed a certificate that does not indicate it is for forums.linuxmint.com. Edge indicates the certificate signature is identical for both domains. Firefox's error message indicates this is the case; your answer is indeed correct, but the certificate expires Tuesday, December 8, 2020, 9:22:11 PM, not Jan 2020. I guess they are using a load balancer, then, with a few backend servers having the new certificate and a few servers having the old one.
Most of my direct connection attempts result in a certificate that was activated '2019-12-09 03:22:11 UTC' and expires '2020-12-09 03:22:11 UTC'. 03:22:11 UTC is 09:22:11 CST; it's the same certificate, and Edge apparently displays the expiration based on the system time. I don't automatically know what timezone you live in. Though, regarding "Jan 2020" – that is the old certificate that I get for www.linuxmint.com [activated '2019-01-06 14:20:48 UTC', expires '2020-01-06 14:20:48 UTC'], not the new certificate served by forums.linuxmint.com.

"One minute the security certificate seemed to be acceptable and 5 minutes later it wasn't. For me, it has been on and off like this for the last 24 hours." The certificate is indeed invalid. I had never been to the site before you submitted this question, and every browser I have attempted to use indicates the certificate is invalid. "There is no issue with my CMOS battery (as far as my computer clock being off). Is there anything else that I should look into?" The problem is with the certificate, not the client; there is nothing you can do to resolve the error other than ignoring it. The certificate is invalid for the domain you are asking about. The subject alternative name does not include the subdomain forums.linuxmint.com:

Not Critical
DNS name=linuxmint.com
DNS name=www.linuxmint.com

The certificate is currently only valid for linuxmint.com and www.linuxmint.com; when you visit the main domain, it does NOT generate an error. The certificate was just updated a few days ago. It might be an issue with a cache somewhere between you and the website. They use Google and Cloudflare caching services, and either of these could be having a delay updating their copy of the certificate. If you encounter a problem, hit "reload" up to three (3) times to force an update through the entire cache chain.
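The SAN mismatch the answers describe can be reproduced offline. This is a minimal sketch, assuming standard DNS-name comparison with single-label wildcards; the function name is illustrative, not a library API:

```python
# Check whether a hostname is covered by a certificate's subjectAltName DNS entries.
# Matching is case-insensitive; a leading "*." wildcard covers exactly one label.
def hostname_matches_san(hostname, san_dns_names):
    hostname = hostname.lower()
    for pattern in san_dns_names:
        pattern = pattern.lower()
        if pattern.startswith("*."):
            # "*.example.com" matches "foo.example.com" but not
            # "example.com" or "a.b.example.com".
            suffix = pattern[1:]  # ".example.com"
            prefix = hostname[:-len(suffix)] if hostname.endswith(suffix) else ""
            if prefix and "." not in prefix:
                return True
        elif hostname == pattern:
            return True
    return False

# The SAN list reported for the certificate served by forums.linuxmint.com:
san = ["linuxmint.com", "www.linuxmint.com"]
print(hostname_matches_san("www.linuxmint.com", san))     # True
print(hostname_matches_san("forums.linuxmint.com", san))  # False -> browser error
```

This is exactly why every browser rejects the forums certificate while the main site loads cleanly: the hostname check fails regardless of the client's clock.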
As for ensuring your system stays on time: your CMOS clock is not very reliable, and operating systems don't use it for more than booting their software clock. I recommend that instead you set your system to synchronize with a public NTP time server on boot and on a regular basis while booted (every 4–24 hours, depending on how accurate you need your time to be). This is unlikely to be a cache issue. I have never been to the site, but I am getting a certificate error, the issue dates are recent (Sunday, December 8, 2019, 9:22:11 PM), and my system time is correct.
STACK_EXCHANGE
[xquery-talk] xquery technology now ready? dlee at calldei.com Wed Dec 11 06:55:21 PST 2013 XQuery is quite mature and robust, and there are many solutions available, both Open Source and commercial, in all price ranges from free to very expensive – much like the RDBMS world, where you can get SQLite through Oracle and nearly everything in between. It is important to realize that 100% "pure XQuery" is not sufficient to build a web application; you also need a web server or application server, either one which comes with a pre-integrated XQuery product or one which you add XQuery to. Now the question of "cost effective" is subjective. Clearly the software has to run on something, and that something is never free. If you are asking specifically about hosting services where they provide all the infrastructure, including the web server, then you may be hard pressed to find one that is XQuery capable. However, if you host your own server (such as on Amazon, where for at least the first year there is a "Free Tier" which is sufficient for small web servers), the out-of-pocket expense can be literally zero. And after the first year a reasonable web server can be run for a low price. For example, I run my personal web site along with about a dozen others, including xmlsh.org, on an "m1.small" instance on Amazon, and the total cost including storage runs me about $30/month. This includes 100G of storage I use for my personal photo site. A t1.micro site will cost you about $15/month on-demand or about $8/month if you prepay. Google Compute just went public, so it's another choice, but I have not done a cost analysis on it yet. Only you can tell if this is in your budget, but it seems reasonably low cost to me. (Compare to the typical "cup of coffee" metric ...
2–5 cups of Starbucks a month will get you a fully functioning virtual host.) As you work up the scale of both size of web site and necessary hardware – and especially don't neglect support or more advanced services – you will have to pay something for it. Sometimes that is paying with your time, or paying for someone else's time. Also, depending on what application and data you are working on, XQuery may or may not be the easiest or "best" fit for the whole enchilada. For myself, for example, I am a very strong and vocal XQuery advocate and work for a company that produces an XQuery product, but when I build my own web apps I frequently use XQuery at the database layer of a multi-tier solution, much like I would use MySQL for the database. You can – and I have personally – produce quite good web applications 100% in XQuery, but it isn't the only reasonable route to take. Another reasonable route is to use an app server like Tomcat or JBoss and then use XQuery "on the back end" to handle the database and heavy transformation, but leave the presentation layer to the application server. The choice is yours. The solutions are plentiful; it's a rainbow of solutions, not black & white. What approach works "best" or even "reasonably well" for you may vary dramatically from others or even yourself. But I can assert with reasonable authority that there is a wide variety of quality XQuery implementations to choose from in the price range from Budget to Gourmet. And *unlike* XForms, XQuery does not have to solve the GUI problem. It can, but it can also be used in conjunction with other front-end tools, or can be used all by itself (in an XQuery implementation which supports application servers). David A. Lee dlee at calldei.com From: talk-bounces at x-query.com [mailto:talk-bounces at x-query.com] On Behalf Of e-letter Sent: Wednesday, December 11, 2013 9:28 AM To: Christian Grün Cc: Liam Quin; talk at x-query.com Subject: Re: [xquery-talk] xquery technology now ready?
On 10/12/2013, Christian Grün <christian.gruen at gmail.com> wrote: >> From all the recommendations, is it correct to assume that a dedicated >> web server will need to be used, where permissions are provided to >> install these various software products? > If you simply want to learn the language, there is no need to look out > for professional hosting solutions. Some XQuery implementations come > with their own web server code, and they are often pretty light-weight > (and can also be run locally). As Adam indicated, if you are looking > for open source solutions, you could have a look at BaseX (disclaimer: > I’m contributing to this project), or eXist-db, which are both easy to > install, and check out the included examples. The problem with the general lack of widespread deployment of XQuery is a risk of disappointment similar to XForms. I would like to learn XQuery with the knowledge that practical deployment is possible and cost effective; otherwise the technology doesn't seem ready yet, and so SQL database + server scripting language remains the option. talk at x-query.com More information about the talk
OPCFW_CODE
I would like to play a message to notify callers that all calls will be recorded. It must play before connecting to an operator’s extension. This typically plays as the call enters the system if you record all calls. This is fairly new, so firmware would need to be up to date. A caller only needs to be recorded when connected to an operator. The message will only play when a call is about to be recorded. If you don’t want to go update things, you can use an IVR for this. Make all the entries blank, set up the recording to be the message you want to play to inform people that the call will be recorded, set the timeout to zero, and set the timeout destination to your operator. Then send the call to this IVR. They’ll hear the message and be directly transferred to the operator. You can also use a ring group or a queue to do this: record your message saying that the call will be recorded and use it as your “Music on Hold”, but there is no guarantee that the caller will hear this message in its entirety before the call is picked up by one of the ringing phones. Thanks, I upgraded to firmware version 18. Is it possible to change the recording under PBX Settings -> General Settings -> General Preference -> Record Prompt? … “This call will be recorded.” This is the only way I know of. Is the recording saved somewhere on the operating system? I can log in using an SSH terminal. Of course it’s saved; you just don’t have access to it that way. This isn’t a Linux-based Asterisk box that you can SSH into and have access to the file system. It’s a closed ecosystem where you can only gain access to things that Grandstream allows. Yes, you can change it. It is an internal Asterisk file, so you must replace it via Voice Prompt. There is a manual for this. Is it possible to download the existing default voice prompts, make a change, and upload them all? One at a time, maybe. Again, you have no direct access to the filesystem.
I always recommend adding the disclaimer on the main IVR; that way you can choose to record or not. Also, you can play the disclaimer as the “prompt” of a ring group where the operator is the only member of the group. That way the phone will ring after the disclaimer is played. Of course you still need to get the disclaimer and add it as a “custom prompt”, and I guess you can get it from the internet since it is an Asterisk prompt. Yes and no. You cannot download them from the UCM, but most are Asterisk prompts, so you can download them from there. Anyway, why do you need them? You just replace what you need and leave the rest not uploaded. Is this right?
- Create a folder called “en_new_prompt”.
- Put “dialog-being-recorded.wav” into the folder.
- Create a text file called “info.txt” containing the text “English” followed by a blank line.
- Zip “info.txt” and “en_new_prompt” into “en_new_prompt.zip”.
- Open “PBX Settings” -> “Voice Prompt” -> “Choose Voice Prompt to Upload”, and select “en_new_prompt.zip”.
- Select “Language:” -> “English : en” or “English: en_new_”? Click “Check Prompt List”? English has an up arrow and cannot be downloaded.
- Click “Apply Changes”.
Basically, yes. The new file will replace the old one, and the rest will come from the original. For the extensions: if the language is not the default, you must set them to the new en_new_ set. This does not block downloading the original file.
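The packaging steps in the list above can be sketched in Python. The file and folder names follow this thread; whether the UCM accepts exactly this layout is an assumption from the discussion, not Grandstream documentation:

```python
import zipfile

# Build "en_new_prompt.zip" as described above: an info.txt naming the
# language, plus a folder holding only the prompt file being replaced.
def build_prompt_package(zip_path, wav_bytes):
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        # info.txt: the language name followed by a blank line.
        zf.writestr("info.txt", "English\n\n")
        # Only the prompt being replaced goes into the folder; prompts
        # that are not uploaded keep their original files on the UCM.
        zf.writestr("en_new_prompt/dialog-being-recorded.wav", wav_bytes)

build_prompt_package("en_new_prompt.zip", b"RIFF...")  # placeholder WAV bytes
```

Uploading only the changed file is the point of this layout: the UCM merges the package over the stock prompt set rather than replacing it wholesale.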
OPCFW_CODE
Does personal identity (the self) have to belong to a conscious being? How can what is empty know itself unless there is some intelligence present? It can't. That's like expecting the eye that can see so many objects to see itself; it cannot. Likewise, the self cannot see itself unless it does some kind of trick to do so. Without consciousness the self cannot diversify itself. John Locke has this view, and I agree with him. Just trying to get another input on Locke's view. Because surely I cannot have been unconscious or semi-unconscious my whole life, right? Is an infant conscious? Is this infant aware of its actions? @JamesBrewer. The self sees itself by means of intelligence, but how does intelligence see itself? Locke's theory was empiricist, in opposition to the Cartesian postulation of an immutable soul. It is the continuity of memory that creates self-awareness, according to him, and hence personal identity, and consciousness is essential to that. Locke's view of self is then similar to Leibniz's view of space: it is a relational construct rather than a Cartesian substance. This empiricist view found completion in Hume's bundle theory of the self. For a discussion, see e.g. John Locke on Personal Identity by Nimbalkar, who mentions a criticism of Locke's theory by Butler, who cleverly turned a common objection against the Cartesian cogito against Locke: "Joseph Butler accused Locke of a “wonderful mistake”, which is that he failed to recognise that the relation of consciousness presupposes identity, and thus cannot constitute it (Butler, 1736). In other words, I can remember only my own experiences, but it is not my memory of an experience that makes it mine; rather, I remember it only because it’s already mine. So while memory can reveal my identity with some past experiencer, it does not make that experiencer me."
Oddly enough, the Locke–Hume theory is similar to the Buddhist doctrine of anatman, the non-self, where the soul is compared to a necklace without a thread: a continuous chain of beads that come and go, each giving rise to the next one. As for the mode of self-awareness, Locke did not really address it. But it is a controversial issue that caused major disagreements in the history of philosophy. An eye can easily see itself in a mirror, and the idea of self-consciousness as turning, re-flecting on itself has a long tradition. But, as Frank discusses in What is Neostructuralism?, it is not without prominent detractors. And their objection is not dissimilar to Butler's objection to Locke: "It is true that Leibniz, Kant, Hegel, and after them many others (for example, Husserl) actually described self-consciousness as reflection... It is not true, however, for early romanticism and not, for example, for Fichte, Franz Brentano, Hans Schmalenbach, or Sartre. These thinkers explicitly rejected the model of reflection of knowledge as insufficient when it is a matter of describing the experience that consciousness has of itself. And they did this without exception along the lines of the following argument. If the experience of self-consciousness were a result of self-reflection, then the following process would have to take place: the I, still without knowledge of itself, turns to itself during the process of representation and becomes aware of: itself. But how is it supposed to register this insight if it has not already previously had a concept of itself? For the observation of something (even if it is of me) will never provide me with information about that particular characteristic of my object that makes evident that it is I whom I am observing. I must rather have already had this insight, and I now bring it into play. (Only if I already know myself can the mirror tell me that it is I who is looking at him/herself. And reflection is precisely such a mirror)".
This sets off a regress of reflections to achieve self-acquaintance, and the only way to end it, it seems, is to accept that the self has a way of knowing it-self that does not rely on reflective re-presentation: some sort of direct intuitive self-awareness. This does not mean resurrecting the Cartesian substance, as Butler wanted, but it does suggest that both the Cartesian and Lockean views are wanting. The romantics talked of overcoming the traditional subject/object divide in this regard, and offered a synthesis of sorts: "If this is the case - and it is the case - then self-consciousness has to be explained differently than on the basis of reflection, namely, as a Being-familiar-with-itself prior to all reflection which since Novalis is characterized as a nonpositing self-consciousness. Hölderlin's friend Isaac von Sinclair spoke of the "athetical", Schleiermacher of the "immediate" self-consciousness: the "immediate" self-consciousness must not be confused with the "reflected self-consciousness where one has become an object to oneself". [...] Schleiermacher expressly emphasized in the passage cited earlier that under the term "immediate self-consciousness" he understands only the familiarity of consciousness with itself, not the knowledge of an I - as the owner of consciousness - about itself. Consciousness is thus apersonal, it is not the consciousness of an I. The I, according to Schleiermacher, is generated as a by-product, or in the background of a reflection (it is not an inhabitant of the nonpositing self-consciousness). "We have no representation of the I without reflection"; "self-consciousness and I relate to one another as the immediate to the mediate"". This is excellent and thorough, thank you very much. @ttnphns Hi, you did not accept it, is there some issue? But that was not my Q, Conifold :-). I could only upvote it... To answer this question we need to ask some questions and consider some situations: Do the blind dream? Do the mute dream?
Do the mute and the blind each have only half an I? What about a child born deaf, mute, and blind? Does consciousness have levels, and if so, what are they? What is the importance of memory? By answering these questions, we can easily say that there is a lower limit for the I to be present. The blind have a complete I, and the mute also have a complete I. But to amplify our world we need more language. Thus, the intact person has a complete I, the blind have a complete I, the mute have a complete I, but the intact person has a bigger world. Thus, consciousness and language are required to constitute the I. The factors important in shaping the I are: consciousness, sense organs, memory, and language. It remains to say that the self is static change; namely, it is the change itself.
STACK_EXCHANGE
This is a continuation of last week’s post, “Most College Students Don’t Have ‘The College Experience’”. I was unusual in that I could have gotten ‘The College Experience’ if I wanted to, yet I *chose* the commuter college track. This shocked quite a few people. After all, who would go the commuter school route if they could have the ‘full’ college experience? Based on my research, I figured I could learn more at the commuter colleges than at the ‘College Experience’ colleges. Let’s look at the teachers. Research universities choose professors based on their ability to do research. Commuter colleges choose professors based on their ability to teach. Undergrad classes at research universities are often taught by grad students. Most classes at commuter colleges are taught by experienced teachers. In my field of interest, I found that ‘The College Experience’ would have entailed me taking a lot of pointless classes, whereas the commuter schools would not. One source told me that this was on purpose, to keep students in school longer (and paying more money to the institution). I also talked to students who took on ‘The College Experience’ in spite of this problem … and they did not seem too happy with wasting at least one year of their finite lives. There were other issues too, but what it boils down to is that the commuter schools seemed a lot more focused on their mission of helping students learn than the ‘College Experience’ schools. But I was naive to think that people value college as a place of learning. My parents supported me from the beginning; in fact, they even encouraged me to ditch ‘The College Experience’. But some people claimed I was making a big mistake. So I asked them to explain. They did not argue based on learning. They knew my research on that matter was pretty conclusive.
Instead, they claimed that everyone needed the experience of wasting a year of their lives to adjust to ‘campus living,’ and that this ‘campus living’ was something so special I needed it in my life. I got what their real message was. I was a traitor to the upper middle class by insisting on choosing a school good for my learning instead of a school good for my reputation. Well, they couldn’t stop me, and after seeing how I fared in college, they came to think that I made the right decision (or at least an okay decision). I haven’t even touched on finances (‘The College Experience’ is a lot more expensive than commuter colleges – which helps explain why my parents, who were paying, favored commuter schools). I think one of the greatest benefits of going to commuter colleges was the diversity of the people I encountered. Diverse ages (16-50), diverse ethnicities, diverse classes, diverse living arrangements, diverse relationship statuses, diverse life backgrounds, and so on. I probably learned a lot more about different kinds of people in commuter colleges than I ever would have in ‘The College Experience’.
OPCFW_CODE
Manual PP stage gives inconsistent output shape for first stage

I found this as I was developing a test for PP+FSDP, but I disabled the FSDP parts and the same issue remains. The repro command and test code are in this gist: https://gist.github.com/wconstab/74a3354f31b2b63cec7f4213682360a1

After disabling FSDP it is still running on 4 GPUs but doing 2 PP stages, so ranks 0,1 are paired and ranks 2,3 are paired; just look at 0,1 and ignore 2,3. The error message is:

RuntimeError: Failed to run stage backward:
Stage output: ('Tensor(torch.Size([10]), grad=True)',)
Output gradient: ['Tensor(torch.Size([1, 10]), grad=False)']
Input: ['Tensor(torch.Size([1, 10]), grad=False)']

Tracing it back, these logs show that during forward, stage 0 computes a different output shape than expected. It appears to drop the (1,) dim for some reason.

[rank0]:V0411 17:27:30.063000<PHONE_NUMBER>30400 ../pippy/pippy/PipelineSchedule.py:231] [0] Forwarded microbatch 0
DEBUG:pippy.PipelineSchedule:[0] Forwarded microbatch 0
[rank1]:V0411 17:27:30.064000<PHONE_NUMBER>00544 ../pippy/pippy/PipelineStage.py:220] [1] Forwarded chunk 0, outputs: Tensor(torch.Size([1, 10]), grad=True)
DEBUG:pippy.PipelineStage:[1] Forwarded chunk 0, outputs: Tensor(torch.Size([1, 10]), grad=True)
[rank1]:V0411 17:27:30.064000<PHONE_NUMBER>00544 ../pippy/pippy/PipelineSchedule.py:231] [1] Forwarded microbatch 0
DEBUG:pippy.PipelineSchedule:[1] Forwarded microbatch 0
[rank0]:V0411 17:27:30.064000<PHONE_NUMBER>30400 ../pippy/pippy/PipelineStage.py:220] [0] Forwarded chunk 1, outputs: Tensor(torch.Size([10]), grad=True)
DEBUG:pippy.PipelineStage:[0] Forwarded chunk 1, outputs: Tensor(torch.Size([10]), grad=True)

Adding some debug code to the test script, the partial model chunk fed to stage 0 seems to give the expected output shape: 0 : torch.Size([1, 10])

OK, I think what's going on is some mishandling of the *args operator.
Based on this experiment:

>>> import torch
>>> x = torch.ones((2, 4))
>>> def foo(*args): print(args)
...
>>> foo(*x)
(tensor([1., 1., 1., 1.]), tensor([1., 1., 1., 1.]))

forward_one_chunk does:

# Compute forward
try:
    output = self.forward_maybe_with_nosync(
        *composite_args, **composite_kwargs
    )

but composite_args is just a tensor here, not wrapped in a tuple or anything. Going up a layer in the stack, we call forward_one_chunk via:

output = self._stage.forward_one_chunk(arg_mbs[i], kwarg_mbs[i])  # type: ignore[index]

where arg_mbs is what the user provided; in my case I provided microbatches as [tensor(), tensor(), ...]. Is the contract clear for how microbatches should be provided? I'm not sure if I'm supposed to be putting each one in a tuple (and if so, we should do more validation of that).

Yeah, this looks like an issue where *args is expected to be a list of args ([tensor_arg1, tensor_arg2, tensor_arg3]), but in this case args is just a single tensor which is getting expanded. This was due to the change of merging the manual and tracing forwards, whereas previously the manual path supported both args=single tensor and args=list of tensors.

So the quick fix for you is to change microbatches to [[tensor()], [tensor()], ...]. I think we only want to support a list of args going forward, so I will add validation, test cases, and erroring on behalf of the program if the input is an unexpected type (since a tensor getting silently expanded is very confusing!).
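The unpacking behavior at the root of this is plain Python and can be shown without torch (the names here are illustrative stand-ins, not PiPPy's API):

```python
# A 2-D "tensor" stand-in: unpacking it with * iterates over dim 0,
# silently splitting one argument into several.
def forward(*args):
    return len(args)  # how many positional args the stage received

batch = [[1.0, 1.0, 1.0, 1.0],
         [1.0, 1.0, 1.0, 1.0]]  # shape (2, 4)

print(forward(*batch))  # 2 -- the single batch became two row-arguments

# The fix discussed above: wrap each microbatch in a list/tuple, so *args
# expansion yields exactly one tensor argument per microbatch.
microbatches = [[batch], [batch]]  # [[tensor()], [tensor()], ...]
print(forward(*microbatches[0]))  # 1 -- one argument, as the stage expects
```

This is why stage 0 saw torch.Size([10]) instead of torch.Size([1, 10]): the `*` expansion peeled off the leading batch dimension before the module ever ran.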
GITHUB_ARCHIVE
Merge with inner join

I am trying to write a MERGE with an inner join so that I can use 3 different tables: TBL1 is the destination table where the records will be inserted; TBL2 is where all the records to insert into TBL1 live; and the third and last table, TBL3, supplies a condition on rfc, where if tbl3.rfc = tbl2.rfc the data is loaded into TBL1. The query that I am running is the following:

MERGE INTO TBL1 concent
USING (SELECT inter.rfc, arch.name_contr, arch.rfc, arch.taxpayer_situation,
              arch.oficio_global, arch.presumed_publication, arch.definitive_publication
       FROM TBL2 arch
       INNER JOIN TBL3 inter ON inter.rfc = arch.rfc)
ON (concent.rfc = arch.rfc)
WHEN MATCHED THEN UPDATE SET
    concent.name_contr = arch.name_contr,
    concent.taxpayer_situation = arch.taxpayer_situation,
    concent.oficio_global = arch.oficio_global,
    concent.presumed_publication = arch.presumed_publication,
    concent.definitive_publication = arch.definitive_publication,
    concent.id_arch = arch.id_arch
WHEN NOT MATCHED THEN INSERT
    (concent.id_concent, concent.id_arch, concent.snapshot_date, concent.rfc,
     concent.name_contr, concent.taxpayer_situation, concent.oficio_global,
     concent.presumed_publication, concent.definitive_publication,
     concent.baja_logica, concent.last_update)
VALUES (arch.id_arch, arch.id_arch, '04/05/2021', arch.rfc, arch.name_contr,
        arch.taxpayer_situation, arch.oficio_global, arch.presumed_publication,
        arch.definitive_publication, '01', '05/05/2021');

The error it reports is:

Command line error: 8 Column: 27
Error report -
SQL Error: ORA-00904: "ARCH"."RFC": invalid identifier
00904. 00000 - "%s: invalid identifier"

The scope of the table aliases arch and inter is limited to that subquery only.
If you want to reference columns from that subquery at the level of the parent MERGE, you need to give an alias to the subquery in the USING clause, for example v_using:

MERGE INTO TBL1 concent
USING (SELECT inter.rfc AS inter_rfc, arch.name_contr, arch.rfc, arch.taxpayer_situation,
              arch.oficio_global, arch.presumed_publication, arch.definitive_publication
       FROM TBL2 arch
       INNER JOIN TBL3 inter ON inter.rfc = arch.rfc) v_using
ON (concent.rfc = v_using.rfc)
WHEN MATCHED THEN UPDATE SET
    concent.name_contr = v_using.name_contr,
    concent.taxpayer_situation = v_using.taxpayer_situation, ...
STACK_EXCHANGE
That is how I want to download files, without creating frames or using any plugins. To display modern HTML5 websites and presentations correctly in Internet Explorer 9, switch off Compatibility View using one of the ways below. Note that examples and screenshots in this document have been provided from the esearch software. This page and associated content may be updated frequently. The problem is that it supports only a couple of browsers. Aug 22, 2012: this attribute is extremely useful in cases where generated files are in use; the file name on the server side needs to be incredibly unique, but the download attribute allows the file name to be meaningful to the user. This includes the basic createElement shiv technique, along with monkeypatches for document. Why don't you trigger the download when you click the "click me" button? How to use the download attribute (Webdesigner Depot). Aug 21, 2017: the HTML5 Shiv enables use of HTML5 sectioning elements in legacy Internet Explorer and provides basic HTML5 styling for Internet Explorer 6-9 and Safari 4.x. For that reason alone, it's probably not worth using unless your... Ben Nadel looks at the HTML5 anchor download attribute, which can... There are no restrictions on allowed values, and the browser will automatically detect the correct file extension. The problem is that some browsers, such as Firefox 2, Camino 1, and all versions of Internet Explorer, don't see the HTML5 elements as unrecognised... How can I download a file in IE 10 or IE 11 using the URL of the... This script is the de facto way to enable use of HTML5 sectioning elements in legacy Internet Explorer. The video player supports playlist, full-screen mode, progress...
When used on an anchor, this attribute signifies that the resource it points to should be downloaded by the browser rather than navigated to. The download attribute is one of those enhancements that isn't incredibly sexy but is practical and easy to add. The HTML5 Shiv enables use of HTML5 sectioning elements in legacy Internet Explorer and provides basic HTML5 styling for Internet Explorer 6-9 and Safari 4.x. If Compatibility View mode is on, the icon is a solid color. If the value is removed, then the original filename is used. However, I am more interested in a kind of built-in facility like the download attribute in HTML5. Edge (all flags enabled), Firefox (all flags enabled), Internet Explorer 9. I'm looking forward to Internet Explorer implementing the download attribute soon. Updated version of HTML5 filesystem explorer expeephole, that allows you to delete single files and folders. Taking into consideration everything that has been added to HTML5, the download attribute is a very small part, but in my opinion it's an attribute that was long overdue, and definitely has its uses in today's apps for both usability and simplification. The crossorigin attribute, valid on the audio, img, link, script, and video elements, provides support for CORS, defining how the element handles cross-origin requests, thereby enabling the configuration of the CORS requests for the element's fetched data. Microsoft Edge / Internet Explorer extensions to the HTML5 specification. At this time, neither Internet Explorer nor Safari supports the download attribute. Internet Explorer gains a modicum of HTML5: Internet Explorer fans can now get a taste of the video elements in HTML5 without having to switch browsers, thanks to a plugin that gives IE an HTML5 boost. Problem streaming HTML5 videos on Bing search using Internet Explorer 11: I recently installed Windows 7 Pro and I am having a problem streaming the HTML5 codec for YouTube videos that come up on Bing searches.
Using the anchor tag and download attribute to force... Tech support scams are an industry-wide issue where scammers trick you into paying for unnecessary technical support services. The value of the attribute will be the name of the downloaded file. The HTML5 download attribute is intended to tell the browser that a certain link should force a certain file to download, optionally with a certain name that might be different from that on the server. HTML5 viewer using Microsoft Internet Explorer: this document provides information on using the new HTML5 viewer with Microsoft Internet Explorer. For users of those browsers, you might want to suggest a file name to save as. Mar 26, 2020: this attribute can be useful when the generated file names are used on the server side, so the download attribute enables the file name to be meaningful to users. As per the description, on your computer you are not able to play some of the HTML5 videos in Internet Explorer. If the attribute is present, its value must either be the empty string (equivalently, the attribute may have an unassigned value) or a value that is an ASCII case-insensitive match for the attribute's canonical name, with no leading or trailing whitespace. Therefore, be extremely judicious in employing this attribute. I hope Internet Explorer and Safari implement the download attribute soon. This attribute is only used if the href attribute is set. Select up to five browsers and compare their test results in detail. You may run a Fix It for Internet Explorer issues to make IE fast, safe, and stable, as mentioned below. Figure 8: you can change the Internet Explorer user agent string on the fly and even add... How to create a direct single-click download button in Divi.
Apr 27, 2017: How to create a direct single-click download button in Divi using the download attribute. Posted on April 27, 2017 by Jason Champagne in Divi Resources, 18 comments. A direct download link is a link that starts to download the file on click instead of linking to it in your browser window. Using the anchor tag and download attribute to force a file... The HTML5 download attribute makes handling download links very convenient for anyone who has no access to server-side configuration. HTML5 defines restrictions on the allowed values of boolean attributes. But the link given below downloads an archive with... On the search bar, type "Internet Options", and click on Internet Options in the results. It is possible your results may differ slightly due to external factors such as settings and which operating system is used. For older web browsers, such as Internet Explorer, the download attribute may not be available. Why an HTML5 presentation doesn't play in Internet Explorer 9. Microsoft Edge / Internet Explorer HTML5 standards support document. If you believe the data above is incorrect, or if you think we are missing an important browser or device, please open a bug report at... The download attribute specifies that the target will be downloaded when a user clicks on the hyperlink. Years ago I showed you how to force a file to download with PHP. This solution seems like a good one, but it is not perfect, so you need to go a bit further if you want to achieve support in older browsers. The download attribute has not yet been implemented in, as you might expect, Internet Explorer, though it is supported by Edge. Direct downloads with the download attribute (Kauboys). Download attribute on an a tag not working in IE (Stack Overflow).
Browser compatibility testing of download attribute. Browsers internet explorer plugins html5 free downloads. Click here to download a zip file of all pdf files for internet explorer standards support documentation. The first change to your markup is to add the xmlns attribute to your.
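As a sketch of the technique described above (the file names and paths here are hypothetical), compare a plain link with one that forces a download under a friendlier name:

```html
<!-- Hypothetical example: report-3f9a.pdf is an auto-generated
     server-side name; the download attribute suggests a readable
     name for the saved file. -->

<!-- Navigates to (or opens) the file in the browser: -->
<a href="/files/report-3f9a.pdf">View report</a>

<!-- Forces a download, saved as "annual-report-2017.pdf"
     in browsers that support the attribute (not IE): -->
<a href="/files/report-3f9a.pdf" download="annual-report-2017.pdf">
  Download report
</a>

<!-- With a bare download attribute, the original file name is used: -->
<a href="/files/report-3f9a.pdf" download>Download</a>
```

In unsupporting browsers such as Internet Explorer, these links simply fall back to normal navigation.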
Joshua Jacobs, PhD

The ability to orient and navigate in spatial environments is a vital part of life for both humans and animals. Research in my laboratory examines the neural systems in humans that support spatial navigation and memory. Our objective is to characterize the principles underlying how the human brain represents spatial information during navigation and to test how these signals are used to support memory and other behaviors. In our experiments we record single-neuron and field-potential activity from neurosurgical patients who have electrodes surgically implanted in deep brain structures, including the hippocampus and entorhinal cortex. During recording, patients perform computer-based virtual navigation tasks. We analyze these recordings to identify neurons whose activity represents location and other navigational variables during movement, similar to measurements of “place” and “grid” cells in rodents, and then compare how these representations vary when patients perform other spatial tasks.

There are several broader goals of this work. First, we are interested in comparing the neural representation of space between humans and animals to identify common and distinctive aspects of spatial coding between species. Second, we test whether the neural coding of location during movement is similar to the brain patterns used to encode memories. Third, we engage in translational research to develop brain-stimulation protocols for enhancing human spatial memory to help patients who experience cognitive impairment due to aging or disease.

1. Jacobs, J., Weidemann, C., Burke, J., Miller, J., Wei, X., Solway, A., Sperling, M., Sharan, A., Fried, I., & Kahana, M. (2013). Direct recordings of grid cells in human spatial navigation. Nature Neuroscience, 16(9), 1188–1190.
2. Zhang, H., & Jacobs, J. (2015). Traveling theta oscillations in the human hippocampus. The Journal of Neuroscience, 35(36), 12477–12487.
3. Miller, J., Fried, I., Suthana, N., & Jacobs, J. (2015). Repeating spatial activations in human entorhinal cortex. Current Biology.
4. Jacobs, J. (2014). Hippocampal theta oscillations are slower in humans than in rodents: implications for models of spatial navigation and memory. Philosophical Transactions of the Royal Society B, 369, 20130304.
5. Jacobs, J., Lega, B., & Anderson, C. (2012). Explaining how brain stimulation can evoke memories. Journal of Cognitive Neuroscience, 24(3), 553–563.

* Qasim, S., & Jacobs, J. (in press). Human hippocampal theta oscillations during movement without visual cues. Neuron.
* Jacobs, J., & Lee, S. A. (in press). Spatial cognition: grid cells support imagined navigation. Current Biology.
How do I download data from Spitzer?

Contents
- 1 Don't forget to try and answer the "Questions to think about ..." at the bottom of this page!
- 2 The basics: Introduction and Terminology
- 3 Downloading Data: Using the SHA, short versions
- 4 Downloading Data: Using the SHA, long version
- 5 Downloading Data: Using the SHA, a concrete example, very long version
- 6 Downloading Data: How can I find already-reduced Spitzer data?
- 7 Downloading Data: Using the SHA - searching for a list of objects
- 8 Downloading Data: How can I quickly get a mosaic of my object?
- 9 Questions to think about and things to try with the SHA
- 10 I'm ready to advance to a highly technical and in-depth discussion on processing Spitzer data.

The basics: Introduction and Terminology

The Spitzer Heritage Archive (SHA) is the permanent home for all of the data collected during the Spitzer mission, plus all the documentation you need to understand it all. The SHA is formally part of IRSA's archive holdings (no longer 'owned' by the SSC). The SHA provides a web-based interface to the Spitzer archive, and it lives here: http://irsa.ipac.caltech.edu/applications/Spitzer/SHA/ Because it is web-based, you do not need to download and install software that is platform-dependent. It should "just work" in whatever browser you use (though, for really new or really old browsers, your mileage may vary)! There is online help for the SHA -- see the help menu in the upper right (of the red menu bar). There are also several other ways to get help; see here -- look under "Spitzer Heritage Archive Documentation". The cookbook's first few chapters have detailed step-by-step recipes (one of which was originally developed for a NITARP team), the User's Guide is a standalone PDF manual, and some instructional videos are linked in as both YouTube and Flash copies. The software that used to be the primary mechanism for pulling data from the archive is called Leopard.
There might be some lingering references to Leopard on the wiki, though we have tried to clean them all out. An individual Spitzer observation sequence is an AOR, or Astronomical Observation Request. In certain cases (often calibration or sometimes science observations), you may also see an IER, or Instrument Engineering Request. Either one involves many individual frames, as well as observer name, date of observation, object or area of the sky observed, and instrument used (IRAC, MIPS, or IRS)-- these are all part of the AOR. All of Spitzer's operations (planning, scheduling, processing) have been centered around these units (AORs or IERs). Now, for the SHA, we are starting to move away from that, but there are some things that are still only available on an AOR basis, so we really can't escape them. The rest of the new terminology has its origin in other similar terms used in other archives. I know, I know, hard to think about astronomers using the same terms to mean the same thing across multiple telescopes and wavelengths! But we're trying... Raw data that are fundamentally unprocessed are "Level 0" data. As NITARP folks, you should never encounter (or want to encounter, really) Level 0 data. The individual data frames that emerge, calibrated, from the Spitzer pipeline are "Level 1," or "Basic Calibrated Data," or "BCDs." You can get just the BCDs from a region that you want; you don't have to download the whole AOR if it covers a much larger region than you want. As NITARP folks, you probably don't need these Level 1 data. But you might. The products that come from combining these individual data frames (such as mosaics) are "Level 2," or "post-BCD," or "PBCD data." These still exist fundamentally on an AOR level, e.g., you can't get a Level 2 mosaic that is just a portion of an AOR. As NITARP folks, you probably want these data. 
You can also get some higher-level processed products (which you might call "Level 3" data, but which in this context are called "Enhanced Products") through this interface. These products are supplemental data that are produced either by the SSC or donated to us by professional astronomers, and represent additional processing. For example, you can get a mosaic combining data from 7 AORs into one big mosaic, with customized (as opposed to hands-off pipeline) processing of image artifacts. Most of the enhanced products in the SHA are delivered by Legacy teams, or developed by the SSC itself. See below for more on this. All of the images come in FITS format. (Wondering what FITS format is?) (If you are really savvy, you might also care that they are mostly single-plane FITS files. Some enhanced products will be/are multi-plane FITS.) The other format for some data is IPAC table files (.tbl extension). IPAC table format is really just plain text, with a special header. Once you get a file like this, just about anything (including Excel) can read it. (YouTube video on tbl files, how to access them, and how to get them into Excel (10 min).)

Downloading Data: Using the SHA, short versions

- Option 1: SHA Quick Start for NGC 4051 or How can I quickly get a mosaic of my object? Both are quick, text-only quick-start guides.
- Option 2: Please see the first recipe in the Spitzer Data Analysis Cookbook (direct link ought to work!). This has screen snapshots. (Developed for professional astronomers. Hopefully it makes sense to you too. Let Luisa know if it doesn't.)
- Option 3: Or, see the YouTube QuickStart video (7.5 min). (Developed for professional astronomers. Hopefully it makes sense to you too. Let Luisa know if it doesn't.)

Downloading Data: Using the SHA, long version

In more detail! (The wiki page was developed for you; the video was developed for professional astronomers.)
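Since IPAC tables (mentioned above) are really just plain text with a special header, here is a minimal sketch of reading one with standard Python; the file contents below are made up for illustration:

```python
# Minimal sketch of reading an IPAC table (.tbl) with plain Python.
# IPAC tables are plain text: '\'-prefixed keyword/comment lines,
# '|'-delimited header lines, then whitespace-separated data rows.

def read_ipac_table(text):
    """Return (column_names, rows) parsed from IPAC-format table text."""
    columns, rows = None, []
    for line in text.splitlines():
        if not line.strip() or line.startswith("\\"):
            continue  # skip blank lines and \keyword or \comment lines
        if line.startswith("|"):
            if columns is None:  # the first |-line holds the column names
                columns = [c.strip() for c in line.strip("|").split("|")]
            continue             # later |-lines give types, units, nulls
        rows.append(line.split())
    return columns, rows

example = """\\char comment = hypothetical example
|   ra    |   dec   |  flux  |
| double  | double  | double |
  150.101    2.205    0.013
  150.204    2.310    0.027
"""
cols, data = read_ipac_table(example)
print(cols)  # ['ra', 'dec', 'flux']
print(data)  # [['150.101', '2.205', '0.013'], ['150.204', '2.310', '0.027']]
```

In practice a dedicated reader (or Excel, per the note above) will handle the type/unit/null header lines more carefully; this just shows how simple the format is.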
Downloading Data: Using the SHA, a concrete example, very long version

Originally developed especially for the 2010 CG4 team, but then turned into a formal chapter for the professional astronomer's Data Reduction Cookbook. This demo covers the following tasks:
- Use the SHA to search the Spitzer Archive for all possible and relevant CG4 observations.
- Use the SHA to assess which of several different observations of the same object will most quickly yield an image that you want.
- Select data for download, and do it.

Recipe 2 from the Cookbook (direct link should hopefully work!!)

Downloading Data: How can I find already-reduced Spitzer data?

The SHA also includes polished mosaics and source lists, with more to come!

Downloading Data: Using the SHA - searching for a list of objects

How to search for a large list of objects efficiently.

Downloading Data: How can I quickly get a mosaic of my object?

Get me a mosaic, quick! Don't bother me with preambles or complete explanations, I just want a picture. (Also see What is a mosaic and why should I care?)

Questions to think about and things to try with the SHA

Pick an object to search on, anything you want.
- How many observations are available? Which are imaging? At which bands?
- Can you find any already-processed Spitzer data on this object?

This tells you how to start from the same place professional astronomers do. You will have to learn how to mosaic frames using the Spitzer tools developed for professional astronomers by the Spitzer Science Center. This needs a lot of disk space, and, well, a little bit of courage! And access to IDL would help a lot.
using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics.CodeAnalysis; // required for [ExcludeFromCodeCoverage]
using System.Linq;
using _Fixtures.Sammlung.Extras;
using NUnit.Framework;
using Sammlung.Dictionaries;
using Sammlung.Dictionaries.Concurrent;
using Sammlung.Exceptions;

namespace _Fixtures.Sammlung
{
    [ExcludeFromCodeCoverage]
    public class BidiDictionaryTests
    {
        [SetUp]
        public void Setup() { }

        public static readonly BidiDictConstructors[] BidiDicts =
        {
            new BidiDictConstructors(
                () => new BidiDictionary<int, int>(),
                d => new BidiDictionary<int, int>(d),
                e => new BidiDictionary<int, int>(e)),
            new BidiDictConstructors(
                () => new BlockingBidiDictionary<int, int>(),
                d => new BlockingBidiDictionary<int, int>(d),
                e => new BlockingBidiDictionary<int, int>(e)),
        };

        [TestCaseSource(nameof(BidiDicts))]
        public void InsertPairsFindPairs(BidiDictConstructors tuple)
        {
            var (zf, _, _) = tuple;
            var pairs = Enumerable.Range(1, 100)
                .Zip(Enumerable.Range(100, 100).Reverse(), Tuple.Create).ToArray();
            var bDict = zf();
            Assert.IsFalse(bDict.IsReadOnly);
            foreach (var (a, b) in pairs)
                bDict[a] = b;
            Assert.AreEqual(100, bDict.Count);
            foreach (var (a, b) in pairs)
            {
                Assert.AreEqual(b, bDict[a]);
                Assert.AreEqual(a, bDict.ReverseMap[b]);
                Assert.AreEqual(b, bDict.ForwardMap[a]);
                Assert.IsTrue(bDict.Contains(new KeyValuePair<int, int>(a, b)));
                Assert.IsTrue(bDict.ContainsKey(a));
                Assert.IsTrue(bDict.ForwardMap.ContainsKey(a));
                Assert.IsTrue(bDict.ReverseMap.ContainsKey(b));
            }
            CollectionAssert.AreEquivalent(Enumerable.Range(1, 100), bDict.Keys);
            CollectionAssert.AreEquivalent(Enumerable.Range(100, 100), bDict.Values);
        }

        [TestCaseSource(nameof(BidiDicts))]
        public void ClearClearsAllMaps(BidiDictConstructors tuple)
        {
            var (zf, _, _) = tuple;
            var pairs = Enumerable.Range(1, 100)
                .Zip(Enumerable.Range(100, 100).Reverse(), Tuple.Create).ToArray();
            var bDict = zf();
            foreach (var (a, b) in pairs)
                bDict[a] = b;
            bDict.Clear();
            Assert.AreEqual(0, bDict.Count);
            Assert.AreEqual(0, bDict.ForwardMap.Count);
            Assert.AreEqual(0, bDict.ReverseMap.Count);
        }

        [TestCaseSource(nameof(BidiDicts))]
        public void DifferentMethodsCovering(BidiDictConstructors tuple)
        {
            var (zf, _, _) = tuple;
            var bDict = zf();
            bDict.Add(new KeyValuePair<int, int>(1, 2));
            bDict.Add(new KeyValuePair<int, int>(2, 3));

            var bdEnum = ((IEnumerable) bDict).GetEnumerator();
            while (bdEnum.MoveNext())
            {
                var kvPair = (KeyValuePair<int, int>) bdEnum.Current;
                var fwd = kvPair.Key;
                var rev = kvPair.Value;
                Assert.IsTrue(fwd == 1 && rev == 2 || fwd == 2 && rev == 3);
            }

            var fwdEnum = ((IEnumerable) bDict.ForwardMap).GetEnumerator();
            while (fwdEnum.MoveNext())
            {
                var kvPair = (KeyValuePair<int, int>) fwdEnum.Current;
                var fwd = kvPair.Key;
                var rev = kvPair.Value;
                Assert.IsTrue(fwd == 1 && rev == 2 || fwd == 2 && rev == 3);
            }

            CollectionAssert.AreEquivalent(new[] {1, 2}, bDict.ForwardMap.Keys);
            CollectionAssert.AreEquivalent(new[] {2, 3}, bDict.ForwardMap.Values);
            CollectionAssert.AreEquivalent(new[] {2, 3}, bDict.ReverseMap.Keys);
            CollectionAssert.AreEquivalent(new[] {1, 2}, bDict.ReverseMap.Values);
            CollectionAssert.AreEquivalent(
                new[] {new KeyValuePair<int, int>(1, 2), new KeyValuePair<int, int>(2, 3)},
                bDict.ForwardMap);
            CollectionAssert.AreEquivalent(
                new[] {new KeyValuePair<int, int>(2, 1), new KeyValuePair<int, int>(3, 2)},
                bDict.ReverseMap);

            Assert.IsTrue(bDict.ForwardMap.TryGetValue(1, out var fwdValue));
            Assert.AreEqual(2, fwdValue);
            Assert.IsTrue(bDict.ReverseMap.TryGetValue(2, out var revValue));
            Assert.AreEqual(1, revValue);
            Assert.AreEqual(2, bDict.ForwardMap[1]);
            Assert.AreEqual(1, bDict.ReverseMap[2]);
            Assert.IsTrue(bDict.TryGetValue(1, out var value));
            Assert.AreEqual(2, value);
            Assert.IsFalse(bDict.TryGetValue(3, out _));
            Assert.IsFalse(bDict.Remove(3));
            Assert.IsTrue(bDict.Remove(2));
            Assert.IsTrue(bDict.Remove(new KeyValuePair<int, int>(1, 2)));
            Assert.AreEqual(0, bDict.Count);
        }

        [TestCaseSource(nameof(BidiDicts))]
        public void ConstructorTests(BidiDictConstructors tuple)
        {
            var (_, df, ef) = tuple;
            var d1 = new Dictionary<int, int> {[0] = 100, [1] = 100};
            Assert.Throws<DuplicateKeyException>(() => df(d1));
            Assert.Throws<DuplicateKeyException>(() => ef(d1.AsEnumerable()));
            var d2 = new Dictionary<int, int> {[0] = 100, [1] = 101};
            var b2 = df(d2);
            Assert.AreEqual(100, b2.ForwardMap[0]);
            Assert.AreEqual(101, b2.ForwardMap[1]);
            Assert.AreEqual(0, b2.ReverseMap[100]);
            Assert.AreEqual(1, b2.ReverseMap[101]);
            Assert.IsTrue(b2.ForwardRemove(0));
            Assert.IsTrue(b2.ReverseRemove(101));
            Assert.AreEqual(0, b2.Count);
            var _ = ef(d2.AsEnumerable());
        }

        [TestCaseSource(nameof(BidiDicts))]
        public void CopyTo_SunnyPath(BidiDictConstructors tuple)
        {
            var (_, _, ef) = tuple;
            var pairs = Enumerable.Range(1, 100)
                .Zip(Enumerable.Range(100, 100).Reverse(), Tuple.Create).ToArray();
            var bDict = ef(pairs.Select(t => new KeyValuePair<int, int>(t.Item1, t.Item2)));
            var array = new KeyValuePair<int, int>[100];
            bDict.CopyTo(array, 0);
            CollectionAssert.AreEquivalent(bDict, array);
        }
    }
}
Novel – Cultivation Chat Group – Chapter 1729: Invisible Death

Soft Feather, Doudou, and Song Shuhang all landed back on the ground.

Her senses of hearing, smell, and taste had already been taken away. In addition, she was not a primordial soul and was not resonating with Song Shuhang and the seniors of the Nine Provinces Number One Group, so she had not heard the conversation between them.

Dharma King Creation said, [What follows should be the senses of sight and touch. If we lose both of these, we won't be able to see, hear, or feel anything. At that point, we will only be able to passively take a beating.]

From the heavenly tribulation, bursts of Buddhist chants came out.

Afterward, a statue of a large Buddha made of 'tribulation lightning' emerged from the tribulation clouds. The Buddha came out with its head pointed toward the ground and its feet pointed skyward. It was hanging upside down as it slowly descended, appearing unexpectedly domineering.

"At this time, it would be great if the Sage's eye were still here," Song Shuhang said regretfully in his mind.

Soft Feather, Li Yinzhu, Doudou, Song Shuhang, and all the members of the Nine Provinces Number One Group with him froze and could not move.

Meanwhile, Doudou switched places with Song Shuhang's metal manifestation through the 'friendship mark'.

Anyway, Song Shuhang moved much faster than Soft Feather, and he arrived before the Buddha first.

Or was it an upgraded version of Dharma King Creation's lethal sound?

Song Shuhang asked, [At that time, would our divine sense still work?]

The Tyrant Cuttlefish's Dual Blades split apart; one of the blades executed the Heavenly Saber Burying the Starry Sea—Su Clan's Seven's strongest saber technique—while the other blade performed the Seventy-Two Angry Saber Strikes—Thrice Reckless's inherited saber technique.

The Holy Master Ape's Sword stirred and transformed into a night sky—it was Northern River's Loose Cultivator's Twelve Swords of the Milky Way.

Everyone was in a state where their primordial souls were resonating, which allowed them to talk with each other mentally.

Young Master Phoenix Slayer's 'Ground Splitting' skill, True Monarch Tyrant Flood Dragon's 'Dragon Flash', and True Monarch White Crane's combination of 'Holy Light' magical techniques.

Song Shuhang asked, [In what form is the heavenly tribulation going to descend? And how are we to deal with it?]

Soft Feather followed closely behind, and she swung her sword, from which Butterfly-Phoenix Sword Qi came pouring out.

The sharp sword was inserted into the body of the heavenly tribulation Buddha as Doudou quickly circled around its body.

Immediately after, even her sense of 'touch' vanished.

Song Shuhang called out, [Soft Feather, Doudou, activate your divine sense. We have to gather up and return to the ground.]

The last wave of the heavenly tribulation?

While he was talking, the heavenly tribulation in the sky finished accumulating power and descended again. The previous 'purple-gold lightning pillar' was only the prelude to the heavenly tribulation, and the real thing would begin now.

For cultivators, their five senses were not as sharp as their 'divine sense'. With a casual sweep of their divine sense, everything around them would become as clear to them as the palms of their hands.

Soft Feather said, "Doudou, Senior Song, we should take the lead before our sight is stripped away as well!"

Doudou rebutted, [Why don't you look at the hair on your own head first before laughing at someone else's hair?]

Dharma King Creation said, [This is bad, we have been deprived of all of our five senses.]

The Sacred Sword of the End and all the skills of the seniors, which were being displayed through the Thirty-Three Divine Beasts' Combined Magical Treasure, all poured down on the heavenly tribulation Buddha.

At this moment, two puffs of white smoke sprayed from the nose of the Buddha statue. Sword intent billowed out, and fierce yellow qi filled the sky.
Why does time stop in black holes?

Everyone says that time stops in a black hole. It's a "fact". However, I have never heard anyone explain it. Of course, I know that an observer in a weaker gravitational field sees that something in a stronger gravitational field experiences slower time. However, "slower" and "not at all" are quite different. I have no idea what equation is used to calculate time dilation, but it will use gamma and therefore division. And the only time division of a non-zero constant yields zero is when you divide by infinity. And although black holes are super heavy, super badass and super black, they possess finite energy and therefore finite gravitational acceleration (even at the event horizon). So shouldn't time just be very slow, rather than stopping entirely from our point of view?

This question might give you some insight: http://physics.stackexchange.com/q/60925/ - and the one it references too: http://physics.stackexchange.com/q/24018/

I've read the first one already before posting (and it's quite unrelated!). I'll take a look at the other one.

@Renan The other one is unrelated as well. They are all talking about what happens if time stops. I'm asking why it stops in the first place.

After another read of your question and the second one I linked to, I agree.

Possible duplicate: Black holes and Time Dilation at the horizon

Why does time stop in black holes? Time according to whom? The fact is that, in special and general relativity, there is no universal time. Indeed, time is a coordinate in relativity, so one must be careful to specify the coordinate system when asking questions like this. Now, every entity also has an associated proper time, which is not a coordinate, which means that it is coordinate independent (invariant). Think of your proper time as the time according to your 'wrist watch'.
In the context of the static black hole (Schwarzschild black hole) solution, there is a coordinate system (Schwarzschild coordinates) that we can associate with the observer at infinity. That is to say, the coordinate time corresponds to the proper time of a hypothetical entity arbitrarily far from the black hole. In this coordinate system, we can roughly say that the coordinate time 'stops' at the event horizon (in fact, there is no finite value of this coordinate time to assign to events on the horizon). However, there are coordinate systems with finite coordinate time at the horizon, e.g., Kruskal-Szekeres coordinates. Moreover, for any entity falling freely towards the horizon, the proper time does not 'stop'. Indeed, the entity simply continues through the horizon towards the 'center' of the black hole and then ceases to exist at the singularity.

We interpret the fact that the Schwarzschild coordinate time does not extend to the horizon as follows: no observer outside the horizon can see an entity reach (or fall through) the horizon in finite time. This is simply understood as the fact that light emitted from (or inside) the horizon cannot propagate to any event outside the horizon. Why? Because the spacetime curvature at the horizon is so great that there is no light-like world line that extends beyond the horizon. Indeed, the horizon is light-like. A photon emitted 'outward' at the horizon simply remains on the horizon. Within the horizon, the spacetime curvature is such that there are no world lines that do not terminate on the singularity - the curvature is so great within the horizon that the future is in the direction of the singularity.
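For reference (the standard textbook form, not quoted in the thread), the Schwarzschild line element in Schwarzschild coordinates $(t, r, \theta, \phi)$ is

$$ds^2 = -\left(1 - \frac{r_s}{r}\right) c^2\, dt^2 + \left(1 - \frac{r_s}{r}\right)^{-1} dr^2 + r^2\, d\Omega^2, \qquad r_s = \frac{2GM}{c^2}.$$

The coefficient of $dt^2$ vanishes as $r \to r_s$, which is the coordinate-dependent sense in which time 'stops' at the horizon; the coefficient of $dr^2$ blows up there, but both behaviors are artifacts of these coordinates rather than a physical singularity.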
From the transmitter's perspective, every 1,000,000 cycles will still be one second. Is that a fair view?

I have pointed out that I understand relativity.

Short answer: It doesn't stop.

Slightly longer answer: The case of a non-rotating, non-charged black hole is described by the Schwarzschild solution. It is now the case that, if you draw the worldline of a particle falling into a black hole, you will find that the coordinate time in the Schwarzschild metric grows infinite as the particle approaches the event horizon. Naively, this would seem to imply that a particle takes forever to fall into a black hole, which would mean that it becomes slower and slower as it approaches the event horizon. And as it would seem to imply that the particle comes to a stop, some people say that "time stops at the event horizon". But this is just an artifact of the coordinates. The Schwarzschild coordinates are simply chosen badly. The proper time, i.e. the time the falling particle/observer would perceive, is finite, and there are other coordinates in which there is also no singularity at the event horizon, so that all coordinates stay finite. Nothing particularly terrible happens at the event horizon from the view of the falling particle; it is just that no light-like paths connect the interior of the horizon to the exterior, so that nothing can cross the horizon from the inside. Inside the horizon, some weird stuff happens when looked at from the Schwarzschild coordinates, like the former time-coordinate becoming space-like, but this is again rather an artifact of the coordinate system than a property of the true black hole. There are coordinates which cover the whole of the spacetime except for the center of the hole, where there is a true singularity. All bets are off as to what happens there.

I was trying to make sure answerers know that I'm aware of the observer-relative stuff.
I was wondering why from my point of view the stuff is stuck at the event horizon (which must be really crowded from our point of view...) @TomasZato: Well...actually, from your point of view, all the stuff stuck at the horizon is also black due to gravitational redshift, so you don't see it. As for why the coordinate time in the Schwarzschild metric (which is the time of an observer at rest at infinity) grows infinite, just observe that the factor between $t$ and $\tau$ in the metric goes to zero as the radius approaches the horizon. So it's currently just a mathematical thing which can eventually have some constant added to it as soon as we find out that things are a little bit different? @TomasZato: Not really. If it is wrong that we cannot see things falling into a black hole in a finite amount of our time, then GR is wrong. [But note that black holes are unfortunately in short supply for experiments.] I thought you didn't know any GR physics.
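To see how the factor mentioned in the comments goes to zero only in the limit, here is a small sketch (plain Python, not part of the thread) of the Schwarzschild time-dilation factor $d\tau/dt = \sqrt{1 - r_s/r}$ for a clock held static at radius $r$:

```python
import math

def time_dilation_factor(r, r_s):
    """dtau/dt for a static clock at radius r outside a Schwarzschild
    horizon of radius r_s: sqrt(1 - r_s/r)."""
    if r <= r_s:
        raise ValueError("no static observers at or inside the horizon")
    return math.sqrt(1.0 - r_s / r)

# The factor is finite (nonzero) at every r > r_s and only tends to 0
# as r approaches the horizon -- matching the answers above: clocks run
# slower and slower as seen from infinity, but never "stop" at any
# radius an outside observer can actually watch.
for r in (10.0, 2.0, 1.1, 1.001):
    print(r, time_dilation_factor(r, 1.0))
```

Note this describes static observers only; a freely falling clock crosses the horizon in finite proper time, as the answers explain.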
Cubism Web Samples Change History

Live2D Cubism SDK for Web Release Page (GitHub)

The change history is available in CHANGELOG.md in the Cubism SDK for Web distribution package. You can also check CHANGELOG.md in CubismWebSamples on the Live2D GitHub. For notices, check NOTICE.md in the Cubism SDK for Web distribution package, or NOTICE.md (English) or NOTICE.ja.md (Japanese) in CubismWebSamples on the Live2D GitHub.

Cubism 4 SDK for Web R6_2 (03/16/2023)
- Fix some problems related to Cubism Core. See `CHANGELOG.md` in Core.

Cubism 4 SDK for Web R6_1 (03/10/2023)
- Add function to validate MOC3 files. See `CHANGELOG.md` in Core and Framework.

Cubism 4 SDK for Web R6 (02/21/2023)
- Remove Debugger for Chrome from recommended extensions.

Cubism 4 SDK for Web R5 (09/08/2022)
- Added multilingual documentation.

Cubism 4 SDK for Web R5 beta5 (08/04/2022)
- Sample model “Mao” was updated to the latest version.
- Fixed a problem in which loading an unsupported version of a MOC3 file would cause an exception and crash.

Cubism 4 SDK for Web R5 beta4 (07/07/2022)
- Sample model “Mao” was added.

Cubism 4 SDK for Web R5 beta3 (06/16/2022)
- Fixed a bug in which ViewPort was sometimes not set correctly.
- We no longer support Internet Explorer.

Cubism 4 SDK for Web R5 beta2 (06/02/2022)
- Fixed a bug that caused incorrect Multiply Color and Screen Color to be applied. See the Cubism Core Change History.
- There are no changes to Samples or Framework.

Cubism 4 SDK for Web R5 beta1 (05/19/2022)
- Cubism 4.2 is now supported.
- Multiply Color and Screen Color are now supported.
- Multiply Color and Screen Color can now be overridden by the user’s desired color.

Cubism 4 SDK for Web R4 (12/09/2021)
- Sample models have been updated. (Created with Cubism Editor 4.1.02)
- Fixed a bug in which the move process for one model affected other models when multiple models were displayed.
- Fixed a bug that caused breathing calculations to differ from those in Cubism Viewer (for OW).
Cubism 4 SDK for Web R3 (06/10/2021)
- Fixed a bug that caused a 404 error when the model path was corrected and an exact path was required.

Cubism 4 SDK for Web R3 beta1 (05/13/2021)
- Added a sample to manipulate lip-sync from a waveform on an audio file (.wav).
- Added sample voice to Haru.

Cubism 4 SDK for Web R2 (03/09/2021)
- Added the ability to dynamically resize the screen size and touch detection area.
- Adjusted the size calculation of the model displayed in the window and modified it to use a view matrix.
- Avoided unnecessary namespaces to simplify import. #34

Cubism 4 SDK for Web R1 (01/30/2020)
- Webpack Dev Server was introduced into the development workflow.
- README.md was added to the sample project.
- Added “Prettier” and “ESLint” to the format for code checking.
- Framework directory is now a sub-module. Please refer to CHANGELOG.md in CubismWebFramework for detailed changes.
- The “Sample” directory has been renamed to “Samples.”
- The “Resources” directory has been moved directly under the “Samples” directory.

Cubism 4 SDK for Web beta2 (11/14/2019)
- Fixed CamelCase in “cubismrenderer_webgl.ts.”

Cubism 4 SDK for Web beta1 (09/04/2019)
- Added function to display Moc file version.
- The “Invert Mask” function is now supported.
- Added “.editorconfig” and “.gitattributes” files for file management.
- “CHANGELOG.md” was added as a file to describe changes.
- The method of starting up a simple local server has been changed.
- Sample model “Rice” was added. (/Sample/TypeScript/Demo/Resources/Rice)
- Updated Cubism Core library version to 04.00.0000 (67108864).
- Adjusted the format of all files to be consistent according to the contents of “.editorconfig.”
- Renamed the file “cubismrenderer_WebGL.ts” to “cubismrenderer_webgl.ts.”
- Migrated “CubismSdkPackage.json,” which describes package information, to YAML format “cubism-info.yml.”
- Added “package.lock.json” to manage dependent packages.
- Adjusted “README.md” and removed the notice regarding the suspension of the repository “Cubism Bindings.”
- Adjusted “.gitignore.”
- Fixed model image reloading issue in WebKit.
Can ping, has traffic being sent, but cannot access service on port

Description

I am just trying to access some of the services exposed on my NAS (even just a simple HTTP server for a proof-of-concept) and I cannot access anything (e.g. http://<server wireguard IP>:<service port>). I have tried setting up WireGuard on other machines before (although I am a novice when it comes to WireGuard) and I was able to successfully achieve the same result (access a service on a port). I am able to ping both from server to client and from client to server via IPv6 but not via IPv4. Not sure what that means, but I did discover that.

Steps to reproduce

$ ssh user@nas
$ sudo wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add <IP_ADDRESS>/24 dev wg0
[#] ip -6 address add fdc3:f7cd:e017::1/64 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Expected behavior

Opening http://<server wg ip>:<service port> should return the respective HTTP response of the service running on said port.

Synology NAS model

E.g. DS416play

wg0.conf

[Interface]
PrivateKey = <redacted server private key>
Address = <IP_ADDRESS>/24, fdc3:f7cd:e017::1/64
ListenPort = 51820

[Peer]
PublicKey = <redacted peer public key>
AllowedIPs = <IP_ADDRESS>/32, fdc3:f7cd:e017::2/128

Peer config:

[Interface]
PrivateKey = <client private key>
Address = <IP_ADDRESS>/32, fdc3:f7cd:e017::2/128

[Peer]
PublicKey = <server public key>
AllowedIPs = <IP_ADDRESS>/24, fdc3:f7cd:e017::/64
Endpoint = <server WAN ip>:51820
PersistentKeepalive = 15

Maybe I'm misunderstanding how this would work, and I don't need this VPN to route all traffic or anything, but I was hoping to just type http://<IP_ADDRESS>:<service port> into my browser and access the service remotely.

I don't spot any obvious errors, but then again it's been a while since I set up my configuration.
Can you SSH through the WireGuard IP? Please try to set mtu = 1280 on the client side. I had to do that to connect from macOS and to use web services. Not required from iOS. @runfalk I am able to SSH into the server, but only via IPv6 @fabiov64 no luck, added the config change and no difference :( I assume you can access the service through IPv6 as well then? FYI I've just tried this and it works fine (I can browse the NAS via <IP_ADDRESS>). I had always accessed via the actual NAS IP before; have you tried adding the NAS IP to the peer allowed list and trying to browse then? I believe I did not set up port forwarding on my router properly... which might explain why IPv6 worked but not IPv4. (NAT? Sorry, a little out of my experience.) Everything works now :)
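When debugging cases like this, it helps to separate "the tunnel is broken" from "the service is broken" before touching any WireGuard configs. A minimal check (the port 8080 and the throwaway Python server are assumptions for illustration; substitute the server's WireGuard IP for 127.0.0.1 when probing from the client side of the tunnel):

```shell
# Start a throwaway HTTP server bound to all interfaces, standing in for the NAS service
python3 -m http.server 8080 --bind 0.0.0.0 &
SERVER_PID=$!
sleep 1

# Probe the port; a 200 here means the service answers, so any remaining
# failure over the tunnel points at routing/firewall, not the service
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8080/

kill "$SERVER_PID"
```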
GITHUB_ARCHIVE
Tools and resources for QA testing The VA.gov Platform provides QA-related tools and resources to help you build quality Veteran-facing digital experiences that are bug-free and function as designed. This guide is an overview of the tools and resources provided for you to perform QA activities. Creating a test plan Your test plan outlines the steps required to perform QA testing. It includes information about who will be responsible for each task, which topics are being tested and when it should be completed. We provide a TestRail account for each VFS team to use to help you create and manage your QA testing plan. For more information on getting access to TestRail and using TestRail to create a test plan, see Create a plan in TestRail. You’re not required to use TestRail, but the templates and resources we provide assume that you will. If you plan and execute your QA using different tools, you're responsible for providing documentation that’s equivalent to the artifacts required as part of the collaboration cycle. QA testing within the Platform CI workflow The Platform CI workflow includes tools and features that support both automated and manual QA testing activities. Frontend unit tests Write unit tests as you build to make sure your form (or other component) is behaving as you expect and to help guard against future bugs. API unit testing Determine whether your application behaves correctly even if HTTP calls to external services do not return correctly. Determine if application dependencies are working accurately and check if accurate information is being communicated between multiple system components. See the End-to-end testing with Cypress guide for more information. QA testing outside of the Platform CI workflow You’ll need to manually manage QA activities that happen outside the Platform CI workflow. Run load tests to ensure the stability of an API before launching new endpoints or making substantial updates to existing endpoints. 
Open source load testing tool. Use to define test behavior in Python code. Has a web interface for interactive testing. An HTTP benchmarking tool based mostly on wrk. Conduct cross-browser manual testing after you've pushed your build to staging. Your application or feature must work in all of the browsers and versions included in our browser support policy. You can use TestRail to create multiple test runs with different operating systems or browser configurations. If you already have a TestRail account, you can view this example TestRail Test Plan with Multiple Test Runs. If you do not have access to TestRail, see Create a test plan in TestRail for instructions on requesting access. Integrate accessibility testing into your product development process to make sure your service is usable by everyone. The VA requires that your product meets WCAG 2.1 Level A and AA success criteria. Review the following resources and guides to help you meet the VA.gov experience standards for accessibility. Accessibility standards: WCAG 2.1 success criteria and foundational testing Working with Platform accessibility specialists Depending on your product, you may want to verify and validate your product in one of the standard Platform environments. Review Instances: These ephemeral environments are only available through the SOCKS proxy and are created when a pull request is opened in vets-website or vets-api repositories. Development: Integration with live external services outside of VA.gov are not available in this environment. Instead, this environment relies on mocked data. Staging: Integration with live lower environments for external services outside of VA.gov is available in this environment. Production: Integration with live external services across the VA enterprise and real production user data is active in this environment. Use of feature toggles to limit the audience for your change may be prudent. 
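The load-testing idea above can be sketched without any dedicated tool: spin up several worker threads that each hit an endpoint and record the status codes. This is a minimal, dependency-free illustration only (the local throwaway server stands in for a real staging endpoint); real load tests are better run with the dedicated tools described above.

```python
# Minimal concurrent load-generator sketch using only the standard library.
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

def hammer(url, requests_per_worker, results):
    # Each worker issues a fixed number of requests and records the status codes.
    for _ in range(requests_per_worker):
        with urllib.request.urlopen(url) as resp:
            results.append(resp.status)

def run_load_test(url, workers=4, requests_per_worker=5):
    results = []
    threads = [
        threading.Thread(target=hammer, args=(url, requests_per_worker, results))
        for _ in range(workers)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

if __name__ == "__main__":
    # Throwaway local server standing in for the system under test.
    server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    codes = run_load_test(f"http://127.0.0.1:{server.server_port}/",
                          workers=2, requests_per_worker=3)
    print(len(codes), set(codes))   # -> 6 {200}
    server.shutdown()
```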
Platform does not recommend verifying the functionality or behaviors of your product in production. See Environments for more information. Test users and test data An essential aspect of creating a quality assurance test plan for your product is the setup of test users and associated data. The Test User Dashboard (TUD) is a searchable, filterable, interactive catalog of test user accounts available for use in staging and review instances. See the Test User Dashboard documentation for more information. Monitoring QA practice health Our QA Product Dashboards give insight into how VA.gov products are functioning from the perspective of QA coverage. Use the dashboard for your product to see whether or not you're in compliance with VA standards and to identify areas for improvement. See the QA Product Dashboard guide for more information. Other helpful tools Create animated GIFs to attach to defects and aid in the description of an issue encountered. Download this tool at the Cockos Incorporated website. A plugin for VSCode that uses git blame to present who changed what and why in the code. Download GitLens from the Visual Studio Marketplace. Dead link checker One way to quickly smoke test link functionality on the full site is to enter a URL (example: https://staging.va.gov) in this dead link checker tool. Chrome DevTools can help you isolate a misbehaving element on the page, examine element attributes, and explore errors on the page. Viewport Resizer is a Chrome extension for resizing to different preset viewport sizes to verify responsive design. Help and feedback Create an issue ticket to suggest changes to this page
OPCFW_CODE
You are not using SMS anymore, so SMS can not be hijacked. If you are worried about recovery access, you should not worry. You say you have backup codes and you even have the original QR code, which in fact contains the unencrypted secret key that is used to calculate the TOTP values according to RFC 6238. You (or the attacker) can take this QR code at any time and create a 1:1 copy of the authentication device smartphone app. You probably will never lock yourself out - if this is what you worry about. So - you are safe. But are you secure? You may want to read my previous blog post. You can also activate U2F as @parth-maniar stated. But you should know that U2F uses a preinstalled attestation certificate with a unique serial number. (Well, this is a feature of X.509 certificates.) The key pair you are registering with Google is derived from the private key belonging to this certificate. (Please put your tin foil hat on, now) Thus a possible attack vector for an intelligence service might look like this: The attestation certificate is passed to the service (Google) when registering. The intel service might ask Google for the serial number of your certificate (which Google would not give to them!). Then the intel service might go to Yubico and ask for the private key belonging to the certificate with your serial number (which Yubico does not have and would not give them!). Now the intel service has your private key. Using this private key the attacker can immediately log in to all the accounts where you registered the U2F device. The nice thing is that the intel service can attack a "weak" service provider, where you registered, to link your name to the serial number. And then they can simply log in with your account at the "strong" service provider, who otherwise would not give them your login or your data. Imho U2F is not the salvation it claims to be.
(tin foil hat off) Correction on Feb 25th, 2017 (unfortunately the markdown does not support striking out words) I obviously was misled by some other blog post and a FIDO spec not being precise about implementation roughly 3-4 years ago. U2F devices contain an attestation certificate, but these usually are not unique for each device. The attestation certificate is just for verifying to the service that this is a device of a certain type. If you buy a bunch of U2F devices from one vendor, they probably all have the same certificate and of course corresponding keys. (I checked this myself these days.) Another customer would probably get the same cert (and keys) as you. Still, a vendor could create individual attestation certificates for each key or for each customer buying a bunch of tokens. However, a vendor will probably change the attestation certificate if they release a new device version. This is because the service should know whether an old or a newer device is trying to register, because it might deny the registration of the older device type. Well, you still have to trust the vendor to handle the attestation certificate correctly, because this is in fact information that is sent to each service when registering. The best-known U2F device vendor thus uses an additional master key, which is bound to the attestation certificate! My previous statement was wrong! Thus the intelligence service in fact usually can not identify your individual device by the attestation certificate. The IS would have to contact the vendor in advance and ask them to put individual attestation certificates on the devices. I am sorry for the misunderstanding or for having caused any big panic! I only meant to cause mid-sized panic! ;-) ...or the normal dose of do-not-switch-off-your-brain-when-someone-claims-his-solution-is-totally-secure.
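To make the "the QR code is the secret" point above concrete: the RFC 6238 TOTP computation is short enough to write from scratch, and anyone holding the raw secret can run it. A minimal sketch (HMAC-SHA1, 30-second steps, as in the common authenticator-app defaults):

```python
# From-scratch TOTP per RFC 6238: whoever has the raw secret from the QR
# code can compute the same codes as the authenticator app.
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    counter = unix_time // step                       # number of 30 s steps elapsed
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890", T = 59 s
print(totp(b"12345678901234567890", 59, digits=8))    # -> 94287082
```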
OPCFW_CODE
Okay, I am quite happy with the standard library as it is now, but in the process of coming up with an interface for a test framework I realized that the error handling, as far as it can be called that, is completely unusable. While it probably would be possible to write try/catch constructs it would be entirely impossible to write a version that somehow knows how to handle errors thrown by misuse of a primary operation or the use of `error`. In short: Currently there is no way to handle something like 0 reciprocal ! That is completely unacceptable but unfortunately I am currently a bit stumped on how to devise a better system. A big hurdle is that all Poslin code needs to be sequentially processable, or, put another way, there is no such thing as code blocks or something which can be associated with a wrapping error handler or something. What can be wrapped with an error handler, though, is a thread. This brings me to the first option: This would first require a new data type `:Error`, which would have two attributes: A string (the error message) and metadata about the error (whatever those might be). `handle-error` would be an operation which takes two threads, the body and the handler, and returns one thread, a checked thread (or something like that, I'm not sure about the terminology). This scheme also requires a second return stack for handlers, a handler stack… Hmmm… Now, if a checked thread is called, the handler is pushed onto the handler stack and the body called. When the checked thread is done, the top of the handler stack is dropped. Now, every time some kind of error happens, the corresponding error object is pushed onto the current stack, the top of the handler stack is popped off and called. 
If that doesn't work (because the handler stack is empty) fall back to the way errors are treated right now (but still, the error object now is available to the user, as long as a REPL is available to the user, which probably shouldn't be the case for user applications). This scheme requires two new types (`:Error` and `:CheckedThread`) and 6 or 8 new primary operations (`new-error`, `error-message`, `error-data`, `handle-error`, `checked-thread-body`, `checked-thread-handler`; maybe `h<-` and `h->`, who knows what kind of crazy exception systems can be done with those two). `error` would become obsolete, moving to the standard library as a simpler version of `new-error` that doesn't bother with metadata. One boon of this kind of system is that it doesn't do any kind of stack unwinding or anything like that, so you don't lose information. The downside of this approach is that, when wrapping with `handle-error`, you need to anticipate every place in the wrapped thread the handled error could come from and make sure that it is handled correctly. I guess it would be possible to write a more… reasonable exception handling facility on top of this, which would be enough for me (because Poslin is all about building reasonable stuff on simple mechanisms), but, well, it's just a guess. Also, the additional handler stack seems a little bit like overkill. But only a little bit. Also, I just realized that it might be possible to discard checked threads (and the corresponding operations) entirely and implement `handle-error` with `h<-` and `h->`. So, any thoughts on this? An alternative idea on how to do this? Tweaks to the proposed system?
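For what it's worth, the proposed handler-stack semantics can be sketched outside Poslin. This Python toy (names and details are my own, purely illustrative) shows the lifecycle described above: a checked thread pushes its handler before the body runs, drops it afterwards, and on an error the `:Error` object is pushed onto the current stack and the popped handler is called:

```python
# Illustrative Python model of the proposed scheme, NOT Poslin itself.
handler_stack = []

class PoslinError(Exception):
    """Stands in for the proposed :Error type: a message plus metadata."""
    def __init__(self, message, data=None):
        super().__init__(message)
        self.message, self.data = message, data

def checked_thread(body, handler):
    """Stands in for `handle-error`: wraps a body thread with a handler."""
    def run(stack):
        handler_stack.append(handler)
        try:
            body(stack)
        except PoslinError as err:
            stack.append(err)            # error object lands on the current stack
            handler_stack.pop()(stack)   # top handler is popped off and called
        else:
            handler_stack.pop()          # checked thread done: drop its handler
    return run

def reciprocal(stack):
    x = stack.pop()
    if x == 0:
        raise PoslinError("0 reciprocal", data={"op": "reciprocal"})
    stack.append(1 / x)

stack = [0]
checked_thread(reciprocal,
               lambda s: s.append("handled: " + s.pop().message))(stack)
print(stack)   # -> ['handled: 0 reciprocal']
```

Note the error path does no unwinding: whatever the body left on the stack is still there when the handler runs, matching the "you don't lose information" property.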
OPCFW_CODE
Document Type : Research Paper 1 School of Mathematics, Thapar University, Patiala-147004, India. 2 School of Mathematical Sciences, Beijing Normal University, Laboratory of Mathematics and Complex Systems, Ministry of Education, Beijing 100875, PR China. 3 Faculty of Mechanical Engineering, University of Belgrade, Kraljice Marije 16, 11120, Beograd, Serbia. Compared with the previous work, the aim of this paper is to introduce the more general concept of generalized $F$-Suzuki type contraction mappings in $b$-metric spaces, and to establish some fixed point theorems in the setting of $b$-metric spaces. Our main results unify, complement and generalize the previous works in the existing literature.
OPCFW_CODE
Is there no limit to power levels? Since Jiren and Frieza don't have godly ki, we are able to understand that you can attain the power of a god through regular training; that regular ki can go as high as divine ki. Based on existing knowledge, roughly how much is Jiren's full power level? (the power he plans to use on Beyond Super Saiyan Blue Vegeta and 20+ Kaio-ken Super Saiyan Blue Goku). My guess, based on Super Gogeta's power level of 2,500,000,000, is that Jiren's full power level is 189,000,000,000,000,000. I don't think this question is opinion-based. Using comparison of existing power levels, an answer could be concluded. Note, the OP isn't asking for an exact number. There are 2 questions here: 1) How much roughly is Jiren's full power level?, and 2) is power level limitless? Please focus on a specific question. There are no power levels in Dragon Ball Super. Most of the answers here are based on opinion. Also, Super Gogeta is not canon. Comparing a canon character with someone who isn't doesn't make any sense. A better question would be how strong Jiren is compared to Goku and Vegeta or other characters in the T.O.P. What is this question asking? The question in the body is different from the one in the heading ("What is the limit to power levels" vs "What is Jiren's power level"), and all the answers are talking about multipliers instead of either of these. @ShayminGratitude Just the usual fan obsession with power numbers, even though the series recognized it as awful and limiting and so abandoned it entirely ages ago. This question was closed as "unfocused" because, after a long time, the issue hasn't been fixed: the title and the question body asked 2 different questions, and some of the existing answers were interpreting the question differently due to that.
We have no idea what the multipliers for Super Saiyan God and Super Saiyan Blue are, and we have no idea what the current base power level of Goku and Vegeta is, so it's impossible to give power levels with figures (numbers). If we try to put a number on the base power level of Goku and Vegeta (which seems to be about the same for both of them, judging from the fights and sparrings they had in Dragon Ball Super) we find a lot of inconsistencies or very hard to fit situations. The same goes for the other transformations: Fake Vegeta in base form (which is as strong as Vegeta) was able to defeat SSJ3 Gotenks. Goku in base form was able to overpower Frieza's 4th form, yet Frieza's 3rd form was able to almost kill Super Saiyan Gohan. In one movie Beerus says Goku in base form isn't as strong as Frieza (4th form). So according to some data, Goku's or Vegeta's base form could be weaker than a Super Saiyan, and according to other data it could be stronger than an SSJ3. Situations like those can only be fixed with speculation, not with the dialogue or statements by the authors, because they just didn't try to fix those inconsistencies. So, since we are unable to give number estimations, as they could vary very widely, I believe this is the only thing we can say. Super Saiyan Blue Kaioken x20 Goku, plus Ultra Super Saiyan Blue Vegeta (who seems to be at a similar level to Super Saiyan Blue Kaioken x20 Goku), put up some fight but did no damage to Jiren. This tells us that Goku (who is using the power of 20 Super Saiyan Blues) plus Vegeta, who is using a similar power (together a power around x40 Super Saiyan Blue), are weaker than Jiren. Then Jiren's power level has to be somewhere between x50 and x100 Super Saiyan Blue. In my opinion, no. The multipliers were devised to keep track of how power scaling would work in case the character undergoes any power-up or transformation.
The multipliers assumed are:

Kaioken = x1.5
Kaioken x10 = x10
Kaioken x20 = x20
Super Saiyan = x50
Super Saiyan 2 = x100
Super Saiyan 3 = x400
Super Saiyan God = x15,000
Super Saiyan Blue = x25,000
Ultra Instinct "Omen" = x150 Super Saiyan Blue

The numbers calculated can be assumed to be only partially correct, because no track of power levels was kept after Dragon Ball Z.

Goku (Suppressed) = 50,000,000,000
Jiren (Extremely Suppressed++) = 1,000,000,000,000,000,000
Goku (Super Saiyan) = 2,500,000,000,000
Goku (Super Saiyan 2) = 5,000,000,000,000
Goku (Super Saiyan God) = 750,000,000,000,000,000
Goku (Super Saiyan Blue) = 37,500,000,000,000,000,000
Goku (Super Saiyan Blue Kaioken x10) = 375,000,000,000,000,000,000
Jiren (Less Suppressed+) = 400,000,000,000,000,000,000
Goku (Super Saiyan Blue Kaioken x20) = 750,000,000,000,000,000,000
Jiren (Less Suppressed+) = 1,000,000,000,000,000,000,000
Piccolo = 32,000,000,000
Gohan (Suppressed) = 5,500,000,000
Vegeta (Suppressed) = 25,000,000,000
True Form Frieza = 5,000,000,000,000
Krillin = 740,000,000
Goku (Genki Dama/Spirit Bomb) = 900,000,000,000,000,000,000
Goku (Ultra Instinct) = 3,500,000,000,000,000,000,000
Jiren (Less Than 30%) = 12,000,000,000,000,000,000,000

Source: Dragon Ball Wikia

I assume by Wikia you're talking about the Dragon Ball Wikia. If not, please update the link as to which wikia you are referring to. Yeah, will do the same. Should I post other reference links as well? If you use other sources to reference your answer then yes, it's a good idea. Could you please point to the specific Wikia articles mentioning these numbers? I failed to find the multipliers and exact power levels mentioned in this answer.
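The Kaioken rows in a table like this are just multiplication, so they can be checked mechanically. A tiny script applying the listed Kaioken multipliers to the listed Super Saiyan Blue figure (the figures themselves are the answer's assumptions, not canonical values) reproduces the Kaioken x10 and x20 rows:

```python
# Illustrative arithmetic only: the base figure is the answer's assumption,
# not a canonical value.
SSB_GOKU = 37_500_000_000_000_000_000  # Goku (Super Saiyan Blue), per the list above

for name, mult in {"Kaioken x10": 10, "Kaioken x20": 20}.items():
    print(f"Super Saiyan Blue {name}: {SSB_GOKU * mult:,}")
```

(Note that the same cross-check does not hold everywhere: the listed Super Saiyan God and Blue values do not equal the base value times their listed multipliers, which is one reason these figures should be treated as rough.)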
STACK_EXCHANGE
What exactly are the formal rules around the US paying off debt? Is there a precise set of rules (e.g., a very specific law, or a court ruling?) that explains what specifically does - or does not - give the US executive branch the right to choose to default on specific portions of debt despite having a non-empty treasury? "Non-empty" here has a very precise meaning: they are paying other non-sovereign-debt liabilities but not servicing debt fully. Please note that I'm NOT at all interested in finance technicalities (e.g. the story line about 3 separate payment systems that are hard to interconnect may be a technical excuse for why a default was possible/likely during the prior month, but it has zero impact on the legal situation). I'm also not interested in "separate" accounts; e.g., it's clearly understood that SOME of the Treasury's balance sheet is in (supposedly) separate books, like the Social Security Trust Fund, that aren't applicable to debt servicing and are also funded separately. This question only deals with general liabilities that are not being paid from such separate books. For example, from a Forbes article: Tribe (of Harvard Law School) and Balkin (of Yale Law School) disagree. They think the President would be relegated instead to some sort of "prioritization" process, where he directs those limited funds available to him to making certain that the country honors its outstanding bond obligations -- the most unambiguous focus, they contend, of Section 4 of the 14th Amendment -- and to funding other absolutely essential functions while allowing most other obligations (salaries, entitlements, contracts, etc.) temporarily to go by the wayside, though this would obviously cause enormous pain and upheaval throughout the country. I don't think the question is clear here. What do you mean by non-empty treasury in correlation with "I'm also not interested in 'separate' accounts"? The whole point of the default was that the treasury was empty except for those "separate accounts".
@Bobson - no it wasn't. It wasn't full enough to pay all the bills, but that's different from "empty". Question is, is it legally allowed to pay OTHER bills before the sovereign debt interest/principal. The separate books are a minority (e.g. Social Security) I think I see what you're getting at now. That's a much more succinct way to put it, which makes it much clearer than the original question. Per the 14th Amendment: The validity of the public debt of the United States, authorized by law, ... shall not be questioned. Given that this is a constitutional mandate, this makes ensuring that the validity of the debt (i.e. whether or not it will be repaid) the top priority for the executive branch. Following this interpretation (and anything is only a personal interpretation until it's tested by the courts), there is no provision to pay any other "general" bill before making debt/interest payments. Thus: Nothing gives the President the right to choose to default on "public debt" in favor of anything else. This is slightly more complicated by the section I ...'d out: including debts incurred for payment of pensions and bounties for services in suppressing insurrection or rebellion There are two ways to interpret "pensions and bounties for services in suppressing insurrection or rebellion". If it's read as "(pensions and bounties) for services...", then it's not relevant to the discussion. If it's read as "pensions and (bounties for services...)" then pensions are in the same protected category as the debt, adding a second set of bills which must be paid before any others. As far as I know, there's been no legal clarification on which way this should be read. 
As for what "public debt" specifically entails, I refer you to the Wikipedia page, which summarizes it as: Debt held by the public, such as Treasury securities held by investors outside the federal government, including that held by individuals, corporations, the Federal Reserve System and foreign, state and local governments. Debt held by government accounts or intragovernmental debt, such as non-marketable Treasury securities held in accounts administered by the federal government that are owed to program beneficiaries, such as the Social Security Trust Fund. Debt held by government accounts represents the cumulative surpluses, including interest earnings, of these accounts that have been invested in Treasury securities. Could you give a reference to show that the 14th Amendment is interpreted to mean that public debt is always prioritized over all other spending, please? @DJClayworth - There's a scholarly quote to that effect in the question itself. And since it hasn't been tested by the courts or officially acted on by the government, there is no official interpretation. I definitely approve of asking it directly, though! The article seems to indicate that scholars disagree over the interpretation of the 14th Amendment. @DJClayworth - Actually, the disagreement there is over whether the 14th Amendment would require the President to just outright ignore the debt ceiling, or whether he has to make do with the money he has (the quote is from the latter group). No one there is questioning that defaulting would violate it. Keep in mind that the court may very well break Section 4 of the 14th Amendment into kindling, not due to politicisation (although there is too much of that going on) but because it may be dreadfully worded and clash with other parts of the Constitution.
STACK_EXCHANGE
The financial industry is increasingly embracing International Organization for Standardization (ISO) 20022-based standards (MX messaging) to exchange messages for both payments and securities. Key benefits of MX messaging include its ability to capture richer data, flexibility, and machine-readable format. However, the older SWIFT MT message set is still deeply entrenched in the core systems and processes of the financial sector. This situation has created a growing demand for MT-MX conversion. In this article, I will show you one way to achieve MT to MX mapping on Red Hat OpenShift using the message transformation platform from Trace Financial, a Red Hat Independent Software Vendor (ISV), and Red Hat Fuse. ISO 20022 MX transformation with CI/CD This approach minimizes the dependency between the business analysts and integration developers: Business analysts can focus on building the message/data transformations using the Trace Transformer tool, while integration developers focus on building the integration routes and endpoints. The output from the business analysts is uploaded into an artifact repository, while the developers' code is stored in a Git repository. The CI/CD pipeline automates building, packaging, and deployment into multiple environments in OpenShift (Figure 1). OpenShift's monitoring stack monitors the transactions and issues alerts, which developers can view using the Prometheus metrics service and the Grafana display tool. MT-MX mapping with Trace Transformer Trace Transformer is a desktop IDE that lets a non-programmer business analyst create, consume, validate, and transform complex messages rapidly while complying with message standards that are themselves rapidly evolving. The advantages of Trace Transformer include the following: - It comes with a full set of ready-built, high-quality message definitions and mappings. - It validates messages against public standards such as ISO 20022, as well as in-house rules. 
- It eliminates coding for message transformations, even for very complex ones. - It builds in quality control by making testing integral to development, executing tests "as you build." Users can view the mappings in the Transformer Design-Time GUI, which provides a very clear and nontechnical visualization, ideal for analysts (Figure 2). Cloud-based integration tools Red Hat Integration provides developers and architects with cloud-based tools for integrating applications and systems. Its capabilities include application and application programming interface (API) connectivity, API management and security, data transformation, service composition, service orchestration, real-time messaging, data streaming, change data capture, and maintaining consistency across data centers. Red Hat Integration was built for cloud-based development, so developers can use the same advanced build, management, and runtime platforms to connect systems that they use for new service development and integration. The cloud-based tools create deployable artifacts for cloud platforms. Platforms can be combined for public cloud, private cloud, and on-premise environments for scalable, highly available microservices using powerful container management tools. Red Hat Fuse is a distributed, cloud-based integration platform based on open source projects such as Apache Camel. Fuse allows you to define routing and mediation rules in a variety of languages and between different types of endpoints. Transformer provides an Apache Camel Component implementation that is bound into the CamelContext by default. The Transformer Camel Component is bound against the txfrmr URI scheme. For instance, a route to a message transformation service might start with a txfmr:com.alliance.mxToMt/ string. When you include txfrmr in a route, the URI resolves to an exposed service operation. 
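A Camel route using this component might look like the following sketch. Only the txfmr URI string comes from the text above; the route id, the file endpoints, and the directory names are hypothetical placeholders:

```xml
<!-- Hypothetical Camel route sketch: pick up MT messages from a directory,
     run them through the Transformer component, and write out MX messages.
     Directory names and route id are placeholders. -->
<route id="mt-to-mx">
  <from uri="file:data/inbox-mt?noop=true"/>
  <to uri="txfmr:com.alliance.mxToMt/"/>
  <to uri="file:data/outbox-mx"/>
</route>
```

The point of the design is visible here: the route only references the transformation by name, so analysts can ship a new mapping JAR without the route changing.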
Red Hat Fuse Source-to-Image (S2I) is available as a template in OpenShift (Figure 3), making it easy to build and package the integration source code and transformer libraries as inputs and produce a container image that runs the assembled application as output. Automate cloud-based CI/CD with Tekton Because financial messaging formats and standards are rapidly updated to meet market demand, automation is mandatory to deploy and release the new changes and products into production. Business analysts can update changes to the message transformation process using Transformer and upload the output JAR into an artifact repository. The upload triggers the Pipeline to automate the CI/CD process and deliver results quickly (Figure 4). The process does not require developers to make any code changes or to be involved in the packaging/deployment processes. On the other hand, developers must get involved when additional business channels are required for business expansion, which happens less often than changes to message standards. Developers can create new integration flows and endpoints to extend the new business channels by reusing the transformation libraries. The developers do not require deep knowledge of financial message transformation and mapping. This decoupling eliminates the bottleneck of the traditional approach to financial message delivery, where the analysts are required to document the message transformation and to elaborate the steps to the developers in order for them to program the logic. Monitoring and alerting tools The OpenShift graphical interface offers tools for setting up and viewing monitoring and alerts. This section steps through some available tools. Red Hat Fuse console Red Hat Fuse includes a web console (Figure 5) based on Hawtio open source software. The console provides a central interface to examine and manage the details of one or more deployed Red Hat Fuse containers. 
You can also monitor Red Hat Fuse and system resources, perform updates, and start or stop services. OpenShift includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components (Figure 6). A set of alerts is included that, by default, immediately notifies cluster administrators about issues with a cluster. By enabling monitoring for user-defined projects, you can define the business services and pods to be monitored in their own projects. You can then query metrics, review dashboards, and manage alerting rules and silences for your projects in the OpenShift web console. Grafana data visualization Grafana can be used as a dashboard for visualizing the data and metrics generated by the monitoring stack. Grafana is an open source analytics tool that turns large volumes of metric data into customizable dashboards. You can create charts, graphs, and alerts for the Red Hat Fuse message transactions when Grafana is connected to Prometheus in the OpenShift platform (Figure 7). I have created a video to show the elements of the approach presented in this article. The approach described in this article decouples the dependencies between the technologies used. Thus, it is possible to further modernize the integration piece with Red Hat OpenShift Serverless, Red Hat build of Quarkus, Camel K, Kamelets, and Red Hat AMQ Streams (Apache Kafka). Keen to explore some hands-on interactive lessons? The following links can help you learn more about the central technologies in this article:
I remember that a number of times developers have approached me with the question: how do I auto-increment a default value in a table column? These were mostly folks who had worked in SQL Server, MySQL, and other database environments where this feature exists. Now the Oracle 12c default column value feature puts this to rest.

Read more on other Oracle 12c topics:
• Database Identity Columns
• Default Column Value Enhancement

Oracle 12c Default Column Value

So let's look a little deeper at this, focusing on the following:
• Default Values using Sequences
• Default Values on Explicit Nulls
• Metadata Only Default Values

Default Values using Sequences

You can now use the NextVal and CurrVal sequence pseudocolumns as the default value for table columns. This feature is very similar to the identity columns feature, which was also introduced in 12c. The only difference between the two is that, when using a sequence as the default value, you do not get the implicit "Not Null" and "Not Deferrable" constraints. In this example we will create a table that uses a sequence as a default value. The sequence must already exist in the database before the table is created.

SQL> create sequence t1_seq;

Sequence created.

SQL> create table t1 (
  2    id   number default t1_seq.nextval,
  3    name varchar2(20)
  4  );

Table created.

SQL> insert into t1(id,name) values(1,'John');

1 row created.

SQL> insert into t1(name) values('Mike');

1 row created.

SQL> insert into t1(id,name) values(NULL,'Jack');

1 row created.

SQL> select * from t1;

        ID NAME
---------- --------------------
         1 John
         1 Mike
           Jack

We can also use the CurrVal pseudocolumn of the sequence to populate values. This can be very helpful if we need to maintain a parent-child relationship. Below are some of the other conditions which apply when using a sequence as the default value for a column.
• The sequence must already exist, and the user creating the table or inserting rows must have select privileges on it.
• If the sequence does not exist at table creation time, the CREATE TABLE statement will fail with an error.
• If the sequence is dropped after the table creation, subsequent inserts will fail with an error.
• The sequence used is stored in the data dictionary, and the normal naming conventions apply.
• Use of the sequence is subject to the same conditions as any normal sequence, including gaps due to unused sequence values.

Default Values on Explicit Nulls

In the example above we have seen the default values being set by a sequence when the column is not referenced in the insert statement. If the column is referenced in an insert statement, then the value is taken from that insert, even if the value is NULL. Oracle 12c allows you to change this behavior using the "On Null" clause while specifying the default value. Let's create another column in our sample table above using this "On Null" clause and test it.

SQL> truncate table t1;

Table truncated.

SQL> create sequence t1_seq_on_null;

Sequence created.

SQL> alter table t1 add id2 number default on null t1_seq_on_null.nextval;

Table altered.

SQL> insert into t1(id,name,id2) values(1,'John',101);

1 row created.

SQL> insert into t1(name) values('Mike');

1 row created.

SQL> insert into t1(id,name,id2) values(null,'John',null);

1 row created.

SQL> select * from t1;

        ID NAME                        ID2
---------- -------------------- ----------
         1 John                        101
        21 Mike                          1
           John                          2

The notable difference here is the third row. Here we provided a NULL value for both columns. This time, however, the column ID2, which has the "On Null" clause defined, was populated with a sequence value where the insert statement passed a NULL value.

Metadata Only Default Values

This feature was first introduced in 11g to address a very serious issue.
Suppose you have a table with records in the millions, and you need to add another "Not Null" column to it. The task was tedious, because it required that you provide a default value for the new column, as it is a mandatory column of the table. When you provided a default value, Oracle literally had to update all the existing millions of records with the default value for this new column. That is some serious work and would require a huge amount of application downtime to complete. Oracle 11g simplified this process by introducing the "metadata only" default value feature. Using this method, when you added a new mandatory column to the table with a default value, the value was stored in the metadata only. The optimizer rewrites the query at run time to take that value from the metadata every time the column is accessed. This was a huge space and time saving. Oracle 12c takes this one step further by extending the concept to both mandatory and optional columns. Now "metadata only" default values are available regardless of whether the column is a "Not Null" column or an optional one. Read more on Oracle 12c Database Identity Columns.
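To make the 12c behavior concrete, here is a sketch; the table and column names are illustrative, not from the article:

```sql
-- Sketch: on 12c this is a metadata-only change, even on a huge table,
-- and even though the new column is optional (no NOT NULL constraint).
SQL> alter table big_table add status varchar2(10) default 'ACTIVE';

-- Existing rows are not physically rewritten; the optimizer supplies the
-- default from the data dictionary when the column is read.
SQL> select status from big_table where rownum = 1;

STATUS
----------
ACTIVE
```

The ALTER TABLE returns almost instantly regardless of table size, which is the whole point of the feature.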
A group is a container for a bunch of windows, analogous to workspaces in other window managers. Each client window managed by the window manager belongs to exactly one group. The groups config file variable should be initialized to a list of Group objects, which provide several options for group configuration. Groups can be configured to show and hide themselves when they're not empty, spawn applications when they start, automatically acquire certain windows, and various other options.

Match(title=None, wm_class=None, role=None, wm_type=None, wm_instance_class=None, net_wm_pid=None)¶

Match for dynamic groups. It can match by title, class or role.

__init__(title=None, wm_class=None, role=None, wm_type=None, wm_instance_class=None, net_wm_pid=None)¶

Match supports both regular expression objects (i.e. the result of re.compile()) and strings (matched as an "includes" match). If a window matches any of the things in any of the lists, it is considered a match.

- title – things to match against the title (WM_NAME)
- wm_class – things to match against the second string in WM_CLASS atom
- role – things to match against the WM_ROLE atom
- wm_type – things to match against the WM_TYPE atom
- wm_instance_class – things to match against the first string in WM_CLASS atom
- net_wm_pid – things to match against the _NET_WM_PID atom (only int allowed in this rule)

Group(name, matches=None, exclusive=False, spawn=None, layout=None, layouts=None, persist=True, init=True, layout_opts=None, screen_affinity=None, position=9223372036854775807)¶

Represents a "dynamic" group. These groups can spawn apps, only allow certain Matched windows to be on them, hide when they're not in use, etc.
__init__(name, matches=None, exclusive=False, spawn=None, layout=None, layouts=None, persist=True, init=True, layout_opts=None, screen_affinity=None, position=9223372036854775807)¶

- name (string) – the name of this group
- matches (default None) – list of Match objects whose windows will be assigned to this group
- exclusive (boolean) – when other apps are started in this group, should we allow them here or not?
- spawn (string or list of strings) – this will be exec()d when the group is created; you can pass either a program name or a list of programs to exec()
- layout (string) – the default layout for this group (e.g. 'max' or 'stack')
- layouts (list) – the group layouts list, overriding global layouts
- persist (boolean) – should this group stay alive with no member windows?
- init (boolean) – is this group alive when qtile starts?
- position (int) – group position

simple_key_binder binds keys to mod+group position, or to the keys specified as its second argument.

from libqtile.config import Group, Match

groups = [
    Group("a"),
    Group("b"),
    Group("c", matches=[Match(wm_class=["Firefox"])]),
]

# allow mod3+1 through mod3+0 to bind to groups; if you bind your groups
# by hand in your config, you don't need to do this.
from libqtile.dgroups import simple_key_binder
dgroups_key_binder = simple_key_binder("mod3")
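Since Match accepts compiled regular expressions as well as plain strings, a config sketch mixing the two might look like this (the window class names are illustrative):

```python
import re
from libqtile.config import Group, Match

groups = [
    # Plain string: treated as an "includes" match against WM_CLASS
    Group("web", matches=[Match(wm_class=["Firefox"])]),
    # Compiled regex: match any terminal whose class starts with xterm or urxvt
    Group("term", matches=[Match(wm_class=[re.compile(r"^(xterm|urxvt)")])]),
]
```

Regexes are useful when several related applications should land in one group without listing every class explicitly.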
Assigning different lifetimes to a single variable I'm still trying to understand Rust ownership and lifetimes, and I'm confused by this piece of code: struct Foo { x: String, } fn get_x<'a, 'b>(a: &'a Foo, b: &'b Foo) -> &'b str { let mut bar = &a.x; bar = &b.x; bar } Playground This code does not compile, because data from 'a' is returned. I assume this is because when I initialized bar, I assigned a &'a reference to it, so Rust assumes that bar has lifetime 'a. So when I try to return a value of type &'a str it complains that it does not match the return type 'b str. What I don't understand is: Why am I allowed to assign a &'b str to bar in the first place? If Rust assumes that bar has lifetime 'a, then shouldn't it prevent me from assigning b.x to it? Every borrow has a distinct lifetime. The Rust compiler is always trying to minimize lifetimes, because shorter lifetimes have less chance to intersect with other lifetimes, and this matters especially with mutable borrows (remember, there can only be one active mutable borrow of a particular memory location at any time). The lifetime of a borrow derived from another borrow can be shorter or equal to the other borrow's, but it can never be larger. Let's examine a variant of your function that doesn't have any errors: fn get_x<'a, 'b>(a: &'a Foo, b: &'b Foo) -> &'b str { let mut bar = &a.x; bar = &b.x; todo!() } The expressions &a.x and &b.x create new borrowed references. These references have their own lifetime; let's call them 'ax and 'bx. 'ax borrows from a, which has type &'a Foo, so 'a must outlive 'ax ('a: 'ax) – likewise with 'bx and 'b. So far, 'ax and 'bx are unrelated. In order to determine the type of bar, the compiler must unify 'ax and 'bx. Given that 'ax and 'bx are unrelated, we must define a new lifetime, 'abx, as the union of 'ax and 'bx, and use this lifetime for the two borrows (replacing/refining 'ax and 'bx) and the type of bar. 
This new lifetime needs to carry the constraints from both 'ax and 'bx: we now have 'a: 'abx and 'b: 'abx. The borrows with lifetime 'abx don't escape from the function, and lifetimes 'a and 'b outlive the call frame by virtue of being lifetime parameters on the function, so the constraints are met. Now let's get back to your original function: fn get_x<'a, 'b>(a: &'a Foo, b: &'b Foo) -> &'b str { let mut bar = &a.x; bar = &b.x; bar } Here, we have an additional constraint: the type of bar must be compatible with &'b str. To do this, we must unify 'abx and 'b. Given that we have 'b: 'abx, the result of this unification is simply 'b. However, we also have constraint 'a: 'abx, so we should transfer this constraint onto 'b, giving 'a: 'b. The problem here is that the constraint 'a: 'b only involves lifetime parameters (instead of anonymous lifetimes). Lifetime constraints form part of a function's contract; adding one is an API breaking change. The compiler can infer some lifetime constraints based on the types used in the function signature, but it will never infer constraints that arise only from the function's implementation (otherwise an implementation change could silently cause an API breaking change). Explicitly adding where 'a: 'b to the function signature makes the error go away (though it makes the function more restrictive, i.e. some calls that were valid without the constraint become invalid with the constraint): fn get_x<'a, 'b>(a: &'a Foo, b: &'b Foo) -> &'b str where 'a: 'b, { let mut bar = &a.x; bar = &b.x; bar } Much more beautifully explained than I did! And great use of the todo!() I don't think it's correct to assume that bar is assigned lifetime 'a based on the initial assignment. Rather, bar is assigned lifetime 'b based on the return type. That's why it doesn't complain about assigning something from b, but it does complain about assigning something from a. There's a few things you can do to verify a bit more what's going on here. 
First, if you promise that a lives at least as long as b, via fn get_x<'a: 'b, 'b>, the whole program compiles without problem. Second, to see that the issue is based on type, check out this version of your code: Playground struct Foo { x: String, } fn get_x<'a, 'b>(a: &'a Foo, b: &'b Foo) -> &'b str { let mut bar: &'_ String = &a.x; // Alternatively try with 'a, 'b or '_ for the lifetime parameter bar = &b.x; &b.x } Here we return something that's definitely correct for the lifetime, and just use bar to play with explicitly specifying the lifetime parameter. The version I posted compiles, because here the borrow checker just finds a lifetime that encompasses both 'a and 'b. But if we replace the '_ with either 'a or 'b it won't compile because now we have a lifetime mismatch when we assign to the same variable from different parameters. Thanks for your answer! I agree that I'm probably wrong about how lifetimes are calculated here, but I'm still confused. The compiler doesn't complain about assigning something from a to bar (there's no error associated with that line). And the error message explicitly says that it is returning "data from 'a'", which to me implies that it thinks bar (what I'm returning, and the line it is underlining in the error) is "data from 'a'", i.e. it has the same lifetime as a.
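To see the explicit lifetime bound in action, here is a complete sketch with a call site; because both Foo values live equally long in main, the added 'a: 'b constraint is trivially satisfied:

```rust
struct Foo {
    x: String,
}

// The explicit `where 'a: 'b` bound makes the original body compile:
// the borrow of `a.x` may now be unified into lifetime 'b.
fn get_x<'a, 'b>(a: &'a Foo, b: &'b Foo) -> &'b str
where
    'a: 'b,
{
    let mut bar = &a.x;
    bar = &b.x;
    bar
}

fn main() {
    let a = Foo { x: String::from("alpha") };
    let b = Foo { x: String::from("beta") };
    // Both references are derived from locals in the same scope,
    // so `'a: 'b` holds and the call type-checks.
    println!("{}", get_x(&a, &b));
}
```

Note that the bound restricts callers: a call where `a` is strictly shorter-lived than `b` would no longer compile, which is exactly the API-contract point made above.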
Anyone interested in a second part? Probably on linking, analysis, querying and exporting? Was meant to write it but somehow did not get enough motivation. Yes please! That was an excellent article! Thank you for writing and sharing it! Yes please. I have been working on a similar workflow for my article reading. Printing papers and scribbling notes never seems to work for me in the long run. You may want to look into adding Zotero to your workflow for managing your references and PDFs. Zotfile and Better-Bibtex are phenomenal plugins. I like this Interleave package for notes; I have been just embedding highlights and notes inside PDFs and extracting them in Zotero. Sounds good. Though I'm doing a lot of analysis after taking down the notes. Same as you, I've had a lot of bad experiences with jotting notes in non-searchable ways. The analysis and intra-references in notes files, which I turn into mind-maps with graphviz, are my way of keeping everything in sync. I have played with mind mapping in the past and never found my stride with it. Are you using org-mind-map? Maybe mind-mapping is a bit too much of a stretch. More like relationship/dependency mapping. I have been using similar approaches to org-mind-map but they do limit you to a single, giant image. That's a good solution up to a point, as it again becomes hard to follow. I ended up building my own tool that allows me to visualise the relationships given a notes file and an entry point, i.e. a title or a tag. For now I'm using dot as a UI as it is fairly easy to work with. I've been thinking about using something more interactive, yet somehow it's hard to justify the effort for now :) Yes, those actually sound even more interesting than the first article. Please do. Yes please! Would love to see such an article! This article left me yearning for a helm-zotero package. The article doesn't link to it, but here's interleave: This is incredible!
I never knew there was a combination of tech that would let you read a PDF, highlight things, and jot things in the margins. I've been printing documents specifically so I can jot things in the margins. Are there other options for this? On iOS / Android / macOS / Windows? /u/peel and others: How is the jotting experience on the Onyx Boox large screen reader? What if the PDF has small margins? Googling suggests it doesn't have handwriting recognition? Hmm, I've been using the 10-incher, which gives you a way better experience than reading the same thing on an iPad. Smart cropping works beautifully. With small margins I tend to take notes over the text. Though as I migrated to the Boox Max it is yet another great leap. You don't need cropping nor any of the extra features. It's just a huge screen with extra features. As I believe the last step of organising the notes is pretty important, I tend to export the annotated PDF pages only into Dropbox/sync.com and then get them organised. It does not come with handwriting recognition, but that's not a biggie for me as it fits my workflow anyway. Are you really able to read a PDF file in Emacs on macOS? My Emacs stops responding right after I open a PDF file... Just found out the reason for the hanging problem: it's linum-mode. Disable it when opening PDF files and it's cool now. Sorry, just realized you're the real author; how did you config your Emacs to have it actually able to read PDF files? Works on mine. Compiled from latest source on latest High Sierra. No issue here with pdf-tools and spacemacs. Not the author, but I can read PDFs in Emacs on Mac. Do you have line numbers on globally by any chance? You need to disable line numbers for pdf/doc type buffers. Edit - oops sorry, just saw your other comment that this was indeed the issue. Yeah thank you. Just figured it out after posting the question. Now I have the best PDF reader ever :D The extensible, customizable, self-documenting real-time display editor.
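For anyone hitting the same hang: the linum-mode fix discussed above can be automated in an init file. A sketch, assuming pdf-tools' pdf-view-mode-hook:

```elisp
;; Sketch: turn line numbers off whenever a PDF buffer opens,
;; since linum-mode is known to hang pdf-tools buffers.
(add-hook 'pdf-view-mode-hook
          (lambda () (linum-mode -1)))

;; On newer Emacs versions that use display-line-numbers-mode instead:
(add-hook 'pdf-view-mode-hook
          (lambda () (display-line-numbers-mode -1)))
```

With this in place you can keep line numbers on globally for text buffers without breaking PDF viewing.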
This will run you through setting up an Angular app on a Rails 5 server from scratch.
- Assuming you have node + npm locally
- Install the Angular CLI tool (npm install -g @angular/cli)
- Install the Rails gem (gem install rails)

1. Install Rails

Create a new directory called myapp, create a new rails project in it, and start up the rails server. Navigate to http://localhost:3000 to see your new rails server.

2. Install Angular

From inside your rails directory run ng new angular, where angular is the name of your app. Note that the name app is already taken by Rails. Once it is installed, run ng serve to spin up the angular app. Navigate to http://localhost:4200 to see your new angular app.

3. Let Angular talk to Rails

At this point you now have an angular app running on port 4200 and a rails app running on port 3000. This is great...but in order to do any kind of communication we need to allow cross-origin requests from Rails when we're in development. Note: this tutorial just covers developing in Rails and Angular. A follow-up post will walk through the steps to get this working in production.

In order to let angular (on port 4200) talk to the API you're going to create in Rails (on port 3000), we need to enable CORS. Add the following to your gemfile, kill the rails server, run bundle install again, and restart your rails server. Before this works we need to add one more thing to Rails (only in dev). Create a file at config/initializers/cors.rb and add the following: This allows anybody to request anything (hence why we're just in dev) from any location. Note: stating again that this is a development-only solution.

Ok, so at this point Rails should be able to handle any request we send it from Angular on port 4200. For a proof of concept, let's set up a route that returns some json. We'll use the rails generator for a new controller. This creates a file at app/controllers/users_controller.rb.
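The gemfile line and initializer themselves aren't reproduced above; a conventional development-only rack-cors setup, as a sketch of what the elided snippets likely contain, is:

```ruby
# Gemfile
gem 'rack-cors'

# config/initializers/cors.rb -- development only!
# Allows anybody to request anything from any location.
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins '*'
    resource '*',
             headers: :any,
             methods: [:get, :post, :put, :patch, :delete, :options]
  end
end
```

Lock `origins` down to your real front-end host before anything resembling production.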
Create a method to return some json. Let's add a route to hit this method in config/routes.rb. To make sure everything is working, go to http://localhost:3000/users and verify the json is structured like you expect. If everything looks good, let's set up Angular to read from this endpoint.

4. Angular setup

The next thing we need to do is set up Angular to hit our endpoint and get back the json response, rendering the list of users to the screen. Navigate to angular/src/app/app.component.ts and replace the file with the following code. Then in app.component.html add the following div. This performs a basic injection of the Http module and calls a get on the users route we created earlier. If everything is hooked up correctly, you should see the array of users displayed in the browser.

This tutorial is obviously lacking in a few areas:
- Never put the CORS config into production
- ng build should eventually place your compiled code into a place that is consumed by a route in rails to load your actual app
- Calling the Http method would normally be done by a service

This is meant as a quick up-and-running tutorial. Hopefully this at least gets your environment up and ready to develop on Angular. Once you're ready to push this code to production you'll need to do a bit more config, but don't let that stop you now. It's just a matter of figuring out where the files live and where the api endpoints live.
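The controller method and route referenced in step 3 are elided from the text; a minimal sketch consistent with the tutorial (the sample user data is illustrative) could be:

```ruby
# app/controllers/users_controller.rb
class UsersController < ApplicationController
  # GET /users -- returns a JSON array for the Angular app to consume
  def index
    render json: [{ id: 1, name: 'Alice' }, { id: 2, name: 'Bob' }]
  end
end

# config/routes.rb
Rails.application.routes.draw do
  resources :users, only: [:index]
end
```

On the Angular side, the component would inject Http and call get('http://localhost:3000/users'), subscribing to the response to populate the list bound in app.component.html.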
Thunderbird Profile Encryption Is there any way to encrypt my Mozilla Thunderbird profile (stored emails and such) to protect them with a password? Solutions such as using Windows encryption or TrueCrypt won't work because I only want to encrypt the contents of the file, not use file system-specific features. Thank you! Your Thunderbird profile consists of multiple files. Putting them all in a TrueCrypt container will be much easier than handling encryption/decryption on a per-file basis. You can use a TrueCrypt container file to avoid "file system-specific features." But then I'd have to install TrueCrypt, which I specifically said doesn't suit my needs because I just want to encrypt one folder, not an entire partition... If you read Mike's comments he did not state encrypting an entire partition. Oh shoot... sorry about that, my bad. Everywhere I'd looked on the internet, I'd read about how TrueCrypt encrypts entire partitions, and I missed that part... thanks, I'll look into it; +1 for both. Would you mind explaining how to do that? I installed the program, but everything is about encrypting volumes, not folders... TrueCrypt creates a virtual encrypted disk within a single file and mounts it as though it was a real disk. This virtual volume will look like a whole drive to the OS and applications. You can then move Thunderbird's profile folder onto this new virtual drive (as described in some of the answers). Huh... okay, so I misunderstood, but it's still the same problem: whether the new volume is virtual or physical, I'm still creating a new volume, which I wanted to avoid. With TrueCrypt: Create a new file-based container. In the main window, Create volume Create an encrypted file container → Standard volume Select where you want to store it. (I have an AppData.tc in my user directory.) Accept the default encryption algorithm. Select how big do you want the volume to be. Enter a password, or pick a key file, or both. Format the volume.
(I personally choose NTFS as filesystem, for some reliability.) Even though file-based, the container still has a standard filesystem. The Linux term is "loop mounting". Click Exit. In the main TrueCrypt window, open the freshly-created volume. Use Select File Pick an empty drive letter from the big list Click Mount You can make this step mostly-automatic through Favourites → Add Mounted Volume to Favourites. Move your Thunderbird profile. Copy the current profile from your AppData folder to the drive you chose in 2.2 Usually it is in %APPDATA%\Thunderbird\Profiles and has a name similar to mbqbp1tq.default After copying, rename to Thunderbird profile or something, to avoid confusion later. Securely wipe the old profile. I used to like Eraser, until it received a complete rewrite and became inconvenient to use "but it's .NET now!" Now I stick with sdelete. Tell Thunderbird about the new location. It's kept in %APPDATA%\Thunderbird\Profiles.ini, but there's an easier way to update it: Start → Run → enter thunderbird -profilemanager Delete your current profile. Click Don't delete files; you already nuked them in step 3.2. Click Create Profile, enter any name (such as default), and click Choose Folder. Pick the location of your encrypted profile from step 3.1. Start Thunderbird. If you decide you do not like TrueCrypt, there is FreeOTFE, which works in mostly the same way. With Windows' built-in Encrypting File System: Not to be confused with BitLocker. You mentioned that you do not want to use filesystem-specific features, but they can be useful at times. Browse to your Thunderbird settings folder. Usually %APPDATA%\Thunderbird. Right-click on Profiles, choose Properties. Advanced → Encrypt contents → OK → OK Start Thunderbird. Backup the encryption key. You only need to do it once for your Windows account. Start → Run → certmgr.msc Personal → Certificates Find the one with "Encrypting File System" in its "Intended Purposes" column.
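The mounting step can also be scripted at logon; TrueCrypt documents a Windows command line, so a sketch (the container path and drive letter are assumptions) might look like:

```bat
rem Sketch: mount the container as drive X:, prompting for the password.
rem /v = volume file, /l = drive letter, /a = mount automatically, /q = background/quit UI
"C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "%USERPROFILE%\AppData.tc" /l x /a /q
```

Dropping a shortcut to this into the Startup folder gets the encrypted profile mounted before Thunderbird launches.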
Right-click, All tasks → Export Click Yes, export the private key Enter the encryption password for the exported key, and choose where to put it. Oh, one more thing. You have to somehow wipe the old, unencrypted data. I use cipher /w:C: to wipe all unused space, but even one pass takes a long time... The downside - EFS is only available in Windows * Professional and up. From a comment: The only other way (besides transparent encryption, as above) is to build crypto capabilities into Thunderbird itself. And considering the complexity of the program, it is not a solution. I would mark this as the answer -- it is extremely detailed and well-written -- except that I specifically stated I was not looking at encrypting entire volumes or using file-system features, but your steps indicate that I need to either (1) create another volume, format it, then mount it as a directory, or (2) use NTFS's encrypting file system... which is exactly what I said I was not looking for (I already knew about these methods beforehand!). Thank you for taking the time to write it, though. @Lambert: You stated you do not want to encrypt a partition. You did not mention anything about putting the data into a file, which my post was about. And TBH, I cannot imagine any other way to transparently encrypt an entire lot of files in real-time, besides doing it at file-system level (either a virtual disk or a special filesystem). Sure, technically you could do some hacking to hook all file accesses made by thunderbird.exe, but that way lies madness. Besides, you have not explained why you are so against either of those methods. @grawity: Seems like I'm misunderstanding how this works... so instead of formatting a volume and mounting it as a folder, you're doing the opposite -- creating a virtual volume and mounting it as a partition. 
While slightly better, it still won't work for me because I just don't want to create a whole new volume (I guess "volume" was a better name for it than "partition", since it's virtual), simply because it's overkill and it gets very annoying with multiple OSs. (The second method makes my encryption depend on my Windows password, which I don't like.) Thanks for the reply, though. :) @grawity: I was just looking for a way to make Thunderbird encrypt its files, not tricking it into thinking its files aren't encrypted when they really are. @Lambert: TrueCrypt does work on Linux. Similarly, FreeOTFE can open LUKS volumes. As for "overkill"... I'd rather spend five minutes configuring a widely known and used program than spend five hours implementing an encryption scheme in Thunderbird itself and then hoping I patched all functions that deal with files. @grawity: Who said I was dual-booting with Linux, and who said the problem was compatibility? The problem, simply, is that it's specifically the solution I was not looking for; while your answer is great for some situations, the point of this question was to know if there were any other solutions except this, and this simply isn't answering the question, even if it's a great solution. @grawity: That's exactly the kind of answer I was looking for, thanks! (Feel free to put that in your post and I'll mark it as the answer.)
Why do level 4 and 5 Druid spells seem to change tone? I am playing a Druid for the first time (D&D 5e) and am quite enjoying it. My character recently hit level 9, and I noticed that the 4th and 5th level spells seem to shift in tone dramatically from earlier levels. While lower level spells seem to emphasize beasts and nature thematically, the higher level spells seem to shift toward elementals and life force / necromancy style spells. Is there a reason for this? In other words, is there lore behind the shift in tone and theme? Or is there a tactical reason for this shift? Or am I imagining things? I'm answering based on the history of the game (I'm setting up a 2nd Ed. campaign right now, so the in-depth descriptions of how druids evolve as they level is still fresh in my mind), assuming that this isn't a developer-intent question but instead just trying to figure out why the class gets more esoteric around 9th level when it was formerly solidly grounded in what we think of as nature. If you're looking for developer commentary, let me know and I'll delete my answer. This is how it's been, at least since 2nd edition in the '90s. Low-level druids are still mastering the "nature" of the Prime Material - learning to summon animals and draw on their aspects to enhance themselves. But 4th and 5th level spells are beyond that introductory phase. This is where you learn to manipulate the very building blocks of all nature - life energy and the elemental forces, as well as complex natural phenomena like diseases and disasters. You still get some plant and animal related spells, but they're generally bigger in scope - summoning hordes or granting complete control over animals. You also expand your control over nature to less "natural" monsters with spells like charm monster. In much older editions of DnD, 9th level was considered "name level" - this is the point where you are your own political force to be reckoned with. 
Lords and armies in the areas you're active have standing policies for you specifically, usually "placate if possible", and you're known across at least your continent, if not further. On a meta level, it's also traditionally the point where "pure casters" like wizards and clerics are expected to surpass normal encounter design. At this point you can skip entire adventure arcs using spells like fly and teleport. The Druid's 5th level spells are equally game-changing: you can raise the dead, undo permanent disabilities like petrification, make plants and animals have human-level intellects, wipe out an entire village or cripple a town with a single insect summoning, bind a person to complete a quest, or permanently reshape the land.
STACK_EXCHANGE
Noted: sorry for intermingling the two commands. It has been a bit frustrating with all of this. I tried using ldapadd with just "manager" instead, but all I get is ldap_bind: Invalid credentials (49). Tried using no CN at all. Is there a better guide for migrating LDAP to a new server that anyone would recommend? I've been using the Red Hat guide but it is obviously lacking a little bit, and their support is too.

With no CN:

# ldapadd -x -D "dc=mydomain,dc=com" -W -f /tmp/nis.ldif.ldapDump
Enter LDAP Password:
ldap_bind: Invalid credentials (49)

Tried with no password, assuming that none has been correctly set:

# ldapadd -x -D "dc=mydomain,dc=com" -W -f /tmp/nis.ldif.ldapDump
Enter LDAP Password:
ldap_bind: Server is unwilling to perform (53)
        additional info: unauthenticated bind (DN with no password) disallowed

Turned off slapd and used slapadd:

# slapadd -l /tmp/nis.ldif.ldapDump
56afc9ed The first database does not allow slapadd; using the first available one (2)
56afc9ed bdb_db_open: warning - no DB_CONFIG file found in directory /var/lib/ldap: (2).
Expect poor performance for suffix "dc=my-domain,dc=com".
slapadd: line 1: database #2 (dc=my-domain,dc=com) not configured to hold "ou=Hosts,dc=company,dc=com"; no database configured for that naming context
_ 0.01% eta   none elapsed            none spd   2.3

Surely I am not the first person to try migrating data, but searching for good guides on this has not turned up anything that works. BTW Quanah, I loved my Zimbra server back in the 3.x days, it was wonderful; hated leaving that behind. Not sure how long you've been with them but kudos for your work with

From: Quanah Gibson-Mount <quanah(a)zimbra.com>
To: k j <kj37075(a)yahoo.com>; openldap-technical(a)openldap.org
Sent: Friday, January 29, 2016 3:35 PM
Subject: Re: problem with slapadd in migrating LDAP servers

--On Friday, January 29, 2016 8:25 PM +0000 k j <kj37075(a)yahoo.com> wrote:

> ldapadd -x -D "cn=administrator,dc=mydomain,dc=com" -W -f

That is ldapadd, not slapadd.
Since you haven't imported your database yet, I'm going to guess the user doesn't exist in it yet, thus it can't bind. This is why one would need to use slapadd with slapd offline instead. I would note it is highly recommended to avoid the broken RHEL packages of OpenLDAP. If you require paid support for your LDAP deployment, you likely want to contact Symas and use their packages. If you are fine without paid support, you may wish to use the packages provided by the LTB project if you are not comfortable building OpenLDAP on your own. Zimbra :: the leader in open source messaging and collaboration
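For reference, the offline import the thread is converging on looks roughly like the following. This is a sketch, not a verified procedure: the suffix is taken from the error output above, and the service name, data directory, and ldap:ldap ownership are distribution-dependent assumptions. Note that the slapadd error also shows the LDIF contains entries under dc=company,dc=com while the configured suffix is dc=my-domain,dc=com; that mismatch has to be resolved (in the LDIF or in the slapd configuration) before any import can succeed.

```shell
# Sketch of an offline slapadd import (requires root and a matching
# slapd configuration; adjust names for your distribution).

# 1. Stop slapd so the database files are not in use
systemctl stop slapd

# 2. Import into the database that holds the suffix, selecting it
#    explicitly with -b instead of letting slapadd pick a database
slapadd -b "dc=my-domain,dc=com" -l /tmp/nis.ldif.ldapDump

# 3. slapadd run as root writes files owned by root; hand them back
#    to the user slapd runs as
chown -R ldap:ldap /var/lib/ldap

# 4. Restart the server and verify with an ordinary search
systemctl start slapd
ldapsearch -x -b "dc=my-domain,dc=com" -s base
```

These commands only make sense against a live OpenLDAP installation, so treat them as an ops checklist rather than a script to paste.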
OPCFW_CODE
The Year 2000 problem, also known as the Y2K problem, the millennium bug, the Y2K bug, or simply Y2K, is a class of computer bugs related to the formatting and storage of calendar dates. It had two parts: short field storage - saving only two digits for every date field, a space-saving habit often dismissed as lazy programming - and the question of whether 2000 is a leap year (it is, under the Gregorian rule that century years divisible by 400 are leap years). It belongs to a wider family of computer calendar and clock bugs that also includes Y2004, Y2038, Y2106, and Y4K.

Because computer operating systems utilise calendar and clock routines, the problem touched almost every system, and "embedded systems" in particular had to be checked for sensitivity to calendar dates. There were four major aspects to the costs of the Year 2000 problem, and while there were significant benefits associated with solving it, the available data is not accurate enough for serious economic analysis of software problems. Governments took it seriously; the Dutch government, for example, promoted Y2K information sharing and analysis.

It was a real and serious threat. Yet after over a year of international alarm, few major failures occurred when the year 2000 arrived - so few that it is a tribute to the IT industry that, in the main, it resolved the problem in time, a sober contrast to interpretations of an ancient calendar predicting the world's end. The episode sits alongside other famous software failures collected in seminar talks on major software bugs (in German) - the Pentium bug first reported by email by T. Nicely, or the computer problem that stopped the cruise ship Splendor of the Seas in 1997 - and it carries a general lesson about measurement: to analyse a target system's behavior is to alter that very behavior.

In one way or another, Year 2000 computer problems threatened almost all supplies, and could have brought disruptions in almost any aspect of daily life; on the basis of careful analysis, plans and priorities can be developed to minimize them. Early symptoms appeared well before the date itself - by March 1998, for instance, computers were refusing to accept termination dates beyond the end of 1999. The generic problem underlying the Y2K or millennium bug is that a representation of the current date on the Gregorian calendar is valid only before some time t and not thereafter; even if the Year 2000 problem disappears, we will still have its successors, such as Y2038. Compliance alone was never sufficient either: what good does it do you if your computers are Year 2000 compliant but they can't order from suppliers whose systems are not? Y2K was not a glamorous issue, but it was a major priority for the best and brightest - even for those who, with the millennium bug still more than a year away, were already tired of it. The scope of other technology-related problems will undoubtedly be different.
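The two failure modes named above - two-digit date fields and the century leap-year rule - are easy to demonstrate. A minimal Python sketch (the function names and field widths are illustrative, not from any particular affected system):

```python
# Illustrative sketch of the two classic Y2K failure modes.

def two_digit_compare(year_a: int, year_b: int) -> bool:
    """Buggy ordering check on years stored as two digits ('00'..'99').

    Keeping only the last two digits makes 2000 ("00") sort before
    1999 ("99"), so date comparisons break at the century boundary.
    """
    return (year_a % 100) < (year_b % 100)


def is_leap(year: int) -> bool:
    """Correct Gregorian rule: divisible by 4, except centuries,
    except centuries divisible by 400 -- so 2000 IS a leap year."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


def is_leap_naive(year: int) -> bool:
    """Common shortcut that skips the 400-year exception and
    wrongly flags 2000 as a non-leap year."""
    return year % 4 == 0 and year % 100 != 0


if __name__ == "__main__":
    print(two_digit_compare(2000, 1999))       # True: 2000 "before" 1999
    print(is_leap(2000), is_leap_naive(2000))  # True False
```

Both bugs are independent: a system could store four-digit years and still get 29 February 2000 wrong, or vice versa.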
OPCFW_CODE
How to set up a gtest project with cmake with a custom lib path

I'm trying to use CMake to generate a Visual Studio project with gtest with the following CMake file:

cmake_minimum_required(VERSION 3.13)
set(CMAKE_CXX_STANDARD 11)
find_package(GTest REQUIRED)
message("GTest_INCLUDE_DIRS = ${GTest_INCLUDE_DIRS}")
add_library(commonLibrary LibraryCode.cpp)
add_executable(mainApp main.cpp)
target_link_libraries(mainApp commonLibrary)
add_executable(unitTestRunner testRunner.cpp)
target_link_libraries(unitTestRunner commonLibrary ${GTEST_LIBRARIES} pthread)

I downloaded and compiled gtest at this specific path: C:\Users\[MyUserName]\Documents\Libraries\gTest. However, when I try to run cmake .. on this file I get the following error:

Could NOT find GTest (missing: GTEST_LIBRARY GTEST_INCLUDE_DIR GTEST_MAIN_LIBRARY)

What do I need to do to make CMake find these paths?

FWIW, gtest is a good candidate for just using FetchContent() instead of wrestling with installation paths. It's even the example used in the documentation.

"I downloaded and compiled gTest at ..." - Have you installed GTest after building? To hint CMake about a custom installation you could set either the GTEST_ROOT variable (as described in the documentation) or the CMAKE_PREFIX_PATH variable, as described in my answer there.

To be able to use find_package, you must first install googletest manually. On the other hand, FetchContent() will fetch the CMake content from the specified source URL, and the call to FetchContent_MakeAvailable() will make its CMake artifacts usable in your CMakeLists.txt.
An example usage would be as follows:

cmake_minimum_required(VERSION 3.14)
include(FetchContent)
FetchContent_Declare(
  googletest
  GIT_REPOSITORY https://github.com/google/googletest.git
  GIT_TAG 703bd9caab50b139428cea1aaff9974ebee5742e # release-1.10.0
)
FetchContent_MakeAvailable(googletest)

# enable cmake testing
enable_testing()

# add test exe target, and link it with the gtest_main library
add_executable(YourTestTarget hello.cc)
target_link_libraries(YourTestTarget gtest_main)

# this includes the GoogleTest script and calls the test discovery function
# defined in it, so you do not have to create a main function for the tests;
# they will be auto-discovered
include(GoogleTest)
gtest_discover_tests(YourTestTarget)

It will fetch the content from the googletest git release tag and install it into the deps directory in the cmake-build folder. Then its artifacts will be available to your CMakeLists.txt. Note that, to be able to use FetchContent and FetchContent_MakeAvailable, you need to upgrade CMake to at least 3.14.

It's worth noting that FetchContent_MakeAvailable() is available as of CMake 3.14 and makes this process a lot cleaner. If OP can change their cmake_minimum_required() from 3.13 to at least that version, it would be very much worth their while.

Thanks, this is true, FetchContent_MakeAvailable needs at least CMake 3.14. I edited.
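As an alternative to FetchContent, the GTEST_ROOT / CMAKE_PREFIX_PATH hint from the comments would look roughly like this. It is a sketch: the install prefix below is a placeholder for wherever GTest was actually *installed* (not merely built), and find_package will only succeed once that install step has produced headers and libraries under the prefix.

```cmake
cmake_minimum_required(VERSION 3.13)
project(GTestDemo CXX)
set(CMAKE_CXX_STANDARD 11)

# Placeholder install prefix -- substitute the real path where GTest
# was installed (headers under include/, libraries under lib/).
set(GTEST_ROOT "C:/Users/[MyUserName]/Documents/Libraries/gTest"
    CACHE PATH "GTest install prefix")

find_package(GTest REQUIRED)

add_executable(unitTestRunner testRunner.cpp)
# Imported targets carry include dirs and link flags automatically,
# replacing ${GTEST_LIBRARIES} and manual include paths.
target_link_libraries(unitTestRunner GTest::GTest GTest::Main)
```

Equivalently, without editing CMakeLists.txt, the prefix can be passed on the command line: cmake -DCMAKE_PREFIX_PATH="C:/path/to/gtest/install" ..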
STACK_EXCHANGE
Duplicate record_id is generated when the option generate_record_id = "true" is set for US_ASCII

We are getting duplicate record_id values while reading a US_ASCII file with the following read options:

'encoding': 'ASCII', 'is_text': 'true', 'generate_record_id': 'true'

Actually the record_id is duplicated, but not the entire record.

Expected behaviour: with the above read options, record_id should be generated with a unique value. But we are getting duplicate record_id values, and noticed that one of the duplicated record_ids has none values for most of the columns, while the other one has proper values. For example:

Current output:
record_id col1 col2 col3
1 2 none none
1 2 UK Chicago
2 4 none none
2 4 Asia XXXX

Expected output:
record_id col1 col2 col3
1 2 UK Chicago
2 4 Asia XXXX

@yruslan - could you please take a look and help us find a solution?

Hi, thanks for the report. Looks like a very interesting bug. Keen to fix it. Could I ask you to attach the copybook, the file, and the exact code snippet you used, so it would be easier for us to reproduce? Also, could you try, instead of is_text = true, using .option("record_format", "D") or .option("record_format", "D2") and check if the issue still happens?

Sorry, it is a production issue and we are not able to get sample information due to security policy. Anyway, I will try the option which you suggested and let you know the results.

Hi @Loganhex2021, how is it going? The new upcoming version (0.2.10-SNAPSHOT, current master) has safeguards against partial record parsing caused by too-long ASCII lines. You can try whether it fixes your issue as well by any chance. Let me know if you have found a solution for your issue.

hi @yruslan, thanks for following up. We noticed an interesting thing in the source file: the actual source file size is ~400 MB, and each record length is 102 bytes. Almost every 32 MB we are getting a duplicate record_id, so we get 12 duplicate records for this 400 MB file (400 / 32 => 12). Is this helpful for identifying the root cause of this issue?
Yes, it is very helpful! Will try to reproduce.

Still can't reproduce. Could you please send the code snippet you use to load the data, and the spark-cobol version?

Please find below the code snippet to generate the sample file and the read options we used.

To generate the file (pyspark - databricks):

_source_path = '/test_ascii/test/triage_ascii_3.txt'
record = ''
for record_id in range(1, 23):
    if record_id == 1:
        record += (str(record_id).zfill(7) + 'dummydata'*10 + 'dum\r\n')
    else:
        record += record
with open(_source_path, 'w') as testfile:
    testfile.write(record)

To read the file (got 6 duplicates):

_ro = {'copybook_contents': ' 01 ASCII-FILE.\n 02 ID-COLUMN PIC X(7).\n 02 COL-TWO PIC X(09).\n 02 FILLER PIC X(86).\n',
       'is_text': 'true',
       'encoding': 'ASCII',
       'ebcdic_code_page': 'cp037',
       'string_trimming_policy': 'none',
       'debug_ignore_file_size': 'true',
       'generate_record_id': 'true'}
entity_df = spark.read.format("cobol").options(**_ro).load(_source_path)
entity_df.exceptAll(entity_df.drop_duplicates(['Record_Id'])).rdd.collect()

Spark version: 3.1.2, Scala 2.1.2, Cobrix version: 2.2.2

I see, thanks! Please try the latest master of Cobrix (2.4.10-SNAPSHOT ideally), or at least 2.4.9. Your issue might have been fixed already. In addition: remove ebcdic_code_page, debug_ignore_file_size, and is_text, and add record_format: 'D' (with pedantic: 'true').

Thanks @yruslan, I will try the suggested options.

With the record_format: 'D' option, if the last record has fewer bytes, then the record gets skipped.

Cool, glad to hear that record_id does not have duplicates.
Will try to reproduce the last-record issue you mentioned.

Hi, regarding "with the record_format: 'D' option, if the last record has fewer bytes, then the record gets skipped": I can't reproduce it; records that have at least one byte (even a space character) are not skipped. Probably the best course of action would be to wait for 2.4.10 to be released, then try updating the version of spark-cobol and check if the error is still there.
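For what it's worth, the reporter's generator can be restated as a self-contained sketch (plain Python, no Spark; the function name and pass count are illustrative). It shows why the buffer doubles on every pass after the first, which is how a short loop yields a multi-hundred-megabyte file of identical 102-byte records:

```python
# Restatement of the reporter's file generator, without Spark.
# One record is 102 bytes: 7-digit zero-padded ID + 'dummydata' * 10
# + 'dum' + CRLF. After the first pass the whole buffer is doubled
# each time, so `passes` iterations yield 2**(passes - 1) copies.

def build_records(passes: int) -> bytes:
    record = b""
    for record_id in range(1, passes + 1):
        if record_id == 1:
            record += (str(record_id).zfill(7).encode()
                       + b"dummydata" * 10 + b"dum\r\n")
        else:
            record += record  # doubling, as in the original snippet
    return record

if __name__ == "__main__":
    data = build_records(5)
    print(len(data) // 102)  # 16 records after 5 passes (2**4)
```

With the thread's 22 passes this gives 2**21 identical records, roughly 214 MB, which is in the ballpark of the ~400 MB production file being discussed.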
GITHUB_ARCHIVE
The goal of this exercise is to read and store an input file into a table, then validate certain fields within the input and output any error records. I need to read and store each policy group so that there are just 5 records stored in the table at a time instead of the entire file. So I need to read in a policy group, which is 5 records, do the processing, then read the next 5 records, etc., until the end of the file.

This is the input file:

10A 011111 2005062520060625
20A 011111000861038
32A 011111 79372
60A 0111112020 6 4
94A 011111 080 1
10A 02222 2005082520060825
20A 022221000187062
32A 022221 05038
60A 0222212003 6 4
94A 022221 090 1
....

I was able to load the first 5 records into a table by having my table OCCURS 5 TIMES, but I don't know how I would continue that. My code is below. (I wrote it just to see if it was working correctly, but it prints the header line with the first 4 records, instead of just the first 5.)

05 T1-RECORD-TABLE.
   10 T1-ENTRY OCCURS 5 TIMES INDEXED BY T1-INDEX.
      15 RECORD-TYPE-10 PIC X(80).
      15 RECORD-TYPE-20 PIC X(80).
      15 RECORD-TYPE-32 PIC X(80).
      15 RECORD-TYPE-60 PIC X(80).
      15 RECORD-TYPE-94 PIC X(80).

COPY TRNREC10.
COPY TRNREC20.
COPY TRNREC32.
COPY TRNREC60.
COPY TRNREC94.
.....

Z200-READ-FILES.
    READ DISK-IN INTO T1-ENTRY(T1-INDEX)
        AT END MOVE 'YES' TO END-OF-FILE-SW.
    WRITE PRINT-RECORD FROM T1-ENTRY(T1-INDEX).

I don't want a step-by-step for this (though that'd be nice :P) because I know WHAT I need to do, I just don't know HOW to do it, because my textbook and course notes are useless to me. I've been stuck on this for a while and nothing I try works.
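One common shape for the "read 5, process, read the next 5" loop is a pair of nested PERFORMs. This is a sketch using the question's own names (DISK-IN, T1-ENTRY, T1-INDEX, END-OF-FILE-SW); A100-PROCESS-GROUP is a hypothetical paragraph standing in for the validation and error-reporting step:

```cobol
       Z100-MAIN-LOOP.
      *    Outer loop: one iteration per policy group until EOF.
           PERFORM UNTIL END-OF-FILE-SW = 'YES'
      *        Inner loop: fill the 5-slot table, one record per slot.
               PERFORM VARYING T1-INDEX FROM 1 BY 1
                       UNTIL T1-INDEX > 5
                          OR END-OF-FILE-SW = 'YES'
                   READ DISK-IN INTO T1-ENTRY (T1-INDEX)
                       AT END MOVE 'YES' TO END-OF-FILE-SW
                   END-READ
               END-PERFORM
      *        Validate/print the group now held in the table.
               PERFORM A100-PROCESS-GROUP
           END-PERFORM.
```

Two caveats: if the file ends mid-group, A100-PROCESS-GROUP should check how many slots were actually filled before validating; and note that as declared, each T1-ENTRY is 5 x PIC X(80) (400 bytes), so READ ... INTO fills only its first 80 bytes - if the intent is one 80-byte record per slot, each entry should be a single PIC X(80), or the READ should target the specific record-type field for that slot.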
OPCFW_CODE
3DLabs WildCat Realizm 200 512MB DDR3 AGP
3D Labs 01-000092

Optimized for Running Multiple Applications Simultaneously
- Designed to minimize CPU load while driving the graphics pipeline at maximum capacity
- Innovative 16GB virtual memory support shatters the limits of onboard memory by automatically handling huge datasets while caching essential data for fastest access

Maximum Scalability, Maximum Performance
- Wildcat Realizm 200's Visual Processing Unit (VPU) offers industry-leading performance and programmability capabilities
- Huge fragment shader program support for 256K individual instructions with looping and conditionals, where competing technologies only support 64K
- Fragment processor has direct access to virtual memory, enabling generalized algorithms to be efficiently computed using large data buffers without concern for memory fragmentation
- Shader programs can access 32 different buffers in one pass, allowing complex algorithms to execute efficiently using an unlimited number of samples

Video Display Capabilities
- Industry's only
isochronous command channel with fast context switching and automatic hardware scheduling to ensure "glitch-free" effects with real-time video
- Dual-link, dual display for today's megapixel display requirements. Capable of driving resolutions of 3840 x 2400 at the highest refresh rates.

Optimized Dual-Display Acceleration
- Innovative VPU design allows improved graphics acceleration for your dual-display configurations
- High-resolution support and dual-display support give you more visual real estate on the desktop

Windows Acuity Manager
- Next generation display management technology for application and performance optimization and control
- Ergonomic, dual taskbar minimizes cursor and mouse movement for dual displays or 9.2-megapixel (3840 x 2400) displays

Minimal System Load = Maximum Graphics Acceleration
- 3Dlabs professional graphics driver works in close concert with Wildcat Realizm hardware to reduce your system CPU and memory load for all display-related activities

Unmatched VPU Performance
- The most advanced Visual Processing Unit (VPU) available today, offering unparalleled levels of performance, programmability, accuracy, and fidelity
- Optimized floating-point precision across the entire pipeline

The Most Memory Available on Any Graphics Card
- Handles more textures without stressing your system memory
- Provides ample frame buffer to support high-resolution, true-color displays, with SuperScene™ antialiasing for the ultimate in visual quality
- Precise floating-point conversions across the entire graphics pipeline maximize image accuracy, storage, and processing capabilities with zero performance impact
- Enough memory to provide off-screen memory to support Pbuffers while providing abundant memory for highly detailed, true-color 2D and 3D textures - all simultaneously

High Onboard Bandwidth
- High onboard bandwidth means professional performance
- 512-bit memory bus delivers the highest possible throughput

Hardware Accelerated 3D Volumetric Textures
- 3D textures are applied throughout the volume of a model, not just on the external surfaces - and it happens in real-time, for the precision display capabilities you demand

Supports 32 Lights in Hardware
- Designed to minimize any performance hits to your CPU and system memory

Extreme Geometry Performance
- Manipulate the most complex models easily in real-time
- Wildcat Realizm's VPU features full floating-point pipelines from input vertices to displayed pixels to offer you unparalleled levels of performance, programmability, accuracy and fidelity
- Genuine real-time image manipulation and rendering using advanced programmable features, so your projects are on spec and on time
- Graphics architecture is able to directly display 16-bit floating-point pixels with 3-channel, 10-bit video-rate alpha blending, 10-bit LUT and 8-bit WIDs
- Independent dual 400 MHz 10-bit DACs, creating the highest levels of displayed color resolution or performance

36-Bit High-Precision Floating-Point Vertex Pipeline
- Wildcat Realizm delivers images so accurate you won't worry about display anomalies or rendering errors on your next time-critical masterpiece
- At 12 pixels per clock cycle, Wildcat Realizm 200 processes pixels at astounding speeds
- Virtual shader program memory support up to 256K fragment shader instructions plus flow control and loops
- With Wildcat Realizm you get unmatched OpenGL® Shading Language performance and functionality to ensure robust execution and acceleration for industrial-strength shaders - from the company that initiated OpenGL Shading Language development

Remove the boundaries to your view of the world
- Innovative, advanced display features coupled with maximum programmability let your creativity take you further

64-Bit Hardware Accumulation Buffers
- Accelerated performance of accumulation buffer operations used in depth-of-field, motion blur, shadow, and multi-pass rendering algorithms.
- Provides a tangible appearance of depth, enhancing visual immersion into the 3D environments you create.

Multiview Option with Framelock/Genlock
- Most advanced framelock/genlock capabilities in the industry
- Facilitates multi-system video walls and supports genlock to a house sync source
- Supports tri-level sync for HDTV, bi-level sync for NTSC and PAL

- VPU technology for professional performance with professional results
- Full programmability and floating-point capabilities through the entire graphics processing pipeline
- Seamless 32- to 16-bit and 16- to 32-bit conversion with zero overhead
- AGP 8x interface for fast data transfer through the system bus
- Dual-display, dual-link DVI to double the digital display bandwidth (for true 3840 x 2400 resolution capabilities)
- 256-bit GDDR3 memory interface for higher memory performance
- SuperScene™ multisampling full-scene antialiasing support
- Texture sizes up to 4K x 4K
- Dedicated isochronous channel
- Orthogonal, compiler-friendly SIMD array throughout the pipeline, allowing compilers to deliver optimal performance
- Independent dual 400 MHz 10-bit DACs
- OpenGL® 2.0 (full support when ratified)
- OpenGL 1.5 with OpenGL Shading Language
- Microsoft DirectX® 9.0 with High Level Shader Language (HLSL, VS 2.0, PS 3.0)
- Supports optional Wildcat Realizm Multiview card for framelock/genlock capabilities
- Leading support for OpenGL Shading Language and DirectX 9 HLSL
- Full floating-point programmability: optimized floating-point precision at each pipeline stage (36-bit vertices, 32-bit pixels, 16-bit back-end pixel processing) for the highest precision rendering accuracy and fidelity
- 16 programmable 36-bit floating-point vertex shaders
- AGP 3.0, single-slot card.
Occupies two slots for quiet cooling solution
- Optimized for AGP 8x performance
- Requires auxiliary system power connection
- Compliant with AGP 3.0 graphics electromechanical and power specification
- Consumes 85 Watts of system power
- 512 MB GDDR3 unified memory with 256-bit-wide interface bus
- 64 KB of flashable EEPROM memory for VGA BIOS and product configuration storage
- Virtual memory support

Operating systems: Microsoft Windows 2000 and Microsoft Windows XP (32- and 64-bit); Windows drivers include 3Dlabs Acuity Windows Manager. Red Hat Linux Enterprise Edition (32- and 64-bit, ver 3.0 or later).

Two DVI-I Analog/Digital Video Output Ports - dual-link DVI capable of supporting the following configurations

Supports Wildcat Realizm Multiview Option Card: Multiview card supports framelocking, genlocking, synchronized framebuffer swap and synchronized refresh rate.

Processor: Intel® Pentium® 4, Athlon™ or compatible processor (Pentium 4, Athlon 64 or Opteron recommended)
Operating system: Microsoft® Windows® 2000, Windows XP or Red Hat® Linux Enterprise Edition (ver 3.0 or later)
Memory: 512 MB system memory recommended for best performance (128 MB minimum)
Disk storage: 25 MB free disk space
Connectivity: One AGP (3.0) slot with adjacent empty slot for cooling solution (AGP 8x recommended)
Other requirements: 85 Watts available system power for graphics card

Items: Wildcat Realizm 200 AGP 8x professional graphics accelerator; two DVI-VGA adapters for analog displays; auxiliary power extension cable; international installation guide; product CD with electronic manual, drivers and bonus software

Welcome to a new kind of Realizm ... where precision, speed and your creativity combine in ways you've only dreamed of. 3Dlabs puts the power of the industry's most advanced visual processing right at your fingertips with Wildcat Realizm 200.
3Dlabs' AGP 8x-based graphics solution delivers all the performance, image fidelity, and features you'd expect from a professional graphics accelerator. With Wildcat Realizm 200's no-compromise performance plus the industry's largest memory resources, you'll have more time to devote to your creativity. Wildcat Realizm graphics accelerators offer the highest levels of image precision. You get quality and performance in one advanced technology solution. So, whether you're working on realistic animations, intricate CAD renderings, or complex scientific visualizations - if you can imagine it, you can make it real with Wildcat Realizm.

Copyright 2014 - Puget Custom Computers
OPCFW_CODE
About the role

Suncorp believes in the value and the power of its data to improve its business performance. The Data Science & AI Centre of Excellence exists to accelerate the value that Suncorp acquires from its investments in turning data into tangible value. It does this in two ways:
- By developing important use-cases (experiments, prototypes, fully productionised instances) within the team that are integrated into our business processes at scale.
- By supporting strategic leaders and the broader data science community across the Suncorp Group to invest in the right initiatives and deliver these utilising best practices. We do this by developing strong business relationships and providing strategic advice to core Data Science & AI decision makers.

- Engage with business stakeholders, understand their pain points and opportunities, and work with them to solve their most difficult and impactful problems.
- Combine an inquisitive mind with an ability to understand a customer's business context, and translate that understanding into an innovative and iterative data science solution design that delivers value to business unit customers quickly.
- Create innovative data science technical solutions that you will execute to create value for our customers.
- Be accountable for the design and delivery of complex advanced data science components (data, models, algorithms, advice, deployment etc.) using onshore and offshore resources.
- Work within an agile framework to deliver incremental value to business customers throughout a project's lifecycle.
- Be accountable for the quality control of the core data science components (data, models, algorithms, advice etc.) being delivered to Group customers.
- Possess exceptional technical capability in one or more areas of data science/machine learning.
- Advise other team members (and wider Technology & Transformation and Group team members) on relevant matters related to data science "best practice".
- The Senior Data Scientist role is one of Suncorp's leading technical data scientist roles. The successful candidate will be expected to provide thought leadership on data science and AI 'best practice' through ongoing scans of the external environment, including the data science and AI ecosystem (e.g. platforms, tools and processes).

Skills & Experience:
- Superior results in a relevant advanced technical bachelor's degree in a quantitative subject area (e.g. statistics, actuarial studies, data science, mathematics, computer science).
- Membership of a relevant professional body (e.g. IAPA, Actuaries' Institute etc.) is valued, as the successful candidate is expected to be well connected to external best practice.
- Relevant experience relative to the position being applied for (in an advanced analytical or data science related field of work).
- Advanced competence in solving difficult problems using data science (advanced analytical) techniques.
- Advanced knowledge of the execution of a wide range of data science techniques, applied in a way that is focussed on delivering tangible value. Must have experience in delivering complex machine learning or multivariate predictive analytics solutions in a commercial context (e.g. generalised linear models, tree-based approaches, NLP techniques etc.).
- Proficient communication skills to be able to co-design solutions with customers; emotional intelligence and the ability to communicate complex ideas to a range of internal stakeholders. The Data Scientist and Senior Data Scientist roles will regularly engage with Executive-level stakeholders.
- Discounts of up to 25% on our various Insurance, Banking & Superannuation products
- Flexible working environment and arrangements; genuine focus on work-life balance
- Numerous discounts with our corporate partners (retail & shopping / travel & holiday / health & wellbeing)
- We offer support and various programs for our people: Employee Assistance Program (EAP), Health & Wellbeing, Study Support, Employee Referral Program ($600), facilities for nursing mothers, company share options, social club, and Years of Service recognition.

Suncorp is a leading financial services provider in Australia and New Zealand, enabling more than nine million customers to better protect and enhance their financial wellbeing. With a heritage dating back to 1902, we have grown to become a top-50 ASX-listed company with over 13,000 people. We offer banking, wealth management and insurance products and services through our well-recognised brands including Suncorp, AAMI, GIO, Apia, Shannons and Vero, as well as those from our partners. Working as part of the Suncorp Network, we believe we are at our best when our workforce is as diverse, talented and passionate as the communities in which we live and operate, and where our people feel included, valued and connected. We are passionate about inspiring our people by creating an inclusive culture, offering flexible work, career development and internal mobility, and building connected relationships amongst our team members and with our customers.
Joining Suncorp, you will be joining an organisation which cares and is proud of our achievements in being recognised for: - Best Insurance Company in Corporate Social Responsibility (2018) - Employer of Choice for Gender Equality for sixth consecutive year (2014-2019) - Money magazine’s Bank of the Year and Business Bank for a fourth consecutive year (2018-2021) - General Insurance Product Innovation of the Year (2018) If this opportunity sounds like the challenge you have been looking for, please apply online today. For further information regarding this position, please contact Rochelle.email@example.com At Suncorp we build inclusion by providing an environment where everyone is able to be themselves and feel valued, involved and respected for their perspectives and contribution. Advertised: 06 Sep 2021 AUS Eastern Standard Time Applications close: 20 Sep 2021 AUS Eastern Standard Time
OPCFW_CODE
The concept of Audit Trails is really simple. If you turn on Audit Trails in Microsoft CRM 2011, you can see who changed a field, the time it was changed, and what the previous value was. It will also track the full history of changes, not just the last change. No more wondering who made that change - now you will know not only who made the change, but when they did it. In this example, see how a staffing agency can find out how and when an error was entered regarding a job's pay rate and bill rate inside Microsoft Dynamics CRM 2011. Obviously you want your Bill Rate to be greater than your Pay Rate. In this case, the intended Pay Rate for this employee was $50.00 per hour and the Bill Rate $75.00 per hour. This screen shot shows an extra zero was added to the Pay Rate during the submittal set-up. Selecting Audit History in the Common area of the left navigation menu opens your audit trail. Here you can see the fields changed were Pay Rate and Bill Rate, and the values entered were 500.00 and 75.00 respectively (we will discuss how to stop this from being saved in upcoming blog posts). Notice you can observe when the changes were made, by whom, the event type, the field(s) changed, and the previous and new values. When the mistake is discovered, we can correct it on the submittal form. Now your correction will be logged in the Audit History. Most of your Microsoft Dynamics CRM data and operations can be tracked. This includes changes to shared privileges of a record, create, update, and delete operations on records, etc. How to enable auditing for your Microsoft CRM 2011 organization: remember, by default, auditing is disabled at the organization and entity level, but enabled on all auditable entity attributes. To enable audit trails at the organization and entity level, navigate to Settings > Administration > System Settings. In the System Settings window in Microsoft Dynamics CRM 2011, select the Auditing tab and then check the Start Auditing check box.
Hopefully you found this example of setting up Audit Trails for Microsoft CRM 2011 useful – but you may be asking yourself why you would even let someone enter a pay rate that is more than the bill rate. There are some limited areas where this could be the case. It is possible to lock this down either as part of the standard software or using other methods. For example, you can allow access for a certain administrative user or users with specific security clearance. In future blog posts we will look at other ways you can control this sort of data entry: 1. How field level security can be used to control who can see and edit data Learn more about our Microsoft Dynamics end-to-end staffing software.
The morbidity rate is shown as a percentage. It is calculated by dividing the number of cases of a disease, injury, or disability by the total population during a specific time period. For example, in a city with a population of 2 million where 10,000 people are suffering from a particular disease in one year, the morbidity rate is 10,000 ÷ 2,000,000 = 0.5%.

What are the measures of mortality? Mortality rate, or death rate, is a measure of the number of deaths (in general, or due to a specific cause) in a particular population, scaled to the size of that population, per unit of time. Mortality rate is typically expressed in units of deaths per 1,000 individuals per year; thus, a mortality rate of 9.5 means 9.5 deaths per 1,000 individuals per year.

What are morbidity statistics? Morbidity statistics is the branch of statistics concerned with the disease rate of a population or geographic region.
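The two formulas above are easy to check with a short calculation. Python is used here purely for illustration; the 19,000-deaths figure is an invented input chosen to reproduce the 9.5-per-1,000 example in the text.

```python
def morbidity_rate(cases, population):
    """Cases of a disease, injury, or disability divided by the total
    population during a specific time period, expressed as a percentage."""
    return cases / population * 100

def mortality_rate(deaths, population, per=1000):
    """Deaths scaled to population size, per `per` individuals per year."""
    return deaths / population * per

# The example from the text: 10,000 cases in a city of 2 million in one year.
print(morbidity_rate(10_000, 2_000_000))   # 0.5 (i.e. 0.5%)

# 19,000 deaths in the same city gives the 9.5-per-1,000 rate quoted above.
print(mortality_rate(19_000, 2_000_000))   # 9.5
```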
BPEL process instances are stateful; therefore, a client interacting with the BPEL engine must identify the particular instance with which it intends to interact in all of its communications. The BPEL specification defines a mechanism, correlation, which allows the process designer to specify which parts of an incoming message (i.e. a message going from a client to the BPEL server) should be used to identify the target process instance. Correlation is a powerful mechanism; however, it is a bit complicated and relies on "in-band" message data to associate a message with a process instance.

To keep simple cases simple, ODE provides an alternative correlation mechanism, implicit correlation, that automatically handles correlation through "out-of-band" session identifiers. The mechanism is simple: a unique session identifier is associated with every partner link instance. When a message is sent on a partner link, the session identifier is sent along with the message. The recipient is then able to use the received session identifier in subsequent communications with the process instance. Messages received by the BPEL engine that have a session identifier are routed to the correct instance (and partner link) by that session identifier.

There are two major use cases for the implicit correlation mechanism, requiring different levels of familiarity with the mechanism's details: process-to-process and process-to-service interactions. The former case deals with situations where an ODE BPEL process instance is communicating with another ODE process instance. The latter deals with situations where an ODE BPEL process instance is communicating with an external (non-ODE) service.

When an ODE process needs to communicate with other ODE processes, using implicit correlation is quite simple: simply omit the <correlations> element from the <invoke> activities.
The following is an example showing one process (processA) starting another (processB) and then being called back.

processA:

    . . .
    <invoke name="initiate" partnerLink="responderPartnerLink"
            portType="test:MSResponderPortType" operation="initiate"
            inputVariable="dummy"/>
    <receive name="callback" partnerLink="responderPartnerLink"
             portType="test:MSMainPortType" operation="callback"
             variable="dummy"/>
    . . .

processB:

    . . .
    <receive name="start" partnerLink="mainPartnerLink" variable="dummy"
             portType="resp:MSResponderPortType" operation="initiate"
             createInstance="yes"/>
    <invoke name="callback" partnerLink="mainPartnerLink"
            portType="resp:MSMainPortType" operation="callback"
            inputVariable="dummy"/>
    . . .

In the above example, ODE will use the implicit correlation mechanism because no explicit correlations are specified. Communication between the two processes will reach the correct instance as long as the same partner link is used. For a complete example check MagicSession. See the Stateful Exchange Protocol.
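The out-of-band routing idea described above can be illustrated outside of BPEL: the engine hands out a session identifier per partner-link instance, and any later message carrying that identifier reaches the same instance without in-band correlation data. A minimal sketch, with invented names (this is not ODE's actual implementation):

```python
import itertools

class ProcessInstance:
    """Toy stand-in for a BPEL process instance."""
    def __init__(self):
        self.received = []

    def receive(self, message):
        self.received.append(message)
        return message

class Engine:
    """Routes messages to instances by an out-of-band session identifier."""
    def __init__(self):
        self._ids = itertools.count(1)
        self._instances = {}          # session identifier -> process instance

    def create_instance(self, instance):
        # A unique session identifier is handed out per partner-link instance
        # and travels with every message, instead of in-band correlation data.
        session_id = f"session-{next(self._ids)}"
        self._instances[session_id] = instance
        return session_id

    def deliver(self, session_id, message):
        # The identifier alone is enough to reach the right instance.
        return self._instances[session_id].receive(message)

engine = Engine()
a, b = ProcessInstance(), ProcessInstance()
sid_a, sid_b = engine.create_instance(a), engine.create_instance(b)
engine.deliver(sid_b, "callback")
print(a.received, b.received)   # [] ['callback']
```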
Spatial data is ubiquitous. Massive amounts of data are generated every day from a plethora of sources such as billions of GPS-enabled devices (e.g., cell phones, cars, and sensors), consumer-based applications (e.g., Uber and Strava), and social media platforms (e.g., location-tagged posts on Facebook, Twitter, and Instagram). This exponential growth in spatial data has led the research community to build systems and applications for efficient spatial data processing. In this study, we apply a recently developed machine-learned search technique for single-dimensional sorted data to spatial indexing. Specifically, we partition spatial data using six traditional spatial partitioning techniques and employ machine-learned search within each partition to support point, range, distance, and spatial join queries. Adhering to the latest research trends, we tune the partitioning techniques to be instance-optimized. By tuning each partitioning technique for optimal performance, we demonstrate that: (i) grid-based index structures outperform tree-based index structures (from 1.23× to 2.47×), (ii) learning-enhanced variants of commonly used spatial index structures outperform their original counterparts (from 1.44× to 53.34× faster), (iii) machine-learned search within a partition is faster than binary search by 11.79% to 39.51% when filtering on one dimension, (iv) the benefit of machine-learned search diminishes in the presence of other compute-intensive operations (e.g., scan costs in higher-selectivity queries, Haversine distance computation, and point-in-polygon tests), and (v) index lookup is the bottleneck for tree-based structures, which could potentially be reduced by linearizing the indexed partitions.

The spatial intersection join is an important spatial query operation, due to its popularity and high complexity. The spatial join pipeline takes as input two collections of spatial objects (e.g., polygons).
In the filter step, pairs of object MBRs that intersect are identified and passed to the refinement step for verification of the join predicate on the exact object geometries. The bottleneck of spatial join evaluation is in the refinement step. We introduce APRIL, a powerful intermediate step in the pipeline, which is based on raster interval approximations of object geometries. Our technique applies a sequence of interval joins on 'intervalized' object approximations to determine whether the objects intersect or not. Compared to previous work, APRIL approximations are simpler, occupy much less space, and achieve similar pruning effectiveness at a much higher speed. Besides intersection joins between polygons, APRIL can directly be applied to, and has high effectiveness for, polygonal range queries, within joins, and polygon-linestring joins. By applying a lightweight compression technique, APRIL approximations may occupy even less space than object MBRs. Furthermore, APRIL can be customized to apply on partitioned data and on polygons of varying sizes, rasterized at different granularities. Our last contribution is a novel algorithm that computes the APRIL approximation of a polygon without having to rasterize it in full, which is orders of magnitude faster than the computation of other raster approximations. Experiments on real data demonstrate the effectiveness and efficiency of APRIL; compared to the state-of-the-art intermediate filter, APRIL occupies 2x-8x less space, is 3.5x-8.5x more time-efficient, and reduces the end-to-end join cost up to 3 times.

SheetReader: Efficient Specialized Spreadsheet Parsing

Spreadsheets are widely used for data exploration. Since spreadsheet systems have limited capabilities, users often need to load spreadsheets into other data science environments to perform advanced analytics. However, current approaches for spreadsheet loading suffer from either high runtime or memory usage, which hinders data exploration on commodity systems.
To make spreadsheet loading practical on commodity systems, we introduce a novel parser that minimizes memory usage by tightly coupling decompression and parsing. Furthermore, to reduce the runtime, we introduce optimized spreadsheet-specific parsing routines and employ parallelism. To evaluate our approach, we implement a prototype for loading Excel spreadsheets into R and Python environments. Our evaluation shows that our novel approach is up to 3x faster while consuming up to 40x less memory than state-of-the-art approaches. Our open source implementation of SheetReader for the R language is available at https://github.com/fhenz/SheetReader-r and has been downloaded more than 4K times.

Optimistic Data Parallelism for FPGA-Accelerated Sketching

Sketches are a popular approximation technique for large datasets and high-velocity data streams. While custom FPGA-based hardware has shown admirable throughput at sketching, the state-of-the-art exploits data parallelism by fully replicating resources and constructing independent summaries for every parallel input value. We consider this approach pessimistic, as it guarantees constant processing rates by provisioning resources for the worst case. We propose a novel optimistic sketching architecture for FPGAs that partitions a single sketch into multiple independent banks shared among all input values, thus significantly reducing resource consumption. However, skewed input data distributions can result in conflicting accesses to banks and impair the processing rate. To mitigate the effect of skew, we add mergers that exploit temporal locality by combining recent updates. Our evaluation shows that an optimistic architecture is feasible and reduces the utilization of critical FPGA resources proportionally to the number of parallel input values.
We further show that FPGA accelerators provide up to 2.6× higher throughput than a recent CPU and GPU, while larger sketch sizes enabled by optimistic architectures improve accuracy by up to an order of magnitude in a realistic sketching application.

Workload Prediction for IoT Data Management Systems

The Internet of Things (IoT) is an emerging technology that allows numerous devices, potentially spread over a large geographical area, to collect and collectively process data from high-speed data streams. To that end, specialized IoT data management systems (IoTDMSs) have emerged. One challenge in those systems is the collection of different metrics from devices in a central location for analysis. This analysis allows IoTDMSs to maintain an overview of the workload on different devices and to optimize their processing. However, as an IoT network comprises many heterogeneous devices with low computation resources and limited bandwidth, collecting and sending workload metrics can cause increased latency in data processing tasks across the network. In this ongoing work, we present an approach to avoid unnecessary transmission of workload metrics by predicting CPU, memory, and network usage using machine learning (ML). Specifically, we demonstrate the performance of two ML models, linear regression and a Long Short-Term Memory (LSTM) neural network, and show the features that we explored to train these models. This work is part of ongoing research to develop a monitoring tool for our new IoTDMS named NebulaStream.
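The transmission-avoidance idea in the workload-prediction abstract can be sketched with the simpler of its two models, linear regression: a device fits a trend over a sliding window of recent samples and only transmits a metric when the prediction misses it by more than a threshold. The window size, threshold, and sample values below are illustrative inventions, not figures from the paper:

```python
def fit_line(ys):
    """Least-squares slope/intercept for y over x = 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def transmissions(samples, window=4, threshold=5.0):
    """Indices of the samples a device would actually send."""
    sent = list(range(window))           # warm-up: send the first samples as-is
    for i in range(window, len(samples)):
        slope, intercept = fit_line(samples[i - window:i])
        predicted = slope * window + intercept
        if abs(predicted - samples[i]) > threshold:
            sent.append(i)               # prediction missed: transmit the real value
    return sent

cpu = [10, 12, 14, 16, 18, 20, 60, 22]   # steady trend with one spike
print(transmissions(cpu))   # [0, 1, 2, 3, 6, 7]: only the surprises are sent
```

Samples 4 and 5 follow the fitted trend exactly and are suppressed; the spike at index 6 (and the drop after it, which the spike-contaminated window mispredicts) must be transmitted.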
Implicit-explicit gradient of nondual awareness or consciousness-as-such

On Zoran Josipovic’s Implicit-explicit gradient of nondual awareness or consciousness-as-such

This is just an explanation of a paper that I found very provocative: 1) it provides a theory of nonduality grounded loosely in neuroscience and 2) it relates in a deep way to some of the core claims of Heidegger, which are important for understanding the grand mistake within analytic philosophy. Nonduality has typically been in the realm of Tibetan Buddhism. Consciousness research, as far as I can tell, has typically taken an approach more informed by sutric Buddhism, with its emphasis on emptiness. This paper is interesting partly because it brings the lens of consciousness science to nonduality.

A note on language: The paper (and my post) attempts to talk about some things that language is not very good at capturing. It is at various points vague and confusing. That’s going to be a mixture of my lack of understanding and the fact that the thing I’m pointing to is holistic, irreducible, and outside of the subject-object dichotomy (nondual), while language mostly functions with a subject-object structure (dualistic) and is reducible1. A big caveat is that many of the points I’m going to make are probably subtly wrong. I’ve done my best to be clear, but I hope this essay is treated as a loose provocation rather than a precise description of experience. The paper is filled with many provocations, and I’ve selected a couple here for a closer look. For the most part, I ignored the neuroscience (~40%) because I’m unfamiliar with the terminology.

Nondual awareness as the broadest way of knowing

The concept “Apples are red” is one type of knowing. It ‘exists’ in the prefrontal cortex of the brain. You might call this ‘factual’ or ‘propositional’ knowing. How to ride a bike is another type of knowing. You might call this ‘procedural’ knowing2.
A conventional view holds that all types of knowing ultimately reside in the brain, and that we could be ‘brains living in vats’ (ref: that we form explicit representations of all inputs and then act to minimize prediction error. The part nondual awareness people object to is the explicit-representation part). People who take nondual awareness seriously reject this view, and believe that there is a very broad form of knowing that unifies the brain, body, and environment in a non-conceptual but direct experience. It’s difficult to talk about with words, and sounds mystical until an experience in an altered state of mind (whether through substances, meditation, or by happenstance) ‘pulls back the veil’ of concepts and reveals this background knowing. ‘Nondual awareness’ points to this background knowing.

Nondual awareness is always present, on a gradient from explicit to implicit

Achieving and sustaining explicit nondual awareness is often the stated goal within Tibetan Buddhism3. Here are some dimensions of explicit nondual awareness4. From the paper:

- Being or presence—the obvious fact of awareness being present or phenomenally existing.
- Emptiness—an absence of conceptually assigned identity and conceptualizations about itself or phenomena that reify awareness as the subject and phenomena as objects essentially separate from it.
- Nonduality—a corollary of the above, without subject–object structuring of experience.
- No self/self—without a constructed self, but the self-same awareness in all experiences, hence also termed the self.
- Boundless, timeless spaciousness—single aware space, in itself without edges or boundaries, the background context of any experience, pervading and encompassing both internal and external environments, without a psychological sense of time.
- Ecstatic pleasure—near orgasmic-like enjoyment of contact between nondual awareness and phenomena, beyond the pleasure–pain dichotomy.
Nondual awareness as a mirror to phenomena

Imagine a room with furniture in it, and a large mirror. You are standing in front of the mirror, and all you can see is what appears in the mirror. The mirror simply reflects what is in the room, without changing any of the objects. The brightness of the room is global awareness. The amount of furniture is phenomenal content. Nondual awareness is the mirror, and it contains everything open to your perception. What is the function of nondual awareness? It unifies the external and internal, and provides space for conscious experience5.

Axes of consciousness. Consciousness is high dimensional, but consciousness researchers have often conceptualized it in two dimensions: global state and phenomenal content6. Global state is about how alert you are to the stuff in your field of awareness. Phenomenal content is about how much stuff is in your field of awareness.

| |Low global state||High global state|
|Low phenomenal content||Non-REM sleep||Open awareness meditation|
|High phenomenal content||Very tired while dancing at a concert||In the flow of a competitive basketball game|

Josipovic proposes that we should think of a third axis, nondual awareness, that is independent of these two axes. He proposes that it’s always there, but is either implicit, transitory, or explicit. I have a felt sense of what he’s pointing to here, but I can’t provide examples. The earlier attempts in the paper (described above) are as far as he (and thus I) can describe.

Nondual awareness is always obscuring itself from itself

This is right at the edge of my understanding, but I want to point out a parallel here. From the paper: “Nondual awareness is also the most intimate aspect of experience—who one is as a conscious, aware presence in all one’s experiences—so that the ways in which one is defensively distancing from one’s authenticity contribute to it remaining hidden and implicit.
The non-preferential, all-encompassing mode of knowing and experiencing that characterizes nondual awareness can trigger psychological defenses that keep unacceptable and threatening aspects of one’s experience from one’s conscious self, so that, at a subconscious level, allowing nondual awareness to become explicit may be experienced as threatening (Blackstone 2007; Lindahl et al. 2017). Nondual contemplative traditions point to an even deeper level, at which nondual awareness is obscured from itself by the unconscious indeterminate substrate, or store-house consciousness, which is thought to function as a container for storing memories, akin to the psychodynamic notion of the unconscious (Germano and Waldron 2006; Higgins 2019)”

In Hubert Dreyfus’ commentary on Heidegger, he points out: “Dasein’s way of being, however, is so unsettling that, just because it is constantly sensed, it is constantly covered up”7. I also think that there’s a deep relationship between Heidegger’s account of death and the response to emptiness that meditative practitioners will discuss8. (If you don’t know what Dasein (also sometimes translated as ‘Being’) is, you can read about it here, but you’ll probably not find this section very interesting.)

Dasein and nondual awareness are conceptually distinct, or at least they come with fairly different associations. Dasein is firmly rooted in the realm of Heideggerian thought, with its associations with other concepts like fallenness and authenticity. Nondual awareness is less context-laden, but hard to explicate for a different reason. I think they are talking about the same thing. There’s some contextual evidence for this. In Heidegger’s Hidden Sources, Reinhard May argues that Heidegger was heavily influenced by the eastern philosophy of both Daoism and Zen Buddhism. It’s only in the context of Josipovic’s paper that I’ve seen the explicit link.
This is incidentally partly why Heidegger is so damn hard to read; he was trying to fashion language to escape some of these problems ↩︎

Vervaeke’s 4 P’s of knowing ↩︎

Really, I mean Vajrayana Buddhism. For textual evidence, see Namkhai Norbu’s “The Crystal and the Way of Light”, The Six Yogas of Naropa, or the teachings of any Vajrayana master. ↩︎

It’s tempting to think of nondual awareness, like bliss or ecstasy, as ‘just another state’, but traditions hold explicitly that it is a ‘stateless state’ that is always present, and is paradoxically felt by non-doing, rather than doing ↩︎

My explanation here is unsatisfying to me. Many volumes of books have been written on the nature of nonduality. “In the Buddhist tradition, non-duality (advaya) is associated with the teachings of interdependence and emptiness (śūnyatā) and the two truths doctrine, particularly the Madhyamaka teaching of the non-duality of absolute and relative truth; and with the Yogachara notion of “mind/thought only” (citta-matra) or “representation-only” (vijñaptimātra).” Source ↩︎

See Bayne, Tim, Jakob Hohwy, and Adrian M. Owen. “Are there levels of consciousness?” Trends in Cognitive Sciences 20.6 (2016): 405-413. ↩︎

P. 33, Dreyfus, Hubert L. Being-in-the-World: A Commentary on Heidegger’s Being and Time, Division I. MIT Press, 1990. ↩︎

There is a section in Ngak’chang Rinpoche’s *Roaring Silence* that I can’t find right now that talks about the experience of emptiness as being akin to confronting death. This is a speculative loose link for now. ↩︎
Tap the Delete button that appears. Tick the checkboxes next to the corresponding deleted messages that you want to recover.

Please give your language models unique names within your session if you want to switch between them, so there is no danger of the engine getting confused between new and old models and dictionaries at the time of switching. The audio session mode can be set to "Default" to use AVAudioSessionModeDefault, "VoiceChat" to use AVAudioSessionModeVoiceChat, "VideoRecording" to use AVAudioSessionModeVideoRecording, or "Measurement" to use AVAudioSessionModeMeasurement. If you don't set it to anything, "Default" will automatically be used.

A valid utterance for this ruleset will obey all of the following rules in sequence in a single complete utterance: exactly one of "DO THE FOLLOWING" or "INSTRUCTION"; next, exactly one of "GO" or "MOVE"; then one of "LEFT", "RIGHT", or "FORWARD"; then one of "10", "20", or "30"; and there can be an optional single statement of the phrase "THANK YOU" at the end. OELanguageModelGenerator no longer has any case preference when inputting text, so you don't have to be concerned about whether your input is capitalized or not; you only have to pay attention in your own app implementation that phrases you are trying to detect are matchable against the case you actually used to create your model using this class. SmartCMN is disabled during testing so that the test gets the same results when run for different people and for different devices.

Method 4: The button will be labeled Mobile Data for British users.

To prevent the iCloud backup file from being updated and modified, please don't connect your iPhone to the computer during the whole process of the iPhone SMS recovery. Generally, the more data types you choose, the more time it takes to scan. Click Start Scan. When the scan finishes, all the data found will be listed by categories.

Use pathToSuccessfullyGeneratedLanguageModelWithRequestedName: to get your paths to your newly generated language models, grammars (.gram files), and corresponding phonetic dictionaries for use with OEPocketsphinxController. If it returns nil, you can use the method pathToSuccessfullyGeneratedDictionaryWithRequestedName:.

OEPocketsphinxController properties:
(BOOL) isSuspended
(BOOL) removingNoise [read, write, nonatomic, assign]: try not to decode probable noise as speech (this can result in more noise robustness, but it can also result in omitted segments; defaults to YES, override to set to NO)
(BOOL) removingSilence [read, write, nonatomic, assign]: try not to decode probable silence as speech (this can result in more accuracy, but it can also result in omitted segments; defaults to YES, override to set to NO)
(float) vadThreshold [read, write, nonatomic, assign]: speech/silence threshold setting. For the Spanish model, higher values can be used.

You must do this before releasing a parent view controller that contains OEPocketsphinxController.

Method 2: Select a conversation from the Messages menu.

Method 3: Deleting Multiple Conversations. 1. Open your iPhone's Messages. This wikiHow teaches you how to delete messages from the Messages app on an iPhone. If you downloaded any media from the conversation to your Camera Roll, it will still be stored there.

Step 3: Save to a .csv file or select Recover to Device to move the found text messages back to the iPhone. Highlight the Messages subcategory of Messages & Call log; all the contacts who had conversations (SMS, MMS, iMessages) with you will be displayed on the right. Actually, as long as you have enabled iCloud backup of your iPhone, iCloud will automatically back up all the content and settings on your iPhone when your iPhone is connected to a power source and a Wi-Fi network and is screen-locked, even if you have never done it manually.
Vision for macros in Swift

Updated and revised from the gist initially posted to the forums back in October (https://forums.swift.org/t/a-possible-vision-for-macros-in-swift/60900), reflecting improvements to the design.

General thoughts: When you discuss conformance macros, you allude to it being possible to have a single macro declaration with multiple roles. How does this work? Is the macro only valid to apply in a place where all of its declared roles make sense? (Can a macro declaration be overloaded on role to support a disjunction of roles under the same name?) Do all of the roles have to be implemented in the same #externalMacro type, or can different roles be implemented by different types?

Some of the role names use compiler jargon, like codeItem and witness, that I dearly hope we don’t end up using in the actual feature.

Nit: I hope @attached(peer) gets workshopped a little more—it’s kinda strange to have it have the same name but a very different function.

@beccadax the "attached macros" proposal has a lot more detail about macro roles and what it means to inhabit several different roles. Does that address your questions/concerns? Do you feel that the discussion needs to be lifted up into the vision document? I'm happy to workshop names like "codeItem" and "witness" a bit more.

> the "attached macros" proposal has a lot more detail about macro roles and what it means to inhabit several different roles. Does that address your questions/concerns?

Maybe it's subtle enough that I overlooked it, but I don't quite see the answer to my question there; you mention that a macro can have multiple roles and that the compiler expands each of them, but I don't think you talk about what happens when only some of those roles are applicable. Let me make the question more concrete.
Consider the Clamping macro described in that document:

```swift
@attached(peer, prefixed(_))
@attached(accessor)
macro Clamping<T: Comparable>(min: T, max: T) = #externalMacro(module: "MyMacros", type: "ClampingMacro")
```

What happens if I do this?

```swift
struct MyStruct {
  @Clamping(min: 0, max: 255)
  func fn() { ... }
}
```

Clamping has both an @attached(peer) role, which is perfectly valid to use on a method, and an @attached(accessor) role, which is only valid on a property (or subscript, I suppose). Does the Swift compiler expand the @attached(peer) role but not the @attached(accessor) role? Or does it diagnose an error because you tried to apply Clamping in a place where the @attached(accessor) role is not valid? (ClampingMacro.expansion(of:providingPeersOf:in:) could, of course, diagnose an attempt to use it on anything but a var. For the sake of argument, assume that it doesn't. Maybe its designer wants it to apply a postcondition to the return value when it's used on a method.)

> Do you feel that the discussion needs to be lifted up into the vision document?

Probably not the whole discussion, but a brief mention of what it means when a macro has multiple roles might help readers understand how different roles are supposed to be used in concert.

@beccadax we should move this discussion elsewhere. Attached macros has a pitch.

> Maybe it's subtle enough that I overlooked it, but I don't quite see the answer to my question there; you mention that a macro can have multiple roles and that the compiler expands each of them, but I don't think you talk about what happens when only some of those roles are applicable.

I've added a paragraph to this effect in the attached macros proposal (https://github.com/apple/swift-evolution/pull/1932/commits/7add6441c3d55643a38dc24ace02657eb3385b03), but I think further discussion should occur on the forums, not here.
> Probably not the whole discussion, but a brief mention of what it means when a macro has multiple roles might help readers understand how different roles are supposed to be used in concert.

We have the Clamping example where a single macro inhabits multiple roles, and they are used in concert to achieve the property-wrapper effect. This seems sufficient to me. Again, I think we should continue this discussion on the forums.
2014 IEEE Vehicular Networking Conference (VNC)

Vehicular networking and communication systems is an area of significant importance in our increasingly connected and mobile world. Effective vehicular connectivity techniques can significantly enhance efficiency of travel, reduce traffic incidents and improve safety, mitigate the impact of congestion, and overall provide a more comfortable experience. Towards this goal, the 2014 IEEE Vehicular Networking Conference (VNC) seeks to bring together researchers, professionals, and practitioners to present and discuss recent developments and challenges in vehicular networking technologies, and their applications.

P. R. Kumar

P. R. Kumar holds the College of Engineering Chair in Computer Engineering at Texas A&M University. His current research is focused on energy systems, wireless networks, secure networking, automated transportation, and cyberphysical systems. He obtained his B. Tech. from IIT Madras, and his D.Sc. from Washington University, St. Louis. He is a member of the National Academy of Engineering of the USA, and a fellow of the World Academy of Sciences. He was awarded an honorary doctorate by the ETH, Zurich. He received the Outstanding Contribution Award of ACM SIGMOBILE, the IEEE Field Award for Control Systems, the Donald P. Eckman Award of the American Automatic Control Council, and the Fred W. Ellersick Prize of the IEEE Communications Society. He is an ACM Fellow and a Fellow of IEEE. He was awarded the Distinguished Alumnus Award from IIT Madras, the Alumni Achievement Award from Washington University in St. Louis, and the Daniel C. Drucker Eminent Faculty Award from the College of Engineering at the University of Illinois. He is an Honorary Professor at IIT Hyderabad, and a D. J. Gandhi Distinguished Visiting Professor at IIT Bombay.

Katrin Sjöberg

Katrin Sjöberg is a connected vehicle technology specialist at Advanced Technology & Research, Volvo Group Trucks Technology in Sweden.
For over a decade, she has been working in the field of wireless communication and her research interests range from channel modeling to applications for vehicular networks. She is very active in the CAR 2 CAR Communication Consortium (C2C-CC) and in the European standardization on cooperative intelligent transport systems (C-ITS) within ETSI Technical Committee on ITS (ETSI TC ITS). In April 2013, she defended her PhD thesis entitled "Medium Access Control for Vehicular Ad Hoc Networks". Industry PanelChaired by Fan Bai, General Motors, USA, and Michael Wagner, Adam Opel AG, Germany, the industry panel brings together leading industry experts. - Fan Bai, General Motors - Michael Wagner, Adam Opel AG - Tim Leinmüller, DENSO AUTOMOTIVE Deutschland GmbH - Andre Rolfsmeier, dSPACE - Katrin Sjöberg, Volvo - Jim Lanford, CSR Technologies October 13, 2014 Katrin Sjöberg, Volvo, agreed to give the industry keynote. October 10, 2014 Check the preliminary program page for details on our keynotes, industry panel, and an IEEE Young Professional Event hosted at VNC. October 10, 2014 Selected papers will be considered for publication in a special issue of the Elsevier Ad Hoc Networks journal. October 6, 2014 Poster and Demo submission deadline extended to October 15. Submit now. September 19, 2014 Hotel information is available on the venue page. September 8, 2014 Submission for full and short papers closed. August 22, 2014 PR Kumar, Texas A&M University agreed to give the academic keynote. July 8, 2014 Call for posters and demos published. April 25, 2014 Call for papers published. Important datesPoster/Demo Submissions Deadline: October 15, 2014 (extended) Acceptance Notification: October 25, 2014 (full and short papers), October 27, 2014 (posters and demos) Camera-Ready Paper Due: November 10, 2014 (all papers, posters, and demos) Go to the VNC 2013 website.
OPCFW_CODE
plugwash wrote: Basically there are four main possibilities. 1: True PoE. This is the least risky solution because it has a load of protection systems built in, but it requires relatively complex (and therefore relatively expensive) hardware at both ends to inject the power and to extract and convert it.

Not tried that.

plugwash wrote: 2: 5V ghetto PoE. This sounds like a good idea at first, but due to cable resistance it simply will not work properly over nontrivial cable lengths. The voltage drop will just be too much.

Yep, works fine for modest cable lengths (blog post).

plugwash wrote: 3: 12V ghetto PoE with a switched-mode converter (you DO NOT want to try to use a linear regulator; it will waste a lot of power and get bloody hot) at the Pi end. With the right DC-DC converter and all wires used in the cable, it should be possible to get up to 50-60 m with such a setup. I wonder why the astronomy guy failed; my guess is that his eBay adaptors were using two wires rather than four.

I assume by "astronomy guy" you mean me, and yes, even with 12V and a DC-DC converter I only got this to work at 30 m, not 40 m (blog post). Your hunch about the passive PoE adaptors not using as many wires as possible is intriguing.

plugwash wrote: 4: 24V ghetto PoE with a step-down converter at the Pi end. This should work fine out to the 100 m maximum distance of Ethernet (and probably beyond); the main downside is that you will likely have to buy the 24V PSU specifically (whereas a 12V PSU you are likely to have lying around).

This has worked for me at 40 m (I don't currently have a longer Ethernet cable or a need to try any longer distances) (24V PoE blog post).

plugwash wrote: I also notice the astronomy guy tried to use higher voltages to compensate for cable loss without using a DC-DC converter. I strongly recommend against this practice, because it will make the voltage delivered to the Pi swing wildly depending on what is currently drawing current, and it carries a high risk of damaging the Pi.

Again I presume that was me you were talking about. That was a one-off experiment (blog post), and I was monitoring the voltage at the Raspberry Pi via the test points while doing it (rather than ramping up the voltage blindly). One of the nice things about the low cost of the Raspberry Pi is that I feel a bit less worried about damaging it.

I had the 40 m 24V PoE setup running indoors for a few weeks (filming mice invading the house), and noticed that every so often the Pi would lock up. The webserver stopped responding, and I couldn't SSH in. After rebooting, I could see it had also stopped recording images. I didn't establish whether this was an intermittent power issue, some other hardware issue (perhaps overheating in the closed case I was trying), or a software problem. I'm intending to use the 40 m 24V PoE setup in my garden this spring (for monitoring bird nesting boxes rather than telescope usage).
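The cable-resistance argument in this thread can be made concrete with a quick back-of-the-envelope calculation. Below is a minimal sketch; the resistance figure is an assumed nominal value for 24 AWG Cat5e copper (roughly 0.094 Ω per metre per conductor), the ~3 W load is an assumed Pi power draw, and the model ignores DC-DC converter efficiency, so treat the numbers as illustrative only:

```python
def voltage_at_pi(supply_v, load_w, length_m, wires_per_leg=2, ohms_per_m=0.094):
    """Estimate the voltage arriving over passive ('ghetto') PoE.

    Current flows out on one leg and back on the other, so loop
    resistance is twice the one-way resistance; paralleling wires
    per leg (four-wire adaptors) divides it. Cheap two-wire
    adaptors would use wires_per_leg=1, doubling the loss.

    For a constant-power load, V = supply_v - (load_w / V) * R,
    i.e. V^2 - supply_v*V + load_w*R = 0; we take the larger root.
    Returns None if the cable loss exceeds what the supply can deliver.
    """
    r_loop = 2 * length_m * ohms_per_m / wires_per_leg
    disc = supply_v ** 2 - 4 * load_w * r_loop
    if disc < 0:
        return None
    return (supply_v + disc ** 0.5) / 2

# At 30 m with a ~3 W load, a 5V feed cannot sustain the load at all,
# 12V sags noticeably, and 24V loses only a small fraction.
for v in (5, 12, 24):
    print(v, "V supply ->", voltage_at_pi(v, 3.0, 30))
```

This matches the experience reported above: 5V ghetto PoE collapses over nontrivial runs, 12V is marginal, and 24V with a step-down converter at the Pi end has plenty of headroom.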
OPCFW_CODE
perf: Simplify and optimize interest management

In this benchmark I spawn 1000 objects with a NetworkProximityChecker. This PR rebuilds the observer list by reusing sets and dictionaries.

| Branch | Garbage | Time    |
|--------|---------|---------|
| master | 285.2K  | 7.11 ms |
| #822   | 0       | 2.64 ms |

Before:

After:

Fails if no proximity checkers are used, at least in uMMORPG:

```
NullReferenceException: Object reference not set to an instance of an object
  at Mirror.ClientScene.SpawnSceneObject (System.UInt64 sceneId) [0x00001] in /Users/qwerty/x/dev/project_uMMORPG/Repository/Source/Assets/uMMORPG/Plugins/Mirror/Runtime/ClientScene.cs:169
  at Mirror.ClientScene.OnSpawnSceneObject (Mirror.NetworkConnection conn, Mirror.SpawnSceneObjectMessage msg) [0x000cb] in /Users/qwerty/x/dev/project_uMMORPG/Repository/Source/Assets/uMMORPG/Plugins/Mirror/Runtime/ClientScene.cs:443
  at Mirror.MessagePacker+<>c__DisplayClass6_0`1[T].<MessageHandler>b__0 (Mirror.NetworkMessage networkMessage) [0x0006c] in /Users/qwerty/x/dev/project_uMMORPG/Repository/Source/Assets/uMMORPG/Plugins/Mirror/Runtime/MessagePacker.cs:128
  at Mirror.NetworkConnection.InvokeHandler (System.Int32 msgType, Mirror.NetworkReader reader) [0x00036] in /Users/qwerty/x/dev/project_uMMORPG/Repository/Source/Assets/uMMORPG/Plugins/Mirror/Runtime/NetworkConnection.cs:205
  at Mirror.NetworkConnection.TransportReceive (System.Byte[] buffer) [0x00068] in /Users/qwerty/x/dev/project_uMMORPG/Repository/Source/Assets/uMMORPG/Plugins/Mirror/Runtime/NetworkConnection.cs:236
  at Mirror.NetworkClient.OnDataReceived (System.Byte[] data) [0x0000e] in /Users/qwerty/x/dev/project_uMMORPG/Repository/Source/Assets/uMMORPG/Plugins/Mirror/Runtime/NetworkClient.cs:126
  at UnityEngine.Events.InvokableCall`1[T1].Invoke (T1 args0) [0x00011] in <2bdf0ff7c0a14dfe9c7464c95135858f>:0
  at UnityEngine.Events.UnityEvent`1[T0].Invoke (T0 arg0) [0x00023] in <2bdf0ff7c0a14dfe9c7464c95135858f>:0
  at Mirror.TelepathyTransport.ProcessClientMessage () [0x0003c] in /Users/qwerty/x/dev/project_uMMORPG/Repository/Source/Assets/uMMORPG/Plugins/Mirror/Runtime/Transport/TelepathyTransport.cs:69
  at Mirror.TelepathyTransport.LateUpdate () [0x00005] in /Users/qwerty/x/dev/project_uMMORPG/Repository/Source/Assets/uMMORPG/Plugins/Mirror/Runtime/Transport/TelepathyTransport.cs:96
```

Replaced by #899
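The allocation-reduction technique the PR describes, keeping long-lived sets and dictionaries alive and clearing them between rebuilds rather than constructing fresh ones each time, can be sketched outside C#. The class and method names below (`ProximityGrid`, `rebuild_observers`) are illustrative only, not Mirror's actual API; this Python version just demonstrates the reuse pattern:

```python
class ProximityGrid:
    """Illustrative observer rebuild that reuses one scratch set.

    The point of the pattern is that each rebuild clears and refills
    existing collections instead of allocating new ones, so steady-state
    rebuilds produce no garbage for the collections themselves.
    """

    def __init__(self, vis_range=10.0):
        self.vis_range = vis_range
        self.scratch = set()  # reused across every rebuild

    def rebuild_observers(self, entity, all_entities, out_observers):
        self.scratch.clear()  # reuse, rather than set() per rebuild
        ex, ey = entity["pos"]
        for other in all_entities:
            ox, oy = other["pos"]
            if (ex - ox) ** 2 + (ey - oy) ** 2 <= self.vis_range ** 2:
                self.scratch.add(other["id"])
        # refill the caller's long-lived set in place as well
        out_observers.clear()
        out_observers.update(self.scratch)

grid = ProximityGrid(vis_range=10.0)
world = [{"id": i, "pos": (i * 3.0, 0.0)} for i in range(10)]
observers = set()
grid.rebuild_observers(world[0], world, observers)
print(sorted(observers))  # → [0, 1, 2, 3]: entities within 10 units of entity 0
```

In a managed runtime like Unity's, avoiding those per-rebuild allocations is what takes the measured garbage from 285.2K to 0 in the benchmark table above; the same clear-and-refill idiom applies to the dictionaries involved.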
GITHUB_ARCHIVE
I would give him my assignments a day before the deadline and he would get them done without hesitation, and I kept getting full marks on my projects and assignments. I'm a very busy person; working while going to high school is absolutely demanding, but with Sam around you can sleep peacefully, without stress. He is extremely helpful and will accommodate your preferences, urgency, and quality requirements. I read the testimonials, and other people were complaining about the prices he charges; I'd say, if you need your work completed in a single day, who else would do it? No one but Sam, and the quality is 100%. I would highly recommend his services; just ask him and he will get through your assignments with full attention and error-free. I was a troubled student having a hard time in my program, but using his services I am now close to getting my diploma. Thank you so much, Sam; I really appreciate your services.

My friend often used to take help from Assignment Desk, and he was able to achieve good grades too. So this time I also thought of taking Java assignment help from them. Their writers solved all my assignment-writing problems, and I scored good grades this time as well. Thanks, Assignment Desk!

This system allows everyone to manage their taxes well, and it pushes the world towards a better-taxed environment. Java assignment help services are the services provided by Codingparks.com for customers who need help with Java assignments and project development. The Java assignment help service is essentially divided into two service models, as follows: A booking system will ease the way people book seats and enjoy access to matches. Here's a system that simplifies lives.

Here's to all the final-year students: don't be afraid, I am here to help you out. You can always count on me and make the best use of the available time and resources to build a project that will help you fetch excellent grades. An average student finds it difficult to handle the incessant flow of case studies, research papers, reports, and essays of various styles. Considering that all of these tasks have to be written down, academic assistance is a highly demanded service these days.

A student monitoring system differs from a system that stores data for students. A tracking system will keep a tab on the performance, health, and requirements of the children. You can even have two people working at the boards as "Study Recorders" that the senior researchers report back to, if you want to have roles for more students.

Specify the path to the JAR file. (To the right of the Path to JAR field, click and select the JAR file from the dialog that opens.) The rest of the settings don't matter in this case; however, there is one more thing we'll do, just for convenience.

To check the quality of the work that we deliver to our clients, you can go through the free Java assignment samples and examples that are available on our website. So approach us now to avail excellent Java programming assignment help at the most reasonable prices! Students from around the world have made use of our Java programming homework help. Our experts also offer online Java programming homework help to students as well as working professionals, as per their needs. So if you need help with a Java programming assignment, one click and you can book your air ticket.

Getting out into the world and seeing the best of everything makes your life worth living. You can enjoy a wonderful life. Students can start working on this project and lead a better life. Here, for all the schools and universities, is a system that makes data administration easier and fun. School data management is for every student, and it is one of the smartest Java project ideas to work on.
OPCFW_CODE