[Neighborino]’s smart home system controls the windows, blinds, outlets, and HVAC. But by the time the high-rise apartment was ready for occupancy in 2015, the smart home controllers were already showing their age. You see, the contractor had installed an app to run the home’s programmable logic controllers (PLCs) on stock Galaxy Tab 3 hardware. Yes, that’s a tablet originally released in 2013. They then built the tablets into the wall of each apartment, dooming the homeowner to rely on the vendor forevermore. It was not long before [Neighborino] and their fellow residents were dealing with stability problems. Bloatware from both Samsung and Google was causing major slowdowns, and the PLC system’s unpublished WiFi password prevented replacement of the controllers. Being an Android developer by trade, [Neighborino] laid siege to the walled garden before them. The writeup details the quest to pull off what would have been a straightforward hack on anything but the x86 hardware being targeted. The first fruit of [Neighborino]’s efforts was a hack for the aged tablets that would display the WiFi password, allowing owners to connect their own controllers to their smart homes. Of course, this is Hackaday, so you know that [Neighborino] didn’t stop there. Despite having to deal with two different versions of Android and tablets that were built into the walls of the apartments of non-hacker neighbors, [Neighborino] succeeded in sideloading an APK. This freed them from the shackles of the company that installed the original system and wrings extra life out of their Snowden-era Samsungs. A de-bloating tool frees up memory and restores the tablets to something approaching acceptable performance. A reboot scheduler keeps the x86 tablets running without user intervention, and of course the WiFi password revealer makes yard waste out of the previously walled garden. Wow.

[Dmitry Grinberg] just broke into the SROM on Cypress’ PSoC 4 chips.
The supervisory read-only memory (SROM) in question is a region of proprietary code that runs when the chip starts up, and in privileged mode. It’s exactly the kind of black box that’s a little bit creepy, and a horribly useful target for hackers if it can be broken open. What’s inside? The manual just says “The user has no access to read or modify the SROM code.” Nobody outside of Cypress knows. Until now. This matters because the PSoC 4000 chips are among the cheapest ARM Cortex-M0 parts out there. Consequently, they’re inside countless consumer devices. Among [Dmitry]’s other tricks, he’s figured out how to write into the SROM, which opens the door to an undetectable rootkit on the chip that runs on every reset. That’s the scary part. The cool parts are scattered throughout [Dmitry]’s long and detailed writeup. He also found that the chips that have 8 K of flash actually have 16 K, and access to the rest of the memory is enabled by setting a single bit. This works because flash is written using routines that live in SROM, rather than the usual hardware-level write-to-register-and-wait procedure that we’re accustomed to on other micros. Of course, because it’s all done in software, you can also brick the flash by writing the wrong checksums. [Dmitry] did that twice. Good thing the chips are inexpensive.

At the 2010 Chaos Communication Congress, fail0verflow (that’s a zero, not the letter O) demonstrated their jailbreak of the PS3. At the 2013 CCC, fail0verflow demonstrated console hacking on the Wii U. In the two years since, this has led to an active homebrew scene on the Wii U, and the world is a better place. A few weeks ago, fail0verflow teased something concerning the PlayStation 4. While this year’s announcement is just a demonstration of running Linux on the PS4, fail0verflow can again claim their title as the best console hackers on the planet. Despite being able to run Linux, there are still a few things the PS4 can’t do yet.
The current hack does not have 3D acceleration enabled; you won’t be playing video games under Linux with a PS4 any time soon. USB doesn’t work yet, and that means the HDD on the PS4 doesn’t work either. That said, everything to turn the PS4 into a basic computer running Linux – serial port, framebuffer, HDMI encoder, Ethernet, WiFi, Bluetooth, and the PS4 blinkenlights – is working. Although the five-minute lightning talk didn’t go into much detail, there is enough information on their slides to show what a monumental task this was. fail0verflow changed 7443 lines in the kernel, and discovered the engineers responsible for the southbridge in the PS4 were ‘smoking some real good stuff’. This is only fail0verflow’s announcement that Linux on the PS4 works, and the patches and bootstrap code are ‘coming soon’. Once this information is released, you’ll need to ‘Bring Your Own Exploit™’ to actually install Linux.

There are a lot of malware programs in the wild today, but luckily we have methods of detecting and removing them. Antivirus is an old standby, and if that fails you can always just reformat the hard drive and wipe it clean. That is, unless the malware installs itself in your hard drive firmware. [MalwareTech] has written his own frightening proof-of-concept malware that does exactly this. The core firmware rootkit needs to be very small in order to fit in the limited memory space on the hard drive’s memory chips. It’s only a few KB in size, but that doesn’t stop it from packing a punch. The rootkit can intercept any IO to and from the disk or the disk’s firmware. It uses this to its advantage by modifying data being sent back to the host computer. When the computer requests data from a sector on the disk, that data is first loaded into the disk’s cache. The firmware can modify the data sitting in the cache before notifying the host computer that the data is ready. This allows the firmware to trick the host system into executing arbitrary code.
[MalwareTech] uses this ability to load his own custom Windows XP bootkit called TinyXPB. All of this software is small enough to fit on the hard drive’s firmware. This means that traditional antivirus cannot detect its presence. If the owner of the system does get suspicious and completely reformats the hard drive, the malware will remain unharmed. The owner cannot even re-flash the firmware using traditional methods since the rootkit can detect this and save itself. The only way to properly re-flash the firmware would be to use an SPI programmer, which would be too technical for most users. There are many more features and details to this project. If you are interested in malware, the PDF presentation is certainly worth a read. It goes much more in-depth into how the malware actually works and includes more details about how [MalwareTech] was able to actually reverse engineer the original firmware. If you’re worried about this malicious firmware getting out into the wild, [MalwareTech] assures us that he does not intend to release the actual code to the public.

While Black Hat and Defcon have both concluded, we’re going to post a few more talks that we think deserve attention. [Sherri Sparks] and [Shawn Embleton] from Clear Hat presented Deeper Door, exploiting the NIC chipset. Windows machines use NDIS, the Network Driver Interface Specification, to communicate between the OS and the actual NIC. NDIS is an API that lets programmers talk to network hardware in a general fashion. Most firewalls and intrusion detection systems monitor packets at the NDIS level. The team took a novel approach to bypassing machine security by hooking directly to the network card, below the NDIS level. The team targeted the Intel 8255x chipset because of its open documentation and availability of compatible cards like the Intel PRO/100B.
They found that sending data was very easy: write a UDP packet to a specific memory address, check to make sure the card is idle, and then tell it to send. The receive side was slightly more difficult, because you have to intercept all inbound traffic and filter out the replies you want from the legitimate packets. Even though they were writing low-level, chipset-specific code, they said it was much easier to implement than writing an NDIS driver. While certainly a clever way to implement a covert channel, it will only bypass an IDS or firewall on the same host, not one elsewhere on the network.
import tkinter as tk
from tkinter import messagebox

from utils import index_to_pixel, pixels_to_indices
from constants import (BOARD_WIDTH_PIXELS, BOARD_SIZE, PIECE_RADIUS_PIXELS,
                       VersusMode, CENTER_POINT)


class GameBoardUI(tk.Canvas):
    def __init__(self, master, model, height, width):
        super().__init__(master, height=height, width=width)
        self.pack()
        self.game_model = model
        self.master.master.config(menu=self.get_menu_bar())
        self.versus_mode = None
        self.bind('<Button-1>', self.on_click)

    def draw_board_canvas(self):
        """Initialize game board"""
        for i in range(BOARD_SIZE):
            # draw vertical lines
            self.create_line(index_to_pixel(i), index_to_pixel(0),
                             index_to_pixel(i), index_to_pixel(BOARD_SIZE - 1))
            # draw horizontal lines
            self.create_line(index_to_pixel(0), index_to_pixel(i),
                             index_to_pixel(BOARD_SIZE - 1), index_to_pixel(i))

    def on_click(self, event):
        """Listen to mouse event, decide piece position on board based on mouse pixel values"""
        if not self.versus_mode:
            messagebox.showinfo('ERROR', 'Please start a game from the menu.')
            return
        point_indices = pixels_to_indices(event.x, event.y)
        if not point_indices:
            return
        x, y = point_indices
        player_data = self.game_model.on_click(x, y)
        can_continue = self.process_game_data(player_data)
        if can_continue and self.versus_mode != VersusMode.PvP:
            ai_data = self.game_model.ai_move()
            self.process_game_data(ai_data)

    def process_game_data(self, data):
        """Given data returned from game model, draw pieces accordingly"""
        piece_color = data.get('piece_color')
        if piece_color:
            x = data.get('x')
            y = data.get('y')
            self.create_oval(index_to_pixel(x) - PIECE_RADIUS_PIXELS,
                             index_to_pixel(y) - PIECE_RADIUS_PIXELS,
                             index_to_pixel(x) + PIECE_RADIUS_PIXELS,
                             index_to_pixel(y) + PIECE_RADIUS_PIXELS,
                             fill=piece_color)
        else:
            return False
        if data.get('win'):
            text = '{} wins'.format(piece_color).upper()
            self.create_text(BOARD_WIDTH_PIXELS / 2, BOARD_WIDTH_PIXELS, text=text)
            self.unbind('<Button-1>')
            return False
        else:
            return True

    def get_menu_bar(self):
        menu_bar = tk.Menu(self)
        versus_mode_menu = tk.Menu(menu_bar)
        versus_mode_menu.add_command(label='Player(black) vs Player(white)',
                                     command=lambda: self.start_game(VersusMode.PvP))
        versus_mode_menu.add_command(label='Player(black) vs AI(white)',
                                     command=lambda: self.start_game(VersusMode.PvA))
        versus_mode_menu.add_command(label='AI(black) vs Player(white)',
                                     command=lambda: self.start_game(VersusMode.AvP))
        menu_bar.add_cascade(label='Start', menu=versus_mode_menu)
        return menu_bar

    def reset(self):
        self.delete('all')
        self.versus_mode = None
        self.bind('<Button-1>', self.on_click)

    def start_game(self, mode):
        """Set versus mode"""
        if self.versus_mode:
            self.reset()
            self.game_model.reset()
        self.versus_mode = mode
        self.game_model.set_versus_mode(mode)
        self.draw_board_canvas()
        if mode == VersusMode.AvP:
            ai_data = self.game_model.ai_move(*CENTER_POINT)
            self.process_game_data(ai_data)
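The `utils` helpers that this class imports are not shown in the listing. As a rough sketch of what `index_to_pixel` and `pixels_to_indices` might look like, here is one plausible implementation; the margin, cell size, snap tolerance, and board size below are assumed values for illustration, not taken from the real `constants` module:

```python
# Hypothetical coordinate helpers matching the utils import above.
# MARGIN_PIXELS, CELL_SIZE_PIXELS, and BOARD_SIZE are assumed values.
MARGIN_PIXELS = 30       # assumed distance from canvas edge to the first grid line
CELL_SIZE_PIXELS = 30    # assumed spacing between grid lines
BOARD_SIZE = 15          # a standard Gomoku board


def index_to_pixel(i):
    """Map a board index (0..BOARD_SIZE-1) to a canvas pixel coordinate."""
    return MARGIN_PIXELS + i * CELL_SIZE_PIXELS


def pixels_to_indices(px, py, snap=CELL_SIZE_PIXELS // 3):
    """Map a click position back to board indices, or None when the click
    lands too far from any grid intersection (so stray clicks are ignored)."""
    def to_index(p):
        i = round((p - MARGIN_PIXELS) / CELL_SIZE_PIXELS)
        if 0 <= i < BOARD_SIZE and abs(p - index_to_pixel(i)) <= snap:
            return i
        return None

    x, y = to_index(px), to_index(py)
    if x is None or y is None:
        return None
    return x, y
```

Returning `None` for off-grid clicks is what lets `on_click` bail out early with `if not point_indices: return`.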
In this paper, we investigate how heterogeneous multi-robot systems with different sensing capabilities can observe a domain with an a priori unknown density function. Common coverage control techniques are targeted towards homogeneous teams of robots and do not consider what happens when the sensing capabilities of the robots are vastly different. This work proposes an extension to Lloyd's algorithm that fuses coverage information from heterogeneous robots with differing sensing capabilities to effectively observe a domain. Namely, we study a bimodal team of robots consisting of aerial and ground agents. In our problem formulation, we use aerial robots with coarse domain sensors to approximate the number of ground robots needed within their sensing region to effectively cover it. This information is relayed to ground robots, who perform an extension of Lloyd's algorithm that balances a locally focused coverage controller with a globally focused distribution controller. The stability of this Lloyd's algorithm extension is proven and its performance is evaluated through simulation and experiments using the Robotarium, a remotely accessible multi-robot testbed.

A brain-computer interface (BCI) is a system that allows a human operator to use only mental commands in controlling end effectors that interact with the world around them. Such a system consists of a measurement device to record the human user's brain activity, which is then processed into commands that drive a system end effector. BCIs involve either invasive measurements, which allow for high-complexity control but are generally infeasible, or noninvasive measurements, which offer lower-quality signals but are more practical to use. In general, BCI systems have not been developed that efficiently, robustly, and scalably perform high-complexity control while retaining the practicality of noninvasive measurements. Here we leverage recent results from feedback information theory to fill this gap by modeling BCIs as a communications system and deploying a human-implementable interaction algorithm for noninvasive control of a high-complexity robot swarm. We construct a scalable dictionary of robotic behaviors that can be searched simply and efficiently by a BCI user, as we demonstrate through a large-scale user study testing the feasibility of our interaction algorithm, a user test of the full BCI system on (virtual and real) robot swarms, and simulations that verify our results against theoretical models. Our results provide a proof of concept for how a large class of high-complexity effectors (even beyond robotics) can be effectively controlled by a BCI system with low-complexity and noisy inputs.

Reinforcement Learning (RL) is effective in many scenarios. However, it typically requires the exploration of a sufficiently large number of state-action pairs, some of which may be unsafe. Consequently, its application to safety-critical systems remains a challenge. Towards this end, an increasingly common approach to address safety involves the addition of a safety layer that projects the RL actions onto a safe set of actions. In turn, a challenge for such frameworks is how to effectively couple RL with the safety layer to improve the learning performance. In the context of leveraging control barrier functions for safe RL training, prior work focuses on a restricted class of barrier functions and utilizes an auxiliary neural net to account for the effects of the safety layer, which inherently results in an approximation. In this paper, we frame safety as a differentiable robust-control-barrier-function layer in a model-based RL framework. As such, this approach both ensures safety and effectively guides exploration during training, resulting in increased sample efficiency as demonstrated in the experiments.

In this paper, prey with stochastic evasion policies are considered. The stochasticity adds unpredictable changes to the prey's path, helping it evade the predator's attacks. The prey's cost function is composed of two terms balancing the unpredictability factor (using stochasticity to make the task of forecasting its future positions difficult for the predator) and energy consumption (the least amount of energy required for performing a maneuver). The optimal probability density functions of the actions of the prey for trading off unpredictability and energy consumption are shown to be characterized by the stationary Schrödinger equation.

This paper demonstrates that in some cases the safety override arising from the use of a barrier function can be needlessly restrictive. In particular, we examine the case of fixed-wing collision avoidance and show that when using a barrier function, there are cases where two fixed-wing aircraft can come closer to colliding than if there were no barrier function at all. In addition, we construct cases where the barrier function labels the system as unsafe even when the vehicles start arbitrarily far apart. In other words, the barrier function ensures safety but with unnecessary costs to performance. We therefore introduce model-free barrier functions, which take a data-driven approach to creating a barrier function. We demonstrate the effectiveness of model-free barrier functions in a collision avoidance simulation of two fixed-wing aircraft.

In the context of heterogeneous multi-robot teams deployed for executing multiple tasks, this paper develops an energy-aware framework for allocating tasks to robots in an online fashion. With a primary focus on long-duration autonomy applications, we opt for a survivability-focused approach. Towards this end, the task prioritization and execution -- through which the allocation of tasks to robots is effectively realized -- are encoded as constraints within an optimization problem aimed at minimizing the energy consumed by the robots at each point in time. In this context, an allocation is interpreted as a prioritization of a task over all others by each of the robots. Furthermore, we present a novel framework to represent the heterogeneous capabilities of the robots, by distinguishing between the features available on the robots and the capabilities enabled by these features. By embedding these descriptions within the optimization problem, we make the framework resilient to situations where environmental conditions make certain features unsuitable to support a capability, and to component failures on the robots. We demonstrate the efficacy and resilience of the proposed approach in a variety of use-case scenarios, consisting of simulations and real robot experiments.

Applications that require multi-robot systems to operate independently for extended periods of time in unknown or unstructured environments face a broad set of challenges, such as hardware degradation, changing weather patterns, or unfamiliar terrain. To operate effectively under these changing conditions, algorithms developed for long-term autonomy applications require a stronger focus on robustness. Consequently, this work considers the ability to satisfy the operation-critical constraints of a disturbed system in a modular fashion, which means compatibility with different system objectives and disturbance representations. Toward this end, this paper introduces a controller-synthesis approach to constraint satisfaction for disturbed control-affine dynamical systems by utilizing Control Barrier Functions (CBFs). The framework is constructed by modeling the disturbance as a union of convex hulls and leveraging previous work on CBFs for differential inclusions. This method of disturbance modeling grants compatibility with different disturbance-estimation methods. For example, this work demonstrates how a disturbance learned via a Gaussian process may be utilized in the proposed framework. These estimated disturbances are incorporated into the proposed controller-synthesis framework, which is then tested on a fleet of robots in different scenarios.

Multi-robot task allocation is a ubiquitous problem in robotics due to its applicability in a variety of scenarios. Adaptive task-allocation algorithms account for unknown disturbances and unpredicted phenomena in the environment where robots are deployed to execute tasks. However, this adaptivity typically comes at the cost of requiring precise knowledge of robot models in order to evaluate the allocation effectiveness and to adjust the task assignment online. As such, environmental disturbances can significantly degrade the accuracy of the models, which in turn negatively affects the quality of the task allocation. In this paper, we leverage Gaussian processes, differential inclusions, and robust control barrier functions to learn environmental disturbances in order to guarantee robust task execution. We show the implementation and the effectiveness of the proposed framework on a real multi-robot system.

We present a new method for learning a control law that stabilizes an unknown nonlinear dynamical system at an equilibrium point. We formulate a system identification task in a self-supervised learning setting that jointly learns a controller and a corresponding stable closed-loop dynamics hypothesis. The input-output behavior of the unknown dynamical system under random control inputs is used as the supervising signal to train the neural network-based system model and the controller. The method relies on Lyapunov stability theory to generate a stable closed-loop dynamics hypothesis and corresponding control law. We demonstrate our method on various nonlinear control problems such as n-link pendulum balancing, pendulum-on-cart balancing, and wheeled vehicle path following.
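Several of the abstracts above build on Lloyd's algorithm for coverage control. As background, here is a minimal sketch of the basic homogeneous, uniform-density Lloyd iteration over a discretized unit square; the grid resolution and starting positions are arbitrary choices for illustration, and the weighted and heterogeneous extensions described in the papers replace the uniform density with a learned or sensed one:

```python
# Minimal Lloyd's-algorithm coverage sketch on a discretized unit square.
# Each robot repeatedly moves to the centroid of its Voronoi cell; the
# density is assumed uniform here for simplicity.
def lloyd_step(robots, grid):
    """One Lloyd iteration: assign each grid point to its nearest robot,
    then move every robot to the centroid of its assigned points."""
    sums = [[0.0, 0.0, 0] for _ in robots]          # x-sum, y-sum, count
    for gx, gy in grid:
        nearest = min(range(len(robots)),
                      key=lambda i: (robots[i][0] - gx) ** 2 +
                                    (robots[i][1] - gy) ** 2)
        sums[nearest][0] += gx
        sums[nearest][1] += gy
        sums[nearest][2] += 1
    # A robot with an empty cell stays put.
    return [(sx / n, sy / n) if n else (rx, ry)
            for (sx, sy, n), (rx, ry) in zip(sums, robots)]


# Usage: iterate until the configuration (a local optimum of the
# coverage cost) stops changing.
grid = [(i / 20.0, j / 20.0) for i in range(21) for j in range(21)]
robots = [(0.1, 0.1), (0.2, 0.8), (0.9, 0.5)]
for _ in range(30):
    robots = lloyd_step(robots, grid)
```

The fixed points of this iteration are centroidal Voronoi configurations, which is what the stability results cited above are proven with respect to.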
Proposal: PickRequired / PartialRequired

I'm using this generic type, which I believe is a good fit for ts-essentials:

export type PickRequired<T, K extends keyof T> = { [P in K]-?: T[P] }

Example usage:

const user = (await User.query()
  .select("name", "is_admin")
  .where("id", user_id)
  .first()) as PickRequired<User, "name" | "is_admin">

Normally, an ORM query would return a Partial<...> type, because it's not clear which fields will be present in the result set. With PickRequired, I can statically assert that certain fields will be present in the result. I'm not exactly sure about the naming. I considered PartialRequired and PickRequired, but both bear unwanted connotations to existing TypeScript generics (Partial and Pick). Pick seemed better, as it similarly accepts a list of keys.

Generally, I like this idea -- working with ORMs can be a PITA. I am just not sure if we should add such a type if it can be easily constructed from already existing ones:

type PickRequired<T, K extends keyof T> = Required<Pick<T, K>>

If more people find it useful, let's definitely add it!

Required + Pick will have a different meaning (it will only leave the picked keys, while the one that I suggested leaves all keys and just marks certain of them as required). That said, my example is probably not a very good one, as Pick + Required is perhaps a better fit for it indeed.

Oh, you're right. I think that shows that the name PickRequired is not perfect for the first case. Anyway, let's wait for feedback from other people who might be interested in using such a type.

Yeah, the naming is confusing, I agree. Could be MarkRequired perhaps.

I'll chip in, since I've done something similar a few days ago :)

> Required + Pick will have a different meaning (it will only leave the picked keys

I believe not -- what you wrote initially only declares the keys from K to be present and doesn't say anything about the other keys in T, so it's essentially the same as Required<Pick<T, K>>. Example to prove my point:

export type PickRequired<T, K extends keyof T> = { [P in K]-?: T[P] };

type UserPicked1 = PickRequired<User, 'id'>;
const test: UserPicked1 = { id: 1, name: 'aaa' };
// ^ error: Object literal may only specify known properties, and 'name' does not exist in type 'PickRequired<User, "id">'

I've used a version of ts-essentials' Merge, simplified because the left and right types are the same, so we don't care about what overrides what:

export type PickRequired<T, RK extends keyof T> = Omit<T, RK> & Required<Pick<T, RK>>;

The name is indeed causing trouble here -- now, after a few days, I've realized that PickRequired should really mean "pick and require some properties", i.e. Required<Pick<T, RK>>. Initially I named this SelectiveRequired -- maybe that's better? Or maybe RequirePicked?

Another note on (1) leaving only the required keys vs (2) marking some keys as required: I think both cases are important. In the initial comment by @IlyaSemenov, the example indeed has only the name and is_admin fields selected -- so it should be Required<Pick<User, 'name' | 'is_admin'>>. In my case, I'm using this type to select a database entity together with some related fields, for example to return a User instance joined with all of its BankAccounts (using typeorm):

// User class (excerpt)
export class User {
  @PrimaryGeneratedColumn()
  id!: number

  // ... some other properties

  // this field is optional, since it's a related entity and normally it's not automatically selected
  @OneToMany(_type => BankAccount, acc => acc.user)
  bankAccounts?: BankAccount[];
}

export type UserWithAccounts = PickRequired<User, 'bankAccounts'>;

// ... in UserService:
async getWithAccounts(id: number): Promise<UserWithAccounts> {
  return this.get(
    { id },
    { relations: ['bankAccounts'] },
  ) as Promise<UserWithAccounts>;
}

> I believe not -- what you wrote initially only declares the keys from K to be present and doesn't say anything about the other keys in T, so it's essentially the same as Required<Pick<T, K>>.

You are right. Not sure what I was thinking. I updated the issue description accordingly. By the way, in the end (after tossing types this way and another) I don't need any new types, including my original proposal (with its incorrect implementation). That said, the imaginary SelectiveRequired / MarkRequired still probably makes sense.

@all-contributors please add @quezak for code, idea

Great work guys @quezak @IlyaSemenov I feel like we ended up with another useful type. Another possible extension (closer to the original type) would be SelectRequired, which would pick keys and make them required. This can also be useful while working with ORMs.
How to integrate a custom Android kernel for the Pixel 6a (bluejay) on AOSP 12

My AOSP build steps are as follows:

mkdir AOSP_ROOT && cd AOSP_ROOT
repo init -u https://android.googlesource.com/platform/manifest -b android-12.1.0_r12
repo sync

Then download google_devices-bluejay-sd2a.220601.001.a1-0145bbe6.tgz, copy it to AOSP_ROOT/, extract it (tar -xzf google_devices-bluejay-sd2a.220601.001.a1-0145bbe6.tgz), and run extract-google_devices-bluejay.sh. After that:

source build/envsetup.sh
lunch aosp_bluejay-userdebug
make updatepackage -j16

I flashed the resulting zip image with the following command:

fastboot -w update out/target/product/bluejay/aosp_bluejay-img-eng.host.zip

My kernel build steps are as follows:

mkdir KERNEL_ROOT && cd KERNEL_ROOT
repo init -u https://android.googlesource.com/kernel/manifest -b android-gs-bluejay-5.10-android12L-d2
repo sync
BUILD_CONFIG=private/devices/google/bluejay/build.config.bluejay build/build.sh

With the above commands I got Image.lz4 in the path out/android-gs-pixel-5.10/dist. I tried to boot it with:

fastboot boot out/android-gs-pixel-5.10/dist/Image.lz4

but the device did not boot. Then I tried to flash the boot image with:

fastboot flash boot out/android-gs-pixel-5.10/dist/boot.img

Now the device is stuck at the Google logo and does not boot. I have also tried copying all the files in KERNEL_ROOT/out/android-gs-pixel-5.10/dist to AOSP_ROOT/device/google/bluejay-kernel and then running make bootimage from the AOSP folder. The Pixel 6a still couldn't boot up after that. Hoping for your help regarding this!

I developed the custom kernel for the Pixel 6a according to the official documentation provided by Google (kernel: https://source.android.com/docs/setup/build/building-kernels, AOSP: https://source.android.com/docs/setup/build/building) and embedded the kernel images into the AOSP build, but it is still not working.

Solution: I found a solution to the above problem. Replace the kernel build command (the 4th step) with the following, and keep everything else the same:

LTO=full BUILD_AOSP_KERNEL=1 ./build_bluejay.sh

With the above command, Image.lz4 will be generated in out/android-gs-pixel-5.10/dist/. Simply copy Image.lz4 to the device kernel directory (AOSP_ROOT/device/google/bluejay-kernel/), build the AOSP source code, and flash it onto the device. It will work.

From what I remember, AOSP no longer builds the kernel during the full OS compilation but uses prebuilts instead. Since you do want to modify the kernel source and add your own custom changes, I came across these sets of patches that allow custom kernel builds for gs101 when applied to LineageOS (an AOSP fork): https://review.lineageos.org/q/topic:gs101-inline-kernel

I have managed to flash a custom kernel to a Pixel 5a (Android 13) successfully. Here are the tools I used: AnyKernel3 for packing the necessary kernel files, and KernelFlasher for flashing the packed kernel file (safer than flashing the boot.img to the device directly). Install these apks (try to download the latest versions) on your device. I also modified anykernel.sh for my Pixel 5a:

### AnyKernel setup
# begin properties
do.devicecheck=0
do.modules=1  # need to copy all the module files to the device

## boot shell variables
block=auto;
is_slot_device=1;
ramdisk_compression=auto;
patch_vbmeta_flag=auto;

Then:
1. Copy dtbo_barbet.img, Image.lz4, Image.lz4-dtb, and ramdisk.lz4 from your kernel build directory (mine is out/android-msm-pixel-4.19/dist) to the AnyKernel3 root directory.
2. Copy all the module files from the build directory to AnyKernel3/modules/system/lib/modules.
3. Zip the AnyKernel3 directory: zip -r9 kernel.zip * -x .git README.md *placeholder
4. Copy kernel.zip to your device, somewhere like sdcard/Download.
5. Run the KernelFlasher app (maybe back up the old kernel first), select Slot A - Flash - Flash AK3 Zip, and choose the kernel.zip file you created.
6. Wait for the flash operation to finish; if nothing goes wrong, reboot the system.
using System.Net;
using System.Net.NetworkInformation;
using System.Net.Sockets;
using System.Runtime.InteropServices;

namespace Classes
{
    internal sealed class Network
    {
        internal static string Hostname { get { return Dns.GetHostName(); } }

        internal static string Username { get { return System.Environment.UserName; } }

        internal static bool InternetAccess()
        {
            if (!NetworkInterface.GetIsNetworkAvailable()) { return false; }
            // Probe several hosts. The original assigned each result to the same
            // variable, so only the last ping mattered; succeed if any host replies.
            foreach (string host in new[] { "www.google.com", "www.facebook.com", "www.youtube.com" })
            {
                try
                {
                    if (new Ping().Send(host, 3000).Status == IPStatus.Success) { return true; }
                }
                catch { /* try the next host */ }
            }
            return false;
        }

        internal static string GatewayIPAddress
        {
            get
            {
                NetworkInterface[] NetInterfaces;
                try { NetInterfaces = NetworkInterface.GetAllNetworkInterfaces(); }
                catch { return "0.0.0.0"; }
                foreach (NetworkInterface NetInterface in NetInterfaces)
                {
                    if (!NetInterface.IsReceiveOnly && NetInterface.OperationalStatus.Equals(OperationalStatus.Up))
                    {
                        return NetInterface.GetIPProperties().GatewayAddresses[0].Address.ToString();
                    }
                }
                return "0.0.0.0";
            }
        }

        internal static string PhysicalAddress
        {
            get
            {
                if (!NetworkInterface.GetIsNetworkAvailable()) { return AppData.DEFAULT.MAC_ADDRESS; }
                NetworkInterface[] NetInterfaces;
                try { NetInterfaces = NetworkInterface.GetAllNetworkInterfaces(); }
                catch { return AppData.DEFAULT.MAC_ADDRESS; }
                foreach (NetworkInterface NetInterface in NetInterfaces)
                {
                    if (!NetInterface.IsReceiveOnly && NetInterface.OperationalStatus.Equals(OperationalStatus.Up))
                    {
                        return NetInterface.GetPhysicalAddress().ToString();
                    }
                }
                return AppData.DEFAULT.MAC_ADDRESS;
            }
        }

        internal static string IPv4
        {
            get
            {
                IPHostEntry IPHE;
                try { IPHE = Dns.GetHostEntry(Dns.GetHostName()); }
                catch { return AppData.DEFAULT.IPV4; }
                foreach (IPAddress IP in IPHE.AddressList)
                {
                    // AddressFamily is more reliable than counting '.'-separated parts,
                    // which the original code did.
                    if (IP.AddressFamily == AddressFamily.InterNetwork) { return IP.ToString(); }
                }
                return AppData.DEFAULT.IPV4;
            }
        }

        internal static string IPv6
        {
            get
            {
                IPHostEntry IPHE;
                try { IPHE = Dns.GetHostEntry(Dns.GetHostName()); }
                catch { return AppData.DEFAULT.IPV6; }
                foreach (IPAddress IP in IPHE.AddressList)
                {
                    if (IP.AddressFamily == AddressFamily.InterNetworkV6) { return IP.ToString(); }
                }
                return AppData.DEFAULT.IPV6;
            }
        }

        internal static bool GetIPHostEntry(ref string Hostname)
        {
            IPHostEntry IPHE;
            try { IPHE = Dns.GetHostEntry(Hostname); }
            catch (System.Exception ex) { Hostname = ex.Message; return false; }
            foreach (IPAddress IP in IPHE.AddressList)
            {
                if (IP.AddressFamily == AddressFamily.InterNetwork)
                {
                    Hostname = IP.ToString();
                    return true;
                }
            }
            return false;
        }

        internal static string LocalIPAddress(AddressFamily addressfamily)
        {
            foreach (IPAddress IPvX in Dns.GetHostAddresses(Hostname))
            {
                if (IPvX.AddressFamily.Equals(addressfamily)) { return IPvX.ToString(); }
            }
            return AppData.DEFAULT.IPV4;
        }

        [DllImport("wininet.dll")]
        private extern static bool InternetGetConnectedState(out int Description, int ReservedValue);

        internal static bool IsConnectedToInternet()
        {
            int Desc;
            return InternetGetConnectedState(out Desc, 0);
        }
    }
}
Microsoft's push to expand Microsoft Office from a set of productivity applications into a platform for building composite applications is gaining ground, with more than 200 ISV-created "Office Business Applications" (OBAs) now available. At last week's Worldwide Partner Conference, Microsoft launched a new Web site, OBACentral.com, and an OnRamp program to showcase the tools and marketing support it offers around a platform that partners say could make a dramatic difference in the enterprise applications market. Content management technology developer Open Text is the latest ISV to use Microsoft's OBA tools to build around Office as a front-end user interface for tapping back-office enterprise data. It recently released Livelink ECM Customer Information Management, a tool for accessing SAP data through Microsoft applications like Outlook. Open Text, based in Waterloo, Ontario, developed Livelink to address a customer pain point it repeatedly encountered: the need to move out of applications like Outlook and go rummaging around in CRM, financial and supply-chain systems to dig out data like order numbers and delivery schedules. "We figured 'we have all the information to solve that problem, we just have it in the wrong systems,'" said Jens Rabe, Open Text's vice president of Microsoft applications and solutions. "What we found really appealing is that you give end users the ability to get to that data through the Office UI." Open Text client ThyssenKrupp Nirosta, a stainless steel manufacturer, is already using Livelink to enable its employees to access shipping details, contracts and purchase orders directly within Outlook 2007. Using Microsoft's OBA tools saved Open Text the work of creating its own application infrastructure to pull the SAP data, Rabe said. "It takes away the burden from us of coming up with our own front-end clients," he said. "We can focus on the underlying business problems."
Microsoft debuted its OBA strategy at last year's TechEd conference. Since then, it's steadily built out support offerings, including reference specifications and quick-start development packs. While OBAs have so far been an under-the-radar effort, Microsoft is committing several million dollars in marketing, sales, and technical resources in an effort to draw greater attention to the possibilities they offer. One of the feature requests Microsoft fields most often from systems integrators is for a way to enable deeper integration between Outlook and back-office systems like Siebel, according to Sanjay Parthasarathy, corporate vice president of Microsoft's developer and platform evangelism group. OBA tools are Microsoft's answer. "I basically think every ISV needs to have an OBA solution and every enterprise application needs to be built as an OBA," Parthasarathy said. The highest-profile OBA project is Duet, the Microsoft Office/SAP integration tool Microsoft and SAP developed jointly. In turning Office into the UI for accessing SAP data, Duet illustrated the development path Microsoft hopes more ISVs will pursue -- one that potentially alleviates user frustrations in having to master multiple enterprise applications, and also advances Microsoft's march toward a greater presence in back-office enterprise applications. OBAs will be most relevant to ISVs looking to incorporate the tools into their own application development, but even solution providers who don't build their own applications may find the platform increasingly pertinent. InCycle Software President Claude Remillard is taking a close look at the OBA platform, which he expects a number of InCycle's clients to take advantage of. Based in Montreal, InCycle specializes in .Net development consulting services. 
Remillard sees the OBA toolset as one of an increasing number of platform alternatives for application development, which now also include technologies like Microsoft's Expression suite and forthcoming Silverlight platform for rich Web application development. "The days of only building enterprise applications in C are gone," Remillard said. "We're developing an evaluation process to help our customers decide which set of tools to develop in. We'll also provide training and services around all the stacks."
This guide will explain how to install Ubuntu on your Dropad A8 / Herotab C8 tablet and how to create the Ubuntu filesystem in Linux. I have used my A8 tablet for a while, but lately it has just been gathering dust in my closet. Time to pimp this tablet. In my case, I want to use it as a little low-cost, energy-saving web server and SVN host. That is where Ubuntu comes in. Running Ubuntu on your tablet means Ubuntu will be running "chrooted": it runs on top of your Android OS, which remains largely unchanged. Using this guide requires some basic knowledge about ROMs, ADB and Linux. Installing Ubuntu on your tablet device is at your own risk. Step 1 – Flashing a rooted firmware / ROM Note: Flashing a new ROM can cause loss of data. It is recommended that you back up all personal data before flashing a new ROM. This step is optional. To run Ubuntu on your tablet device, a kernel with support for loop devices is required. If your kernel does not support loop devices, find yourself a suitable firmware / ROM. Please check the Herotab C8 Firmware / Development forum. Make sure that the ROM is rooted and BusyBox is installed. If not, you can grab a BusyBox installer from the Android Market. I am currently using the Evolution 3.1.1 ROM by prox32. This ROM supports loop devices and has the CPU clocked at 1.2 GHz, giving an extra performance boost. To flash a ROM, simply download the files and extract them in the root directory of your SD card. Turn on / reboot your tablet while keeping the menu button pressed. At this point, your ROM will be flashed onto the device. For more information see http://www.slatedroi...blet-look-here/. Step 2 – Enable USB debugging and WiFi Make sure USB debugging is enabled on your tablet. This is required for communication between your PC and tablet device through ADB (Android Debug Bridge). Go to Settings > Applications > Development to see if USB debugging is enabled.
Also enable WiFi and connect to a wireless network. This will allow Ubuntu to communicate with the internet. Step 3 – Install the Android SDK Grab and download the Android SDK if you haven't installed it on your computer yet. The SDK also supplies the USB drivers for connecting your tablet to your computer; these are located in the third-party Google repository. For more information on installing the Android SDK see http://developer.and...installing.html. Step 4 – Creating the Ubuntu filesystem Before you can create an Ubuntu filesystem, you need access to an operational Ubuntu installation. I am using a virtual Ubuntu 10.10 Desktop installation in VMware. Ubuntu images are free and can be downloaded from http://www.ubuntu.com/download. VirtualBox (cross-platform) and Virtual PC (Windows) are great free tools to run virtual machines; just Google for them. When you have access to an operational Ubuntu installation, log in and open the terminal. We are using rootstock to create the filesystem. If rootstock isn't installed yet, enter the following command: sudo apt-get install rootstock This will install rootstock on your Ubuntu machine. Then run the following command: sudo rootstock --fqdn yourfqdn --login yourusername --password yourpassword --imagesize yourimagesize --seed linux-image-omap,build-essential,tightvncserver,gnome-shell Please replace all "your[something]" placeholders with your own information. For example, replace yourfqdn with ubuntu. For the image size, use values such as 1G, 2G, 4G, 8G, etc. These values are equal to the desired image size (1 GB, 2 GB, 4 GB, 8 GB, etc.). With --seed you can specify programs that will be included in your filesystem. With build-essential we are building a filesystem with only the minimum required programs. In this case, TightVNC server is included as the VNC server and GNOME as the window manager. Alternatively, you can use LXDE (Lightweight X11 Desktop Environment) as the window manager; just replace gnome-shell with lxde.
When rootstock is finished, it will create a file called "armel-rootfs-xxxxxxxxxxxx.tgz" in your working directory. Now that we have the files, we can create an image to mount on our Android device. Enter the following command to create an empty image: dd if=/dev/zero of=ubuntu.img bs=1MB seek=yourseek count=0 Replace yourseek with 1024 for a rootstock file size of 1G, 2048 for 2G, 4096 for 4G, etc. An image called ubuntu.img is now created in your working directory. Next, format the image as an EXT2 filesystem: mke2fs -F ubuntu.img Mount the empty image, for example in a folder called "ubuntu" on your desktop. Make sure that the "ubuntu" directory exists. Enter the following command: sudo mount -o loop ubuntu.img /home/yourusername/ubuntu Extract the TGZ file generated by rootstock into the directory where the image is mounted. Replace the xxxxxxxxxxxx with the correct numbers. sudo tar -C /home/yourusername/ubuntu -zxf armel-rootfs-xxxxxxxxxxxx.tgz When the extraction process is finished, unmount the image: sudo umount /home/yourusername/ubuntu And you are ready for the next step. Step 5 – Copying the image to your tablet Plug your tablet into your computer (connect with the OTG USB port). Turn on USB storage and copy the created ubuntu.img to your device. I created a folder called "ubuntu" and copied ubuntu.img into it. The ubuntu folder is located at the root of my internal SD card (for example: /sdcard/ubuntu/ubuntu.img). It is also possible to copy the image to an external SD card. Just remember the path where you have stored the image. Step 6 – Start Ubuntu You are almost ready to go. It is time to mount the image. In this step, ADB will be used to mount the image. I will briefly explain the usage of ADB in Windows. First, start a command prompt and navigate to the "platform-tools" directory in the ADB installation directory.
In my case, ADB is installed in C:\android-sdk-windows, so platform-tools is located in C:\android-sdk-windows\platform-tools. Now check if your device is recognized and connected to your computer: adb devices If your drivers are properly installed and the device is properly connected, it will return a device called "MID_serials". Otherwise, check your connection (is USB debugging enabled?) or your drivers. When your device, MID_serials, is listed, open the ADB shell: adb shell Now mount the image. You might want to change some paths if you have stored your ubuntu.img in another location (mine is stored in /sdcard/ubuntu/ubuntu.img). For example, if you have stored ubuntu.img in a folder called ubuntu on your external SD card, replace "/sdcard/ubuntu" with "/extsd/ubuntu". su export kit=/sdcard/ubuntu export bin=/system/bin mkdir /data/local/ubuntu export PATH=$bin:/usr/bin:/usr/sbin:/bin:$PATH export TERM=linux export HOME=/root losetup /dev/block/loop1 /sdcard/ubuntu/ubuntu.img mount -t ext2 /dev/block/loop1 /data/local/ubuntu mount -t devpts devpts /data/local/ubuntu/dev/pts mount -t proc proc /data/local/ubuntu/proc mount -t sysfs sysfs /data/local/ubuntu/sys sysctl -w net.ipv4.ip_forward=1 chroot /data/local/ubuntu /bin/bash If everything went well, you should see "root@localhost: #". Congratulations, Ubuntu is now running on your tablet device. Your ADB shell has turned into a Bash shell! From here it is also possible to run Linux commands, such as "sudo apt-get install xxxxx" etc. Step 7 – Start the VNC server Type the following commands: export USER=root vncserver -geometry 1024x768 At the first startup, TightVNC server will prompt you for some passwords; please enter them. Optionally, you can change the resolution with the geometry argument. You might want to adjust /root/.vnc/xstartup if you are using another window manager than GNOME.
Step 8 – View your Ubuntu installation Download a VNC client (for Android, Linux or Windows, whatever fits you) to view your Ubuntu installation. Connect to your tablet by entering the internal IP address and the port, for example: 192.168.2.10:5901 (5901 is the standard port). To find out the internal IP address of your tablet, return to your Bash shell and run the command "ifconfig". To shut down Ubuntu and unmount the image, enter the following commands in your ADB / Bash shell: shutdown now exit umount /data/local/ubuntu/dev/pts umount /data/local/ubuntu/proc umount /data/local/ubuntu/sys umount /data/local/ubuntu losetup -d /dev/block/loop1 To start Ubuntu again, enter the commands from step 6 and step 7. If you are an enthusiast, you can write your own startup scripts so you don't have to enter a lot of commands every time you start Ubuntu. Credits to AndroLinux.com. Parts of the start-up script in step 6 are taken from AndroLinux. Enjoy your new Ubuntu tablet device.
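The startup commands from step 6 can be bundled into exactly such a startup script. A minimal sketch, assuming the /sdcard/ubuntu/ubuntu.img path used in this guide (adjust UBUNTU_IMG and MNT for your own layout) — this snippet only generates the script file; it is meant to be run on the tablet from a root ADB shell:

```shell
# Write a start-up helper that replays the step 6 mount sequence.
cat > start-ubuntu.sh <<'EOF'
#!/system/bin/sh
UBUNTU_IMG=/sdcard/ubuntu/ubuntu.img
MNT=/data/local/ubuntu

mkdir -p "$MNT"
export PATH=/system/bin:/usr/bin:/usr/sbin:/bin:$PATH
export TERM=linux
export HOME=/root

losetup /dev/block/loop1 "$UBUNTU_IMG"
mount -t ext2 /dev/block/loop1 "$MNT"
mount -t devpts devpts "$MNT/dev/pts"
mount -t proc proc "$MNT/proc"
mount -t sysfs sysfs "$MNT/sys"
sysctl -w net.ipv4.ip_forward=1
chroot "$MNT" /bin/bash
EOF
chmod +x start-ubuntu.sh
```

Push it to the tablet with `adb push start-ubuntu.sh /sdcard/` and run it from an `adb shell` root prompt with `sh /sdcard/start-ubuntu.sh`.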
/** pso.js
 * https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers
 * https://medium.com/techtrument/multithreading-javascript-46156179cf9a
 */

// Particle class
var Particle = function (manager, id) {
  // Unique ID
  this.id = id;

  // Initialize the particle with random position components bounded by the given dimensions
  this.position = [];
  this.fitness = 0;
  this.bestParticleId = 0;
  this.bestPosition = [];
  this.bestFitness = 0;
  for (var i = 0; i < manager.dimensions.length; i++) {
    this.position.push(Math.round(this.randomPosition(manager.dimensions[i].max)));
    this.bestPosition.push(Math.round(this.position[i]));
  }
  this.computeFitness(manager);

  // Initialize the velocity components
  this.velocity = [];
  for (var i = 0; i < manager.dimensions.length; i++) {
    var d = manager.dimensions[i].max;
    this.velocity.push(this.randomPosition(d));
  }
};

// Returns a random integer in [0, max)
Particle.prototype.randomPosition = (max) => Math.floor(Math.random() * max);

Particle.prototype.computeFitness = function (manager) {
  let x = this.position[0] > 0 ? Math.round(this.position[0]) : 0;
  let y = this.position[1] > 0 ? Math.round(this.position[1]) : 0;
  // fitnessFunction is a 2-D lookup table; the original `[x,y]` comma
  // expression indexed by y only, which was a bug.
  this.fitness = manager.fitnessFunction[x][y];
  if (this.fitness < this.bestFitness) {
    for (var i = 0; i < this.position.length; i++) {
      this.bestPosition[i] = this.position[i];
    }
    this.bestFitness = this.fitness;
  }
};

Particle.prototype.iterate = function (manager) {
  // Get the social best
  var socialBestPosition = manager.getSocialBest(this);

  // Update the velocity and position
  for (var i = 0; i < manager.dimensions.length; i++) {
    var vMomentum = manager.inertiaWeight * this.velocity[i];
    var d1 = this.bestPosition[i] - this.position[i];
    var vCognitive = manager.cognitiveWeight * this.randomPosition(100) * d1;
    var d2 = socialBestPosition[i] - this.position[i];
    var vSocial = manager.socialWeight * this.randomPosition(100) * d2;
    this.velocity[i] = Math.round(vMomentum + vCognitive + vSocial);
    this.position[i] = this.position[i] + this.velocity[i];
  }
  this.position = manager.particleRangeExtreme(this.position[0], this.position[1]);
};

// Manager class
// Maintains a list of particles
class Manager {
  constructor(fitnessFunctionInput, numParticles) {
    this.dimensions = fitnessFunctionInput.dimensions;
    this.fitnessFunction = fitnessFunctionInput.data;

    // Number of iterations that have been computed
    this.iterationNum = 0;

    // If linear scaling is enabled, then 'inertiaWeight' will change
    // linearly: y = mx + b, with m = (y_end - y_start) / range
    this.setInertiaScaling(true, 0.7, 0.7, 1);

    this.cognitiveWeight = 0.01;
    this.socialWeight = 0.1;
    this.extreme = null;

    // List of particles taking part in the estimation
    this.particles = [];
    for (var i = 0; i < numParticles; i++) {
      this.addParticle();
    }
    this.updateGlobalBest();

    // By default uses global best
    // This number must be even-valued
    this.numNeighbors = this.particles.length;
    this.topology = "ring";
  }

  setInertiaScaling = function (enable, start, finish, range) {
    this.enableInertiaWeightScaling = enable;
    this.inertiaWeightStart = start;
    this.inertiaWeightEnd = finish;
    this.inertiaWeightIterationRange = range;
    console.assert(this.inertiaWeightIterationRange > 0);
    this.inertiaWeight = this.inertiaWeightStart;
    this.inertiaWeightSlope =
      (this.inertiaWeightEnd - this.inertiaWeightStart) / this.inertiaWeightIterationRange;
  };

  // Adds a particle to the set of particles taking part in the estimation
  addParticle = function () {
    var uniqueId = this.particles.length;
    var p = new Particle(this, uniqueId);
    this.particles.push(p);
  };

  // This is the main function that is called
  // to simulate an iteration of the simulation
  iterate = function () {
    this.numCollisions = 0;
    for (var i = 0; i < this.particles.length; i++) {
      this.particles[i].iterate(this);
      this.particles[i].computeFitness(this);
    }
    this.updateGlobalBest();
    this.updateInertiaWeight();
    this.iterationNum++;
  };

  updateGlobalBest = function () {
    // Find the best, seeding the search with the first particle
    this.bestParticleId = 0;
    this.bestPosition = this.particles[0].bestPosition;
    this.bestFitness = this.particles[0].bestFitness;
    for (var i = 1; i < this.particles.length; i++) {
      if (this.particles[i].bestFitness < this.bestFitness) {
        this.bestParticleId = i;
        this.bestFitness = this.particles[i].bestFitness;
        this.bestPosition = this.particles[i].bestPosition;
      }
    }
  };

  // Rescale a coordinate pair into the fitness-table range.
  // Note: both axes are scaled by dimensions[0].max, as in the original.
  particleRangeExtreme = function (xVal, yVal) {
    let minX = 100; let maxX = 0; let minY = 100; let maxY = 0;
    for (let i = 0; i < this.particles.length; i++) {
      let elem = this.particles[i];
      // The original comparisons were inverted and never updated the
      // extremes; corrected here.
      if (elem.position[0] < minX) { minX = elem.position[0]; }
      if (elem.position[0] > maxX) { maxX = elem.position[0]; }
      if (elem.position[1] < minY) { minY = elem.position[1]; }
      if (elem.position[1] > maxY) { maxY = elem.position[1]; }
    }
    let xCoord = Math.round(((xVal - minX) / (maxX - minX)) * this.dimensions[0].max);
    let yCoord = Math.round(((yVal - minY) / (maxY - minY)) * this.dimensions[0].max);
    if (xCoord < 0) { xCoord = 0; }
    if (yCoord < 0) { yCoord = 0; }
    return [xCoord, yCoord];
  };

  getSocialBest = function (particle) {
    switch (this.topology) {
      case "ring":
        return this.getSocialBest_Ring(particle);
      case "fully connected":
        return this.getSocialBest_FullyConnected(particle);
      default:
        console.assert(false, "Unknown topology");
    }
  };

  // Ring topology
  getSocialBest_Ring = function (particle) {
    // Returns a valid index into an array, wrapping values outside the
    // valid range, e.g. -1 is mapped to arrayLength - 1.
    function fix(id, arrayLength) {
      if (id < 0) { return arrayLength + id; }
      if (id >= arrayLength) { return id - arrayLength; }
      return id;
    }

    // Number of neighbors
    var k = this.numNeighbors;
    console.assert(k % 2 == 0); // must be even
    var kh = k / 2; // half of the neighbors per left/right side

    // Create a list of particle ids for the current particle's neighbors
    // (wrap around the index if too low or too high)
    var neighborIds = [];
    for (var i = 0; i < k + 1; i++) {
      var uid = particle.id - kh + i;
      neighborIds.push(fix(uid, this.particles.length));
    }

    // Find the best fitness among the neighbors
    var lbFitness = this.particles[neighborIds[0]].bestFitness;
    var lbId = neighborIds[0]; // original seeded this with 0, not neighborIds[0]
    for (var i = 1; i < neighborIds.length; i++) {
      if (this.particles[neighborIds[i]].bestFitness < lbFitness) {
        lbId = neighborIds[i];
        lbFitness = this.particles[lbId].bestFitness;
      }
    }
    // Return the local best position
    return this.particles[lbId].bestPosition;
  };

  // Star (global best)
  getSocialBest_FullyConnected = function (particle) {
    return this.bestPosition;
  };

  collisionCallback = function () {
    this.numCollisions++;
  };

  // Compute inertia. This is based on equation 4.1 from:
  // http://www.hindawi.com/journals/ddns/2010/462145/
  updateInertiaWeight = function () {
    if (this.enableInertiaWeightScaling == false) { return; }
    if (this.iterationNum > this.inertiaWeightIterationRange) {
      this.inertiaWeight = this.inertiaWeightEnd;
      return;
    }
    this.inertiaWeight = this.inertiaWeightSlope * this.iterationNum + this.inertiaWeightStart;
  };
}

export default Manager;
Please pardon me if this question has already been posted… In SambaPOS V4 it was simple: delete the database in My Documents and then I got a fresh SambaPOS, no need to uninstall SambaPOS and install it again. But this does not work in SambaPOS V5. In SambaPOS V5 I want to make a new, fresh SambaPOS, because I want to start all over again. Can anyone give me some ideas about this topic? Thank you so much. Which one do I need to download? Windows 8.1 64-bit: - ExpressAdv 32BIT\SQLEXPRADV_x86_ENU.exe - ExpressAdv 64BIT\SQLEXPRADV_x64_ENU.exe - ExpressAndTools 32BIT\SQLEXPRWT_x86_ENU.exe - ExpressAndTools 64BIT\SQLEXPRWT_x64_ENU.exe I actually recommend you download SQL Express 2016 if you're interested in installing SQL Express and using that. Install instructions should be the same as for 2014, so this tutorial will work. 2016 is the latest version and it's what I have been running. Here is the link for 2016. Once you run the installer it opens up this: Basic is fine. My setup is wrong… there is no SambaPOS included. I just used Basic and normally followed the flow of the instructions provided… please give me some ideas… thank you for everything. It's a Microsoft product, why would SambaPOS be included? What do you mean? According to that, you already have SQL Express installed and have an instance called SQLEXPRESS. SQL is a database engine which SambaPOS utilizes. It is one of the main ones used by millions of people and businesses for many different software packages, websites and all sorts of things. That's like saying "I just installed Windows, why isn't Samba on it?" Kendash wrote a very detailed tutorial for installing and configuring SQL to work with Samba; it's the first link he sent you earlier in this thread. I suggest you have another read through. What should I do next, sir, so that I can make a fresh SambaPOS in V5? Thank you for giving me a chance to learn more about this topic. Are you already running Samba from SQL? Do you already have SQL installed?
If you were already using SQL you can just change the database name in Samba and Samba will create the new database in the existing SQL instance. How do I run Samba from SQL? Sir, if you have time, can I request some screenshots as a tutorial, so that I can get a completely fresh SambaPOS? I gave you several tutorials that are very detailed and explain it very well. Take the time to read them all; it will benefit you. Thank you so much, sir, for the time and effort, I really appreciate it. Please forgive my English grammar; English is not my first language. Is Samba currently running off SQL or not? If not, linking to SQL will start a fresh database; if it is, just change the DB name in the connection string in Local Settings in Samba. Another problem appeared… sadly I was in the field these past few days because of my work, and now I can't even see the login screen in my SambaPOS. I just wanted to check the local settings in Samba as you said. I don't know what is happening; even when I uninstall and reinstall SambaPOS the same problem appears. Any idea how to fix this, please? By the way, SQL is already running when you completely install SambaPOS with SQL using the SambaPOS installer, right? Can you show what the log file says? It installs the LocalDB version. If you want full SQL then uninstall the LocalDB version. This one, sir? I already uninstalled this LocalDB 2014… what should I do next, sir? This is what the log says, sir. kendash: It says "system can't find the file". How did you set up the connection string? Good day, sir emre. The connection string is in the local settings in Samba, right? Just like JTRTech said in his comment… but I don't think I did anything these past few days except uninstalling/reinstalling SambaPOS and installing/uninstalling the 2016 version of SQL… The reason is I don't know how to set up SQL properly… and the main reason is I just want to make a fresh SambaPOS… but this new problem appeared… how do I fix this, sir?
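For reference, the "change the DB name in the connection string" advice above refers to the Connection String field under Local Settings in SambaPOS. A typical SQL Express connection string looks something like this (the instance name, database name and credentials here are illustrative, not taken from the thread):

```
Data Source=localhost\SQLEXPRESS; Database=SambaPOS5; User Id=sa; Password=yourStrongPassword;
```

Per the advice in the thread, changing Database= to a name that does not exist yet should make SambaPOS create a fresh database in the existing SQL instance on the next start.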
Back titration is a titration done in reverse; instead of titrating the original sample, a known excess of standard reagent is added to the solution, and the excess is titrated. A minimum of 140 credits and more than 1,700 hours of experiential coursework are required for graduation. Los Angeles using a variety of methods as part of its operation. In many countries, there are two main types of labs that process the majority of medical specimens. It stops rice, wheat, and pulses from rotting. The family has a long history in pharmacy going back to the 17th century. Automation means that the smaller size of parts permits a mobile inspection system to examine multiple parts more quickly. During the match, however, Karen turned on Joe and aided her husband. This reduction in medications has been shown to reduce the number of medications and is safe as it does not significantly alter health outcomes. The Mao Zedong government nearly eradicated both consumption and production of opium during the 1950s using social control and isolation. Advertising messages with a strong call-to-action are yet another device used to convert customers. Many were of mixed race and educated in American culture; they did not identify with the indigenous natives of the tribes they encountered. The lack of impurity in the water means that the system stays clean and prevents a buildup of bacteria and algae. Holmes denied any involvement in the child's death and immediately left the city.
The open end of the syringe may be fitted with a hypodermic needle, a nozzle or a tubing to help direct the flow into and out of the barrel. the perineal nerve and the dorsal nerve of the clitoris. Cincinnati College of Mortuary Science of Cincinnati, Ohio and Southern Illinois University Carbondale. The way an individual moves can indicate health and even age and influence attractiveness. He felt it was unlikely that characters would cross over between the show and films, but noted that this could change between then and the premiere of the series. There were some contradictory results to indicate the exact RDS. Wein and penciled by Dave Cockrum, in which Wolverine is recruited for a new squad. Other studies have reached similar conclusions. Gas anesthetics such as isoflurane and sevoflurane can be used for euthanasia of very small animals. The agency has drawn fire for being susceptible to overt government interference, subject to bribery, internal feuding, and constant rumours and allegations concerning misappropriation of funds. However some grenades are also smuggled from the US to Mexico or stolen from the Mexican military. The solution is formulated to have concentrations of potassium and calcium that are similar to the ionized concentrations found in normal blood plasma. Pontiac sub-contracted the job of engineering the TTA engine modifications to PAS. Cefalexin is a beta-lactam antibiotic within the class of first-generation cephalosporins.
While women's suffrage was banned in the mayoral elections in 1758 and in the national elections in 1772, no such bar was ever introduced in the local elections in the countryside, where women therefore continued to vote in the local parish elections of vicars. Policy makers in some countries have placed controls on the amount pharmaceutical companies can raise the price of drugs. Such units are designed so that the whole container can be disposed of with other biohazardous waste. This figure continued to rise over time and in 1991, 68% of black children were born outside of marriage. There are ethical concerns about whether people who perform CAM have the proper knowledge to treat patients. The other reaction products, including the magnesium bromide, will remain in the aqueous layer, clearly showing that separation based on solubility is achieved. For instance, a skinfold-based body density formula developed from a sample of male collegiate rowers is likely to be much more accurate for estimating the body density of a male collegiate rower than a method developed using a sample of the general population, because the sample is narrowed down by age, sex, physical fitness level, type of sport, and lifestyle factors. Vomiting can be caused by a wide variety of conditions; it may present as a specific response to ailments like gastritis or poisoning, or as a non-specific sequela of disorders ranging from brain tumors and elevated intracranial pressure to overexposure to ionizing radiation. Foot-binding involved alteration of the bone structure so that the feet were only about 4 inches long. With no decoration, the pharmacist had greater freedom to reuse the jars.
Rodchenkov's testimony became public in an extensive interview with The New York Times, where he provided spreadsheets, discs, e-mails, and more incriminating evidence of Russian involvement. Ortega was reported by Nicaraguan election officials as having received 72% of the vote. Electronic prescribing has the potential to eliminate most of these types of errors. In fact, according to his 'adaptive point of view', once infants are born they have the ability to cope with the demands of their surroundings. Psychologically, a liquidity trap is caused by a strong sense of fear and economic insecurity. IC50 values are very dependent on conditions under which they are measured. Classrooms and offices for humanities and social sciences are located in Breiseth Hall, a three-story building located on South Franklin Street, in the same block as SLC. Some sources may treat the terms rhythm method and natural family planning as synonymous. Despite complying with Hector's demands, Mike is not content with the threats to Kaylee's life and retaliates by attacking an ice cream truck transporting Hector's drug cash. A single general factor of psychopathology, similar to the g factor for intelligence, has been empirically supported.
OPCFW_CODE
Types of Blockchain Nodes
I come up with a new topic today, and that's nothing but blockchain nodes. I hope you guys are aware of the Peer-to-Peer (P2P) network and how a blockchain works. If not, please read those articles before moving forward. As you know, a blockchain is a peer-to-peer network, but not all participating nodes have the capability or intent to perform every activity and transaction. Based on their capability to perform transactions or activities, nodes are classified in different ways. But Saroj, we are confused here. Cool, let me explain it in a simple way :) Assume you are working in a company with 1000 employees. But those 1000 employees do different work, right? Let's say you are working as a senior dev, and someone else as a junior dev. There are many classifications within the company, right? For example: manager, HR, frontend dev, and backend dev. Like this, there can be many nodes inside a blockchain network, but they are all classified in different ways, like in your company :) Ok Saroj, that's great. Also, can you tell me more about this classification? Sure, let's try to understand this with an image. I hope now you have some idea about node classification. Now let us understand each type one by one. Yes Saroj, please. 1. Miner Nodes So, what is a miner node? These are nodes that can perform almost all the activities on the blockchain network. These nodes have a full copy of the ledger with them. Miner nodes participate to capture transactions, create or propose blocks, participate in building consensus, and synchronize their ledger copies. Miner nodes are also rewarded if their blocks are added through PoW consensus. 2. Full Nodes Full nodes store full copies of the ledger and validate the newly added blocks. These nodes are primarily responsible for the verification of transactions that are created by miners.
Mining and consensus are not the same as verification. In the case of mining, blocks are created and, as per protocol, finalized. In verification, blocks are confirmed and then accepted as part of the ledger. Miner nodes also act as full nodes in the sense that they maintain a full ledger and verify transactions confirmed by other participants. On the other hand, full nodes do not take on the responsibilities of miner nodes. 3. Administrator Nodes Administrator nodes perform administration activities and support other nodes in certain activities. While a blockchain is decentralized and no single party or node can be considered an administrator or controller, there can be situations, especially in a private blockchain, that grant higher authority to certain nodes to make decisions such as adding a new node. Sometimes these nodes are always available and tasked with the responsibility of supporting synchronization activities. These nodes may have unique hardware supporting them, based on requirements. Administrator nodes may also take responsibility for archival data; depending on the role they play in the network, they are called super nodes or archival nodes. 4. Light-Weight Nodes Light-weight nodes do not store the full ledger but only block headers. These nodes verify transactions. As these nodes do not store the entire ledger, they need fewer resources and are relatively easier to run. Usually, these nodes support end-user applications such as wallets. Light-weight nodes require the help of full nodes to stay up-to-date. These node types are not mutually exclusive, meaning a node might perform the tasks of multiple node types. I know it is a bit confusing if you are new to blockchain. But trust me, once you understand it, it will be fun for you. I hope you enjoyed the reading. Again, thanks for reading this out. See you with a new article.
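The classification above can be sketched as a small capability table. This is an illustrative Python sketch with made-up names, not code from any real blockchain client — the point is that node types differ in which activities they carry out, and one node may combine several roles:

```python
# Illustrative capability table (hypothetical names): each node type is
# a set of activities it can perform on the network.
CAPABILITIES = {
    "miner":         {"full_ledger", "create_blocks", "consensus", "verify"},
    "full":          {"full_ledger", "verify"},
    "administrator": {"admin_decisions", "synchronization_support", "archival"},
    "lightweight":   {"block_headers", "verify"},
}

def can(node_type: str, capability: str) -> bool:
    """Check whether a node type performs a given activity."""
    return capability in CAPABILITIES[node_type]

print(can("miner", "create_blocks"))      # miners propose blocks
print(can("full", "create_blocks"))       # full nodes only verify
print(can("lightweight", "full_ledger"))  # light nodes keep headers only
```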
If you still have any issues, feel free to connect with me on LinkedIn and Twitter.
OPCFW_CODE
This section builds a Mac OS X application from scratch. It will start with the base SimpleEdit JAR file that was built in Chapter 4 and Chapter 5, and then add the necessary elements to convert it to a full Mac OS X application bundle. It will build out the directory structure shown in Figure 7-5; you might want to refer to this figure as you walk through this example. Create a new folder called SimpleEdit in your home directory (~) by using the Finder. Add a folder inside this new directory called Contents. This is where you'll add the Info.plist file. Next, create a MacOS folder (no space) inside the Contents directory. Here, add a "stub" file that acts as a native launcher stub for the application. Create a Resources folder inside Contents. This is where you will add an icns file, an icon that will be displayed in the Finder and standard file dialogs. Finally, add a Java directory to the Resources folder. This is where you'll put the required Java libraries (JAR files). Directly inside the Contents folder, add an Info.plist file with the contents shown back in Example 7-2. Once you have a base Info.plist file, use the Property List Editor to make any necessary additions or changes. To launch your application, you'll need a small native stub file. Copy the file JavaApplicationStub from the directory /System/Library/Frameworks/JavaVM.framework/Versions/A/Resources/MacOS. You can rename this stub whatever you want, as long as the stub file matches the entry for CFBundleExecutable in Info.plist. A new stub is included with each JVM release from Apple, and you'll generally want to use the latest available stub. In the Resources folder inside Contents, add an icns file (a Mac OS X icon file). For development purposes, you can borrow an icon file from another application to test, but you should eventually use the IconComposer tool (shown in Figure 7-6) to create attractive icons. You can find IconComposer in /Developer/Applications/. 
It's also worth pointing out that the photorealistic icons used by Mac OS X are sometimes best created in a commercial application and then imported into IconComposer. Specifically, Adobe Photoshop does an excellent job of creating an application icon, including generating transparency masks, and IconComposer will import Photoshop's PSD files. Obviously, you need to add your Java application code to the package. Copy the SimpleEdit.jar file into the Java directory inside the Resources folder. If you were building an application that relied on several other Java libraries, you'd want to place those libraries here as well, and update the Info.plist classpath entry (using the $JAVAROOT/ directive to indicate this relative, dynamic path). Finally, rename the base directory from SimpleEdit to SimpleEdit.app. The Finder will automatically recognize the new folder name and display the folder as an application (hiding the .app extension, even if the Finder preferences are set to always show file extensions). You can now use the Finder to drag and drop files onto the application's icon. Assuming you've added the Finder "Open" file handlers (as described in Chapter 5), you'll also be able to open files by using standard features such as the Finder's "Open With" command (as shown in Figure 7-7). The default handler in Chapter 5 displays a dialog showing the path of the file, as shown in Figure 7-8. Depending on your application, you'll probably want to use the passed-in path to open the file and read the data by using standard Java file I/O APIs. Congratulations! You've now built a complete Mac OS X application.
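The walkthrough above can be condensed into a few shell commands. This is a sketch of the layout only; the stub, icon, and JAR paths are the book's examples, and the copy steps are left as comments since those files live elsewhere on your system:

```shell
# Build the SimpleEdit.app bundle skeleton described in this section.
APP=SimpleEdit.app
mkdir -p "$APP/Contents/MacOS" "$APP/Contents/Resources/Java"

# Placeholder; fill in with the contents of Example 7-2.
: > "$APP/Contents/Info.plist"

# cp /System/Library/Frameworks/JavaVM.framework/Versions/A/Resources/MacOS/JavaApplicationStub \
#    "$APP/Contents/MacOS/"        # name must match CFBundleExecutable
# cp MyIcon.icns "$APP/Contents/Resources/"
# cp SimpleEdit.jar "$APP/Contents/Resources/Java/"

ls -R "$APP"   # inspect the resulting directory structure
```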
OPCFW_CODE
We use the fwmugen generator in GeneratorFactory.cxx to evaluate the MFT. However, to make changes to the event parameters we need to recompile O2. This approach is cumbersome and prevents automation. I wonder if there is a better way of configuring the generators to change multiplicity, rapidity and pT ranges, vertex position, and so on without recompiling O2? Thanks in advance. I am not an expert, but what I did to have a variable multiplicity was to simply add it as an option in the SimConfig class. You can then get the information from the SimConfig object in GeneratorFactory.cxx. The option can be passed to the simulation when you start it, making automation easy. This should work for all the properties you want. Maybe it's not the best way, but for me it worked. I hope this helps. Thanks for pointing to SimConfig. The boxgen generator can be configured for 10 events with 100 forward muons with the following command: o2-sim-serial -m MFT -e TGeant3 -g boxgen -n 10 --configKeyValues 'BoxGun.pdg=13 ; BoxGun.eta=-3.6 ; BoxGun.eta=-2.45; BoxGun.number=100' Correct. You could make a parameter in a similar way to what is done for BoxGun and fetch the values in the GeneratorFactory. Hi Sandro, all, this proposal (thanks Thomas!) solves all but one issue. How do we add a realistic vertex distribution to the generation of the events? As far as I see, now all events are generated at the nominal zero. I changed the GeneratorFactory and BoxGunParameter to provide the option to set the vertex position and the vertex range. It is implemented in the boxgunvertex branch of my fork and can be used as:
o2-sim -m MFT -e TGeant3 -g boxgen -n 10 --configKeyValues 'BoxGun.pdg=13 ; BoxGun.eta=-3.9 ; BoxGun.eta=-2.1; BoxGun.prange=0.1 ; BoxGun.prange=5 ; BoxGun.vertexXYZ=-15 ; BoxGun.number=20' For vertex ranges, this branch has a new generator named boxgenvrange that can be used as o2-sim -m MFT -e TGeant3 -g boxgenvrange -n 10 --configKeyValues 'BoxGun.pdg=13 ; BoxGun.eta=-3.9 ; BoxGun.eta=-2.1; BoxGun.prange=0.1 ; BoxGun.prange=5 ; BoxGun.vertexRange=-.1 ; BoxGun.vertexRange=-.1 ; BoxGun.vertexRange=+.1 ; BoxGun.vertexRange=+.1 ; BoxGun.number=20' However, strangely enough, the FairBoxGenerator, on which the O2 generator factory is based, allows ranges only for the x and y positions of the vertex, not the z. Thus, setting BoxGun.vertexRange sets the z vertex position. The workaround I have in mind is to scan the z vertex position with external scripts. It is a strange limitation, though. @swenzel, should I create a merge request for this? Hi Guillermo, Rafael, you can easily implement a variable z vertex position in the FairBoxGenerator. However, having FairRoot as a development package is sometimes a little painful since (afaik) O2 does not always work with the latest commit of FairRoot, so you have to check which commit is working. Vertex position can already be configured using the "InteractionDiamond" key. This is independent of the generator and should be provided at the framework level. Please make a JIRA ticket if features are missing. @pezzi: I'd be happy if you could contribute generalizations to the vertexing in a PR. But please do it on the InteractionDiamond level. If something cannot be done in the FairRoot system we would either need to generalize their classes or decouple completely. If you need some custom generator, I think you can easily use an external one.
For example, I created the macro myfwmugen.C (see below for the content) which I launch with the command: o2-sim -m MID -g extgen --extGenFile=myfwmugen.C --extGenFunc='myfwmugen(2)' -n 10000 This example is a bit overshooting, since I see from this mail thread that I could simply use boxgen with the proper configKeyValues, but still, it can come in handy in the future.

FairGenerator* myfwmugen(int nMuons = 1, double pMin = 2., double pMax = 100., bool isMuPlus = false)
{
  // instantiate and configure an external generator
  int pdgCode = (isMuPlus) ? -13 : 13;
  std::cout << "Simulate " << nMuons << " particles with pdg code " << pdgCode << " per event" << std::endl;
  auto gen = new FairBoxGenerator(pdgCode, nMuons);
  // (rest of the original macro truncated here; at minimum, set the
  // momentum range and return the configured generator)
  gen->SetPRange(pMin, pMax);
  return gen;
}

Hi Sandro, I missed this edit. Sorry for the late answer. Thanks for pointing out that the interaction diamond method implements vertex distributions for all axes. It provides Gaussian distributions, and in this particular case flat distributions are preferred. FairBoxGenerator is able to generate particles with vertex ranges in X and Y, but not Z. This limitation is evident in the implementation of FairBoxGenerator::SetBoxXYZ. In my opinion O2 should be able to use what is available in FairRoot, thus I implemented a version of boxgun with vertexing control. See PR #2685. I would like to have a flat/box vertex distribution in O2 for the Z axis as well. It would be straightforward to generalize the FairBoxGenerator class to support ranges in Z, or add several generators along the Z axis as done here (not included in pull request #2685).
OPCFW_CODE
At its heart, the flaw is found in the cryptographic nonce, a randomly generated number which is used just once to prevent replay attacks, in which a hacker impersonates a user who was legitimately authenticated. If at all possible, it is suggested to disable TKIP support, although these attacks are not common at present. Matthew Green, a cryptography professor at Johns Hopkins University, said in a tweet that this is "probably going to turn into a slew of TJ Maxxes," referring to the cyberattack on the department store, in which hackers cracked the Wi-Fi password that connected the cash registers to the network. In fact, the latest version of the Portable Penetrator WPA Cracker includes a WiFi password recovery method that can make sure you can access your WiFi even though a hacker has breached it and blocked you from access. - the next step is brute-forcing the key offline with something like hashcat or john-the-ripper (it works by generating guesses and seeing if the hash generated from the guess matches the hash captured; multi-GPU PCs can generate about 500,000 WPA hashes per second). Wired Equivalent Privacy (WEP) is the most widely used Wi-Fi security protocol in the world. This is a function of age, backwards compatibility, and the fact that it appears first in the protocol selection menus in many router control panels. If the password is cracked you will see a KEY FOUND! message in the terminal, followed by the plain-text version of the network password. The simplest way to protect against brute-force attacks on WPA2 is to set the re-authentication wait time to one or a few seconds.
In this way, it would take them years to try all combinations even for a short password. But many products and device makers will not receive patches -- immediately, or ever. Katie Moussouris, founder of Luta Security, said in a tweet that Internet of Things devices will be some of the "hardest hit." The expert describes the attack in more detail on a website dedicated to the KRACK attack, and in a research paper the expert plans to present at this year's Computer and Communications Security (CCS) and Black Hat Europe conferences. It might also be worth crossing one's fingers... at least until a new security scheme becomes available. The attack targets the WPA2 protocol, the very protocol that had not been broken in over 14 years. At a high level, the vulnerability allows a malicious agent to intercept a connection between a WiFi network and a device. The malicious agent can then force the reinstallation of an already-in-use encryption key, by manipulating and replaying the cryptographic handshake process that takes place between the device and the network.
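The offline brute-force step mentioned above works because the WPA/WPA2 pre-shared key is derived deterministically from the passphrase and the SSID, so a captured handshake lets an attacker test guesses without touching the network. A minimal sketch using only Python's standard library; the 4096-iteration PBKDF2 derivation is the one specified in IEEE 802.11i, and the "password"/"IEEE" pair is that standard's published test vector:

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit WPA/WPA2 pre-shared key (IEEE 802.11i PBKDF2)."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# An attacker who captured a handshake can test candidate passphrases
# offline, one derivation per guess -- this is what hashcat parallelizes:
for guess in ["letmein", "password"]:
    print(guess, wpa_psk(guess, "IEEE").hex())
```

The 4096 HMAC-SHA1 iterations per guess are what make each individual test slow; GPUs compensate by running huge numbers of guesses in parallel, which is why long random passphrases remain the practical defense.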
OPCFW_CODE
Given an array (one-dimensional or multidimensional), the task is to delete an array element based on its key value. Input: Array ( [0] => 'G' [1] => 'E' [2] => 'E' [3] => 'K' [4] => 'S' ) Key = 2 Output: Array ( [0] => 'G' [1] => 'E' [3] => 'K' [4] => 'S' ) Using unset() Function: The unset() function is used to remove an element from an array. unset() is the same function used to destroy any other variable, and in the same way it can delete any element of an array. The call takes the array element (by its key) as input and removes that element from the array. After removal, the keys of the remaining elements do not change (the array is not re-indexed). Parameter: This function accepts a single parameter, the variable (or array element) to unset. It is required. Program 1: Delete an element from a one-dimensional array.

<?php
$arr = array('G', 'E', 'E', 'K', 'S');
print_r($arr);
unset($arr[2]); // delete the element with key 2
print_r($arr);
?>

Array ( [0] => G [1] => E [2] => E [3] => K [4] => S ) Array ( [0] => G [1] => E [3] => K [4] => S )

Program 2: Delete an element from an associative array.

<?php
$marks = array(
    'Ankit' => array('C' => 95, 'DCO' => 85),
    'Ram'   => array('C' => 78, 'DCO' => 98),
    'Anoop' => array('C' => 88, 'DCO' => 46)
);
echo "Before delete the element\n";
print_r($marks);
unset($marks['Ram']); // delete the element with key 'Ram'
echo "After delete the element\n";
print_r($marks);
?>

Before delete the element Array ( [Ankit] => Array ( [C] => 95 [DCO] => 85 ) [Ram] => Array ( [C] => 78 [DCO] => 98 ) [Anoop] => Array ( [C] => 88 [DCO] => 46 ) ) After delete the element Array ( [Ankit] => Array ( [C] => 95 [DCO] => 85 ) [Anoop] => Array ( [C] => 88 [DCO] => 46 ) )
OPCFW_CODE
I am applying the simple least mean square update rule using Python, but somehow the values of theta I get become very high.

from pylab import *

data = array(
    [[1,4.9176,1.0,3.4720,0.998,1.0,7,4,42,3,1,0,25.9],
     [2,5.0208,1.0,3.5310,1.50,2.0,7,4,62,1,1,0,29.5],
     [3,4.5429,1.0,2.2750,1.175,1.0,6,3,40,2,1,0,27.9],
     [4,4.5573,1.0,4.050,1.232,1.0,6,3,54,4,1,0,25.9],
     [5,5.0597,1.0,4.4550,1.121,1.0,6,3,42,3,1,0,29.9],
     [6,3.8910,1.0,4.4550,0.988,1.0,6,3,56,2,1,0,29.9],
     [7,5.8980,1.0,5.850,1.240,1.0,7,3,51,2,1,1,30.9],
     [8,5.6039,1.0,9.520,1.501,0.0,6,3,32,1,1,0,28.9],
     [9,16.4202,2.5,9.80,3.420,2.0,10,5,42,2,1,1,84.9],
     [10,14.4598,2.5,12.80,3.0,2.0,9,5,14,4,1,1,82.9],
     [11,5.8282,1.0,6.4350,1.225,2.0,6,3,32,1,1,0,35.9],
     [12,5.303,1.0,4.9883,1.552,1.0,6,3,30,1,2,0,31.5],
     [13,6.2712,1.0,5.520,0.975,1.0,5,2,30,1,2,0,31.0],
     [14,5.9592,1.0,6.6660,1.121,2.0,6,3,32,2,1,0,30.9],
     [15,5.050,1.0,5.0,1.020,0.0,5,2,46,4,1,1,30.0],
     [16,5.6039,1.0,9.520,1.501,0.0,6,3,32,1,1,0,28.9],
     [17,8.2464,1.5,5.150,1.664,2.0,8,4,50,4,1,0,36.9],
     [18,6.6969,1.5,6.9020,1.488,1.5,7,3,22,1,1,1,41.9],
     [19,7.7841,1.5,7.1020,1.376,1.0,6,3,17,2,1,0,40.5],
     [20,9.0384,1.0,7.80,1.50,1.5,7,3,23,3,3,0,43.9],
     [21,5.9894,1.0,5.520,1.256,2.0,6,3,40,4,1,1,37.5],
     [22,7.5422,1.5,4.0,1.690,1.0,6,3,22,1,1,0,37.9],
     [23,8.7951,1.5,9.890,1.820,2.0,8,4,50,1,1,1,44.5]])

x = zeros((len(data[:,4]), 2))
x[:,0], x[:,1] = 1, data[:,4]
y = data[:,-1]
theta = array([100.0, 100.0])
alpha = 0.4
iternum = 100
for i in range(iternum):
    theta -= alpha*dot(transpose(x), (dot(x,theta)-y))
print theta

I get the answer to be [7.18957001e+150 1.19047264e+151], which is unrealistic for the given code. However, if I alter the iternum loop to be

for i in range(iternum):
    theta -= alpha*dot(transpose(x), (dot(x,theta)-y))/size(data[:,4]) # basically divide by the total number of training examples
print theta

I get the correct answer. However, as per what I have learned, the cost function does not necessarily depend on training example size.
Can somebody point to the source of the problem? Apologies if the explanation of the problem was a little convoluted.
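For illustration of what is going on (using made-up toy data, not the question's dataset): the summed gradient Xᵀ(Xθ−y) scales with the number of samples m, so dividing by m does not change the cost function's minimizer at all — it is exactly equivalent to using the smaller effective step size α/m, which keeps the iteration inside its stable range:

```python
import numpy as np

# Toy regression (hypothetical data): y = 3 + 2x exactly, 50 samples.
m = 50
X = np.column_stack([np.ones(m), np.linspace(0.0, 1.0, m)])
y = 3.0 + 2.0 * X[:, 1]

theta = np.array([100.0, 100.0])
alpha = 0.4
for _ in range(1000):
    # Averaged gradient: same minimizer as the summed gradient, but an
    # effective step size of alpha/m, small enough to converge here.
    theta -= alpha * X.T @ (X @ theta - y) / m

print(theta)  # approaches the true parameters [3, 2]
```

With the summed gradient, the same alpha multiplies eigenvalues of XᵀX that are m times larger, so each step overshoots and the error grows geometrically — producing the huge theta values observed.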
OPCFW_CODE
- What's the point? (or, "What's the Emperor wearing?") Re: What's the point? (or, "What's the Emperor wearing?") Jessica Frazelle <me@...> On Tue, Sep 18, 2018 at 18:04 Nick Chase <nchase@... First off, please take this in the spirit in which it's intended. It's not meant to be snarky or argumentative (though it will probably sound that way); it's just meant to start a conversation. I've been thinking a lot about the conversation about working groups from this morning's meeting, and I think we're missing a fundamental issue. What I heard was a lot of talk about how "we don't want to be kingmakers" and "people put more importance on being a CNCF project than they should". Well, if being a CNCF project doesn't mean anything ... why do it? In my opinion the foundation's role should be to provide a space for shared IP. And I agree that people are putting too much importance on being in the foundation. You can do open source without a foundation. The foundation's role should not be marketing projects and creating non-organic growth, but helping the projects have a space to work without worrying about IP or licensing. It should also help the communities of those projects get things they need, like money for CI infrastructure, and make sure those projects' communities are healthy. That's what a foundation is, imho. In fact, why have a foundation at all? If the purpose of the CNCF is just to foster cloud native computing, and not to validate a project's existence, then why handle projects at all? Why not just create standards, or even just recommendations, as the W3C does? I guess what I'm saying is that while nobody likes politics -- and believe me I DESPISE them -- if you're going to have a foundation that is supposed to mean something, then ... it should mean something.
So my feeling is that we either bite the bullet and get tough about letting projects in -- even if that means asking them to perhaps work together, or create a joint API and then manage the API -- or we drop the pretenses and just create a directory anybody can add themselves to. See, I told you it would sound snarky, but really, I am only trying to start the discussion. Somebody please, tell me what I'm missing here. 4096R / D4C4 DD60 0D66 F65A 8EFC 511E 18F3 685C 0022 BFF3
OPCFW_CODE
Streaming audio from USB microphone (+video) through mobile data I am looking for a way to live stream the audio from my USB microphone through a WiFi hotspot via mobile data to an online website / the cloud. My goal is to stream my performances to the people who could not come. Since there is usually no WiFi available, I wish to use my phone as a WiFi hotspot, connect the Pi, and have it livestreaming. Icecast could be used with port forwarding, but that is not possible on a phone hotspot. I found multiple ways to livestream video to YouTube, and if it is possible to add my audio to such a livestream I would be happy to add a Pi Camera. But I couldn't find out whether it is possible to add audio to a YouTube livestream. Does anyone have a recommendation on how to solve this problem? Hello and welcome -- Add more information about this line: "Alternatively, I could also hook up a Pi camera and make some sort of YouTube livestream, but I couldn't find any information on adding audio to such livestream." Thanks! I edited the line about the camera It's a complicated project. You should break it into several parts. This is my suggestion, and I would be happy if other experts improve it. Domain name Anyone can buy a domain name. To do so, you visit a domain name registrar, such as GoDaddy or Namecheap, key in the domain you want to buy, and pay a fee. You can't buy just any domain, of course: only one that isn't already registered by another person or business and that bears a valid domain suffix. Imagine you bought the domain name below: domain.com Find the public IP address Find your mobile network's public IP address with this site: What's My IP Address? Networking Tools & More Another approach is to connect the Raspberry Pi to your mobile hotspot and run this command: curl icanhazip.com Imagine you got the output below after running that command as your public IP address: <IP_ADDRESS> Cloudflare Sign up on Cloudflare and add your domain name there.
Set the Cloudflare-preferred NS1/NS2 nameservers for your domain name at the domain name registrar. Add this DNS record on your domain dashboard on Cloudflare: <IP_ADDRESS> would be your mobile network's public IP address. domain.com would be the domain name that you have bought. Set name servers on RPi Add the line below to /etc/dhcpcd.conf: static domain_name_servers=<IP_ADDRESS> <IP_ADDRESS> Here <IP_ADDRESS> is the Cloudflare nameserver. Now it's possible to access your Raspberry Pi over the internet by a domain name. Note that the public IP address of mobile networks generally changes continuously, and you should update it on Cloudflare manually or via the API. Camera stream Install the motion package on the Raspberry Pi as your camera web-streamer: apt install motion Installation: Video Streaming from raspberry to an external server Configuration: /etc/motion/motion.conf You can configure the stream quality according to your bandwidth limitations. Make it more secure: Motion security If you are concerned about the security of the streams, you can make them much more secure with some configuration and insight. The motion camera streams can be viewed at domain.com:8081. Audio stream As with camera streaming, there are a lot of tools to stream your Raspberry Pi's audio. For that, check the link below: Streaming Audio from a remote Raspberry Pi to my computer Don't stream audio on the same port number as motion, which is 8081. You should choose another port number and match the two UDP streaming ports in your web page code (JS, etc.). Conclusion You are going to point a domain name at your variable IP address and then access your live audio & video/camera streaming. Note that this is a suggestion and you may find a better solution for each part. References How do I setup apache to only allow local devices to connect to my website/app?
Cloudflare API v4 Documentation Public/External IP dhcpcd.conf documentation P.I.R.M.A Raspberry Pi PIR Motion Audio Installation Building a Motion Activated Security Camera with the Raspberry Pi Wow, thanks for this very elaborate answer! I will dive in tomorrow on everything you mentioned here. A question in advance: is it possible to make my mobile network public IP address static? I don't really want to re-configure my whole setup every time I want to launch the livestream. Also, do you know if there's a way to merge the audio and video? It might give me syncing problems, but opening two websites is also not ideal. I'm sure your suggestions will work and it helps me a lot!! But I want to make it easy for everyone involved :) is it possible to make my mobile network public IP address static? -- I don't think so. However, it depends on your service provider; contact them. || do you know if there's a way to merge the audio and video? -- No, you don't need to open up two different websites. It's all about one server (RPi) and one address. You should create an HTML page that opens these two different ports on a single page. Although, I'll search and improve the answer on this point. I don't really want to re-configure my whole setup every time I want to launch the livestream. -- I added the Cloudflare API source; you can create an API client on the RPi in Python or whatever language you prefer. That means the RPi sends the new record (with the new/changed IP address) to the Cloudflare API to update that record. Others may help us in this case. you can create an API client on the RPi -- ohh sounds great! that should solve the problem, then. create an HTML page to open these two different ports on a single page -- that seems to be a better solution than killing the gpu while trying to sync the video and audio. I'm planning on running this on a RPi Zero so I'd rather keep it as "light" as possible. Thanks!!
killing the GPU while trying to sync the video and audio -- Nice mention. -- Alright. Cheers, Let us continue this discussion in chat.
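To automate the Cloudflare update discussed in this thread, here is a minimal dynamic-DNS sketch in Python. The token, zone ID, and record ID are placeholders you would have to fill in from your Cloudflare dashboard; the endpoint shape follows Cloudflare's documented `PUT /zones/{zone_id}/dns_records/{record_id}`:

```python
import json
import urllib.request

API = "https://api.cloudflare.com/client/v4"

def build_record(name: str, ip: str, ttl: int = 120) -> dict:
    """Payload for an unproxied A record pointing `name` at `ip`."""
    return {"type": "A", "name": name, "content": ip, "ttl": ttl, "proxied": False}

def update_dns(token: str, zone_id: str, record_id: str, name: str, ip: str) -> None:
    """PUT the new record; run this whenever `curl icanhazip.com` changes."""
    req = urllib.request.Request(
        f"{API}/zones/{zone_id}/dns_records/{record_id}",
        data=json.dumps(build_record(name, ip)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)  # network call; raises on HTTP errors

# Example payload (no network access needed just to inspect it):
print(build_record("domain.com", "203.0.113.7"))
```

A cron job on the Pi can compare the current public IP against the last-seen value and call `update_dns` only when it changes, which keeps the setup hands-off.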
STACK_EXCHANGE
What are the effects of enzyme exposure to high temperatures? Question: After enzymes are exposed to high temperatures and undergo denaturation, then returned to their optimal temperature and renatured, can the enzyme's active site return to its original shape, and will it function at the same level of efficiency as it did before being denatured and renatured? Some background I wrote to the question: I am just beginning to learn cell biology; I hope what I've written is correct. When you raise the temperature of an enzyme, at first it will increase the efficiency of the enzyme's activity, but eventually, as the temperature rises, the enzyme will stop functioning and undergo denaturation, which means that the 3D conformation of the protein is unraveled, so it doesn't function anymore. From what I've managed to research, the high temperature changes the shape of the active site in the enzyme, which is what allows the enzyme's activity in the first place. I know that some proteins cannot be renatured (like adding heat to an egg will fry it and there is no way to unfry it), but some can be renatured (like heated milk, where the protein bonds will reestablish themselves when it cools down). But will the enzyme's active site return to its original shape and manage to function at the same level of efficiency as it did before? Or is the damage permanent? Possible duplicate of How does temperature influence the rate of protein degradation?, How is the rate of transcription influenced by temperature?, and Influence of temperature on protein binding and decay rates. The answer is that it completely depends on what specific enzyme you're talking about; some of them will and some of them won't. What you wrote is correct: increasing the temperature of most enzymes (although there are exceptions) will increase their activity to a certain point, after which the proteins lose their 3D structure and become unfolded.
Upon cooling, it all depends on whether the enzyme can re-fold by itself or not. Some proteins form aggregates when heated, which are more stable than the original conformation, and so generally won't refold on cooling. Some proteins are able to fold into the proper conformation by themselves, while others require assistance from chaperone proteins in order to fold correctly. Many of these chaperones are heat-shock proteins (HSPs), which are upregulated in response to thermal stress. That points to the fact that many proteins can't simply refold on their own and require help from these HSPs to regain their original conformation. So the real answer is that it all depends on the specific protein. It also depends on how much you heat the enzyme and how much of its 3D structure you affect: if you only partially denature an enzyme, it's more likely to refold than one that's been completely denatured. However, if you want an answer about enzymes in general, I'd say that most enzymes aren't able to refold properly after denaturation and regain their enzymatic activity, so the damage is permanent. Thank you so much! Your answer really helped and gave me some more knowledge to use for further research into the matter.
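The rise-then-collapse behaviour described above can be sketched numerically. The model below is a hypothetical illustration, not data for any real enzyme: reaction rate grows Arrhenius-style with temperature (here a simple Q10 = 2 rule of thumb), while the fraction of folded, active enzyme drops sigmoidally around an assumed melting temperature, so overall activity rises and then collapses.

```python
import math

def relative_activity(temp_c, t_opt=37.0, t_melt=55.0, width=4.0):
    """Toy model of enzyme activity vs temperature (all parameters assumed)."""
    # Arrhenius-like speedup: rate roughly doubles per 10 C (Q10 = 2)
    rate = 2.0 ** ((temp_c - t_opt) / 10.0)
    # two-state unfolding: fraction still folded falls sigmoidally near t_melt
    folded = 1.0 / (1.0 + math.exp((temp_c - t_melt) / width))
    return rate * folded

for t in (25, 37, 45, 55, 80):
    print(t, round(relative_activity(t), 3))
```

Whether the curve is reversible on cooling is exactly the question above: this model says nothing about refolding, which is the protein-specific part.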
UPDATE for people with ATI cards!! For those of you using an ATI-based video card, make sure you get this driver: I don't know what it fixes since i'm running Nvidia, but i think it's worth a try! Maybe you'll get AA fixed... Nope. ATI's problems stem from the fact that they don't support Shader Model 3.0 on some of the slightly older cards. The hotfix won't cure that. Originally Posted by WarAnakin: Because the cards are too old to support it. Originally Posted by Can O Soup: simple as that. anakin, what do I do when I install the fix and the installer asks me if I want to overwrite a newer file? Overwrite it - i did that at least and everything is fine. did this work for you guys??? didn't work for me. The hotfix did fix the texture issue where the floors were a solid black. However, I cannot get down the bathysphere; every time the little projector starts up, the game freezes and won't go past it. I've been trying to get my pc up and running with this game and it's been nothing but a headache. Running: AMD Athlon 5600+ 2.8GHz, ASUS M2N-E SLI motherboard, ATI Radeon HD2600XT (updated with ATI's 'hotfix'), Windows Vista Ultimate 64bit. I'm seriously thinking of just taking the game back to the retailer; I mean honestly, this is ridiculous. I've done everything as far as resetting all the settings back to factory defaults, patching the driver, tweaking settings, and still it won't work... Every single other game I have works perfectly fine. They need to patch this, and patch it fast. Otherwise word of mouth is going to spread and it isn't going to be good... is this supposed to fix the no mouse cursor and artifacts during gameplay?? i am having the EXACT same problem, tibby. no cursor is a pain too. Originally Posted by tibby05: damnit i want to play. did the ati bioshock fix work for you, swim team?? I can't get either of those fixes to work.
The message i get is: "Could not load file or assembly 'MOM.Implementation, Version=2.0.2784.39186, Culture=neutral, PublicKeyToken=90a9c70f846762e' or one of its dependencies. The system cannot find the file specified." any ideas? thanks. Chalk my dual SLI ATI X850XT cards onto the bonfire. Great FPS on most games, just no Shader 3.0, which kinda marks them as useless. no tibby, neither of the posted fixes worked for me. looks like it might be time to overhaul my system. I like how Irrational made a game that pretty much is a big sod off to most PC users. If it was PC-only, they would have made the game compatible with everything; instead, they focused on the 360. Who cares about PC people, they aren't worth our money. so, i guess it was inevitable that we would have to update our comps to run these future games. i guess i'll update and switch back to nvidia. so i have an ATI Radeon 9550 / X1050 Series and an AMD Athlon(tm) 64 Processor 3000+; what would i need to do/get/buy to make the conversion to a new nvidia system? good/bad idea? running an X600 card: i'm running an X600, am i screwed? I've seen posts that my card is not supported.
import warnings

import numpy as np
import torch
import transformers as ppb
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from bayes_classifier import import_data

warnings.filterwarnings('ignore')

"""Following this tutorial:
https://github.com/jalammar/jalammar.github.io/blob/master/notebooks/bert/A_Visual_Notebook_to_Using_BERT_for_the_First_Time.ipynb
"""


def main():
    # For DistilBERT:
    model_class, tokenizer_class, pretrained_weights = (
        ppb.DistilBertModel, ppb.DistilBertTokenizer, 'distilbert-base-uncased')

    # Get insults data
    type_tweet = import_data()

    # Load pretrained model/tokenizer
    tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
    model = model_class.from_pretrained(pretrained_weights)

    # Tokenize each tweet, adding the [CLS]/[SEP] special tokens
    tokenized_tweet = type_tweet["tweet"].apply(
        lambda x: tokenizer.encode(x, add_special_tokens=True))

    # Pad every sequence to the length of the longest one.
    # 0 is DistilBERT's [PAD] token id, which is why it is used for padding.
    max_len = max(len(i) for i in tokenized_tweet.values)
    padded = np.array([i + [0] * (max_len - len(i)) for i in tokenized_tweet.values])

    # Mask out the padding positions so the model ignores them
    attention_mask = np.where(padded != 0, 1, 0)

    input_ids = torch.tensor(padded).to(torch.int64)
    attention_mask = torch.tensor(attention_mask).to(torch.int64)

    with torch.no_grad():
        last_hidden_states = model(input_ids, attention_mask=attention_mask)

    # Use the hidden state of the [CLS] token as the sentence embedding
    features = last_hidden_states[0][:, 0, :].numpy()
    labels = type_tweet["class"]

    train_features, test_features, train_labels, test_labels = train_test_split(features, labels)

    lr_clf = LogisticRegression()
    lr_clf.fit(train_features, train_labels)
    print(lr_clf.score(test_features, test_labels))


if __name__ == "__main__":
    main()
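The padding step in the script can be seen in isolation. In this sketch (made-up token ids, not real tokenizer output), shorter sequences are right-padded with 0, DistilBERT's [PAD] token id, and the attention mask marks exactly the non-padding positions:

```python
import numpy as np

# two tokenized sequences of different lengths (hypothetical ids)
token_ids = [[101, 7592, 102], [101, 7592, 2088, 999, 102]]

max_len = max(len(t) for t in token_ids)
padded = np.array([t + [0] * (max_len - len(t)) for t in token_ids])

# 1 where there is a real token, 0 where there is padding
attention_mask = np.where(padded != 0, 1, 0)

print(padded.shape)       # (2, 5)
print(attention_mask[0])  # [1 1 1 0 0]
```

This is why the comprehension appends `[0]`: padding with the [PAD] id keeps the mask trivially computable as `padded != 0`.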
The latter concluded that musical transposition is still a significant part of music theory in the United Kingdom; the music theory examination board's own research likewise shows that standard examinations at level three require the entrant to perform such a transposition. Figure 1. Digital Musical Transpositional Device. In operation, the device is closer to a calculator in style: the user selects the standard key signature scale and enters the major chord types they have been using. The desired new key signature is then keyed in and the transpose button is pressed. Initially, the hardest parts of the project were formatting the code and developing the idea. Despite the pace of modern technology development, no other digital transpositional tools were found outside of platforms such as Logic Pro, which can transpose automatically within a DAW setting. During the presentation, it was discussed that such a device could be a useful learning tool for a grade three music theory student wishing to progress (ABRSM, 2020). Figure 2. ABRSM, 2020 Grade 3 Music Theory Requirements. As potential developments, it was also considered that the artefact could require updates such as both melodic and harmonic choices, including possible sound output, answering the question of whether anyone uses transposition anymore (Milne, 2009). Figure 3. Does Anyone Teach Transposing Anymore? by Elissa Milne. A question asked in the interest of the artefact concerned the types of teaching habits that could be accommodated (Weegar, 2012). Figure 4. Three Areas of Modern Teaching: Behaviourism, Constructivism, Cognitivism. Under behaviourism, the student is provided with questions that always have correct answers, as in two-plus-two equals four, and a digital device of this kind can continue to act as an independent technology in that role.
This stands in contrast to the automatic transpose features of a DAW (Digital Audio Workstation), given that, as stated, any grade 3 student in music theory must still be able to transpose manually with pencil and paper. Providing such a digital musical transpositional artefact would help bridge the gap between the end of the twentieth century and the early twenty-first century. ABRSM, 2020. ABRSM. [online] Gb.abrsm.org. Available at: <https://gb.abrsm.org/en/our-exams/music-theory-exams/music-theory-grade-3/> [Accessed 20 May 2020]. Milne, E., 2009. Does Anyone Teach Transposing Anymore? [online] Available at: <https://elissamilne.com/2009/10/20/does-anyone-teach-transposing-anymore/> [Accessed 22 April 2020]. Weegar, D., 2012. A Comparison of Two Theories of Learning - Behaviourism and Constructivism as Applied to Face-to-Face and Online Learning. [online] G-casa.com. Available at: <https://www.g-casa.com/conferences/manila/papers/Weegar.pdf> [Accessed 29 April 2020].
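The calculator-style behaviour described above can be sketched in a few lines. This is a hypothetical illustration of the core logic (chromatic pitch classes only, sharps preferred, no enharmonic spelling), not the artefact's actual code:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose_chord(root, from_key, to_key):
    """Shift a chord root by the interval between the old and new key."""
    shift = (NOTES.index(to_key) - NOTES.index(from_key)) % 12
    return NOTES[(NOTES.index(root) + shift) % 12]

# the I-IV-V chord roots of C major moved into D major
print([transpose_chord(r, "C", "D") for r in ["C", "F", "G"]])  # ['D', 'G', 'A']
```

A real teaching device would also need key-aware spelling (e.g. preferring Gb over F# in flat keys), which is precisely the pencil-and-paper skill the exam tests.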
Phosphorescence is the result of a radiative (light-emitting) transition involving a change in the spin multiplicity of (in most cases) a molecule: emission occurs from an excited triplet state down to the singlet ground state. Both this emitting transition and the singlet-to-triplet crossing that populates the triplet are quantum mechanically forbidden. These forbidden transitions are kinetically slow, which introduces a delay between photo-excitation (exposure to light of one wavelength) and emission (release of light of a different wavelength). So-called "glow in the dark" materials are phosphorescent materials with a very long (seconds, minutes, even hours) delay between excitation and emission. Most phosphorescent compounds have triplet lifetimes on the order of milliseconds. Electrons arranged in molecular orbitals group into pairs which follow the Pauli exclusion principle. In a nutshell, two electrons can occupy a single orbital only if their spins are paired (opposite). A singlet excited state results when an electron is promoted while conserving its spin. Relaxation back to the ground state is very fast because the multiplicity doesn't change. Transition to the triplet state involves a forbidden spin flip (electrons cannot exist in between the two spin states in a molecule). This phenomenon is known as intersystem crossing (ISC) and is kinetically slow, but thermodynamically favorable (the triplet is lower in energy). The energy released between the singlet excited state and the triplet excited state is dissipated vibrationally (see phonon). Once in the triplet state, relaxation back to the ground state necessarily involves another spin flip to avoid violating the Pauli exclusion principle, which is again kinetically slow, but thermodynamically favorable.
In some cases this energy is dissipated by the emission of a photon corresponding to the energy difference between the triplet state and the ground state, but often it is dissipated vibrationally (the ratio between these two outcomes for a single molecule is known as the quantum yield of phosphorescence). Since the triplet state is lower in energy than the singlet excited state, the light is lower in energy (red-shifted) than if it had been emitted from a singlet excited state. Many compounds emit from both the singlet and triplet states, and by measuring the difference in wavelength between the two, the energy difference between the excited states can be calculated. There are many facets to emission from triplet excited states, and many people have spent entire careers studying the phenomenon. As one example, a process known as delayed fluorescence occurs when two triplets encounter each other (by delocalization in the solid state, or by encounter of two species in solution or the gas phase) and annihilate to produce one singlet excited state of higher energy (T1 + T1 → S1 + S0). If this state then emits light, it will be of the shorter wavelength associated with fluorescence emission, but on a time scale appropriate for phosphorescence. Here S is a singlet and T a triplet, with a subscript denoting the excited state (zero is the ground state). Transitions can occur at higher energy levels, but only the first excited state is denoted, for simplicity.
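The red shift mentioned above can be put into numbers. The sketch below uses hypothetical emission wavelengths (fluorescence at 450 nm, phosphorescence at 520 nm) just to show how the singlet-triplet energy gap would be extracted from the two emission bands via E = hc/λ:

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Energy of a photon with the given wavelength, in eV (E = hc / lambda)."""
    return H * C / (wavelength_nm * 1e-9) / EV

e_singlet = photon_energy_ev(450.0)  # hypothetical fluorescence band
e_triplet = photon_energy_ev(520.0)  # hypothetical phosphorescence band, red-shifted

# singlet-triplet gap inferred from the two emission maxima
gap = e_singlet - e_triplet
print(f"{gap:.2f} eV")
```

For these assumed wavelengths the gap comes out to roughly 0.37 eV, i.e. the vibrational energy dumped during intersystem crossing.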
Why do we, Rubyists, care about Salesforce-Heroku? Everybody in our Ruby/Rails technosphere has probably heard that Salesforce.com, the successful company famous for its platform, its growth and its position as the first huge cloud business-oriented app, recently bought the small and innovative Heroku platform. I am not going to give an introduction to what those two key cloud actors do; you should know that already. I am here to explain why the CRM market leader decided to buy our underground Ruby on-demand hosting platform, and why that matters for us. Ruby is trendy: For years now, the web ecosystem has been driven by the Rails community. Creative, active and efficient, the Rails approach has spread like a major epidemic and has inoculated well-settled languages like PHP or Java with Ruby's OO flexibility and Rails' efficiency. The Ruby community is amazingly active. Coders are passionate and produce a living and elegant code base, making Ruby/Rails and all the Ruby libraries a relevant option in most situations. Heroku is trendy: Tired of deploying web apps that are complicated to manage, the Ruby community quickly adopted the Heroku cloud solution. As simple as adding a gem, hosting an app with Heroku lets you create a virtual slice of cloud using a single Heroku command. Your application is up and running in 3 minutes and ready to scale on demand. Thousands of applications were launched within a few months, and today they are close to 100,000 applications hosted on their platform. Salesforce.com wants to stay trendy and needs our love: After years of success and growth, Salesforce turned from a simple online web app into a sophisticated corporate platform. With two languages (Apex and Visualforce), a PaaS (Platform as a Service) called Force.com, and thousands of apps in its ecosystem, Salesforce has proved year after year its ability to innovate. The latest evolution is, for example, the launch of Chatter to address social and mobile usage.
Now that 3 million users connect daily to Salesforce and are used to managing their lead pipeline with it, an obvious and growing need for interaction with external apps has appeared. Of course, the Salesforce API provides almost all the information about your leads, accounts, contacts, etc., but the consumers of SF data (marketing people) want to create data visualization and analytics tools almost on the fly. Sales pipeline optimization, lead generation, and lead nurturing (progressive and automated qualification) are some of the hot topics for external Salesforce applications. Of course, cloud apps like Eloqua bring really innovative and well-integrated SF solutions for those needs. Nevertheless, small and fast businesses require specific integrations of Salesforce with their own products that can only be accomplished through external development. Here we go, Rubyists!! - The Salesforce data model is object-oriented, just as our daily ActiveRecord ORM is in Rails. Lead.first.address is a valid Salesforce command. - The Salesforce data model is fully exposed through its API and can be consumed easily using ActiveResource. - Salesforce will soon be more integrated with Heroku (I guess). - The Salesforce market is customer-centric and thinks as we do. Based on that, we can put our demonstrated skills for building web/mobile apps to good use. We have a new playground in which to express our ideas and conquer new markets. Nice reading about Salesforce and Heroku:
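As a rough sketch of how approachable the API surface is (shown here in Python rather than ActiveResource, and with a hypothetical instance name, API version, and helper function of my own), a SOQL query against the Salesforce REST API is just a GET with a URL-encoded query string:

```python
from urllib.parse import quote

def soql_query_url(instance, soql, version="v20.0"):
    """Build the REST query URL for a SOQL statement (helper is hypothetical)."""
    return f"https://{instance}/services/data/{version}/query?q={quote(soql)}"

url = soql_query_url("na1.salesforce.com", "SELECT Name FROM Lead")
print(url)
# https://na1.salesforce.com/services/data/v20.0/query?q=SELECT%20Name%20FROM%20Lead
```

A real client would attach an OAuth token and issue the GET; the point is only that the data model is one well-formed URL away.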
This is a follow-up to my previous post of handy Docker commands that I always find myself having to Google. The full list of commands discussed in this and the previous post is shown below. Hope you find them useful! - Don't require sudo to execute Docker commands - Examining the file system of a failed Docker build - Examining the file system of an image with an ENTRYPOINT - Copying files from a Docker container to the host - Copying files from a Docker image to the host - View the space used by Docker - Remove old Docker images and containers - Speeding up builds by minimising the Docker context - Viewing (and minimising) the Docker context - Bonus: Making a file executable in git - Bonus 2: Forcing script files to keep Unix line endings in git. View the space used by Docker: One of the slightly insidious things about Docker is the way it can silently chew up your drive space. Worse, it's not obvious exactly how much space it's actually using! Luckily Docker includes a handy command which lets you know how much space you're using, in terms of images, containers, and local volumes (essentially virtual hard drives attached to containers): $ docker system df TYPE TOTAL ACTIVE SIZE RECLAIMABLE Images 47 23 9.919GB 6.812GB (68%) Containers 48 18 98.35MB 89.8MB (91%) Local Volumes 6 6 316.1MB 0B (0%) As well as the actual space used up, this table also shows how much you could reclaim by deleting old containers, images, and volumes. In the next section, I'll show you how. Remove old Docker images and containers: Until recently, I was manually deleting my old images and containers using the scripts in this gist, but it turns out there's a native command in Docker to clean up: docker system prune -a. This command removes all unused containers, volumes (and networks), as well as any unused or dangling images. What's the difference between an unused and a dangling image?
I think it's described well in this Stack Overflow answer: An unused image means that it has not been assigned to or used in a container. For example, when running docker ps -a, it will list all of your exited and currently running containers. Any images shown being used inside any of the containers are a "used image". On the other hand, a dangling image just means that you've created a new build of the image, but it wasn't given a new name. So the old image you have becomes the "dangling image". Those old images are the ones that are untagged and display "<none>" as their name. When you run docker system prune -a, it will remove both unused and dangling images. Therefore any images being used in a container, whether they have been exited or are currently running, will NOT be affected. Dangling images are layers that aren't used by any tagged images, and they take up space. When you run the prune command, Docker double checks that you really mean it, and then proceeds to clean up your space. It lists out all the IDs of removed objects, and gives a little summary of everything it reclaimed (truncated for brevity): $ docker system prune -a WARNING! This will remove: - all stopped containers - all networks not used by at least one container - all images without at least one container associated to them - all build cache Are you sure you want to continue? [y/N] y Total reclaimed space: 6.679GB Be aware that if you are working on a new build using a Dockerfile, you may have dangling or unused images that you want to keep around. It's best to leave the pruning until you're at a sensible point. Speeding up builds by minimising the Docker context: Docker is designed with two components: a client and a daemon/service. When you write docker commands, you're sending commands using the client to the Docker daemon, which does all the work. The client and daemon can even be on two separate machines.
In order for the Docker daemon to build an image from a Dockerfile using docker build ., the client needs to send it the "context" in which the command should be executed. The context is essentially all the files in the directory passed to the docker build command (e.g., the current directory when you call docker build .). You can see the client sending this context when you build using a Dockerfile: Sending build context to Docker daemon 38.5MB For big projects, the context can get very large. This slows down the building of your Dockerfiles, as you have to wait for the client to send all the files to the daemon. In an ASP.NET Core app, for example, the top-level directory includes a whole bunch of files that just aren't required for most Dockerfile builds: Git files, Visual Studio / Visual Studio Code files, previous bin and obj folders. All these additional files slow down the build when they are sent as part of the context. Luckily, you can exclude files by creating a .dockerignore file in your root directory. This works like a .gitignore file, listing the directories and files that Docker should ignore when creating the context, for example:

.git
.vs
.vscode
bin/
obj/

The syntax isn't quite the same as for Git, but it's the same general idea. Depending on the size of your project, and how many extra files you have, adding a .dockerignore file can make a big difference. For this very small project, it reduced the context from 38.5MB to 2.476MB, and instead of taking 3 seconds to send the context, it's practically instantaneous. Not bad! Viewing (and minimising) the Docker context: As shown in the last section, reducing the context is well worth the effort to speed up your builds. Unfortunately, there's no easy way to actually view the files that are part of the context. The easiest approach I've found is described in this Stack Overflow question. Essentially, you build a basic image, and just copy all the files from the context.
You can then run the container and browse the file system to see what you've got. The following Dockerfile builds a simple image using the common BusyBox base image, copies all the context files into the /tmp directory, and runs find as the command when run as a container:

FROM busybox
WORKDIR /tmp
COPY . .
ENTRYPOINT find

If you create a new Dockerfile in the root directory called InspectContext.Dockerfile containing these layers, you can create an image from it using docker build and passing the -f argument. If you don't pass -f, Docker will use the default Dockerfile: $ docker build -f InspectContext.Dockerfile --tag inspect-context . Sending build context to Docker daemon 2.462MB Step 1/4 : FROM busybox Step 2/4 : WORKDIR /tmp Step 3/4 : COPY . . Step 4/4 : ENTRYPOINT find Successfully built bffa3718c9f6 Successfully tagged inspect-context:latest Once the image is built (which only takes a second or two), you can run the container. The find entrypoint will then spit out a list of all the files and folders in the /tmp directory, i.e. all the files that were part of the context: $ docker run --rm inspect-context With this list of files you can tweak your .dockerignore file to keep your context as lean as possible. Alternatively, if you want to browse around the container a bit further, you can override the entrypoint, for example: docker run --entrypoint sh -it --rm inspect-context That's pretty much it for the Docker commands for this article. I'm going to finish off with a couple of commands that are somewhat related, in that they're Git commands I always find myself reaching for when working with Docker! Bonus: Making a file executable in git: This has nothing to do with Docker specifically, but it's something I always forget when my Dockerfile uses an external build script (for example when using Cake with Docker).
Even if the file itself has executable permissions, you have to tell Git to store it as executable too: git update-index --chmod=+x build.sh Bonus 2: Forcing script files to keep Unix line endings in git: If you're working on Windows, but also have scripts that will be run in Linux (for example via a mapped folder in a Linux VM), you need to be sure that the files are checked out with Linux line endings (LF instead of CRLF). You can set the line endings Git uses when checking out files with a .gitattributes file. Typically, my file just contains * text=auto so that line endings are auto-normalised for all text files. That means my .sh files end up with CRLF line endings when I check out on Windows. You can add an extra line to the file that forces all .sh files to use LF endings, no matter which platform they're checked out on:

*.sh text eol=lf

These are the commands I find myself using most often, but if you have any useful additions, please leave them in the comments! :)
A double spend is an attack where a given set of coins is spent in more than one transaction. There are a few main ways to perform a double spend: - Send two conflicting transactions in rapid succession into the Bitcoin network. This is called a race attack. - Pre-mine one transaction into a block and spend the same coins before releasing the block to invalidate that transaction. This is called a Finney attack. - Own 51+% of the total computing power of the Bitcoin network to reverse any transaction you feel like, as well as have total control over which transactions appear in blocks. This is called a 51% attack. Traders and merchants who accept payment immediately on seeing "0/unconfirmed" are exposed to the transaction being reversed. An attempt at fraud could work as follows: the fraudster sends a transaction paying the merchant directly to the merchant, and sends a conflicting transaction spending the same coin to himself to the rest of the network. It is likely that the second, conflicting transaction will be mined into a block and accepted by bitcoin nodes as genuine. Merchants can take precautions (e.g., disable incoming connections, only connect to well-connected nodes) to lessen the risk of a race attack, but the risk cannot be eliminated. Therefore, the cost/benefit of the risk needs to be considered when accepting payment on 0/unconfirmed when there is no recourse against the attacker. The research paper Two Bitcoins at the Price of One finds that the protocol allows a high degree of success by an attacker in performing race attacks. The method studied in the paper depends on access to the merchant's Bitcoin node, which is why, even prior to this paper, recommendations for merchants included disabling incoming connections and choosing specific outgoing connections. Another attack the trader or merchant is exposed to when accepting payment on 0/unconfirmed is the Finney attack.
The Finney attack is a fraudulent double-spend that requires the participation of a miner once a block has been mined. The risk of a Finney attack cannot be eliminated regardless of the precautions taken by the merchant, but some miner hash power is required and a specific sequence of events must occur. Just as with the race attack, a trader or merchant should consider the cost/benefit when accepting payment on just one confirmation when there is no recourse against the attacker. A Finney attack works as follows: suppose the attacker is generating blocks occasionally. In each block he generates, he includes a transfer from address A to address B, both of which he controls. To cheat you, when he generates a block, he doesn't broadcast it. Instead, he opens your store web page and makes a payment to your address C with his address A. You may wait a few seconds for double-spends, not hear anything, and then transfer the goods. He broadcasts his block now, and his transaction will take precedence over yours. The Vector76 attack, also referred to as a one-confirmation attack, is a combination of the race attack and the Finney attack, such that even a transaction with one confirmation can still be reversed. The same protective actions as for the race attack (no incoming connections, explicit outgoing connection to a well-connected node) significantly reduce the risk of this occurring. It is worth noting that a successful attack costs the attacker one block: they need to 'sacrifice' a block by not broadcasting it, and instead relay it only to the attacked node. See BitcoinTalk for a further example of an attack scenario. Alternative history attack: This attack has a chance to work even if the merchant waits for some confirmations, but it requires a relatively high hashrate and risks significant expense in wasted electricity for the attacking miner.
The attacker submits to the merchant/network a transaction which pays the merchant, while privately mining an alternative blockchain fork in which a fraudulent double-spending transaction is included instead. After waiting for n confirmations, the merchant sends the product. If the attacker happened to find more than n blocks at this point, he releases his fork and regains his coins; otherwise, he can try to continue extending his fork in the hope of being able to catch up with the network. If he never manages to do this, the attack fails: the attacker has wasted a significant amount of electricity and the payment to the merchant will go through. The probability of success is a function of the attacker's hashrate (as a proportion of the total network hashrate) and the number of confirmations the merchant waits for. An online calculator can be found here. For example, if the attacker controls 10% of the network hashrate but the merchant waits for 6 confirmations, the success probability is on the order of 0.1%. Because of the opportunity cost of this attack, it only makes game-theoretical sense if the bitcoin amount traded is comparable to the block reward (but note that an attacking miner can attempt a brute-force attack against several counterparties at once). The majority attack, also referred to as a 51% attack or >50% attack, occurs when the attacker controls more than half of the network hashrate; in that case, the previously mentioned alternative history attack has a probability of 100% to succeed. Since the attacker can generate blocks faster than the rest of the network, he can simply persevere with his private fork until it becomes longer than the branch built by the honest network, from whatever disadvantage he starts.
No number of confirmations can prevent this attack; however, waiting for confirmations does increase the aggregate resource cost of performing the attack, which could potentially make it unprofitable or delay it long enough for the circumstances to change or for slower-acting synchronization methods to kick in. Bitcoin's security model relies on no single coalition of miners controlling more than half the mining power. A miner with more than 50% hash power has an incentive to reduce their mining power and refrain from attacking, so that their mining equipment and bitcoin income retain their value. This is not financial advice or investment advice. Everything we cover here is my experience, opinions and what I would do. Please do your own research.
Alright, gonna try my hand at guide writing. As of yesterday, it came to my attention that a man named mfosse made a driver for Windows that allows you to connect Joycons to your computer and use them with FULL ANALOG SUPPORT. Considering that before this, the analog sticks were just mapped to d-pads, this is a pretty big breakthrough that went relatively unnoticed. So here we go: I made a video version of this guide! I'll reupload it with updates when they are needed. Not in it for the views. Check it out here: Warning: Spoilers inside! IMPORTANT NOTE: PAIRING JOYCONS VIA BLUETOOTH. Just to make sure nobody messes this up, I'm gonna jot down how to pair the Joycons over Bluetooth. Note that this is written for Windows 10, however it should work for Win8 as well. Windows 7 should be able to work with the Joycons, but I'm not sure if you need a different Bluetooth stack. To pair: On each Joycon, there is a small sync button, located directly beside the lights that blink. Press this button once quickly to turn the Joycon off, then press and hold it until the lights start flashing. Open the Windows 8/10 Bluetooth menu, and pair both "Joycon (R)" and "Joycon (L)"; order does not matter. If you pair them with a different computer, or they just don't seem to be working, try removing them from the Bluetooth menu, then pairing them again. That's it. The guide continues normally below: First you'll need the latest vJoy (note I said latest; I had 2.0.5 and that didn't work, so just update): Simply download and install that. Now find the configure tool in the start menu's vJoy folder and configure vJoy to look like the following pictures provided by the github. Here's some direct links. Warning: Spoilers inside!
Next you'll need the actual Joycon driver, made by mfosse. Download the repo as a zip, and extract the contents of \JoyCon-Driver-master\joycon-driver\build\Win32\Release\ to somewhere on your computer. The guide splits here slightly, depending on whether you want separate/single Joycons or both combined into one controller. If you want to use a single Joycon or want to use them separately, connect the Joycons via Bluetooth, run the exe, and uncheck combined mode. If you want combined, just connect both Joycons, run the Joycon-Driver, and leave the settings at the default. One more thing: if you want xInput, which will let you use the Joycons with any game that supports the 360 controller, it's quite simple to set up. Download xOutput and extract the contents of the zip to somewhere on your computer. Run the SCPdriver installer. Once that has finished, restart your computer and then simply run XOutput. Disable any controllers that aren't vJoy, and if there's more than one vJoy option, you want the topmost vJoy (this is assuming you are using combined Joycons; if separate, keep both vJoy entries enabled). Click the settings gear and set all the options to what I've found works for a combined Joycon setup, in the image below. I haven't actually found the binds you need for separate Joycons; they will be slightly different for the right and left ones. If you want to work them out yourself, you can use the monitor vJoy applet in the start menu's vJoy folder to show you what button does what. Troubleshooting: I'm going to collect solutions to problems here. This is what I've run into so far: It says it can't connect to vJoy or something similar. If you get this, make sure your vJoy is updated, and check that everything in your settings matches the pictures above. Otherwise, make sure nothing else is using the driver, as that will also cause issues.
It says it can't find VCRuntime140D OR shows error code 0xc000007b. This is very likely going to happen to you, as the only solution I'd found at first is a bit stupid: go download Visual Studio Community edition, because you need the debug C++ redistributables, which can only be downloaded through Visual Studio. If somebody finds a better solution to this, do let me know. As it turns out, you can just take vcruntime140.dll from C:\Windows\System32, copy it (not move!) into the joycon-driver folder, then rename the copy vcruntime140d.dll, and it should work. Edit: if after doing this the program crashes, try taking vcruntime140.dll from C:\Windows\SysWOW64 instead, put that copy into the joycon-driver folder, and rename it vcruntime140d.dll. It says it can't find Ucrtbased or something similar. This one seems to need the Visual Studio install; I'm not sure if there is a workaround. Y axis on sticks is reversed. This seems to be a bug in the code. It can be solved by following the XOutput part of the guide (it accounts for this), or if you really need dInput (what vJoy uses by default), you could use something like UCR. I clicked on the GitHub and the developer already wrote a guide in the readme. Why did you make another one? The guide in the GitHub isn't quite as simple and friendly to people who aren't power users. I aimed this guide more towards that crowd, and also added things like this troubleshooting section and the xOutput part of the guide. There's a memory leak! If you're getting a memory leak, so far the only cause I've found is the Toshiba Bluetooth stack; after I reverted back to MS, mine stopped, so I wouldn't recommend the Toshiba stack unless you have 64GB of RAM or something. There's lots of lag! That's usually caused by interference from your computer. One thing to try is a fork of the driver you can find in an issue thread on the GitHub; it apparently fixes the lag.
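The copy-and-rename workaround above is just a file copy under a new name. As a sketch (in Python purely for illustration, since the driver folder location varies per install; the function name and example paths are mine):

```python
import shutil
from pathlib import Path

def copy_debug_runtime(system_dir: str, driver_dir: str) -> Path:
    """Copy (not move!) vcruntime140.dll into the joycon-driver folder
    under the debug name vcruntime140d.dll that the build looks for."""
    src = Path(system_dir) / "vcruntime140.dll"
    dst = Path(driver_dir) / "vcruntime140d.dll"
    shutil.copy(src, dst)  # shutil.copy leaves the original in place
    return dst

# e.g. copy_debug_runtime(r"C:\Windows\System32", r"C:\path\to\joycon-driver")
# and fall back to r"C:\Windows\SysWOW64" as the source if the program crashes.
```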
If that doesn't work, try getting a USB male-to-female cord, and/or a new, larger Bluetooth dongle. The problem directly below is also common with this issue, but not always. EDIT: Also try this fork of the driver: https://github.com/HollyJean/JoyCon-Driver. I've heard it can help, but if the problem remains, the info from before the edit may be what you need. Only one Joycon seems to be working! First off, make sure you used custom.bat or combine.bat, or the Joycons will be in different vJoy devices. If they're both in vJoy device 1 but you can only see inputs from one of them, chances are you have a low-end Bluetooth dongle and it doesn't like having more than one device connected. The solution is the same as for the problem directly above: a male-to-female USB cord, and/or a new Bluetooth dongle. Windows wants a code to pair the Joycons! This can be fixed by pairing from the "Add a device" menu in Devices and Printers. If you don't know how to get to Devices and Printers, it's in the Control Panel; it should be easy to find once you open it.
All right, so we're in a pretty good spot here with our application. We've got authentication happening with Auth0, so we can log in, get a JSON Web Token for our user, and save their profile here in local storage. But the problem is that if we refresh, our state goes away. You'll see here that the profile and the logout button disappear, because the authenticated Boolean sitting on the auth service gets flipped back to its default, which is false. Now, ideally, in an AngularJS app there wouldn't be a whole lot of page refreshing happening, but it might be necessary from time to time. And if the user goes away from the app, if they close their tab and then come back to it, even if they still have a valid JWT in local storage, they're going to be prompted to log in once again. Obviously, this isn't a very good user experience, so why don't we take some time to fix that up? What we can do is put in some code that watches for any changes to the location we're at in our application. So right now we're at the home route, for example, and we can also go to the profile route. It's going to watch for any changes to that location, and when the location changes, the app will check whether or not the user still has a valid JWT in local storage. And if they do, we'll just make sure that the Angular app knows that they are authenticated. So let's head over here to our app.js file. And to make this happen, we'll need to tap into the run block for our application. So we've got our config block happening here, which sets up our auth provider and all of our routing. And here in the run block, we can define some logic that we want to have happen after the application is running. So run takes a function, and it's here that we can inject any dependencies that we'll need to make this block of code work. So we're going to need root scope.
That is going to be the spot at which we look for changes to our app's location. Then we'll also want our auth service, and we'll need store. Then we'll need JWT helper, a service coming from angular-jwt that gives us some tools for inspecting JSON Web Tokens. And finally, let's get location, which will allow us to navigate to a different spot in our application. So then within the body here, what we'll do is watch for changes to the location. And the way that we can do that is with root scope's $on method. So we'll do $rootScope.$on, and that's going to accept the name of an event to watch for. We want to watch for location change start. This $locationChangeStart event gets fired any time that we move to a new spot within our application, any time the routing changes. And this event is also going to fire any time the page gets refreshed, so we can use it to check for the user's authentication state. So then the second parameter is our callback. And we'll define some logic within this block here that says: let's look for a token. So we'll do var token equals store get ID token. And then we'll say, if there is a token, if that was successful in grabbing a token, then we want to check whether or not it is expired. So we'll do another if block here and say: if that token is not expired, and we can check that with JWT helper's isTokenExpired, passing in our token. If that's the case, and also if the user is not authenticated, which we can check with our auth service's isAuthenticated, what we want to do is authenticate the user. So for that we use our auth service and call authenticate. We can pass in our profile, so we do store get profile, and then the second argument is going to be the token, so we can just pass in our token like that. So there's a fair bit going on here; let's step through it one line at a time.
So we're looking for a token called ID token in our local storage. And if that is present, we want to check whether or not it is expired. We can use JWT helper's isTokenExpired method, passing in our token. If the JWT is not expired, and if the user is not authenticated, then we want to go ahead and authenticate the user. The reasoning is that if a token isn't expired, the user is effectively authenticated as far as the application is concerned. So that's all going to take place if there's a token, but if there's not, let's define some other behavior. The other behavior is simply that we want to redirect the user to the home route so that they can log in again. So for that, we'll do location and go to the path of home. Okay, so why don't we save that? And let's go over and see if this is going to work. The first thing I'll do is just delete our items here from local storage, and we'll go through the step of logging in again. Let's go back to the home route here and do login, and then we'll log in with our user. So we've got our items in local storage. Now, let's go to the profile and try refreshing. And when we refresh, you now see that we've still got our profile and our logout button up here. So our state is being preserved. We've gone through the process of checking for our user's JWT in local storage, and since it's there and it's valid, we just use it to authenticate the user on the front end again. Notice here that we're not making any kind of request to Auth0 to check for a new JSON Web Token or anything; we're just dealing with the front end. And if we check our get message buttons here, a regular public message still works, and a secret message still works too. But now, here's a scenario that we should think about: what happens if we're here in our application and it's working just fine, and then all of a sudden our token expires? Well, we aren't redirected or anything.
Rather, we're still here at our profile area. What happens if we go to, say, get a secret message? Well, that request will go through, and it will, of course, attach this JSON Web Token from local storage. But on the backend it's going to be rejected because it's expired, and we'll get some kind of 400-series error back. Now, what's the best way to handle this on the Angular side? We could show a message to the user saying that their token is expired, but that's not really all that intuitive, I don't think, for most users. I think the better option is to redirect the user to wherever they can log in again, which in our application is going to be the home route. So we want to redirect the user to the home route and have them log in again from there. And we can do that by wiring up an HTTP interceptor. We'll do that in the next lecture.
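Under the hood, a check like jwtHelper's isTokenExpired just decodes the JWT's payload (no signature verification needed on the client) and compares its exp claim to the clock. A minimal, framework-free sketch of that idea, shown here in Python purely for illustration (the tutorial's actual code is AngularJS, and angular-jwt does this for you):

```python
import base64
import json
import time

def is_token_expired(token, now=None):
    """Decode a JWT's payload segment (no signature check) and compare
    its 'exp' claim (a Unix timestamp) to the current time."""
    payload_b64 = token.split(".")[1]            # header.payload.signature
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    now = time.time() if now is None else now
    return payload.get("exp", 0) <= now
```

This is exactly the check the run block performs on every $locationChangeStart: token present and not expired means re-authenticate on the front end, otherwise redirect to the login route.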
– Tutors are responsible for writing C++ programming solutions in an easy-to-understand way that is reliable. In situations where code has to be compilable by either standard-conforming or K&R C-based compilers, the __STDC__ macro can be used to split the code into Standard and K&R sections, to prevent a K&R C-based compiler from seeing features available only in Standard C. The course contents are mainly video lectures. I'd encourage complete newcomers to follow the lectures in strict chronological order: please start from the very first video and go on to the next one only when you are finished with the previous. – The program must accept command-line parameters that allow flipping the image horizontally and vertically. A variety of tools have been developed to help C programmers find and fix statements with undefined behavior or possibly erroneous expressions, with greater rigor than that provided by the compiler. The tool lint was the first of these, leading to many others. Disclaimer: the reference papers provided by MyAssignmentHelp.com serve as model papers for students and are not to be submitted as is. These papers are intended to be used for research and reference purposes only. Create class and sequence diagrams for the initial code. Execute the refactorings below. Produce class and sequence diagrams for the refactored code. Make observations about the differences, including any improvements, between the structure before and after. Correct documentation: once our online C++ programming assignment specialists have completed the coding part of your C++ programming assignment, they will work out the documentation part, describing the use of classes and methods for better understanding of the assignment work.
Writing programs this way is a natural approach, because the computer itself usually executes the program in a top-to-bottom sequential fashion. This one-dimensional format is fine for simple programs, but conditional branching and function calls may create complex behaviors that are not easily observed in a linear fashion. Flowcharts are one way to describe software in a two-dimensional format, specifically providing convenient mechanisms to visualize conditional branching and function calls. Flowcharts are very useful in the initial design stage of a software system for defining complex algorithms. In addition, flowcharts can be used in the final documentation stage of a project, once the system is operational, in order to assist in its use or modification. Example 5.1: Using a flowchart, describe the control algorithm that a toaster could use to cook toast. There will be a start button the user pushes to activate the machine. There is another input that measures toast temperature. There is a seemingly unlimited number of tasks one can perform on a computer, and the key to developing great products is to select the right ones. Just like hiking through the woods, we need to create guidelines (like maps and trails) to keep us from getting lost. One of the fundamentals when developing software, regardless of whether it is a microcontroller with 1000 lines of assembly code or a large computer system with billions of lines of code, is to maintain a consistent structure. One such framework is called structured programming. C is a structured language, meaning we begin with a small number of simple templates, as shown in Figure 5. 24x7 online writer access: the writers in our team are available online 24x7 for student support.
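The toaster of Example 5.1 can also be sketched as straight-line structured code rather than a flowchart. Here is a toy Python version; the callback names and the temperature threshold are my own illustrative choices, not part of the original exercise:

```python
def toaster_controller(start_pressed, read_temperature, set_heater,
                       done_threshold=150):
    """Toy control loop for Example 5.1: wait for the start button,
    heat until the toast temperature sensor reaches the target, then stop."""
    events = []
    if not start_pressed():
        return events            # idle until the user pushes start
    set_heater(True)
    events.append("heating")
    while read_temperature() < done_threshold:
        pass                     # keep heating; real firmware would poll/sleep
    set_heater(False)
    events.append("done")
    return events
```

The flowchart's branch ("start pressed?") and loop ("temperature reached?") map one-for-one onto the `if` and `while` templates that structured programming starts from.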
You can reach us anytime and get expert online guidance. Our writers will offer you the best solutions and will never say no to your work. – The application must store an 8-bit uncompressed black-and-white version. It must not overwrite the original image. The C standard did not attempt to correct many of these blemishes, because of the impact such changes would have on already existing software. Character set
VisualizationDataset Class in main_dataset.py Make sure you have read the FAQ before posting. Thanks! Hey Dian, I successfully collected the dataphase1 data by installing torch 1.4.0, and I am confused by these functions in main_dataset.py. 1. MainDataset Class full_path = '/home/shy/Desktop/WorldOnRails/DataAndModel/PhaseData_12/' config_path = '/home/shy/Desktop/WorldOnRails/config.yaml' dataset = MainDataset(full_path, config_path) wide_rgb, wide_sem, narr_rgb, lbls, locs, rots, spds, cmd = dataset[30] Then I can load these data, and I see it returns the right values. So do I need to delete the small dataset myself? /home/shy/Desktop/WorldOnRails/DataAndModel/PhaseData_12/udmprkyurt is too small. consider deleting it. /home/shy/Desktop/WorldOnRails/DataAndModel/PhaseData_12/: 17768 frames (x3) Besides, the collected dataset includes wide_rgb, wide_sem and narr_rgb, but there is no narr_sem here, and I found narr_sem in the LabeledMainDataset class. So is narr_sem necessary for training? 2. VisualizationDataset Class and LabeledMainDataset I failed to run these classes, which return a bytes-like object is required, not 'NoneType', and checking the classes in this repository, it seems they are not used. I guess these classes are used to generate the rgb sem images.
Vdataset = VisualizationDataset(full_path, config_path)
idx = 30
lmdb_txn = Vdataset.txn_map[idx]
index = Vdataset.idx_map[idx]
locs = Vdataset.__class__.access('loc', lmdb_txn, index, 6, dtype=np.float32)
rots = Vdataset.__class__.access('rot', lmdb_txn, index, 5, dtype=np.float32)
spds = Vdataset.__class__.access('spd', lmdb_txn, index, 5, dtype=np.float32).flatten()
lbls = Vdataset.__class__.access('lbl', lmdb_txn, index+1, 5, dtype=np.uint8).reshape(-1,96,96,12)
#maps = Vdataset.__class__.access('map', lmdb_txn, index+1, 5, dtype=np.uint8).reshape(-1,1536,1536,12)
#rgb = Vdataset.__class__.access('rgb', lmdb_txn, index, 1, dtype=np.uint8).reshape(720,1280,3)
cmd = Vdataset.__class__.access('cmd', lmdb_txn, index, 1, dtype=np.float32).flatten()
#act = Vdataset.__class__.access('act', lmdb_txn, index, 1, dtype=np.float32).flatten()
After testing, I found maps, rgb and act cannot be loaded successfully. (Maybe the class is not needed.) Thank you again!
I am a bit lost... Can you tell me which step (from the readme) you are trying to do here?
Thanks for your reply. Now I am running python -m rails.data_phase1 --scenario=nocrash_train_scenario --num-runners=4 --port=2000, and I see some data saved, so I tried to inspect the data in this data.dbm. In this py file, I can see how your code loads the data. Other classes cannot be used for some reason (maybe they are not used for the following training); for example, this seg image is gray (the shape of wide_sem is (192, 480)), however in your released data the shape is 3 channels.
print("locs=", locs)
print("rots=", rots)
print("spds=", spds)
print("cmd=", cmd)
locs= [[129.24919 86.07171 8.401869]
 [129.30621 87.032166 8.338286]
 [129.36586 88.04143 8.265622]
 [129.4187 89.02836 8.196826]
 [129.34424 90.01089 8.129512]
 [129.32495 91.06913 8.054202]]
rots= [[86.60031 ] [86.605354] [86.61064 ] [86.83437 ] [90.87115 ]]
spds= [3.7831488 3.9974685 4.019558 3.912955 4.169325 ]
cmd= 3
for example, this seg image is gray (the shape of wide_sem is (192, 480)), however in your released data the shape is 3 channels.
They are not 3-channel files, they are stored as palette PNGs, as described in the README.
Thank you very much for your patient reply. I understand the data now after reading the README again. And may I ask you a simple question? After reading your code in rails.py:
def train_ego(self, locs, rots, spds, acts):
    locs = locs[...,:2].to(self.device)
    yaws = rots[...,2:].to(self.device) * math.pi / 180.
    spds = spds.to(self.device)
    acts = acts.to(self.device)
    pred_locs = []
    pred_yaws = []
    pred_loc = locs[:,0]
    pred_yaw = yaws[:,0]
    pred_spd = spds[:,0]
    for t in range(self.ego_traj_len-1):
        act = acts[:,t]
        pred_loc, pred_yaw, pred_spd = self.ego_model(pred_loc, pred_yaw, pred_spd, act)
I tried to load the data and found that acts is as follows:
array([[ 0.4324325 , 0.5298194 , 0. ],
 [ 0.87813395, 0.41267335, 0. ],
 [-0.251706 , 0.855824 , 0. ],
 [-0.26127693, 0.29226604, 0. ],
 [ 0.64959806, 0.8846799 , 0. ],
 [ 0. , 0. , 1. ],
 [ 0.21413343, 0.8341027 , 0. ],
 [-0.59543383, 0.08719549, 0. ],
 [-0.41294557, 0.1233392 , 0. ],
 [ 0.8560932 , 0.05150029, 0. ]], dtype=float32)
And this ego_model means: the input is 10 history locs, yaws, speeds and actions, and the output is locs, yaws and speeds.
In this loop, each action is a list of [throttle, steer, brake], and on the next iteration the input advances to the next such list via act = acts[:,t], so the rollout consumes the sequence of [throttle, steer, brake] actions one step at a time. However, I am not sure whether act = acts[:,t] needs to change to act = acts[:t]. For example, there is no dim for acts[:,4] because it only has 3 dims here. And then steer = acts[...,0:1], throt = acts[...,1:2], brake = acts[...,2:3] can return the right values. Thank you again!
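The indexing question above comes down to which axis t selects. If acts is batched as (batch, time, 3), then acts[:, t] picks the t-th action of every trajectory, while acts[:t] slices away whole batch rows instead of a timestep. A quick numpy sketch (the batch size here is my own illustrative choice, not taken from the repo):

```python
import numpy as np

# Hypothetical batch: 8 trajectories x 10 timesteps x 3 action components
# ([steer, throttle, brake] per step, matching the array printed above).
acts = np.zeros((8, 10, 3), dtype=np.float32)

t = 4
step_t = acts[:, t]      # shape (8, 3): the t-th action of EVERY trajectory
prefix = acts[:t]        # shape (4, 10, 3): the first t trajectories -- wrong axis
steer = acts[..., 0:1]   # shape (8, 10, 1): one component across all steps
```

So acts[:, t] is the form that feeds one action per batch element into the ego model at each rollout step, and the component slices acts[..., 0:1] etc. work the same way in either case.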
Technology is a gift, and computers and computer networks are among its most useful products in every field. But as everything has its good side, it also has a dark side we should guard against: some people use their skills for cyber crime, such as hacking and data breaches. It is a crime that is spreading rapidly. But just as every lock has its key and every problem has a solution, researchers have built a chip called Morpheus that can help with this problem. At the University of Michigan (U-M), a new computer processor architecture has been developed that could lead to a future where computers proactively defend against cyber threats, rendering the current electronic security model of bugs and patches obsolete. The chip, called Morpheus, blocks potential attacks by encrypting and randomly shuffling key bits of its own code and data 20 times per second. That is faster than a human hacker can work, and thousands of times faster than even the fastest electronic hacking techniques, according to the team at U-M. "Today's approach of eliminating security bugs one by one is a losing game," says Todd Austin. "People are constantly writing code, and as long as there is new code, there will be new bugs and security vulnerabilities." The developers of the system say that if a hacker finds a bug with Morpheus, the information needed to exploit it vanishes 50 milliseconds later. It is perhaps the closest thing to a future-proof secure system. Austin and his colleagues have demonstrated a DARPA-funded prototype processor that successfully defended against every known variant of the control-flow attack, one of the most dangerous and widely used techniques. The researchers say the technology could be used in a variety of applications.
Simple and reliable security will be increasingly critical everywhere, from laptops and PCs to internet-of-things devices. How do such attacks affect us? We all know how damaging an attack can be when it hits the computer sitting on your desk, but attacks on the computer in your car, in your smart lock, or even in your body could put users at far greater risk. Instead of using software to patch known code vulnerabilities, the system embeds security into its hardware. It makes vulnerabilities virtually impossible to pin down and exploit by constantly randomizing critical program assets in a process known as "churn". The technology focuses on randomizing bits of data known as "undefined semantics", yet the chip remains transparent to software developers and end users. Undefined semantics refers to the nooks and crannies of the computer architecture: for example, the location, format, and content of program code are undefined semantics. They are part of a processor's most basic machinery, and legitimate programmers don't generally interact with them, but hackers can reverse-engineer them to uncover vulnerabilities in a system and launch an attack. Austin and his colleagues presented the chip and research paper in April 2019 at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems.
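As a toy illustration of the churn idea (this is emphatically not the real Morpheus hardware, which works at the microarchitecture level), imagine pointer encodings XORed with a key that is periodically re-rolled. An attacker who exfiltrates an encoded pointer finds it stale after the next churn cycle:

```python
import random

class ChurningPointerTable:
    """Toy model of Morpheus-style churn: pointer encodings are XORed with
    a key that is re-rolled periodically, so a leaked encoding goes stale."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.key = self.rng.getrandbits(64)

    def encode(self, addr):
        return addr ^ self.key

    def decode(self, enc):
        return enc ^ self.key

    def churn(self):
        # The real chip re-randomizes every 50 ms (20 times per second);
        # here churn is invoked manually for illustration.
        self.key = self.rng.getrandbits(64)
```

The point of the model: a leaked encoding decodes correctly only under the key that produced it, so any exploit that depends on a harvested address must complete within one churn interval.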
BREAKING! COVID-19 Warning: Study Shows Spike Mutations Of SARS-CoV-2 Making It More Transmissible And Dangerous. The Reality Is That There Is Unlikely To Be A Vaccine Source: COVID-19 Warning COVID-19 Warning: Disturbing news is emerging as research findings from a collaborative study involving medical, genomic and virology researchers from Los Alamos National Laboratory in New Mexico, US, the University of Sheffield, UK, Duke University in North Carolina, US, Sheffield Teaching Hospital, UK, and the NHS Foundation, UK, show that the Spike element of the SARS-CoV-2 coronavirus is mutating in a manner that indicates it is evolving to become stronger and more easily transmissible to humans, and at the same time more clinically harmful to humans. Phylogenetic trees based on 4,535 trimmed full-genome SARS-CoV-2 alignments from GISAID. A) A basic neighbor-joining tree, centered on the Wuhan reference strain, with the GISAID G clade (named for the D614G mutation, though a total of 3 base changes define the clade) highlighted in yellow. The regions of the world where sequences were sampled are indicated by colors. By early April, G614 was more common than the original D614 form isolated from Wuhan, and rather than being restricted to Europe (red) it had begun to spread globally. B) The same tree expanded to show interesting patterns of Spike mutations that we are tracking against the backdrop of the phylogenetic tree based on the full genome.
Note three distinct patterns: mutations that predominantly appear to be part of a single lineage (P1263L, orange in the UK and Australia, and also A831V, red, in Iceland); a mutation that is found in very different regions both geographically and in the phylogeny, indicating the same mutation seems to be independently arising and sampled (L5F green, rare but found in scattered locations worldwide); and a mutation in sequences from the same geographic location, but arising in very distinct lineages in the phylogeny (S943P), blue, found only in Belgium. A chart showing how GISAID sequence submissions increase daily is provided in Fig. S1. The tree shown here can be recreated with contemporary data downloaded from GISAID at www.cov.lanl.gov. The tree shown here was created using PAUP (Swofford, 2003); the trees generated for the website pipeline updates are based on parsimony (Goloboff, 2014). The proportion of sequences carrying the D614G mutation is increasing in every region that was well sampled in the GISAID database through the month of March. - A) A table showing the tallies of each form, D614 and G614 in different countries and regions, starting with samples collected prior to March 1, then following in 10 day intervals. B) Bar charts illustrating the relative frequencies of the original Wuhan form (D614, orange), and the form that first emerged in Europe (G614, blue) based on the numbers in part (A). A variation of this figure showing actual tallies rather than frequencies, so the height of the bars represents the sample size, is provided as Fig. S2. C) A global mapping of the two forms illustrated by pie charts over the same periods. The size of the circle represents sampling. 
An interactive version of this map of the April 13th data, allowing one to change scale and drill down into specific regions of the world, is available at https://cov.lanl.gov/apps/covid-19/map, and daily updates of this map based on contemporary data from GISAID are provided at cov.lanl.gov. Running weekly average counts show the relative amounts of D614 (orange) and G614 (blue) in different regions of the world. In almost every case, soon after G614 enters a region, it begins to dominate the sample. Fig. S3 shows the same data illustrated as a daily cumulative plot. Plots were generated with Python Matplotlib (Hunter, 2007). The plots shown here and in Fig. S3 can be recreated with contemporary data from GISAID at www.cov.lanl.gov.
Warrior Dads Podcast Episode #25 - Determination & Never Compromise with Anthony Johnson | 21 Replay Cross-published with permission from the Warrior Dads YouTube channel, original watch link: https://www.youtube.com/watch?v=l_QNt... In today's episode I'm talking with Anthony Dream Johnson. Today Anthony and I get into how and why he started 21 Studios and The 21 Convention, along with his determination to push through the many challenges he faced, including a crazy marriage story he shares with me that sounds like something from the movie The Hangover. Anthony is the CEO, founder, and architect of The 21 Convention and 21 Studios, as well as the Co-Founder of the Red Man Group. Anthony Dream Johnson is the leading force behind the world's first and only "panorama event for life on earth as a man". He has been featured on WGN Chicago, and in the New York Times #1 best seller The Four Hour Work Week. His stated purpose for the work he does is "the actualization of the ideal man", a purpose that has led him to found and host The 21 Convention across 4 countries over a 12-year period. The mission of 21 Studios is to create positive media for men and destroy the feminist establishment.
You can find out more about Anthony and events at http://www.the21convention.org and through social media… Twitter @beachmuscles Instagram @beachmuscles65 Like 21 Studios on Facebook: https://fb.com/the21convention Follow 21 Studios on Twitter: https://twitter.com/21Convention Follow 21C on Instagram: https://instagram.com/21convention Follow 21 Studios on Minds: https://www.minds.com/21studios Support 21 Studios on Patreon: https://patreon.com/21s Follow 21 Studios on Gab: https://gab.ai/21Studios Follow 21 Studios on G+: https://plus.google.com/+21s/ Follow our CEO on Twitter: https://twitter.com/beachmuscles Subscribe to our CEO on Minds: https://minds.com/beachmuscles Follow our CEO on Instagram: https://instagram.com/beachmuscles65 Like our CEO on Facebook: https://fb.com/beachmuscles65 Follow our CEO on Gab: https://gab.ai/dream Join 21 University: https://t21c.com/social21uv1 Subscribe to on YouTube: https://t21c.com/12YTr3X Donate to 21 Studios: https://www.paypal.me/21c Become a channel member: https://t21c.com/21sytm It is Free
import json
from hashlib import sha256

from web3 import Web3

address = ""
contract_abi = ""
infura = "https://mainnet.infura.io/v3/7fc9b313b47d488c97c52c3221344c04"


class OnChain:
    def __init__(self, bin, abi=contract_abi, provider=infura):
        self.web3 = Web3(Web3.HTTPProvider(provider))
        self.abi = abi
        self.bin = bin

    @staticmethod
    def sha256_hash(event_id, location, time, violations):
        """
        Performs a SHA256 hash to generate a string output.
        :return: hash of inputs
        """
        event_str = event_id + location + time + violations
        return sha256(event_str.encode()).hexdigest()

    def store_hash(self, event_id, location, time, violations):
        """
        Stores an event-generated hash on-chain.
        :param event_id:
        :param location:
        :param time:
        :param violations:
        :return:
        """
        event_hash = self.sha256_hash(event_id, location, time, violations)
        ...
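The hashing scheme above concatenates the four string fields and takes the hex SHA-256 digest. A quick standalone check of that behavior (field values here are illustrative, and event_hash is my own helper mirroring OnChain.sha256_hash):

```python
from hashlib import sha256

def event_hash(event_id, location, time, violations):
    # Same scheme as OnChain.sha256_hash: concatenate the four
    # string fields and take the hex SHA-256 digest.
    return sha256((event_id + location + time + violations).encode()).hexdigest()

# Illustrative values only; the digest is deterministic for identical inputs.
digest = event_hash("evt-1", "lot-A", "2021-06-01T12:00", "none")
```

One design caveat worth noting: plain concatenation means ("ab", "c") and ("a", "bc") hash identically, so adding a delimiter or length prefix between fields before hashing would avoid that ambiguity when anchoring the hash on-chain.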
Originally Posted by charon: Guess the lady in question never replied YES to the EULA, or MS would have had her over a barrel. Or did it eat her computer before the EULA came up? If she replied YES to the EULA and then it ate her computer, then the court just overruled the protection that MS expected the EULA would have afforded them. Now that would be seismic. Expect a EULA change if the latter is the case. I have read that the latest EULA is what is in effect, even if you said 'YES, I agree' to a previous one when you accepted the W10 upgrade. There was a period of time when the Win 10 upgrade process would just start all on its own. I've seen it happen on a Win 7 box sitting about 3 feet from me. My GF for some reason has a hard time dealing with "new" things... I don't push her to upgrade the machine because there is an entirely new machine waiting for her if she wants to use 10. I stopped the upgrade 3 times when it started happening for no reason... then one day when we were out of the room we came back to see the Win 10 upgrade in progress and I had to stop it. This was around when people started saying the X had been removed from the upgrade window... the X wasn't ever an issue here... it was the fact that the upgrade would just suddenly start. So at least in this case there was no accepting anything... EULA or otherwise... the machine would just suddenly get slow and start doing the Win 10 upgrade or, as noted, you would walk out of the room and come back in to find it in progress. This has stopped happening for us, at least. Personally I had another full-price copy... that MS disabled after I used the key too many times changing out hardware. For Z170 I had 12 motherboards... and 6 CPUs. I only ever have one active at a time... but I change hardware a lot and do clean installs. One day it was locked and their support told me I had used the key too many times. I explained that I have a full copy...
not OEM and I'm supposed to be able to do what I am doing as long as I only have it active on the one computer (in other words I'm not activating this on 12 different motherboards at the same time... I am just changing out what is in the case and reinstalling). The outsourced support people told me that even with a retail copy I can't change hardware.... and there wasn't really any recourse beyond that.. so I lost a $200 full retail key I paid for... not every copy of Windows 10 was free... I doubt I'm the only person they have pulled this on.. so I would expect more lawsuits of various kinds to follow in regards to Win 10. Of course now once your hardware is saved on their servers... you don't have to enter the key again to do a clean install... but this wasn't always the case and that's when I had the issue.. because I had to use the key to do a clean install.
Discrepancy between reported series and series present in the block

What did you do?

We had an OOM kill and to my surprise we basically doubled the amount of series. See screenshot: I was investigating it, and obviously when turning over some stones I found some things we could improve on. However, when I dug further, checking the blocks themselves, I noticed a different number in the amount of series:

promtool tsdb list .

BLOCK ULID MIN TIME MAX TIME DURATION NUM SAMPLES NUM CHUNKS NUM SERIES SIZE
01HCT4FHKH8CFEDS0VDRJCYPJY<PHONE_NUMBER>000<PHONE_NUMBER>000 2h0m0s<PHONE_NUMBER> 37409715 10168069<PHONE_NUMBER>
01HCTBB8VPR9CNZPGXVXPG2956<PHONE_NUMBER>000<PHONE_NUMBER>000 2h0m0s<PHONE_NUMBER> 37641991 10216217<PHONE_NUMBER>
01HCTJ703HY5QYW9SM1C2WM456<PHONE_NUMBER>000<PHONE_NUMBER>000 2h0m0s<PHONE_NUMBER> 37603353 10122896<PHONE_NUMBER>
01HCTS2QBN3ZVJDSYDBD687TZR<PHONE_NUMBER>000<PHONE_NUMBER>000 2h0m0s<PHONE_NUMBER> 37737103 10214242<PHONE_NUMBER>

So, I'm seeing:
- UI /tsdb-status reports: Number of Series = 18922003
- The prometheus_tsdb_head_series metric reports roughly the same (^ 18m)
- count({__name__=~".+"}) = 8904164 (almost half)
- tsdb list, see above (they are somewhat older blocks, but are roughly equal to the reported count query)

What did you expect to see?

That the metric & the API would report roughly the same amount of series as what the blocks report.

What did you see instead? Under which circumstances? ...
System information: Docker image
Prometheus version: v2.44.0
Prometheus configuration file args:
--web.console.templates=/etc/prometheus/consoles
--web.console.libraries=/etc/prometheus/console_libraries
--config.file=/etc/prometheus/config_out/prometheus.env.yaml
--web.enable-lifecycle
--web.enable-remote-write-receiver
--enable-feature=memory-snapshot-on-shutdown
--web.external-url=https://redacted/prometheus
--web.route-prefix=/prometheus
--storage.tsdb.wal-compression
--storage.tsdb.retention.time=30d
--storage.tsdb.retention.size=224000MB
--storage.tsdb.path=/prometheus
--web.enable-admin-api
--web.config.file=/etc/prometheus/web_config/web-config.yaml
--storage.tsdb.max-block-duration=2h
--storage.tsdb.min-block-duration=2h

### Alertmanager version

_No response_

### Alertmanager configuration file

_No response_

### Logs

_No response_

count({__name__=~".+"}) will only count time series with samples from the past 5 minutes. Seems like you have a 50% churn rate; check scrape_series_added to see if you keep adding time series all the time. If this is a k8s environment then that's not unusual - k8s exports a lot of ephemeral label values like pod IDs etc - these keep changing as pods are restarted, creating new time series.

I would agree with you, but please look at the screenshot after the restart. That does not make sense, right? How can I out of the blue double the amount of series? Furthermore, IF that were the case, the amount of series in the block itself should be higher than what the count would provide, which is not the case either. Anyhow, the rate/s on scrape_series_added is about ~50-150, and over time, calculated with an increase[2h], about ~800k.

I thought about this for a bit, but couldn't figure out a way for it to go wrong.

> UI /tsdb-status reports: Number of Series = 18922003

This page has a number of break-downs; see if you can spot something that adds to the picture of where it went wrong. Did it persist over a subsequent restart?
> Did it persist over a subsequent restart?

Yes, it did persist. See screenshot:

> This page has a number of break-downs; see if you can spot something that adds to the picture of where it went wrong.

Do you mean the break-downs like "Top 10 series count by metric names"? If so, I actually analysed a few blocks for something else, which was the reason I 'found' this behaviour, because the blocks themselves in question report a different amount of time series. I don't see anything that can explain this behaviour and I'm also still a bit stumped on why this is happening. I did a promtool tsdb list . and that reports:

BLOCK ULID MIN TIME MAX TIME DURATION NUM SAMPLES NUM CHUNKS NUM SERIES SIZE
01HD9SQJVMP9R8Y1G9HA65P7HY<PHONE_NUMBER>000<PHONE_NUMBER>000 2h0m0s<PHONE_NUMBER> 36154771 9825565<PHONE_NUMBER>
01HDA0KA3MX4DGY4QT3JXWVABY<PHONE_NUMBER>000<PHONE_NUMBER>000 2h0m0s<PHONE_NUMBER> 36203051 9844633<PHONE_NUMBER>
01HDA7F1BPBP6QE0TS688R384P<PHONE_NUMBER>000<PHONE_NUMBER>000 2h0m0s<PHONE_NUMBER> 36242132 9827859<PHONE_NUMBER>
01HDAEARKMN476X64JGSN58RP6<PHONE_NUMBER>000<PHONE_NUMBER>000 2h0m0s<PHONE_NUMBER> 36170351 9728502<PHONE_NUMBER>
01HDAN6FVMHM788XMGX1RGKNFW<PHONE_NUMBER>000<PHONE_NUMBER>000 2h0m0s<PHONE_NUMBER> 36540518 9946857<PHONE_NUMBER>
01HDAW273MRF7MBT98VNCZWEKB<PHONE_NUMBER>000<PHONE_NUMBER>000 2h0m0s<PHONE_NUMBER> 36454143 9793077<PHONE_NUMBER>

> Do you mean the break-downs like "Top 10 series count by metric names"?

"Top 10 series count by metric names", "Top 10 series count by label value pairs". My suggestion was for you to think about the numbers and see if any of them seem out of line. Note you can get more than 10 via the API, e.g. http://my-prometheus:9090/api/v1/status/tsdb?limit=20 (there is a bug that it caches the data for 30 seconds even when you change the limit). What does the Prometheus build info tell? Was it the same version before?

> What does the Prometheus build info tell? Was it the same version before?

Yes, it was not an upgrade. We "merely" OOM'd, which caused the restart. Also, I did not really find/see an indicator in the tsdb API, if I'm honest.

Not forgotten, just haven't got the time yet to dig into this further.
Encountering unexpected error messages during a project is common among developers. One of these errors is the 'Error in Oldclass(stats) <- cl: Adding Class Factor' error. In this comprehensive guide, we will provide a step-by-step solution to fix this issue and answer frequently asked questions about it.

Table of Contents

Understanding the Error

Before diving into the solution, it is essential to understand the error itself. The 'Error in Oldclass(stats) <- cl: Adding Class Factor' error typically occurs when you are using the anova() function in R. This error arises when the 'stats' variable has a different class than the 'cl' variable.

Here are the steps to resolve the 'Error in Oldclass(stats) <- cl: Adding Class Factor' issue:

1. Identify the variables causing the issue: First, look at the code and identify the variables that are causing the problem. This error typically occurs when there is a mismatch in the class of the 'stats' and 'cl' variables.

2. Check the class of the variables: Use the class() function in R to determine the classes of the 'stats' and 'cl' variables. For example: class(stats) and class(cl).

3. Convert the variable class: If the classes of the 'stats' and 'cl' variables are different, convert one variable's class to match the other. Use the as.factor() function in R to convert a variable to a factor class. For example, if the 'stats' variable is of class 'numeric' and the 'cl' variable is of class 'factor', convert the 'stats' variable to a factor: stats <- as.factor(stats)

4. Run the anova() function again: After converting the variable class, run the anova() function again to see if the error has been resolved.

5. Check for other issues: If the error persists, check for other issues in the code that may be causing the error, such as missing data or incorrect variable names.

What is the anova() function in R?

The anova() function is used in R to perform an analysis of variance, which is a statistical method for comparing the means of multiple groups.
The function takes multiple models as input and returns an ANOVA table, which displays the results of the analysis. Learn more about the anova() function here. Why does the error occur? The error occurs when there is a mismatch in the class of the 'stats' and 'cl' variables used in the anova() function. This can happen if the variables have not been properly formatted or converted before running the analysis. Can I convert the 'cl' variable instead of the 'stats' variable? Yes, you can convert the 'cl' variable instead of the 'stats' variable if needed. Use the as.factor() function to convert the 'cl' variable to the same class as the 'stats' variable. For example: cl <- as.factor(cl) What if the error persists after converting the variable class? If the error persists after converting the variable class, check for other issues in the code that may be causing the error, such as missing data or incorrect variable names. You might also consider seeking help from online forums like Stack Overflow or the R mailing list. What other R functions can help me diagnose and fix the error? Some useful R functions for diagnosing and fixing errors include str(), which displays the structure of an object, and summary(), which provides a summary of an object's contents. Additionally, the debug() function can help you trace the execution of a function in R to pinpoint the source of the error.
/**
 * Test that page templates have certain exports removed while other files are left alone.
 *
 * Page templates support only a default exported React component and named exports of
 * `config` and `getServerData`, so it's not necessary (or possible) to test other exports
 * in page templates.
 */

const config = `config exported from a non-page template module`
const getServerData = `getServerData exported from a non-page template module`
const helloWorld = `hello world`

describe(`modified exports`, () => {
  beforeEach(() => {
    cy.visit(`/modified-exports`).waitForRouteChange()
  })

  describe(`page templates`, () => {
    it(`should have exports named config removed`, () => {
      cy.getTestElement(`modified-exports-page-template-config`)
        .invoke(`text`)
        .should(`contain`, `undefined`)
    })

    it(`should have exports named getServerData removed`, () => {
      cy.getTestElement(`modified-exports-page-template-get-server-data`)
        .invoke(`text`)
        .should(`contain`, `undefined`)
    })

    it(`should have imported exports named config left alone`, () => {
      cy.getTestElement(`unmodified-exports-page-template-config`)
        .invoke(`text`)
        .should(`contain`, config)
    })

    it(`should have imported exports named getServerData left alone`, () => {
      cy.getTestElement(`unmodified-exports-page-template-get-server-data`)
        .invoke(`text`)
        .should(`contain`, getServerData)
    })

    it(`should have other imported exports left alone`, () => {
      cy.getTestElement(`unmodified-exports-page-template-hello-world`)
        .invoke(`text`)
        .should(`contain`, helloWorld)
    })
  })

  describe(`other JS files`, () => {
    it(`should have exports named config left alone`, () => {
      cy.getTestElement(`unmodified-exports-config`)
        .invoke(`text`)
        .should(`contain`, config)
    })

    it(`should have exports named getServerData left alone`, () => {
      cy.getTestElement(`unmodified-exports-get-server-data`)
        .invoke(`text`)
        .should(`contain`, getServerData)
    })

    it(`should have other named exports left alone`, () => {
      cy.getTestElement(`unmodified-exports-hello-world`)
        .invoke(`text`)
        .should(`contain`, helloWorld)
    })
  })
})
Back on RSS

I'm using RSS again. I'd wanted to use it again for a while, but in the near decade since Google Reader's sunset, had difficulty making the habit stick. (I'm using RSS colloquially to refer to any of RSS, Atom, JSONFeed, or web scraping.) Part of my problem was probably due to each previous attempt stubbornly making my own minimal reader (I've made several I don't use 😓), instead of starting with featureful existing tools. I know others have had a renewed interest as well, so here's what's finally made it click for me.

I'm self-hosting a FreshRSS server using their docker container. This gives me a central server to synchronize between various devices. I don't love its UI, but it's passable, and I will mostly be using other apps for reading.

One thing I like about FreshRSS is that it provides a decent ability to scrape sites which either don't provide an RSS feed (which feels more common in recent years) or which have a truncated feed. For sites without RSS you can point it at a page and provide an XPath expression to find links to articles on the page, and relative XPaths to find the details of each item. This takes some work to set up, but for some feeds I really wanted to exist but weren't provided, it's worth it. For feeds which don't have the page content in the feed, FreshRSS has the ability to scrape the page for each article and extract content via CSS selector (I have no idea why one kind of scraping uses XPath and the other CSS). It also has some controls to automatically mark articles as read, including based on filters.

FreshRSS supports a number of apps/clients via a "Google Reader" API and a newer "Fever" API. Gray Gilmore recommended I try NetNewsWire for Mac and iOS. I can only speak to the mac version, which is great! One feature this has is "reader" mode, which does a "reader mode" style transform on the original page instead of using the content from the feed.
If I was only using NetNewsWire, I'd probably forego the CSS selector scraping in FreshRSS in favour of this on most feeds. On Android I still haven't settled on which app I prefer. So far FeedMe is in the lead, but I'm also testing Readrops and Fluent Reader Lite.

Most important was getting a solid list of feeds, to guarantee there was enough content for me to check at least once a day. I was worried there wouldn't be enough feeds out there, but I've found plenty, and I'm finding more every day. My strategy has been to add as many sources as possible; I'll remove them or adjust settings if there's a problem. Some ideas of where to start:

- opml.glitch.me is where I started. It generates an importable list of feeds based on twitter follows (this is what fedifinder was based on, from the same author). Due to its automated nature, mine needed some cleanup.
- Personal blogs are probably what I missed most from the google reader days. OPML found me a bunch of blogs I didn't know existed, and some recent posts I'd missed from social media. A win for RSS immediately!
- Company engineering blogs also have some good content.
- News websites mostly have feeds. Some allow subscribing by author, others don't, but I used XPath for a few authors I really wanted to read.
- Newsletters are pretty popular at the moment. Most have RSS (/feed on any substack); if they don't, kill-the-newsletter exists to convert them (though I haven't tried it yet).
- Patreon subscriptions each have private RSS feeds.
- YouTube pages each have RSS feeds (though it's more complicated to find the feed URL than it used to be).
- Mastodon accounts are all RSS feeds, for example: firstname.lastname@example.org. Similarly twitter accounts can be followed using nitter as a proxy. I'm going to use both of these sparingly (due to server load), but I'm finding this useful to follow organizations using those platforms to publish updates they aren't putting anywhere else.
- I really wish something similar existed for instagram.
- dev.to users each have a feed.
- GitHub has various feeds. You can follow users, or a repo's releases, or commits. Could be a good way to get project updates. (disclaimer: I work at GitHub)
- Hacker News has RSS feeds I think, but I found more success using hnrss.org.
- You can also follow subreddits (I followed /r/ruby) by appending .rss. I'm hoping this will be a good way to find more blogs and feeds to follow, while avoiding actually visiting those sites.

In addition to this I'm going to keep an eye out while browsing, to try to spot things which should have been in my feed. Any sources or tips I missed? @ me @email@example.com or send me an email.
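As a footnote, the XPath scraping mentioned earlier is easier to picture with a toy example. Here's a rough standalone sketch (in Python, using only the standard library) of what that sort of extraction does: one expression selects each article container, and relative paths pull the details out of each item. The sample page and the selectors are invented for illustration; FreshRSS's own implementation differs.

```python
import xml.etree.ElementTree as ET

# A made-up page with no RSS feed; a scraper would fetch something like this.
page = """
<html><body>
  <div class="post"><a href="/a">First post</a><span>2023-01-01</span></div>
  <div class="post"><a href="/b">Second post</a><span>2023-01-02</span></div>
</body></html>
"""

root = ET.fromstring(page)

# One XPath-style expression finds each article container on the page...
items = root.findall(".//div[@class='post']")

# ...and relative paths extract the details of each item.
feed = [
    {
        "title": item.find("a").text,
        "link": item.find("a").get("href"),
        "date": item.find("span").text,
    }
    for item in items
]

for entry in feed:
    print(entry["title"], entry["link"], entry["date"])
```

Note that ElementTree only supports a limited XPath subset and requires well-formed markup; FreshRSS runs full XPath against real-world HTML, which is messier.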
fix(monitor): Added handling when casting a type in SystemMonitor fails

If KubeArmor is running in systemd mode, the following error may occur:

Oct 13 05:10:25 ip-172-31-47-253 kubearmor[54156]: panic: interface conversion: interface {} is string, not []string
Oct 13 05:10:25 ip-172-31-47-253 kubearmor[54156]: goroutine 43 [running]:
Oct 13 05:10:25 ip-172-31-47-253 kubearmor[54156]: github.com/kubearmor/KubeArmor/KubeArmor/monitor.(*SystemMonitor).TraceSyscall(0xc0000fe600)
Oct 13 05:10:25 ip-172-31-47-253 kubearmor[54156]: /home/ubuntu/KubeArmor/KubeArmor/monitor/systemMonitor.go:691 +0x1a25
Oct 13 05:10:25 ip-172-31-47-253 kubearmor[54156]: created by github.com/kubearmor/KubeArmor/KubeArmor/core.(*KubeArmorDaemon).MonitorSystemEvents in goroutine 16
Oct 13 05:10:25 ip-172-31-47-253 kubearmor[54156]: /home/ubuntu/KubeArmor/KubeArmor/core/kubeArmor.go:245 +0xcc
Oct 13 05:10:27 ip-172-31-47-253 systemd[1]: kubearmor.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 13 05:10:27 ip-172-31-47-253 systemd[1]: kubearmor.service: Failed with result 'exit-code'.

This happens because a panic occurs when casting an interface{} value to []string fails. Type casts need handling for the failure case, so this fix adds that handling and ensures that no panic occurs even if the type cast fails.

Purpose of PR?:

Does this PR introduce a breaking change? No

If the changes in this PR are manually verified, list down the scenarios covered:
Handling for failed casts has been added to the implementation below:
https://github.com/kubearmor/KubeArmor/blob/main/KubeArmor/monitor/systemMonitor.go#L691

Additional information for reviewer?: Mention if this PR is part of any design or a continuation of previous PRs
N/A

Checklist:
[x] Bug fix.
Fixes #
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] This change requires a documentation update
[x] PR Title follows the convention of <type>(<scope>): <subject>
[ ] Commit has unit tests
[ ] Commit has integration tests

Hi, @daemon1024 Could you review this when you have time? If you don't need it, you can close it.

Thanks for the comments!!! @daemon1024

Thanks for handling it. Can you rebase the PR to fix the CI?

Sure, I'll update the main branch on my fork repository and rebase this branch.

@DelusionalOptimist I think we can also avoid doing the same assertion again a few lines below. Further, this processing seems similar to the one we already have in BuildPidNode, so I think we can use the returned value here instead of going through the args again. WDYT @daemon1024?

If I change the code based on DelusionalOptimist's advice, I think it will look like this. Is it right?

haytok KubeArmor [added-handling-when-casting-a-type-in-SystemMonitor-fails] > git status
On branch added-handling-when-casting-a-type-in-SystemMonitor-fails
Your branch and 'origin/added-handling-when-casting-a-type-in-SystemMonitor-fails' have diverged,
and have 19 and 1 different commits each, respectively.
  (use "git pull" to merge the remote branch into yours)
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
        modified:   KubeArmor/monitor/systemMonitor.go
no changes added to commit (use "git add" and/or "git commit -a")

haytok KubeArmor [added-handling-when-casting-a-type-in-SystemMonitor-fails] > git diff KubeArmor/monitor/systemMonitor.go
diff --git a/KubeArmor/monitor/systemMonitor.go b/KubeArmor/monitor/systemMonitor.go
index 002a569b43c4..4ed945fd73dd 100644
--- a/KubeArmor/monitor/systemMonitor.go
+++ b/KubeArmor/monitor/systemMonitor.go
@@ -687,8 +687,18 @@ func (mon *SystemMonitor) TraceSyscall() {
         } else if ctx.EventID == SysExecve {
             if len(args) == 2 { // enter
+                var execPath string
+                var nodeArgs []string
+
+                if val, ok := args[0].(string); ok {
+                    execPath = val
+                }
+                if val, ok := args[1].([]string); ok {
+                    nodeArgs = val
+                }
+
                 // build a pid node
-                pidNode := mon.BuildPidNode(containerID, ctx, args[0].(string), args[1].([]string))
+                pidNode := mon.BuildPidNode(containerID, ctx, execPath, nodeArgs)
                 mon.AddActivePid(containerID, pidNode)

                 // if Policy is not set
@@ -705,17 +715,9 @@ func (mon *SystemMonitor) TraceSyscall() {
                 log := mon.BuildLogBase(ctx.EventID, ContextCombined{ContainerID: containerID, ContextSys: ctx})

                 // add arguments
-                if val, ok := args[0].(string); ok {
-                    log.Resource = val // procExecPath
-                }
-                if val, ok := args[1].([]string); ok {
-                    for idx, arg := range val { // procArgs
-                        if idx == 0 {
-                            continue
-                        } else {
-                            log.Resource = log.Resource + " " + arg
-                        }
-                    }
+                log.Resource = execPath
+                if pidNode.Args != "" {
+                    log.Resource = log.Resource + " " + pidNode.Args
                 }
                 log.Operation = "Process"

Yup, the changes LGTM. Thanks @DelusionalOptimist for the suggestion. Let's handle them @haytok

@daemon1024 Thanks for the comments!!! I have updated!

Thanks for the merge!!!
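The fix above boils down to Go's comma-ok type assertion: a plain assertion like `args[1].([]string)` panics on a type mismatch, while the two-value form reports failure safely. A minimal standalone sketch of the pattern (the function and values here are illustrative, not KubeArmor's actual code):

```go
package main

import "fmt"

// extractArgs mirrors the pattern in the fix: event arguments arrive as
// []interface{}, and each one must be asserted to its expected concrete type.
func extractArgs(args []interface{}) (string, []string) {
	var execPath string
	var procArgs []string

	// The comma-ok form yields (zero value, false) on mismatch
	// instead of panicking like args[0].(string) would.
	if val, ok := args[0].(string); ok {
		execPath = val
	}
	if val, ok := args[1].([]string); ok {
		procArgs = val
	}
	return execPath, procArgs
}

func main() {
	// Well-formed event: both assertions succeed.
	path, pargs := extractArgs([]interface{}{"/bin/ls", []string{"ls", "-l"}})
	fmt.Println(path, pargs)

	// Malformed event (args[1] is a string, as in the reported panic):
	// the assertion fails quietly and procArgs stays nil instead of crashing.
	path, pargs = extractArgs([]interface{}{"/bin/ls", "-l"})
	fmt.Println(path, pargs == nil)
}
```

The trade-off discussed in the review still applies: silently substituting zero values keeps the daemon alive, but callers like BuildPidNode must tolerate empty paths and nil argument slices.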
I play an online flight simulator game (Rise of Flight). I have hosted as many as 30 people at one time, and the game played reasonably well, but our numbers are growing, and I want to be able to keep up with them. My philosophy has always leaned towards "overkill", meaning make sure you have enough of what you need, and then some, but I don't want to go too far overboard on this. My current connection is 25/25, and I can upgrade to 35/35. The next bundles are 50/20 and 100/20, but the server's upload for this game is more important than its download, so the 35/35 would be best for me.

I have looked into several alternatives for what I might use for a server, and need some insight as to what system would be better over another, and more importantly, WHY it would be better. And if possible, I need to know where I should draw the line, i.e., for what I am doing, there is no advantage to getting anything better, because you won't see any difference.

About the only thing I can tell you about the game that might be a factor is that the game only uses one core for computing "AI" moves. That would be the situation where you are fighting against the computer, and the computer is controlling the plane you are fighting. The system I am building would be for people vs. people only.

I have considered 3 alternatives. Note that video is not a factor, as the computer hosting the game doesn't even need a video card. The host computer doesn't show a game window when it's hosting the game; all you get is a dialogue window showing the fact that the game is up and running, the number of people online, etc.

1. i7 930 with 12GB triple channel, RAID 0
2. Server board with Xeon 5520 (2.6GHz), 12GB triple channel, RAID 0
3. Server board with dual 5520s (2.6GHz), 12GB triple channel, RAID 0

I'm pretty much up to snuff with computers, but am in the dark when it comes to servers. To be honest, I really don't know what makes a server a server.
From what I understand, a server doesn't so much process a lot of information, but what it does process, it does really fast. Obviously, the dual processor setup is going to be faster than the others, but is it way overkill for a system that is just going to be hosting one game? And finally, server software over a traditional OS? Not sure if this game runs under Linux or not.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using TrabalhoIntegradoComSO.Package;
using TrabalhoIntegradoComSO.Structs;

#region Dados do Processo
/*
 * Dear all, good afternoon. The data file for the interdisciplinary assignment with OS is on
 * the SGA. It is a text file in which each line holds the information of one process, with
 * the following structure, separated by semicolons:
 * - process PID (identifier, int)
 * - process name (string)
 * - priority (int)
 * - CPU execution time (float, in seconds)
 * - number of CPU cycles needed to finish.
 * For example, the process below:
 *   35;/usr/lib/evolution/evolution-addressbook-factory;4;0,65;7
 * has PID 35, is named "/usr/lib/evolution/evolution-addressbook-factory", has priority 4,
 * runs for 0.65 seconds each time it goes to the CPU, and needs to go to the CPU 7 times
 * to finish. If you have any questions, write to me. Regards, João
 */
#endregion

namespace TrabalhoIntegradoComSO.Package
{
    class Processo : Dados
    {
        private int piD;
        private string nome;
        private int prioridade;
        private float timeExec;
        private int ciclos;

        public int PiD { get { return piD; } set { piD = value; } }
        public string Nome { get { return nome; } set { nome = value; } }
        public int Prioridade { get { return prioridade; } set { prioridade = value; } }
        public float TimeExec { get { return timeExec; } set { timeExec = value; } }
        public int Ciclos { get { return ciclos; } set { ciclos = value; } }

        // Constructor
        public Processo(int iD, string nome, int prioridade, float timeExec, int ciclos)
        {
            this.piD = iD;
            this.nome = nome;
            this.prioridade = prioridade;
            this.timeExec = timeExec;
            this.ciclos = ciclos;
        }

        public override string ToString()
        {
            return PiD + ";" + Nome + ";" + Prioridade + ";" + TimeExec + ";" + Ciclos;
        }

        // Two processes are equal when their PIDs match.
        public Boolean Equals(Dados other)
        {
            Processo aux = (Processo)other;
            return this.piD == aux.piD;
        }

        // Orders processes by priority.
        public int CompareTo(Dados other)
        {
            Processo aux = (Processo)other;
            return this.prioridade.CompareTo(aux.prioridade);
        }
    }
}
New Features in VisualApplets 3.0.6#

New Hardware Platform marathon VCX-QP Supported#

CXP Camera Operators Allow to Define Incoming Pixel Format#

The three CXP camera operators, including CXPDualCamera and CXPQuadCamera, now allow you to define the pixel format of the image data coming in from the camera. Supported are all Mono, RGB, and Bayer pixel formats supported by the CXP specification. For all other CXP pixel formats, the pixel format setting "RAW" can be used. The three camera operators are available for all Basler CXP frame grabbers (mE5 marathon VCX-QP, mE5 ironman VQ8-CXP6D, and mE5 ironman VQ8-CXP6B).

Improved Protection of Changes in User Library Instances#

Instances of user library elements that have been modified by the user are now marked with a special icon. In addition, when an update or quick update from the user library is started on such an instance, a corresponding message informs the user that all changes made to this instance will be lost if the update is carried out. This way, adapted user library instances can no longer be overwritten unintentionally. (4906)

Applets for marathon VCL and LightBridge VCL: PoCL Support Deactivated per Default#

For applets you create with VisualApplets 3.0.6 (or higher) for marathon VCL and LightBridge VCL boards, the automatic PoCL detection feature will be disabled on the frame grabber board running the applet. PoCL support needs to be enabled by the user via microDiagnostics (menu Tools > Board Settings). microDiagnostics comes as part of the Basler runtime software installation. The option for enabling/disabling PoCL support has been implemented in microDiagnostics since runtime version 5.5.1 (and higher).

Tcl Export and Import Option#

VisualApplets 3.0.6 allows you to export VisualApplets designs as human-readable Tcl script code (*.tcl). This new feature is intended for revision control and for comparing different versions automatically (by creating "diffs").
Earlier exported Tcl script code can be imported back into VisualApplets. The Tcl scripts are not intended to be used as the primary file format for saving designs: the generated Tcl scripts do not cover all style information and do not contain the design structure of instantiated user library elements.

IsFirstPixel and IsLastPixel – Two New Operators in the Synchronization Library#

Library Synchronization has two new operators:

IsFirstPixel marks the first pixel in a line (in line mode) or in a frame (in frame mode). The operator outputs a 1 on its output port IsFirstO for each first pixel of a line/frame.

IsLastPixel marks the last pixel of a line (in line mode) / of a frame (in frame mode). The operator can also be used to mark empty lines (in line mode) or empty frames (in frame mode).

The set of design examples has been extended:
- Print inspection: Two new examples for print inspection have been added that both, though using different methods, allow object detection with identifying defects and correcting the position of detected objects within an image.
- Triggering: The set of design examples using triggers has been extended. Furthermore, the examples are now available in hardware-platform-specific variants.
- Basic Acquisition: The set of basic acquisition examples has been extended. Furthermore, the examples are now available in hardware-platform-specific variants.
package com.chobocho.tetrisgame;

import android.util.Log;

import com.chobocho.player.PlayerInput;
import com.chobocho.player.TetrisButton;

public class PlayerInputImpl extends PlayerInput {
    private final String TAG = this.getClass().getName();

    BoardProfile profile;
    TetrisButton bottomArrowBtn;
    TetrisButton leftArrowBtn;
    TetrisButton rightArrowBtn;
    TetrisButton rotateArrowBtn;
    TetrisButton downArrowBtn;
    TetrisButton playArrowBtn;
    TetrisButton pauseArrowBtn;
    TetrisButton startButton;
    TetrisButton playButton;
    TetrisButton gameoverButton;

    public PlayerInputImpl(BoardProfile profile) {
        this.profile = profile;
        startX = profile.startX;
        startY = profile.startY;
        initButton();
    }

    private void initButton() {
        int startX = profile.startX;
        int startY = profile.startY;
        int BOARD_HEIGHT = profile.boardHeight;
        int BLOCK_IMAGE_SIZE = profile.blockSize();

        bottomArrowBtn = new TetrisButton("BottomArrow", 4,
                startX + BLOCK_IMAGE_SIZE * 4, startY + BLOCK_IMAGE_SIZE * (BOARD_HEIGHT + 1),
                profile.buttonSize(), profile.buttonSize());
        leftArrowBtn = new TetrisButton("LeftArrow", 0,
                startX, startY + BLOCK_IMAGE_SIZE * (BOARD_HEIGHT + 5),
                profile.buttonSize(), profile.buttonSize());
        downArrowBtn = new TetrisButton("DownArrow", 1,
                startX + BLOCK_IMAGE_SIZE * 4, startY + BLOCK_IMAGE_SIZE * (BOARD_HEIGHT + 5),
                profile.buttonSize(), profile.buttonSize());
        rotateArrowBtn = new TetrisButton("RotateArrow", 2,
                startX + BLOCK_IMAGE_SIZE * 8, startY + BLOCK_IMAGE_SIZE * (BOARD_HEIGHT + 5),
                profile.buttonSize(), profile.buttonSize());
        rightArrowBtn = new TetrisButton("RightArrow", 3,
                startX + BLOCK_IMAGE_SIZE * 12, startY + BLOCK_IMAGE_SIZE * (BOARD_HEIGHT + 5),
                profile.buttonSize(), profile.buttonSize());
        playArrowBtn = new TetrisButton("PlayArrow", 5,
                startX + BLOCK_IMAGE_SIZE * 12, startY + BLOCK_IMAGE_SIZE * (BOARD_HEIGHT + 1),
                profile.buttonSize(), profile.buttonSize());
        // Note: the pause button reuses the play button's label, id, and position.
        pauseArrowBtn = new TetrisButton("PlayArrow", 5,
                startX + BLOCK_IMAGE_SIZE * 12, startY + BLOCK_IMAGE_SIZE * (BOARD_HEIGHT + 1),
                profile.buttonSize(), profile.buttonSize());
        startButton = new TetrisButton("StartButton", 7,
                startX + BLOCK_IMAGE_SIZE * 4, startY + BLOCK_IMAGE_SIZE * 9,
                BLOCK_IMAGE_SIZE * 6, BLOCK_IMAGE_SIZE * 3);
        playButton = new TetrisButton("PlayButton", 8,
                startX + BLOCK_IMAGE_SIZE * 4, startY + BLOCK_IMAGE_SIZE * 9,
                BLOCK_IMAGE_SIZE * 6, BLOCK_IMAGE_SIZE * 3);
        gameoverButton = new TetrisButton("GameoverButton", 9,
                startX + BLOCK_IMAGE_SIZE * 4, startY + BLOCK_IMAGE_SIZE * 9,
                BLOCK_IMAGE_SIZE * 6, BLOCK_IMAGE_SIZE * 3);
    }

    public boolean touch(int touchX, int touchY) {
        if (player == null) {
            return true;
        }
        if (startButton.in(touchX, touchY)) {
            Log.d(TAG, "touch: ");
            clickStartButton();
            return true;
        }
        if (playArrowBtn.in(touchX, touchY)) {
            Log.d(TAG, "Round button: play()");
            play();
            return true;
        }
        if (rotateArrowBtn.in(touchX, touchY)) {
            Log.d(TAG, "rotate()");
            rotate();
            return true;
        }
        if (leftArrowBtn.in(touchX, touchY)) {
            Log.d(TAG, "left()");
            left();
            return true;
        }
        if (bottomArrowBtn.in(touchX, touchY)) {
            Log.d(TAG, "bottom() down()");
            bottom();
            down();
            return true;
        }
        if (rightArrowBtn.in(touchX, touchY)) {
            Log.d(TAG, "right()");
            right();
            return true;
        }
        if (downArrowBtn.in(touchX, touchY)) {
            Log.d(TAG, "down()");
            down();
            return true;
        }
        return false;
    }
}
July 6, 2016 at 1:25 pm #54054

I'm playing around with a few Vulkan SDKs, so I thought I should start with one I know 🙂 I'm hitting errors when I try to run Vulkan X11 binaries on an Ubuntu machine though… Here's a quick overview of my setup:

OS: Ubuntu 14.04 (64-bit)
GPU: NVIDIA GeForce GTX 750 Ti
GPU driver version: 367.27
PowerVR SDK version: 4.1@b95ea13 (from GitHub)

I'm able to run the LunarG Vulkan SDK (v188.8.131.52) cube example, so I know Vulkan works on my system. Here's a comparison between OGLES and Vulkan SDK builds of 02_IntroducingPVRShell. In both cases, I'm using the following command to build:

make PLATFORM=Linux_x86_64 X11BUILD=1 X11ROOT=/usr -j 8

INFORMATION: Command-line options have been loaded from file <path>//PVRShellCL.txt
INFORMATION: EglPlatformContext.cpp: isGlesVersionSupported: number of configurations found for ES version [OpenGL ES 3.1] was
INFORMATION: EGL context creation: EGL_KHR_create_context supported
INFORMATION: Unspecified target API — Setting to max API level : OpenGL ES 3.1
INFORMATION: EGL context creation: Number of EGL Configs found: 1
INFORMATION: EGL context creation: Trying to get OpenGL ES version : 3.1
INFORMATION: EGL context creation: EGL_KHR_create_context supported…
INFORMATION: EGL context creation: EGL_IMG_context_priority supported! Setting context HIGH priority (default)…
INFORMATION: SystemEvent::Quit

../Linux_x86_64/ReleaseX11/VulkanIntroducingPVRShell

INFORMATION: Command-line options have been loaded from file <path>/PVRShellCL.txt
INFORMATION: Unspecified target API — Setting to max API level : Vulkan
INFORMATION: Number of Vulkan Physical devices:
ERROR: **** Display Properties: ****
Segmentation fault (core dumped)

It opens a window for a split second and dies as soon as the segfault occurs. Any idea what's going wrong here? Btw, there isn't a Vulkan HelloAPI for Linux. Was it missed from the package, or has it not been written yet?
2 users thanked author for this post.

July 8, 2016 at 3:14 pm #54072

Thanks Senthuran 🙂 Btw, I've built and successfully run the IntroducingPVRShell, IntroducingPVRAssets and GnomeHorde Vulkan demos on an NVIDIA SHIELD TV. I'll deploy them to some Galaxy S7s soon (will let you know if they work).

July 19, 2016 at 10:18 am #54103

The Vulkan X11 builds were actually not intended to be released; they were based on an early version of the Vulkan specification and on how its implementation was expected to look on Linux desktop machines. We will be releasing full, correct Linux desktop builds and binaries in our next release. Unfortunately the ones that have been released (by mistake) are not useful and will not work on any platform.
The Access environment is rich with objects that have built-in properties and methods. By using VBA code, you can modify properties and execute methods. One of the objects available in Access is the DoCmd object, which is used to execute macro actions in Visual Basic procedures. You execute the macro actions as methods of the DoCmd object. The syntax looks like this:

DoCmd.NameOfMethod [arguments]

Here's a practical example:

DoCmd.OpenReport strReportName, acPreview

The OpenReport method is a method of the DoCmd object that runs a report. The first two parameters that the OpenReport method receives are the name of the report you want to run and the view in which you want the report to appear (Preview, Normal, or Design). The name of the report and the view are both arguments of the OpenReport method.

Most macro actions have corresponding DoCmd methods, but some don't. The macro actions that don't have corresponding DoCmd methods are AddMenu, MsgBox, RunApp, RunCode, SendKeys, SetValue, StopAllMacros, and StopMacro. The SendKeys method is the only one of these methods that has any significance to you as a VBA programmer. The remaining macro actions either have no application to VBA code, or you can perform them more efficiently by using VBA functions and commands. The VBA language includes a MsgBox function, for example, that's far more robust than its macro action counterpart.

Many of the DoCmd methods have optional parameters. If you don't supply an argument, the compiler assumes the argument's default value. You can use commas as place markers to designate the position of missing arguments, as shown here:

DoCmd.OpenForm "frmOrders", , , "[OrderAmount] > 1000"

The OpenForm method of the DoCmd object receives seven parameters; the last six parameters are optional. In the example, I have explicitly specified two parameters. The first is the name of the form ("frmOrders"), a required parameter. I have omitted the second and third parameters, meaning that I'm accepting their default values. The commas, used as place markers for the second and third parameters, are necessary because I am explicitly designating one of the parameters following them. The fourth parameter is the Where condition for the form, which I am designating as the records in which OrderAmount is greater than 1,000. I have not designated the remaining parameters, so Access uses the default values for these parameters.

If you prefer, you can use named parameters to designate the parameters that you are passing. Named parameters can greatly simplify the preceding syntax. With named parameters, you don't need to place the arguments in a particular order, nor do you need to worry about counting commas. You can modify the preceding syntax to the following:

DoCmd.OpenForm FormName:="frmOrders", WhereCondition:="[OrderAmount] > 1000"
Before you start

Objectives: Learn the difference between an absolute path and a relative path, some useful commands, and common shortcuts for typing in paths.

Prerequisites: You should know what a shell is in Linux.

Key terms: path, directory, paths, command, linux, absolute, relative, current, run, system, file, shell

What is a Path

A good understanding of paths will help us grasp the concept of the shell in Linux. In general, a path is the location of some object in the file system. We can store paths in variables, so we can use them in commands. The PATH environment variable stores the list of directories that the system will search to execute commands, without us entering relative or absolute paths. As you will see, paths in Linux are very similar to paths in the DOS world.

We differentiate two types of paths. The first type is called an absolute path, and the other type a relative path. An absolute path is the full location from the root directory of the file system. Absolute paths in Linux always start with the "/" sign (the root directory). For example, "/usr" is an absolute path. A relative path, on the other hand, refers to objects relative to our current working directory. The working directory is our current position in the file system. To see the absolute path of our current working directory we can use the command "pwd". So, we can refer to objects in the file system based on where we are now, and to do that we use relative paths. When we use relative paths, we use certain symbols. One of the symbols is the single dot "./", and the other is the double dot "../". These are actually two entries that exist in every directory. They are files, but they have a special meaning. The double dot refers to the directory above our current directory. So, to go one level up we can simply use the command "cd .." (cd stands for change directory). The single dot refers to the directory we are in.

This is used a lot in the Linux world. In contrast to Windows, in Linux when we try to run something from the current working directory we have to use the "./" designation. In Windows, a command is automatically looked up in the current directory, while in Linux we have to refer to the current directory explicitly if we want to run something from it. We do that by using the relative path "./". This is actually a sort of security feature, and that's why we use it.

When we navigate the file system, there are two shortcuts that are really helpful. Paths can be really long, so there will be a lot of typing, especially for absolute paths. Because of that there are shortcuts in the shell which help us type faster. Bash, the GNU shell which is often used in Linux, has an auto-complete feature. To use it we press the TAB key while we are typing a command. Bash will then search our path for anything that starts with what we typed and will auto-complete that word for us. If there are multiple entries that match, we can press the TAB key multiple times to display all of the entries that match what we entered. Another useful shortcut is the history. Bash remembers every command that we run, in order. To use this feature we press the up and down arrow keys. That way, if we need to run some command again, we can simply get it from the history without typing it in again.
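The behaviour described above can be tried out in a short shell session; the directory names under /tmp are purely illustrative:

```shell
# Work in a scratch directory so the example is self-contained
mkdir -p /tmp/pathdemo && cd /tmp/pathdemo

# pwd prints the absolute path of the current working directory
pwd                       # e.g. /tmp/pathdemo

# ".." is the parent directory, so this moves one level up
cd ..
pwd                       # e.g. /tmp

# "." is the current directory; Linux needs the "./" prefix to run a
# program from the current directory unless it is on $PATH
cd /tmp/pathdemo
printf '#!/bin/sh\necho hello\n' > hello.sh
chmod +x hello.sh
./hello.sh                # prints "hello"
```

Running `hello.sh` without the `./` prefix would fail with "command not found", because the shell only searches the directories listed in PATH.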
import { FlagRouter } from "../flag-router"

const someFlag = {
  key: "some_flag",
  default_val: false,
  env_key: "MEDUSA_FF_SOME_FLAG",
  description: "[WIP] Enable some flag",
}

const workflows = {
  key: "workflows",
  default_val: {},
  env_key: "MEDUSA_FF_WORKFLOWS",
  description: "[WIP] Enable workflows",
}

describe("FlagRouter", function () {
  it("should set a top-level flag", async function () {
    const flagRouter = new FlagRouter({})
    flagRouter.setFlag(someFlag.key, true)
    expect(flagRouter.listFlags()).toEqual([{ key: someFlag.key, value: true }])
  })

  it("should set a nested flag", async function () {
    const flagRouter = new FlagRouter({})
    flagRouter.setFlag(workflows.key, { createCart: true })
    expect(flagRouter.listFlags()).toEqual([
      { key: workflows.key, value: { createCart: true } },
    ])
  })

  it("should append to a nested flag", async function () {
    const flagRouter = new FlagRouter({})
    flagRouter.setFlag(workflows.key, { createCart: true })
    flagRouter.setFlag(workflows.key, { addShippingMethod: true })
    expect(flagRouter.listFlags()).toEqual([
      { key: workflows.key, value: { createCart: true, addShippingMethod: true } },
    ])
  })

  it("should check if top-level flag is enabled", async function () {
    const flagRouter = new FlagRouter({ [someFlag.key]: true })
    const isEnabled = flagRouter.isFeatureEnabled(someFlag.key)
    expect(isEnabled).toEqual(true)
  })

  it("should check if nested flag is enabled", async function () {
    const flagRouter = new FlagRouter({ [workflows.key]: { createCart: true } })
    const isEnabled = flagRouter.isFeatureEnabled({ workflows: "createCart" })
    expect(isEnabled).toEqual(true)
  })

  it("should check if nested flag is enabled using top-level access", async function () {
    const flagRouter = new FlagRouter({ [workflows.key]: { createCart: true } })
    const isEnabled = flagRouter.isFeatureEnabled(workflows.key)
    expect(isEnabled).toEqual(true)
  })

  it("should return true if top-level is enabled using nested-level access", async function () {
    const flagRouter = new FlagRouter({ [workflows.key]: true })
    const isEnabled = flagRouter.isFeatureEnabled({ [workflows.key]: "createCart" })
    expect(isEnabled).toEqual(true)
  })

  it("should return false if flag is disabled using top-level access", async function () {
    const flagRouter = new FlagRouter({ [workflows.key]: false })
    const isEnabled = flagRouter.isFeatureEnabled(workflows.key)
    expect(isEnabled).toEqual(false)
  })

  it("should return false if nested flag is disabled", async function () {
    const flagRouter = new FlagRouter({ [workflows.key]: { createCart: false } })
    const isEnabled = flagRouter.isFeatureEnabled({ workflows: "createCart" })
    expect(isEnabled).toEqual(false)
  })

  it("should initialize with both types of flags", async function () {
    const flagRouter = new FlagRouter({
      [workflows.key]: { createCart: true },
      [someFlag.key]: true,
    })
    const flags = flagRouter.listFlags()
    expect(flags).toEqual([
      { key: workflows.key, value: { createCart: true } },
      { key: someFlag.key, value: true },
    ])
  })
})
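The tests above double as a specification. For readers wanting to see what they imply, here is a minimal sketch that satisfies the behaviour they exercise; the class name `SimpleFlagRouter` and every internal detail are my assumptions, not Medusa's actual `FlagRouter` implementation:

```typescript
type FlagValue = boolean | Record<string, boolean>

export class SimpleFlagRouter {
  private flags: Record<string, FlagValue>

  constructor(flags: Record<string, FlagValue>) {
    this.flags = { ...flags }
  }

  // Merge object values so repeated setFlag calls append nested flags
  setFlag(key: string, value: FlagValue): void {
    const existing = this.flags[key]
    if (typeof existing === "object" && typeof value === "object") {
      this.flags[key] = { ...existing, ...value }
    } else {
      this.flags[key] = value
    }
  }

  listFlags(): { key: string; value: FlagValue }[] {
    return Object.entries(this.flags).map(([key, value]) => ({ key, value }))
  }

  isFeatureEnabled(flag: string | Record<string, string>): boolean {
    if (typeof flag === "string") {
      const value = this.flags[flag]
      // Top-level access to a nested-flag object counts as enabled
      if (typeof value === "object") return true
      return value === true
    }
    // Nested access, e.g. { workflows: "createCart" }
    const [key, nested] = Object.entries(flag)[0]
    const value = this.flags[key]
    // A plain boolean wins even when accessed with nested syntax
    if (typeof value === "boolean") return value
    return !!value && value[nested] === true
  }
}
```

This sketch passes the cases listed above; the real implementation may handle concerns the tests don't cover here, such as default values and environment-variable overrides.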
HQ and the remote office are using a site-to-site VPN to communicate. 192.168.96.0/20 traffic is routed via eth1/3.1 through the tunnel to the remote site office.

192.168.96.0/20 <NS eth1/3.1> <ISP>......Site-to-Site VPN.....<ISP><eth0/2 NS> 192.168.130.0/24

We have added a new link (fiber) and want to reroute the VPN traffic to the new fiber.

192.168.96.0/20 <NS eth1/3.1><ISP>.........................................<ISP><eth0/2 NS> 192.168.130.0/24
<NS eth1/5><ISP>....................Fiber................<ISP><eth0/1 NS> 192.168.130.0/24

HQ has implemented PBR with a 192.168.0.0/16 route. I added a more specific route for 192.168.130.0/24 before this. HQ traffic cannot ping the remote site after disabling the tunnel.

1. Confirmed the new link interface can be pinged on both NetScreens.
2. HQ PC (192.168.98.82) cannot ping the FW's new interface 192.168.230.1 or the remote site interface 192.168.230.2.
3. RS PC (192.168.130.121) can ping the new interface 192.168.230.2 and the remote site interface 192.168.230.1.
4. Tried to put the policy before pol-trust No 10 and found traffic was routed to the Internet (by traceroute).
5. Tried to put the policy after pol-trust No 10 and before 40; traffic only shows '*' (by traceroute).
6. Tried to add a static route 192.168.130.0/24 next hop 192.168.230.2/29.
7. Confirmed the VPN tunnel was down when we were doing the re-route.

Here are some of the HQ and Remote Site interface lists and routing tables. Could someone help?

The config looks OK; ACL 9 should be hit before 6 and 7. Can you collect a flow debug on the HQ box while attempting the internal ping?

unset ff (repeat till you see a message - Invalid ID)
set ff src-ip <laptop ip> dst-ip 192.168.230.1
set ff src-ip 192.168.230.1 dst-ip <laptop ip>
debug flow basic
<<Run the ping test>>
get db st

The last command will print the debug trace; please share it.
C# 0 (minus) uint = unsigned result?

public void Foo(double d) {
    // when called below, d == 2^32-1
    ...
}

public void Bar() {
    uint ui = 1;
    Foo( 0 - ui );
}

I would expect both 0 and ui to be promoted to signed longs here. True, with the 0 literal it is knowable at compile time that a cast to uint is safe, but I suppose this all just seems wrong. At least a warning should be issued. Thanks! Does the language spec cover a semi-ambiguous case like this?

this question waits for Jon Skeet :)

It's the int that is being cast to uint to perform subtraction from 0 (which is implicitly interpreted by the compiler as uint). Note that int to uint is an implicit conversion, hence no warning. There is nothing wrong with your code... except that uint is not CLS-compliant. You can read why here. More info on CLS-compliant code on MSDN

int to uint isn't generally an implicit conversion - this is an implicit constant expression conversion. See section 6.1.9 of the spec. If we'd started off with an int variable, then there'd have been promotion to long. (Both int and uint can be implicitly converted to long.)

Accepting this answer because the root cause of my confusion is simply that zero is implicitly convertible to uint. I expected promotion to long based on the assumption that 0 was int...

Why would anything be promoted to long? The spec (section 7.8.5) lists four operators for integer subtraction:

int operator-(int x, int y);
uint operator-(uint x, uint y);
long operator-(long x, long y);
ulong operator-(ulong x, ulong y);

Given that the constant value 0 is implicitly convertible to uint, but the uint value ui is not implicitly convertible to int, the second operator is chosen according to the binary operator overload resolution steps described in section 7.3.4. (Is it possible that you were unaware of the implicit constant expression conversion from 0 to uint and that that was the confusing part? See section 6.1.9 of the C# 4 spec for details.)
Following section 7.3.4 (which then refers to 7.3.5 and 7.5.3) is slightly tortuous, but I believe it's well-defined, and not at all ambiguous.

If it's the overflow that bothers you, would you expect this to fail as well?

int x = 10;
int y = int.MaxValue - 5;
int z = x + y;

If not, what's really the difference here?

I think his issue is more with the (potentially) ambiguous resultant value. i.e. If I subtract a positive number from 0 I should not get a positive result. Similarly with your int example, adding 10 to a positive number should not result in a negative number. A question I would have: if you are dealing with these types of bounds on your numbers, wouldn't it be best to use checked to assure that your result isn't ambiguous (at least ambiguous to you)?

@NominSim: There's no ambiguity here. It's behaving exactly as the specification dictates it should. If the OP wants to use a checked context then they absolutely can, but there's no need for a warning here, and it's all behaving correctly. I believe the OP's actual issue is that it's being performed with uint arithmetic whereas he expected it to use long arithmetic: "I would expect both 0 and ui to be promoted to signed longs here."

I get that it is behaving exactly as the specification dictates; the ambiguity isn't in the spec but rather in the action, a la "if I subtract a guaranteed positive number from a negative number I should get a negative result". That's why I think the OP should use the checked context if they are dealing with numbers that may lead out of the bounds of the particular data structure.

@NominSim: You understand that it's as per the spec, but I don't believe the OP did. I don't think the word "ambiguity" is helpful here - there simply isn't any. There may be unexpected or undesirable behaviour, but that's not the same as ambiguity.

That's true, I am using the word since the OP used it. Now that I think more about it, his statement that "At least a warning should be issued" at first seemed reasonable to me, but the more I think about it the more I agree with you that it is fine as is; hand-holding can only get you so far...

Thanks for the detailed response @JonSkeet. I accepted the answer that most concisely summarized the problem, which was that 0 is being interpreted by the compiler as unsigned. My comment about a warning still stands - the compiler is doing an implicit conversion in such a way that almost guarantees underflow, and in C# it seems this should be a warning... (as opposed to C/C++, where I would NOT expect one. I'm still early on the learning curve for C#, though.)

@mike: I suggest you consider exactly what change you'd expect to see in the language specification, without making it significantly more complex.

In a checked context, if the difference is outside the range of the result type, a System.OverflowException is thrown. In an unchecked context, overflows are not reported and any significant high-order bits outside the range of the result type are discarded. http://msdn.microsoft.com/en-us/library/aa691376(v=vs.71).aspx

Technically, doing the following:

double d = checked(0 - ui);

will result in a throw of System.OverflowException, which is perhaps what you are expecting, but according to the spec, since this is not checked, the overflow is not reported.
Bike computer which doesn't need a smartphone to work I'm confused with different models of bike computers available in the market right now. I actually want a bike computer which can log data including speed, distance, GPS (optional) etc. I'm not interested to take my smartphone while riding, but I want to upload the ride stats in Strava after my ride. Is there any reliable bike computer available in the market which logs necessary ride stats which can be exported/synced with Strava once the ride is complete? Is GPS necessary for uploading ride stats in Strava? I'm a bit confused by this question. To the best of my knowledge there are no bike computers that actually REQUIRE a smartphone. Mostly you just plug them into your computer with a USB cable and upload the file to strava manually. Or are you looking for a computer with built in Wifi? @AndyP But not all devices support a PC connectivity I think. Can you suggest some budget computers which does the job nicely? I want to upload the data in strava for my training purposes. We don't do product recommendations here as they are considered off topic. Also 'budget' computers don't generally support data storage and upload. P.S If any moderators read this - I tried to tag Vishnu, but the @ tagging did not work - any idea why? @AndyP You don't need to tag a user in a comment on their own post, so the tag gets removed. GPS is required to provide the coordinates to log in the GPX file to upload to strava. WIthout location points and times, strava can't calculate your speeds. What would you expect to upload without location points? Total time riding and distance covered? That won't be able to match any segments. Note, you could use a cellphone without a bike computer. That's all I do - just don't look at it while riding. @Criggie You can upload files with no GPS data and only data from sensors - people do this routinely for indoor trainer rides. 
Since the OP states (in comments) he wants the data for training purposes, then a .fit file containing only Power/HR/Cadence would be a perfectly valid upload. Although if this is the only data of interest, I'd suggest there is better analysis software for it than Strava.

I'm voting to leave this open. Although we don't do product recommendations, this is a question about whether a certain kind of product exists.

Specific product recommendations are off topic, but some general info: Garmin and Wahoo bike computers at least do not need a phone with the respective apps present when tracking rides. Some features that require a phone or Internet connection obviously don't work though. Wahoo computers allow direct USB connection to a PC, and route files can be downloaded manually. Trying to cut the smartphone out of the loop is a battle you

https://support.wahoofitness.com/hc/en-us/articles/115000209170-Do-I-need-to-have-my-phone-with-me-while-using-the-ELEMNT-BOLT-
https://support.wahoofitness.com/hc/en-us/articles/115000127910-Connecting-the-ELEMNT-and-BOLT-to-Desktop-or-Laptop-Computers

A couple of things worth pointing out regarding Wahoo computers: The RFLKT does require a phone on rides (it's a bit of a weird computer, that one); the ELEMNT BOLT doesn't require a phone during rides, and can also upload ride data via WiFi without a phone present (but requires a phone to configure the WiFi connection initially). I had a RFLKT previously and have a BOLT currently.

Thanks for the info. Any idea about the Lezyne Super GPS?

Google is often handy for answering questions. I find: https://www.lezyne.com/product-gps-supergpsY11.php "Instant download of ride files (.fit) via plug-and-play flash drive technology (Windows/Mac) and upload directly to GPS Root website for ride analysis"

The new Lezyne Macro Easy GPS is specifically described as a one-time, on-the-device setup that does not require a smartphone for setup or for use at all.
It generates .fit files which can be downloaded to a computer (PC/Mac) and then uploaded to Strava or Lezyne's GPS Root website. It will record basic GPS data, and HR data via Bluetooth Smart.

I have this computer. You can use the app to set up the screens like you want and export the tracks, but it's not needed during the ride. You can do those things without a phone, but it's easier with one. I really like it. It's also quite solid. I dropped it a few times already :s

In the years since this question was asked, there are some new technologies that offer better solutions.

Hammerhead Karoo 2

For a completely smartphone-free bike GPS experience, Hammerhead's Karoo 2 is your best bet. It doesn't even have a smartphone companion app, and can run entirely off WiFi (for data syncing only in WiFi range) or use an installed SIM card for an always-on cellular connection with live tracking during your rides. At $399 it's pricey, but not far off less-capable options from Garmin and Wahoo. It's regularly updated (more frequently than Garmin or Wahoo's devices), but as of this moment one of its missing features is that it'll only let you use Karoo's live-tracking app, and doesn't integrate with Strava's live track service. The key difference is that Strava's service will message your contacts when you start and finish a ride, and show your progress during the ride, whereas Karoo's live tracking is just a map that shows your current location, and won't allow you to auto-notify anyone, or show your progress for your current ride.

Wahoo ELEMNT ROAM & BOLT

If you're not worried about live tracking without a smartphone during your rides, Wahoo's bike computers are the other option for riding without the need to connect to a smartphone during, or after, your ride. From a features standpoint, they're similar to Garmin's Edge line of computers, but the Wahoo ELEMNT ROAM & BOLT both have WiFi to upload rides to Strava and other services without syncing to a companion app. You'll still need to pair an iOS or Android device for the initial setup, but after that, you can largely forget about it. There are a few features you'll miss if you leave your smartphone at home during your ride, like live tracking, but they're fairly minor.

Smartwatches

The other options are WiFi-enabled smartwatches. For example, Apple's Watch devices with cellular will allow you to use Strava's Apple Watch app to record and live-track your rides while leaving your smartphone at home. (Though, notably, Strava's app won't work with "Family Setup", which allows you to set up an Apple Watch without linking it to an iPhone.) Other smartwatches from Garmin and other manufacturers have similar functionality. However, I get the impression that the original question was looking for bike computers specifically, not wrist-mounted devices.

Ironically, as far as I understand, the Karoo is a full Android device with a mobile network connection of its own. It's just a microphone and speaker away from a smartphone.

Yes - that's correct. It's basically a specialized Android device. You can even "side load" APKs to it if you'd like.
Although the GUI-based tools allow the uninitiated to make quick backups, all of these tools output tar files. The tar files, and the tar program that makes them, are one of the original carryovers from Unix. tar stands for Tape ARchive and refers to backing up data to a magnetic tape backup device. Although tar files are designed for backup, they've also become a standard method of transferring files across the Internet, particularly with regard to source files or other installation programs. A tar file is simply a collection of files bundled into one. By default, the tar file isn't compressed, although additional software can be used to compress it. tar files aren't very sophisticated compared to modern archive file formats. They're not encrypted, for example, but this can also be one of their advantages. Linux comes with a couple more backup commands, which you might choose to use. They are cpio and pax. Both aim to improve on tar in various ways, but neither is broadly supported at the moment. Examine their man pages for more details. Perhaps unsurprisingly, tar files are created at the console using the tar command. Usually, all that's needed is to specify a source directory and a filename, which can be done like so: tar -cf mybackup.tar /home/knthomas/ This will create a backup called mybackup.tar based on the contents of /home/knthomas/. tar is automatically recursive so, in this example, it will delve into all subdirectories beneath /home/knthomas. The -c command option tells tar you're going to create an archive, and the -f option indicates that the filename for the archive will immediately follow. If you don't use the -f option, tar will send its output to standard output, which means that it will display the contents of the archive on the screen. If you typed in a command like the preceding example, you would see the message "Removing leading '/' from member names." 
This means that the folders and files added to the archive will all have the initial forward slash removed from their paths. So, rather than store a file in the archive as:

/home/knthomas/Mail/file1

the file will be stored as:

home/knthomas/Mail/file1

The difference between the two forms concerns when the files are later extracted from the archive. If the files have the initial slash, tar will write the file to /home/knthomas/Mail/file1. If there's already a file of that name in that location, it will be overwritten. On the other hand, with the leading slash removed, tar will create a new directory wherever you choose to restore the archive. In this example, it will create a new directory called home, and then a directory called knthomas within that, and so on. Because of the potential of accidentally overwriting data by specifying absolute paths in this way, a better way of backing up a directory is simply to change into its parent and specify it without a full path:

cd /home/
tar -cf mybackup.tar knthomas

When this particular archive is restored, it will simply create a new folder called knthomas wherever it's restored. You can also compress the archive from within tar, although it actually calls in outside help from either bzip2 or gzip, depending on which you specify. To create a tar archive compressed using bzip2, the following should do the trick:

tar -cjf mybackup.tar.bz2 knthomas

This will create a compressed backup from the directory knthomas. The -j command option passes the output from tar to the bzip2 program, although this is done in the background. Notice the change in the backup filename extension to indicate that this is a bzip2 compressed archive. The following command will create an archive compressed with the older gzip compression:

tar -czf mybackup.tar.gz knthomas

This uses the -z command option to pass the output to gzip. This time, the filename shows it's a gzip compressed archive, so you can correctly identify it in the future.
Extracting files using tar is as easy as creating them:

tar -xf mybackup.tar

The -x option tells tar to extract the files from the mybackup.tar archive. Extracting compressed archives is simply a matter of adding the -j or -z option to the -x option:

tar -xjf mybackup.tar.bz2

Technically speaking, tar doesn't require the preceding hyphen before its command options. However, it's a good idea to use it anyway, so you won't forget to use it with other commands in the future. To view the contents of a tar archive without actually restoring the files, use the -t option:

tar -tf mybackup.tar | less

This example adds a pipe into less at the end, because the listing of files probably will be large and will scroll off the screen. Just add the -j or -z option if the tar archive is also compressed. In addition, you can add the -v option to all stages of making, extracting, and viewing an archive to see more information (chiefly the files that are being archived or extracted). Typing -vv provides even more information:

tar -cvvf mybackup.tar knthomas

This will create an archive and also show a complete directory listing as the files and folders are added, including permissions. Once the tar file has been created, the problem of where to store it arises. As I mentioned earlier, storing backup data on the same hard disk as the data it was created to back up is foolish, since any problem that might affect the hard disk might also affect the archive. You could end up losing both sets of data! If the archive is less than 700MB, it might be possible to store it on a CD-R or CD-RW. To do this from the command line, first the file must be turned into an ISO image, and then it must be burned. To turn it into an ISO image, use the mkisofs command:

mkisofs -o backup.iso mybackup.tar.bz2

You can then burn the ISO image to a CD by using the cdrecord command.
Before using this, you must determine which SCSI device number your CD-R/RW drive uses (all CD-R/RW or DVD-R/RW drives are seen as SCSI devices, even if they're not). Issue the following command (as root, since cdrecord must be run as root, no matter what it's doing): You should find the device numbers listed as three numbers separated by commas. To burn the backup image, all you need to do is enter a command in this format: cdrecord dev=<dev number> speed=<speed of your drive> mybackup.iso On a typical system, this will take the form of: cdrecord dev=0,0,0 speed=24 mybackup.iso The tar command was designed to back up to a tape drive. If you have one of these installed on your computer, writing data to it is very easy. However, you'll first need to know if your tape drive is supported by SUSE Linux. Most modern tape drives are made available as SCSI devices, even if they aren't actually SCSI. Therefore, you can scan the hardware bus for SCSI devices like so (once again running this command as root): In the results, look for a line relating to your tape drive and, in particular, a line that begins Device file:. This will tell you where in the /dev/ folder the tape drive has been made available. Once you have this information, it's simply a matter of adapting the tar commands discussed previously in order to direct the output of the tar command to the drive (the following assumes you've identified your tape drive as /dev/st0): tar -cf /dev/st0 /home/knthomas Extracting files is just as easy: tar -xf /dev/st0 The tape itself can be rewound and erased using the mt command. See its man page for the various command options. Generally, the -f command option is used to specify the device to use (such as /dev/st0 in the previous example), and then a plain English word is used to tell the device what to do. These are listed in the man page. The most useful commands are those that can rewind a tape and also erase it. 
To rewind a tape, type the following:

mt -f /dev/st0 rewind

To erase a tape, type this:

mt -f /dev/st0 erase

One particularly useful command that you can use if the tape becomes unreliable is the retension command, which winds the tape to the end and then rewinds it to the beginning:

mt -f /dev/st0 retension
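The pieces above can be combined into a small backup script. This is a minimal sketch under assumptions: the source directory (/home/knthomas, as in the examples above) and the /tmp destination are placeholders to adjust for your system.

```shell
#!/bin/sh
# Minimal backup sketch: create a date-stamped, bzip2-compressed tar
# archive of a directory and verify that it lists cleanly.
SRC="${1:-/home/knthomas}"            # placeholder source directory
STAMP=$(date +%Y-%m-%d)
ARCHIVE="/tmp/backup-$STAMP.tar.bz2"  # placeholder destination

tar -cjf "$ARCHIVE" "$SRC"            # -c create, -j bzip2, -f archive file
tar -tjf "$ARCHIVE" > /dev/null       # -t list contents as a sanity check
echo "backup of $SRC written to $ARCHIVE"
```

From there, the archive can be turned into an ISO with mkisofs and burned with cdrecord, or written straight to a tape device such as /dev/st0, exactly as described above.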
This discovery process started when we tried to figure out how we could help grow the esports landscape and make it more organised. Being organised is a good baseline for growth in an industry and, in turn, makes it easier for newcomers to step into it. We have years of experience working closely with broadcasts and bringing tournaments (professional as well as amateur) into existence. Below I have detailed the discovery process for a possible new web platform between tournament organisers and broadcast talent. This process took place between February and April of 2021. I would like to thank my team lead at the time, who helped me through parts of the process, and my peers, who helped me in some of the ideation. Whenever I mention a "we" in this documentation, it refers to either my lead or the peers who helped. I have tried to be as explicit as possible about when I pulled others in. First we figured out who exactly we would like to approach. Our target market for this discovery was Tournament Organisers (going forward shortened to "TO(s)"), who often also organise (or outsource) the broadcast of the event, and Broadcast Talent (going forward "Talent(s)": commentators, show hosts, casters and any other on-screen artists). Later on we also figured out that the latter category could grow with additional roles (like statisticians, who provide stats to be displayed on the broadcast, freelance league administrators, etc.). I started with user interviews with both TOs and Talent. For this, I first created a template set of interview questions, starting with an icebreaker and some general demographics questions, then funneling down to questions about the biggest problems they are facing in their work. In general the interviews lasted somewhere between 30 minutes and 1.5 hours. I conducted 9 interviews in total (4 with TO representatives and 5 with Talent). I found all the interviewees by approaching contacts from my network.
User interview questions. It was a deliberate decision to skip creating a survey and angling for a large general data set. Firstly, fishing for a big enough data set would not have been easy, because it is quite a niche target audience. Secondly, we had already found a direction and a possible gap that we wanted to explore: the communication between TOs and Broadcast Talent. The interviews were conducted online. I recorded the interviews as audio or video, depending on what the interviewee was comfortable with. This made it possible to re-listen to parts later or check for any behavioral cues. Furthermore, it gave my team lead and peers the option to see how I worked and to give feedback. Alongside the recordings, I also actively took notes during the interviews. I must admit, it would have been a great benefit to have had a second person in the interview: one dedicated to paying full attention to the interviewee, catching cues and directing the flow, while the second person could dedicate themselves to taking notes. All the upcoming information is sorted on a Miro whiteboard, which happened to be one of the better free whiteboard services at the time of this discovery process. The information in most images is not intended for reading, but rather serves as a visual aid to show the process. One of my philosophies when making drafts is that they should be understandable to anyone who looks at them with minimal explanation. This is why some of the information might be laid out in a hectic but, hopefully, understandable way. I feel it saves a lot of time if you know you want to quickly find connections between information and there's a high possibility that you will need to move information around. In the drafting phase I always want to stay as flexible as possible without losing the integrity of the information. Back from the tangent!
After conducting the user interviews, I started to arrange the notes from each interview as follows: I laid out the notes on the whiteboard. On the left you can see the legend. I color coded each interviewee (top to bottom in the legend: the first 4 are TOs and the following 5 are Talents) and added their notes to the board, showing connections between parts of their responses. I would have loved to do a sentiment and theme analysis, but I felt the notes were thorough enough to start categorising them directly. These notes mainly show the user pain points or topics that they felt very strongly about. Next up, I sorted the piles into related topics and pain points between the interviewees. At first, I thought it might make sense to organise the TO and Talent notes separately. This did help in that I could see the biggest problems for each group. However, it quickly became apparent that I should connect the pain points between the groups. From these clusters we were able to write out user problems (or needs). Blue is for the needs of the Talent, and green for the needs of the TOs. The clusters also help to show how much a topic was discussed, but we learned that this didn't necessarily mean it was the "best" problem to solve. By "best" I mean the most urgent or most impactful for both groups. However, the big clusters did give a good indication of a general direction. Next, we pulled all the problems/needs we had written out of the note clusters. We talked through each of them, trying to rank them by priority, where priority means the impact for the groups and the perceived do-ability of developing a solution. We also tried to assign themes to each problem/need or cluster of problems/needs (marked with the transparent boxes next to both of the ranking columns). From this we also got a general sentiment of what the goals for both groups currently were.
Those clusters are shown on the far left and far right of the above image. We only used this as an indication of a goal for each group, rather than a hard truth. From the ranking, we identified and decided to go with three "big" user needs for each group, according to the priority we had set. All of the user needs from the ranking were worth consideration, but this is the direction we chose, trying to focus and narrow down what service we are able to offer, and keeping in mind that each need would take significant time to develop. From the ranking we created an opportunity tree with the prioritised user needs. We brainstormed a vision and north star, giving us even more direction. Even though both a vision and a north star SHOULD come at a later point, we felt we had a good direction and already a clearer picture of where we could head. This is the point where we figured out that we would like to develop a platform/marketplace to connect TOs to Talents. Then I gathered 3 peers to ideate on how to solve each of the user needs. You can see each peer's ideas color coded, with a legend for who contributed each idea. For this brainstorm, I asked each of them to work by themselves first, so they would come up with ideas on their own, and then invited them to share and talk through their suggestions in a group. My own ideas are marked in red. I then clustered ideas where possible and wrote each of them out; you can see that on the yellow notes. More ranking! I talked through the ideas with my peers: how they should be prioritised, which ideas would make sense in the bigger picture, and which ideas should be cut because they would not work or would not be necessary for the product to function. We also identified some additional points that should be kept in mind when we get to development. These are marked in red.
This whole exercise gave us a general user flow for an MVP: the absolutely necessary needs that should be met for both groups, and how the needs between the groups are connected. This is where most of the initial product discovery process ends. The upcoming parts are more musings, or could be considered part of what developers do. But since I only had access to a small group of people and I needed every proof possible to show the feasibility of the product, I also worked on the following parts. A brainstormed version of a full user flow. The purple notes are all the possible actions a user will be able to take. Again, green is for TO actions, and blue for Talent actions. (Below) User personas: 2 Talent user personas, 1 TO user persona, and 1 tentative Talent Manager persona. The last one was created as an exploration into a role and is not part of the MVP; I wanted to mark it here as an avenue for growth and further options for development. Persona-centered user actions: what/how/why actions they need to take to get to the desired outcome. Blue for Talent, green for TO. Detailed tentative sitemap of the platform, with the actions users are able to take. Condensed tentative sitemap of the platform. Extended list of actions, fields, and information that is needed from the user, or how the user will interact on the site. A click-through prototype of the platform in Figma. Below you can see a high-level view of what went into the pitch deck. The pitch deck consists of 20 slides and the planned time for the presentation is 45 minutes. This includes a projection of the costs, development time, and revenue. The calculations were done in a separate file.
The amount of posts published every day on this sub about how people have lost access to their accounts or were "hacked" is getting absurd, so I thought I'd share some general notes on how you can secure your accounts **right now.**

edit: never store your seed phrase online!

1. **Stop** using the same password for more than one account. This is **the most common** way accounts get stolen. If a password you've used appears in a data breach, then that login combination **will** be tested against various financial institutions, crypto included.
2. The same applies to email addresses. Stop using the same email address for more than one account. This considerably increases your risk from phishing emails. The way to solve this one is to use aliases. I'll give an example with Gmail, as the biggest email client out there. Imagine your email address was `firstname.lastname@example.com`. What you can do is add "+randomtext" after the local part, so it becomes `firstname.lastname+randomtext@example.com`. You will receive all emails as normal in your inbox. It's important to note that "+randomtext" should really be random and not "+amazon" or "+apple", to avoid anyone picking up patterns such as "+bank". If an alias gets compromised, you simply change that one email and block all emails going to it.
3. Two-Factor Authentication is key (pun intended). Enable 2FA on every single account that you own and store your backup codes somewhere offline. 2FA should be your primary way to confirm logins/approve transactions.
4. Use whitelisted addresses for withdrawals. If someone does get access to your account (near impossible, if you've done the above) then they won't be able to take your precious coins without needing to approve a wallet address using 2FA.
5. The moment your investment becomes more than you're willing to lose, it's time to get a hardware wallet.
This point deserves its own post, so I won't go into much detail, but they can't "hack" what's offline. A few good software recommendations for anyone looking into this post:

* [https://keepass.info/](https://keepass.info/) or [https://keepassxc.org/](https://keepassxc.org/) – In my opinion the best password manager out there. Could be a bit much for someone new to this. A close second for me is [https://bitwarden.com/](https://bitwarden.com/)
* 2FA is a hot topic and you won't go wrong with [https://authy.com/](https://authy.com/) or Google Authenticator

Remember, **you** are the biggest vector of attack for any online account that you own. It's up to you to secure it. Stay safe folks.
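As a quick illustration of the alias tip in point 2, here's a sketch that generates a random suffix. The base address is a placeholder, and the 8-character length is an arbitrary choice, not a requirement of Gmail's "+" addressing:

```shell
#!/bin/sh
# Generate a random "+suffix" alias so a leaked address reveals no pattern.
BASE="firstname.lastname"                                     # placeholder
SUFFIX=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 8) # 8 random chars
echo "$BASE+$SUFFIX@gmail.com"
```

Keep a record of which alias went to which service, so that if one gets compromised you know exactly where it leaked from and can block it.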
I was asked recently (internally) about CloudLinux: what is it and why should we use it? At first I actually had no idea about it; I had never even heard of it, and struggled at first to see the benefits of moving from our stable, sturdy CentOS platforms to this foreign OS. However, like anything I don't know about, in order to provide an answer I started researching. This little entry serves as a note of my research and implementation of CloudLinux, the OS that prides itself on being the only one specifically designed for the shared hosting market. It radiates benefits of security, stability and ease of management, but don't all operating systems throw these points at you? Yes, they do. There's one point, however, that makes CloudLinux really stand out: "LVE" Technology. LVE technology is really what makes CloudLinux different. CL developed this kernel-level module to simply "make sure that not one single web site can bring down your web server". It does this by controlling and limiting the amount of CPU and RAM accessible to a given process. Why are we implementing it? We always want to strive to give our customers exactly what they're paying for, plus a little bit more. I was told about CloudLinux because it has the possibility of offering our clients better security and performance. We decided to give it a whirl and virtualize it for our shared hosting services. Initially I was a little bit "what's the difference", but sure enough that LVE Technology gave me a new inner flame. So we've implemented it for the following reasons:
- It provides us better resource tracking (allowing us to better enforce our AUP)
- LVE provides us with "containers" for keeping a client's website, allowing them to get a better slice of the resource pool whilst we maintain said pool with enforced limits
- Better detection tools for malicious scripting activities provide us with the ability to better inform you when something's not quite right!
Where is this being implemented exactly? As CloudLinux is, as stated, "built for shared hosting", we're going to implement it on our Shared Hosting services initially. I'm fairly confident the timeline will be:
- A handful of legacy shared hosting services
- Our High Availability shared hosting
- The remaining shared hosting servers

Benefits to you! The benefits are pretty standout; if you read the above points you can already guess them. CloudLinux will allow us to provide you with:
- Greater Stability – CL allows us to understand our resources better, giving us the options and tools to refine how we share these out, giving you better speed / processing
- Greater Security – By placing each website in its own container we go from being a bunch of websites on a server with different permission sets to a "tenant" based system. Tenants have a finite resource limit which is a slice of the overall available resources on the server; this stops other people's websites spiking and impacting yours!
- Greater Visibility – The all-seeing eye: we need to know what's going on when we look, and CL provides us with a model to do exactly that. By utilizing the "tenant" model, each of your executions is run by your user, as opposed to a shared user like "apache". We can use this to see who's executing that script that is spamming, or that script that is trying to gain higher privileges.

Overall CloudLinux is a great new addition and advantage to our shared hosting services; it will provide you with a better experience whilst seamlessly integrating into our existing systems.
import simplejson as json
from logbook import Logger
from logbook import TimedRotatingFileHandler

import config
import redis_util

handler = TimedRotatingFileHandler('firetower-client.log',
                                   date_format='%Y-%m-%d')
handler.push_application()
log = Logger('Firetower-client')


class Client(object):
    """Basic Firetower Client."""

    def __init__(self, conf):
        self.conf = config.Config(conf)
        self.redis_host = self.conf.redis_host
        self.redis_port = self.conf.redis_port
        self.redis_db = self.conf.redis_db
        self.queue_key = self.conf.queue_key
        self.queue = redis_util.get_redis_conn(
            host=self.redis_host,
            port=self.redis_port,
            redis_db=self.redis_db)

    def push_event(self, event):
        self.queue.lpush(self.queue_key, event)

    def emit(self, event):
        """Emit a message to firetower.

        Args:
            event: str or dict. If we cannot parse it as JSON, we wrap it
                in a simple JSON struct: {'sig': <event payload>}.
        """
        try:
            unencoded = json.loads(event)
            if not unencoded.get("sig", None):
                raise ValueError
        except (TypeError, ValueError, AttributeError):
            # TypeError covers dict input (json.loads only accepts strings),
            # ValueError covers non-JSON strings and payloads without 'sig',
            # AttributeError covers JSON that parses to a non-dict value.
            payload = {"sig": event}
            event = json.dumps(payload)
        self.push_event(event)
        log.debug("Pushed event %s to firetower" % (event,))
we have recently purchased a license for Aspose.iCalendar and have been testing out the functionality of GenerateOccurrences(). We have a dialog where the user specifies the RecurrencePattern for a series. On closing the form we validate the RecurrencePattern generated to ensure that at least one occurrence is generated by the pattern and, if it is not, our plan was to alter the pattern to change the EndType to Count and the Count to 1 so that 1 occurrence would be generated. The problem we are encountering is that when the pattern specified should not generate any dates, GenerateOccurrences (called with no parameters) always returns one date - today's date. Is this by design? If it is, how would I go about detecting a pattern that should return 0 occurrences so that I can modify the pattern as described above? Thanks in advance,

According to the iCalendar standard (RFC 2445), the start date is considered the first occurrence date and is always returned (unless explicitly excluded by EXDATE). I think this is what happens in your case - the start date of the pattern is always returned. Although we comply with the standard, we think that always returning the start date as the first occurrence is not good for some scenarios - for example, for you. This is not the first time we have had this issue raised by our customers, so we plan to address it. Probably, we will introduce an option to either comply with the standard and return the start date as the date of the first occurrence, or not.

Thanks for the information. Do you have any kind of timescale for when this option to comply with the standard or not will be built in? In the meantime, I am going to perform a temporary kludge to get around the problem. I am going to set the pattern start date to a day before the actual start date should be and exclude that date from the pattern. Then, the actual start will only be generated as an occurrence if it falls within the range. I think this should work as a temporary fix.
just to update anyone who's interested in the workaround: in the case where the RecurrenceRule EndType is set to Count, I have had to increment the Count by 1 to generate the dates that I expected (as the 1st occurrence counts as the StartDate, which I am excluding).

I am having the opposite problem - I can't get the component to include the start date in the return from GenerateOccurrences(). My pattern is: and the first date returned is 1/5/2004. I couldn't really determine if the component was leaving the start date out or including it, and what the proper behavior according to the RFC was. Any thoughts?

The key to understanding what happens in your example is that RRULE and EXRULE are processed identically and they both "include" the start date in the result set. Thus RRULE includes the start date and EXRULE excludes it. We have some trouble understanding what behaviour is assumed by the standard in this case, hence we left it as is for now, although I agree it does look odd. To work around this and make sure the start date is always included, add RDATE:20040101T000000 to your pattern. Vice versa, if you want to make sure the start date is always excluded, add an EXDATE for it. Please let me know if you have any strong thoughts or understand the standard on this issue well.

The EXRULE specifies only Sat. and Sun., so I would not think that 1/1/2004, a Thursday, would be excluded based on that. Unless the point is that a rule, by definition, includes the start date, then I would (without referring to the RFC, btw) assume that rules would be processed in the order listed, meaning the behavior is correct. RDATE and EXDATE sound like excellent workarounds, though, as they would help clear up any ambiguity for other iCal implementations that may handle the rules differently. Thanks!
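For later readers, here is a minimal illustrative pattern for the RDATE workaround discussed above. The DTSTART and RDATE values come from the thread; the RRULE/EXRULE lines are assumptions sketched to mirror the weekday/weekend setup described, not the poster's actual (elided) pattern:

```
DTSTART:20040101T000000
RRULE:FREQ=DAILY
EXRULE:FREQ=WEEKLY;BYDAY=SA,SU
RDATE:20040101T000000
```

Because RRULE and EXRULE are processed identically and both "include" the start date, the EXRULE here would knock 1/1/2004 out of the result set even though it is a Thursday; the final RDATE line pins it back in. Swap the RDATE for EXDATE:20040101T000000 to guarantee the opposite.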
The idea of describing it as a non-constructable set, as in language arts, is to express the idea that Raga cannot be dissected or reduced by the ideas of Western music. Perhaps a better term might be irreducible, but if you look at papers on linguistics and non-constructibility I think you may see that the comment is not meaningless. The problem with defining a Raga as "a set of notes", and "a motif", and "a framework for the musician to ..." blah, blah, blah, is that this reduction is a completely Western philosophical view applied to an Asian tradition. Raga is more than that. It does not express the full idea of what Raga means to say that a musician is noodling around on the Dorian mode (which is one of the Carnatic scales). Raga includes cultural ideas. In some sense one can compare it to language, where we have a vocabulary and then we have cultural ideas expressed in idioms. The English, Canadians (for the most part), Australians, etc. all speak English, yet they don't always understand each other - not because of accent but because of idioms, euphemisms, and vocabulary not in common use across all English-speaking regions. This is not reducible like the base roots of language. A great deal of meaning is conveyed by tone of voice and other things like body language, yet we remove that when we try to describe "language" as a rule set. Raga is often mapped to a complete human experience, not just some notes that are played with. Phrasing and other attributes are taught as part of the Raga. Some examples are the Morning Raga, Evening Raga, etc. - musical ideas that are supposed to invoke in the listener the feeling that most people have when they wake up, or when they start slowing down in the evening and stop working. You can't write this in Western sheet music. In fact we cannot even express the full gamut of Western experience this way. It seems to me that the entire Westernized way of looking at music evolved from the development of orchestras and multi-voice harmony.
Not all musical languages follow this tradition, and they are not describable in these terms. In Indian culture, Yoga and chakras are an integrated idea that permeates art and music. Not everyone in India practices this or believes in it, but it has roots that are thousands of years old and hence is an integral part of everyone's common experience. That being said, many Raga ideas map to chakras, moods, etc. We have some of this in Western music, in that minor modes usually convey negative moods, major modes positive moods, and so on. But that is a far cry from saying that this collection of musical motifs (a Raga) stimulates sexual desire, or produces a calming effect. Again, I am not stating that there have been double-blind studies that verify this, but the idea is deeply rooted in the Raga tradition. To simply reduce it to "up scale", "down scale", etc. misses the entire point of Raga and ignores the cultural ideas contained within it. I'd recommend reading the following (Wikipedia is not always a good source of info): The Raga-ness of Ragas by Deepak Raja; Classical Music of India by Subramaniam and Subramaniam; Nuances of Hindustani Classical Music by Hirlekar. And listen to some of the classic Ragas, like the Morning Raga. I have a box set I got in India more than 10 years ago called 50 Glorious Classical Years that has examples of every kind of classical Indian music performed by the great musicians of India. The Western view of music is not all-encompassing, yet that doesn't stop us from trying to fit everything into our box. To really get a Raga you need to learn it the traditional way, from a master, in person. All the subtle nuances are transmitted by oral tradition.
import { BackendMultiplexor } from "./BackendQuotaSavers/BackendMultiplexor";
import { AuthStates, AuthClient } from "./AuthClient";
import { Settings } from "./Settings";
import { EntryStatus, EntryModel } from "./EntryModel";
import _ from "lodash";
import { Mutex } from "async-mutex";
import {
  EntriesTableModel,
  EntriesSubscriptionCallback,
} from "./EntriesTableModel";
import assert from "assert";

interface HistoryItem {
  old: EntryModel;
  new: EntryModel;
}

export class EntriesTableModelImpl implements EntriesTableModel {
  private _disposed = false;
  private _historyIndex = 0;
  private _history: HistoryItem[] = [];
  private _addNewItemMutex = new Mutex();

  // |_entries| is in reverse order.
  // It is natural for |Map| to add new items to the end, but
  // in |EntriesTable| new items belong to the top.
  private _entries: Map<string, EntryModel> = new Map();
  private _isCreatingNewEntry = false;
  private _settings?: Settings;
  private _serializedSettings = "";
  private _descriptions: Map<string, string> = new Map();
  private _subscriptions: Set<EntriesSubscriptionCallback> = new Set();

  constructor(
    private _backendMap: BackendMultiplexor,
    private _authClient: AuthClient
  ) {
    this._syncLoop();
  }

  dispose(): void {
    this._disposed = true;
    this._subscriptions = new Set();
  }

  subscribe(callback: EntriesSubscriptionCallback): void {
    this._subscriptions.add(callback);
    if (this._entries.size > 0)
      callback(this._getFilteredEntriesArray(), this._settings, {
        canUndo: false,
        canRedo: false,
      });
  }

  unsubscribe(callback: EntriesSubscriptionCallback): void {
    this._subscriptions.delete(callback);
  }

  addNewItemThrottled = _.throttle(() => {
    this.addNewItem();
  }, 500);

  addNewItem = async (omitHistory = false): Promise<void> => {
    const release = await this._addNewItemMutex.acquire();
    const entry = this._tryFindVacantEntry() || (await this._createNewEntry());
    if (entry == undefined) {
      // Release the mutex on the early-exit path too, so a failed
      // _createNewEntry() doesn't block subsequent calls forever.
      release();
      return;
    }
    this.onUpdate(
      entry
        .clear()
        .setFocused(true)
        .setInitiallyCollapsed(true)
        .setCreationTime(new Date(Date.now())),
      omitHistory
    );
    release();
  };

  // If the user deletes entries from the top of the table, the keys assigned
  // to these entries can be reused.
  // This function returns such an entry if it exists.
  _tryFindVacantEntry(): EntryModel | undefined {
    let lastVacantEntry: EntryModel | undefined = undefined;
    this._entries.forEach((entry) => {
      if (entry.data === EntryStatus.DELETED) {
        if (lastVacantEntry == undefined) lastVacantEntry = entry;
      } else {
        lastVacantEntry = undefined;
      }
    });
    return lastVacantEntry;
  }

  // If there is no deleted entry to reuse, the only option is creating a new one.
  async _createNewEntry(): Promise<EntryModel | undefined> {
    if (this._isCreatingNewEntry) {
      return;
    }
    this._isCreatingNewEntry = true;
    const newKey = await this._backendMap.createKey();
    assert(this._isCreatingNewEntry);
    this._isCreatingNewEntry = false;
    const newEntry = new EntryModel(newKey, EntryStatus.DELETED, "");
    this._entries.set(newKey, newEntry);
    await this._sendEntryToBackend(newEntry);
    return newEntry;
  }

  undo = (): void => {
    if (this._historyIndex === 0) return;
    this._historyIndex--;
    let entry = this._history[this._historyIndex].old;
    if (entry.data !== EntryStatus.DELETED) {
      entry = entry.setFocused(true);
      if (this._history[this._historyIndex].new.data === EntryStatus.DELETED) {
        entry = entry.setInitiallyCollapsed(true);
      }
    }
    this._entries.set(entry.key, entry);
    this._sendEntryToBackend(entry);
    this._onEntriesChanged();
  };

  redo = (): void => {
    if (this._historyIndex >= this._history.length) return;
    const historyItem = this._history[this._historyIndex++];
    let entry = historyItem.new;
    if (entry.data !== EntryStatus.DELETED) {
      entry = entry.setFocused(true);
      if (historyItem.old.data === EntryStatus.DELETED) {
        entry = entry.setInitiallyCollapsed(true);
      }
    }
    this._entries.set(entry.key, entry);
    this._sendEntryToBackend(entry);
    this._onEntriesChanged();
  };

  sync = async (): Promise<void> => {
    if (this._disposed) return;
    while (this._authClient.state !== AuthStates.SIGNED_IN) {
      await this._authClient.waitForStateChange();
    }
    const keys = await this._backendMap.getAllKeys();
    const newEntries = new Map();
    Array.from(keys).forEach((x) => {
      let entry;
      if (this._entries.has(x.id)) {
        entry = this._entries.get(x.id);
      } else if (x.description === EntryStatus.DELETED) {
        x.outdated = false;
        entry = new EntryModel(x.id, EntryStatus.DELETED, "");
      } else {
        x.outdated = true;
        entry = new EntryModel(
          x.id,
          newEntries.size < keys.length - 30
            ? EntryStatus.HIDDEN
            : EntryStatus.LOADING,
          x.description ?? ""
        );
        this._descriptions.set(x.id, x.description ?? "");
      }
      assert(!!entry);
      if (x.description !== this._descriptions.get(x.id)) {
        entry = entry.setDescription(x.description ?? "");
        this._descriptions.set(x.id, x.description ?? "");
      }
      newEntries.set(x.id, entry);
    });

    const promises: Promise<void>[] = [];
    if (this._settings == undefined) promises.push(this._fetchSettings());
    keys.reverse().forEach((x) => {
      if (x.outdated && newEntries.get(x.id).data !== EntryStatus.HIDDEN) {
        promises.push(this._fetch(x.id));
      }
    });

    this._entries = newEntries;
    if (this._entries.size === 0) {
      await this.addNewItem(true);
      return;
    }
    this._onEntriesChanged();
    await Promise.all(promises);
  };

  onSettingsUpdate = _.debounce((settings) => {
    this._settings = settings;
    this._onEntriesChanged();
    this._backendMap.setSettings(settings.stringify());
  }, 1000);

  onUpdate = (entry: EntryModel, omitHistory = false): void => {
    if (!this._entries.has(entry.key)) return;
    const prevEntry = this._entries.get(entry.key);
    assert(!!prevEntry);
    if (entry.data === EntryStatus.LOADING) {
      if (prevEntry.data === EntryStatus.HIDDEN) {
        this._fetch(entry.key);
        this._entries.set(entry.key, entry);
      }
      return;
    }
    this._sendEntryToBackend(entry);
    if (!omitHistory) this._addHistoryItem(entry);
    this._entries.set(entry.key, entry);
    this._onEntriesChanged();
  };

  _syncLoop = async (): Promise<void> => {
    if (this._disposed) return;
    await this.sync();
    setTimeout(this._syncLoop, 15000);
  };

  _addHistoryItem(newEntry: EntryModel): void {
    const oldEntry = this._entries.get(newEntry.key);
    if (oldEntry == undefined || !oldEntry.isDataLoaded()) {
      return;
    }
    this._history = this._history.slice(0, this._historyIndex);
    this._history.push({
      old: oldEntry,
      new: newEntry,
    });
    this._historyIndex = this._history.length;
  }

  async _sendEntryToBackend(entry: EntryModel): Promise<void> {
    let descriptionPromise = null;
    if (entry.description !== this._descriptions.get(entry.key)) {
      descriptionPromise = this._backendMap.setDescription(
        entry.key,
        entry.description
      );
      this._descriptions.set(entry.key, entry.description);
    }
    let dataPromise = null;
    if (entry.isDataLoaded())
      dataPromise = this._backendMap.set(entry.key, JSON.stringify(entry.data));
    await Promise.all([descriptionPromise, dataPromise]);
  }

  async _fetchSettings(): Promise<void> {
    const serializedSettings = await this._backendMap.getSettings();
    if (serializedSettings == "") {
      this.onSettingsUpdate(new Settings(""));
      return;
    }
    if (this._serializedSettings === serializedSettings) return;
    this._serializedSettings = serializedSettings;
    this._settings = new Settings(serializedSettings);
    this._onEntriesChanged();
  }

  _fetch = async (key: string): Promise<void> => {
    const content = await this._backendMap.get(key);
    if (!this._entries.has(key)) {
      console.error("Entry for fetch doesn't exist anymore. " + key);
      return;
    }
    if (content === undefined) {
      console.error("Key " + key + " is missing");
      const entry = this._entries.get(key);
      if (entry != undefined && !entry.isDataLoaded()) {
        this._entries.delete(key);
        this._onEntriesChanged();
      }
      return;
    }
    try {
      if (content === "") {
        const entry = new EntryModel(key, EntryStatus.DELETED, "");
        this._addHistoryItem(entry);
        this._entries.set(key, entry);
      } else {
        const data = JSON.parse(content);
        if (
          data !== EntryStatus.DELETED &&
          (data.left == null || data.right == null)
        ) {
          throw new Error("bad format " + content);
        }
        if (
          data === EntryStatus.DELETED &&
          this._descriptions.get(key) !== EntryStatus.DELETED
        ) {
          this._backendMap.setDescription(key, EntryStatus.DELETED);
        }
        const entry = new EntryModel(
          key,
          data,
          this._descriptions.get(key) ?? ""
        );
        this._addHistoryItem(entry);
        this._entries.set(key, entry);
      }
    } catch (e) {
      console.error(e.message + " " + key + " " + content);
      const entry = this._entries.get(key);
      if (entry != undefined && !entry.isDataLoaded()) {
        this._entries.delete(key);
        this._onEntriesChanged();
      }
      return;
    }
    this._onEntriesChanged();
  };

  _getFilteredEntriesArray(): EntryModel[] {
    return Array.from(this._entries.values())
      .reverse()
      .filter((x) => x.data !== EntryStatus.DELETED && x.key !== null);
  }

  _onEntriesChanged(): void {
    const entries = this._getFilteredEntriesArray();
    this._subscriptions.forEach((callback) => {
      callback(entries, this._settings, {
        canRedo: this._history.length > this._historyIndex,
        canUndo: this._history.length > 0 && this._historyIndex > 0,
      });
    });
  }
}
Copyright © 1995-2001 MZA Associates Corporation

Runset Monitor Overview

The tempus Runset Monitor is a standalone program that allows the user to continuously monitor the state of a simulation runset. It consists of several display sections showing progress, time, and disk-space information; in addition, it has an optional control bar and an optional message window. The control bar provides buttons to skip the next run or pause the execution of the simulation, while the message window can be used to track, display, and save log, error, and warning messages, including the present context. The following links describe the features of the Runset Monitor:

Software configuration and requirements

The tempus Runset Monitor was written for the Windows 98/2000/NT environment using the MS Visual C++ 6.0 compiler. The files needed for the monitor program are the executable trm.exe and the configuration file trm.cfg, both located in the <TEMPUS_DIR>\bin\<MSVC_VERSION> directory. The help file trm.htm and its associated jpg image files are located in the C:\MZA\doc\trm directory. The dll files mfc42.dll, msvcrtd.dll, and msvcirtd.dll must be installed into the \Windows\System32 or \WinNT\System32 directory. The configuration file stores the path to the help files and up to 10 runset names used during previous sessions. The Runset Monitor can be started by double-clicking on the trm.exe file or by typing trm in a DOS window. Make sure the environment variables <TEMPUS_DIR> and <MSVC_VERSION> have been set correctly. The following command line arguments are available to specify the initial setup: All command line arguments are position insensitive and, with the exception of the runset name, case insensitive. When a runset name is submitted without the .trf or .smf ending, the program first checks whether an executable corresponding to the runset name exists in the current directory.
If the program finds the executable, it will use the next available runset test name (i.e., the runset name extended by a number) to connect to the simulation. Otherwise, the runset name itself (without the .trf or .smf ending) will be used to connect to the simulation. The runset monitor employs a shared memory file to connect to a simulation runset and exchange progress data and control commands. This shared memory file is created by the simulation runset, and its name is equal to the name of the new data trf file (without the .trf ending), also created by the simulation at startup. In order for the monitor to connect to a simulation runset, the correct name of the shared memory file has to be submitted. The runset to be monitored can be defined or changed at any time by clicking on the button next to its name field and selecting or entering the shared memory file name in the popup dialog. Up to 10 previously submitted names are stored and can be selected in the dialog list box. If the exact shared memory file name is unknown, one can click on the Search button to look for and select the desired trf file. If the monitor can't connect to the selected runset file, an error message is printed to the status field and the program periodically tries to connect again. The runset name dialog is accessible in the Summary and Runset display sections and in the View menu under Select Runset. The runset name can also be defined at startup as a command line argument by submitting either the shared memory file name or the runset simulation name itself. The Runset Monitor has several display sections which can be opened or closed in the View menu. To access the View menu, right-click on the monitor window outside any display field, or right-click on the window name bar to bring up the system menu, which has a link to the View menu.
The following display sections are available: In addition to the individual display sections, the following predefined display sets can be selected using the View menu: If no valid runset has been selected, the monitor information fields will be blank. When all display sections are closed, the View menu can be accessed through the system menu by right-clicking on the window name bar.

The Summary section presents a brief overview of the state and progress of the simulation runset. It includes the runset name, the number of runs, the current run being processed, the progress of the current run, the overall progress, and the status of the runset.

The Runset section shows all information pertaining to the runset itself. The following information is reported: the runset name, the number of runs, the current status and progress of the runset, the elapsed time and estimated total time, and the name and location of the simulation output trf file, including the current disk space used and the estimated disk space required.

The Current Run section displays the information specific to the current run in progress. It includes the current run number, the progress of the current run, the elapsed time and estimated total time, the current virtual time, and the virtual stop time.

The Control Bar consists of three buttons: The control buttons are enabled when the right conditions are met and are disabled if no valid runset has been selected. The Message Window traces the log messages, along with any warning and error messages and the current context of the simulation runset. The context and messages are displayed in separate fields and are updated every 0.1 second. The Context field always displays the newest context and can be set to show either the top or the bottom view using the double arrow buttons beside its vertical scroll bar.
The Logs field contains up to the last 100 log messages, and the Warning/Error field contains up to the last 50 error and warning messages, both always showing the most recent message on top. For each log, error, or warning message, the corresponding context can be displayed by double-clicking on the message. This pauses the display, which can be restarted by left-clicking in the Context field or by pressing the Go button. The display can also be paused independently by pressing the Pause button. In addition to tracing the messages, the Message Window continuously displays the current virtual time and updates the total time and total memory used whenever the context level decreases. A Status field shows the current state of the runset, and any lost messages are counted and displayed in the Messages Missed field. The Message Window also provides a message buffer which the user can flush to a selected file using the Save button. The default size of the buffer is 50000 messages, but the size can be set with a command line argument at startup. If the buffer size is set to 0, no messages are buffered and the Save button is disabled. This can reduce the CPU time taken up by the Monitor program when processing a large number of messages, leaving more time for the actual simulation process. For debugging purposes, the simulation can automatically be halted when an error occurs by setting the Sleep On Crash flag in the View menu. When the Message Window is closed or a new runset is selected, all data, including the message buffer, is deleted. Messages are only tracked and processed while the Message Window is open.
F303: DMA operation hangs after exactly 2 DMA transfers; the START bit in the I2C CR2 register remains HIGH, and the `while` loop inside write/read/write_dma/read_dma hangs forever. I know it is working because the data I read is as expected when I use non-DMA I2C; I also tried to use write_dma to start the chip, enabling only some of the axes, and I get data as expected: 0 for disabled axes and actual values in the expected range for the others. So the DMA read/write are "working". Snippet:

```rust
let BUF = [0x28 + 0x80]; // 0x80 means multiread
let mut ANSWER = [0, 0, 0, 0, 0, 0];
loop {
    // Request the register; currently not using DMA here, as there is
    // no way to know when it has completed.
    i2c.write(addr, &BUF).ok();
    unsafe {
        i2c.read_dma(
            addr,
            &mut ANSWER,
            stm32_hal2::dma::DmaChannel::C7,
            Default::default(),
            &mut dma,
        );
    }
    delay.delay_ms(1000);
    // toggle led
    if led.is_high() { led.set_low() } else { led.set_high() }
}
```

NOTE 1: For F303, many things are different from example/i2c.rs (which also should not compile, as there are some spelling errors); in particular, dma::mux() does not exist. I guess it is not necessary for F3 and F4? It is very confusing; maybe it should be there anyway, simply doing nothing, so at least it won't break compilation.

NOTE 2: For now I have a delay before starting the next read/write operation, to make sure I am not overlapping the write with the DMA read, but that is not good, and I would love to have a way to know when the DMA transfer is completed without having to enable an interrupt. I reduced the test to:

```rust
let BUF = [0x20, 0x77];
unsafe {
    i2c.write_dma(addr, &BUF, true, stm32_hal2::dma::DmaChannel::C6, Default::default(), &mut dma);
}
delay.delay_ms(10);
unsafe {
    i2c.write_dma(addr, &BUF, true, stm32_hal2::dma::DmaChannel::C6, Default::default(), &mut dma);
}
```

The results are consistent: the first write is fine; the second one hangs. I also tested disabling clock stretching and changing the I2C noise filter to analog or digital 5, without success.
As the sensor works 100% reliably for a long time without DMA, I'm very lost. I could not find any errata or clue about what is going on here.

I'll take a look and get back to you. And clarify the example. Mux is not used on F3 - it's for newer STM32s. F3 channels are hard-set.

thanks, I'm much more familiar with the F4 series, so I was confused.

If you don't want to use interrupts, you can poll on whether the DMA xfer is complete (the dma.transfer_is_complete() method)

oh, I didn't think to look into the DMA class, thanks!

But you are then throwing out the main benefit of DMA.

Not in my case. I need to read a few sensors on different buses @ ~1 kHz, combine the results, and send them over UART; reading one I2C sensor as shown here takes my main loop from 420000 loops/s to 1440 loops/s, so I am pretty much wasting all my precious time waiting for data. Just moving the read of the slowest (or all) sensors to DMA should be more than enough to fix the problem.

What's the word? Closing pending further information.

The issue is still present with 1.5.3. I looked around and it seems there may be some issue with the I2C driver on all STM32s, but I don't know enough to confirm whether that is the case (see https://electronics.stackexchange.com/questions/267972/i2c-busy-flag-strange-behaviour). I would like to propose an alternative solution, though: the driver should NOT have any infinite loop, or at least a timeout alternative should be provided, so it would be possible to attempt a software recovery, or at least not hang the rest of the system.
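The timeout proposal at the end could look something like the following language-neutral sketch (shown in Python for brevity; `wait_for` and `Timeout` are hypothetical names, not part of stm32_hal2 or any other driver):

```python
import time

class Timeout(Exception):
    """Raised when a polled flag is never set in time."""

def wait_for(condition, timeout_s=0.01, poll_s=0.0001):
    """Poll `condition()` until it returns True or `timeout_s` elapses.

    Instead of a bare `while !flag {}` inside the driver, the caller
    gets a chance to attempt a bus/software recovery when the hardware
    never sets the flag.
    """
    deadline = time.monotonic() + timeout_s
    while not condition():
        if time.monotonic() >= deadline:
            raise Timeout("flag never set; try a bus/software recovery")
        time.sleep(poll_s)
```

In the DMA case above, the condition would be something like polling `dma.transfer_is_complete()` before starting the next write.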
Daily news, dev blogs, and stories from Game Developer straight to your inbox

October 25, 2007 | 4 Min Read | Author: Patrick Murphy, Staff

Continuing Gamasutra’s ‘Road to the IGF’ feature, which profiles and interviews Independent Games Festival 2008 entrants, today’s interview is with Bits & Pieces Interactive's Mårten Brüggemann. Brüggemann and his Sweden-based colleagues have created Fret Nice, a platformer to be played with a guitar controller, designed to make the player feel as if they are controlling the main character in time with the rock soundtrack. The game's official description also notes: "With the 'Riff Combo system', the player is able to play along to the game's rich soundtrack while avoiding obstacles and defeating menacing enemies." Multiple game levels each have a specific rock song to play along with, and we asked Brüggemann about the genesis of this rockin' project.

Fret Nice began as a University project, correct? How was your work received?

Yes, Fret Nice was my degree thesis, and as that, it was pretty well received. However, if I had to choose, I wouldn't want to create another game in an academic context - I thought that sort of background-research-heavy work procedure often draws one's attention away from the actual game designing.

Where did you draw inspiration from in its design and implementation?

I would be foolish not to mention Donkey Kong Jungle Beat as an inspiration for the "platformer controlled with a musical instrument" part of the game. When it comes to specific platform elements of the game, I think 20 years of 2D love makes it hard to make out specific sources of inspiration. Overall, the game design is focused around the guitar controller, and most of the game mechanics are incorporated to make the most out of the differences in using a guitar to control this kind of game, as opposed to an ordinary joypad. So I guess you could say that the guitar controller itself had a big influence on the game design.
What made you choose Multimedia Fusion 2, and what sort of other development tools have you been using on the project?

I've been working with Multimedia Fusion and its predecessors for a long time, and thought it would be sufficiently powerful for what I wanted to do with Fret Nice, plus I'm not that good at programming. For music, I used a tracker called Modplug Tracker, and the graphics were made by Emil Berner in GraphicsGale, mostly.

Aside from utilizing a guitar controller, what do you think the most interesting element of your game is?

The Riff Combo system used to defeat the enemies of the game and play along to the game's soundtrack is something I'm especially fond of, and I think it brings a unique feel to both playing the game and its audiovisual appearance.

How long has development taken so far, and what has the process been like?

The first version of the game, the one submitted to the IGF, took about 6 months of sporadic work to finish. About half of that time went to creating the base of the game and designing the core game mechanics. During the other half, Emil and I worked together, me doing the level design and programming the game elements specific to each level, and him making the graphics.

If you had to rewind to the start of the project, is there anything that you'd do differently?

There are some parts of the design that could be more balanced, and they will be in the next version of the game.

What are your thoughts on the state of independent game development - which recent indie titles impress you, and why?

I think the independent game development scene is as good as it ever was, with potential classics released every week or so on the internet, and with channels of distribution like XBLA, PSN, WiiWare, Steam, and others opening up for indie work, the future looks quite bright. I really liked [IGF Student Showcase winner] And Yet It Moves, for the weird gameplay and the rich ambience created with simple cut-outs and unexpected sound effects.
You have 30 seconds left to live and you must tell the game business something very important. What is it?

You get no girls by making games!
It’s one of the most iconic games in mobile history. Every person had a high-score; every person was determined to better someone else’s score. It was a never-ending battle between friends and family. Whenever you met someone with a Nokia phone, the same question was always asked: “What’s your high-score on Snake?” Although the original concept wasn’t created by Nokia, Taneli Armanto programmed a mobile-specific version for the Nokia 6110 in 1998. It was this release that saw the game begin its journey to the great heights of popularity. Since then, there have been many versions of Snake to grace the mobile world. It was in 2000, when the Nokia 3310 arrived, that Snake found its true fame. This version brought about Snake II, which is arguably the version that propelled the game to its highest popularity. The updated version added a more snake-like form, bonuses, mazes and a cyclical screen - meaning you could now go through the bottom of the level and come back out the top. Several versions followed as technology advanced. Snake Xenzia was released around 2005 and was the first look at the game with any color on a Nokia screen. Snake EX was the first real shift in graphics for the game and featured many of the well-known aspects of Snake II. The game looked more polished and the snake began to look even more snake-like than its predecessor in Snake II. The game would go on to see a multitude of changes to the way it looked and the way it played in the following years. Snakes was a multiplayer version that allowed you to play with your friends and compete for bragging rights. Snake III brought about a dramatic shift in graphics, with the snake looking its most reptilian, and Snake Subsonic made its way into the world in 2008.
HMD Global took the game back to basics upon the release of the Nokia 3310 feature phone in 2017. Bringing back the Snake Xenzia name, the updated version of the game is a more colorful and more attractive version of the original. The inclusion of Snake on the feature phone was met with a lot of enthusiasm from fans and critics alike. The Snake legacy doesn’t end there. In 2018 our CPO, Juho Sarvikas, tweeted out that Snake was now available as an Augmented Reality game on Nokia phones. The AR game maps out your level using your camera and your surroundings. The future is bright for Snake. We’re looking forward to seeing where one of the world’s most iconic games will go next. What was your favorite version of Snake? More importantly however, what’s your high-score? Let us know in the comments below.
When it comes to network time synchronisation, Network Time Protocol (NTP) is by far the most widely used software protocol. Whether it’s for keeping a network of hundreds or thousands of machines synchronised, or keeping a single machine running true, NTP offers the solution. Without NTP and the NTP server, many of the tasks we perform on the internet, from shopping to online banking, simply wouldn’t be possible. Synchronisation is vital for networks operating over the internet. Without synchronisation, there would be chaos. Imagine receiving an email from someone five minutes before it was sent, or transferring money to a user whose machine says the money left before it arrived.

Coordinated Universal Time

To avoid all these problems, a single, universal timescale is employed across the internet, which is the same no matter which time zone a machine resides in. Coordinated Universal Time (UTC) is governed by atomic clocks, so it is highly accurate and stable. To receive UTC, computer networks use NTP servers, which obtain the time from GPS networks, radio transmissions, or the internet itself. Once received, it is up to NTP to take this master time source and distribute it around a network to ensure synchronicity.

Network Time Protocol Explained

NTP is one of the oldest protocols in computing. It dates back to when the internet was still in its infancy, but it has been modified and adapted to ensure it is still relevant. In essence, NTP is an algorithm designed to adjudicate the timing on individual computers and compare them to the UTC time source. If NTP finds any discrepancies, it adjusts the clock on the offending device to ensure it matches. NTP does this with such accuracy that a network of a thousand machines can all be synchronised to within a few milliseconds of each other. NTP adopts a hierarchical system.
Rather than have every device on a network check against the NTP server and its UTC time source, the protocol allows the machines closest to the server to serve as references for machines lower down. This avoids an influx of traffic to the NTP server and allows a single device to maintain synchronisation in a network of hundreds, or even thousands, of devices, such as ethernet clocks, PCs, phones and more. One of the biggest challenges NTP faces in using UTC as a time source is that this universal time is occasionally adjusted to maintain its correlation with the rotation of the Earth. Because the planet is ever so slightly slowing down, the atomic clocks that govern UTC are more accurate than the planet itself, so an occasional second is added once or twice a year to ensure there is no drift of day into night (although such a drift would take millions of years). These incremental changes are known as leap seconds and are identified in the signals sent to most NTP servers. When NTP discovers a leap second has been added, it automatically adjusts all devices on a network by repeating a second. Failure to adjust for these leap seconds would result in the network gradually drifting away from UTC and becoming out of sync with the rest of the internet community. Galleon Systems has over 20 years' experience producing NTP time servers and clocks. View the full range of Galleon products and contact Galleon to discuss the best product for you.
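The comparison NTP performs between a device's clock and its reference rests on a simple four-timestamp exchange. Here is a textbook sketch of the standard offset/delay calculation (illustrative only, not any particular NTP server product's implementation):

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Clock offset and round-trip delay from one NTP exchange.

    t0: client transmit time (client clock)
    t1: server receive time  (server clock)
    t2: server transmit time (server clock)
    t3: client receive time  (client clock)
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2  # how far the client clock is off
    delay = (t3 - t0) - (t2 - t1)         # network round-trip time
    return offset, delay
```

For example, a client whose clock is 5 seconds behind the server, with a symmetric 0.1 s network path, yields an offset of 5.0 s and a delay of 0.2 s; the client then slews its clock by the computed offset.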
Are you looking for the best Full Stack Developer Courses in Pune? Look no further. In the realm of software development, becoming a proficient full stack developer is a coveted skill. But who is a full stack developer? A full stack developer is a professional with expertise in front-end and back-end web application development. In essence, they understand the entire web development process, from designing user interfaces to managing databases and server-side functionalities. Full stack developers are versatile and capable of working on all layers of a web application, making them valuable assets in various development projects. So if this sounds like something you are looking for, we can help you achieve your career aspirations. Here are the seven Best Full Stack Developer Courses in Pune to pick from.

Top 7 Best Full Stack Developer Classes in Pune

- Kochiva : (Best for practical knowledge)
- Seven Mentors : (Best for mastery in full stack development)
- FITA Academy : (Best for comprehensive knowledge)
- Seed Infotech : (Best for career guidance)
- 3RI Technology : (Best for beginners)
- TechnoBridge : (Best for placement assistance)
- Edureka : (Best for affordability)

1) Kochiva : (Best for practical knowledge)

Kochiva’s online Full Stack training in Pune is best for full stack development. The course contains an extensive array of cutting-edge technologies and vital skills necessary for excelling at full stack web development. Guided by mentors with 15+ years of corporate experience, you’ll grasp real-world insights. Here live projects bridge theory and practice, enhancing understanding. The course caters to diverse learners, making it perfect for students seeking practical skills and working professionals looking to upskill. Regular assignments, tests, and feedback ensure your continuous improvement.
- Mentors with 15+ years of corporate experience
- LIVE Project
- Certification and 100% Placement Assistance
- Ideal for College Internship Training
- Advanced Course Curriculum
- Flexible Batch Timings

Kochiva can be highly valuable for building a strong portfolio. Get in touch with Kochiva today:
Email – firstname.lastname@example.org
Phone – +91 9872334466

Kochiva also provides

2) Seven Mentors : (Best for mastery in full stack development)

Their Full Stack Development Course in Pune is like a toolbox for web developers. It integrates HTML, HTML5, CSS, CSS3, jQuery, and Angular for front-end development and Node.js, Express, and MongoDB for robust server-side development. This course imparts the essential skills needed to become a full stack developer in Pune. The course's structure starts with the front end and progresses to a MEAN Stack module that covers server-side and database interactions. Through practical assignments and integrated projects, students build practical experience alongside theory. This course from Seven Mentors nurtures a well-rounded skill set. The proficiency you'll gain after completing the training:

- An excellent understanding of HTML and CSS, plus the MEAN stack (MongoDB, Express.js, Angular, and Node.js)
- Mastery in handling the database, with in-depth knowledge of NoSQL
- Skills in data modelling, ingestion, query, sharding, and data replication
- Understanding of the MEAN Stack concept to create a front-end application
- Basic understanding of testing in the MEAN Stack training module

3) FITA Academy : (Best for comprehensive knowledge)

This Full Stack course in Pune provides you with the skills needed to build database-backed web applications and software. This course will teach you to design and develop databases, create web APIs, and ensure secure user authentication. Experienced trainers guide students in deploying web applications.
The course curriculum is expertly designed with industry insights to offer a solid foundation in practical concepts. Highlights of the course include:

- Comprehensive knowledge of full-stack development
- Hands-on practice with HTML/CSS
- Proficiency in UI/UX principles for user-friendly interfaces

4) Seed Infotech : (Best for career guidance)

This Full Stack Java Development course in Pune addresses the high demand for Java-related jobs in the software industry. Full-stack Java developers are instrumental in creating applications combining the user-facing front-end and the back-end components. They are responsible for designing user interfaces and for deploying, debugging, and maintaining databases and servers. This course prepares you for this demanding role by providing a curriculum that covers a wide range of essential technologies and concepts. The course's key features:

- Front-end technology, along with server-side programming, databases, and frameworks
- Expert-led sessions on full stack Java
- A solid foundation in manual testing
- Quality assurance techniques & tools
- Hands-on labs on full stack Java
- Employability labs
- Career guidance from top experts
- Grooming sessions, resume building

5) 3RI Technology : (Best for beginners)

This Diploma in Full Stack Web Development course in Pune offers a pathway for you to become a proficient Full Stack Developer in the software industry. Available through both classroom and online training, the course covers fundamental to advanced topics, including backend and frontend technologies. Experienced industry instructors will guide you through the curriculum, ensuring a full understanding of application development. Furthermore, the course assures placement assistance. This program is an effective launchpad if you want a career as a Full Stack Developer.
Key Features include:

- Course Duration: 5 months
- Real-Time Projects: 2
- EMI Option Available
- Project-Based Learning
- 24 x 7 Support
- Certification & Job Assurance

6) TechnoBridge : (Best for placement assistance)

Their Online Full Stack Developer Course in Pune is designed for practical, job-oriented training, integrating technical Java Full Stack Developer training, live projects, soft skill training, aptitude preparation, and interview preparation. They believe in your holistic growth. Mock interviews, group discussions, communication, and soft skill sessions are integral to the course.

7) Edureka : (Best for affordability)

This course also imparts skills for creating scalable back-end applications through Express & Node.js. Furthermore, the program equips you with the competence to manage data effectively using MongoDB. What do you get?

- 100% Placement Assistance
- Lifetime Access to the Course
- Join the course without quitting your job
- Affordable Full Stack Developer course fees in Pune

In conclusion, to become a successful full stack developer, there are excellent options to choose from. Full stack developer courses in Pune are tailored to all levels of expertise and offer curricula designed to provide the practical skills required in web development. Adding to the appeal, the competitive Full Stack Developer salary in India highlights how well-paying this career can be. As businesses increasingly rely on digital platforms, the demand for skilled developers remains robust. By enrolling in these courses, you can gain a deep understanding of frameworks and technologies. Pune’s bustling tech ecosystem, coupled with these top-notch educational offerings, ensures that graduates of Full Stack classes in Pune are well-equipped to tackle the challenges of modern web development.
Computational neuroscience postdoc prospects given computational neuropsychology Ph.D. research?

I have been accepted for a Ph.D. in computer science (CS), and the funded projects I am applying for are computationally focused, whole-brain neuroimaging (e.g., MRI, DTI) cognitive neuropsychology research with new experimentally derived human data. My end goal is to do computational neuroscience at the molecular electrophysiology level using experimentally derived subject data, which my biology, math, and CS background and prior neuro research projects help with (I would also greatly enjoy these positions, but the end goal is my strong favorite). Would doing a Ph.D. with my research in those funded positions (which are more top-down whole brain, as opposed to my bottom-up individual-neuron end goal) prevent me from being a good applicant for postdoc positions in my end-goal research area? How much of an extra challenge would this cause compared to doing a Ph.D. with research directly in my end goal? In general terms: would agreeing to do funded research for a Ph.D. in an area related to the one that is eventually desired still allow for getting a postdoc in the position one most wants? Also, what are good ways to do extra work during the degree toward the desired postdoc?

Voted to close as unclear.

@aparente001 Please allow me to add descriptions that help clarify the question, and thank you for alerting me that it was not clear. I have added a description in parentheses to try to help clarification. My question is: if I do a Ph.D. with research in a related but different area to my ultimate goal research, will that still lead to me being a good candidate for a postdoc in research in the ultimate goal area? Are there other things I can say to try to help explain the question better? Thanks!

@compneurophile Computational neuroscience, even with respect to your "bottom-up individual neuron end goal", is still big enough to make your question too broad.
Are you talking about working with "realistic" data-driven models (which involves a lot of mathematics), or analyzing experimental data (which still involves a lot of mathematics, but also potentially machine learning or image processing), or something else? There is also often a fair amount of (software/hardware) engineering required in major projects, e.g. to develop some of the experimental systems.

@compneurophile That being said, researchers in the field understand its highly multidisciplinary nature, so as long as you have a chance in your CS PhD to familiarise yourself with some of the essential low-level neuroscience knowledge (electrophysiology, neuroanatomy, neuromodulation, etc.), you should be okay.

people commenting must be from other fields, because this really isn't an unclear question. obviously it would be best to study your topic of interest now, rather than bank on getting a post-doc doing what you like 5+ years from now. that said, if you have no choice but to stick with your current lab for the PhD, it certainly is possible to switch to molecular/cellular comp neuro for the post-doc. in fact, i've done just that. i did my PhD doing psychophysics and some modeling in an fMRI lab, and will be starting a post-doc this fall in a comp neurosci lab.

@101010111100 Thanks for the further explanations. In the position I would work with experimental data on neuroimaging, not "realistic" modeling on the purely theoretical side. I have updated the question's first paragraph to include that. I want to somewhat avoid being too specific about the work for anonymity reasons. Does that help make the question clear?

@dbliss That is encouraging to find that you have done it, well done! Thanks for saying the question is understandable. In this case, the potential position is the best one available after I have tried my Ph.D. applications. A question I have is: how much extra preparation do you think is needed to get accepted into the comp neurosci lab?
Also, what was your Ph.D. major?

my PhD is in Neuroscience. i took time in my PhD to teach myself how to do neural modeling at the cellular level, and i am (hopefully) about to publish a paper with such a model in it. i'm sure that experience helped me sell myself to the computational lab for the post-doc. i've also tried to stay up to date with the computational literature most directly related to the questions i'm interested in.

@dbliss That's impressive about the paper, best of luck with it! Given that you worked in an fMRI lab, how did you find a way to do a paper with cellular modeling? Did you pitch it to the lab, or find another way to do it?

i have a very hands-off advisor, and i used that to go off in a direction pretty different from what other people in my lab are doing. my neural modeling is focused on cognitive questions that other people in my lab are approaching with fMRI, so there is a relationship there. even if your advisor keeps tighter control on you, i'd bet you can connect his work to some kind of computational question that excites you.

@dbliss Good advice, thanks so much! That was savvy, to maneuver your way into that. Advisers vary a great deal in what work they allow, but I too like to believe that if a good evidence-based case is made, and it relates to their work, then at least one article can be added in that direction.

This question should remain open. It is clear that the asker is asking whether expertise in computationally focused whole-brain neuroimaging (e.g., MRI, DTI) cognitive neuropsychology can be applied to another, related, area to the extent of doing a postdoc. That's not a question that depends on personal circumstances. That the asker has received useful answers demonstrates it is a good question.

@ThomasKing Thanks for the support!
To @scaaahu MassimoOrtolano aparente001 Buzz user3209815: I have added a statement describing the question in terms that generalize to other situations, and I trimmed some extra descriptions that were complicating the question. Please tell me if more editing of the post is wanted, thanks!
STACK_EXCHANGE
Windows Vista has reached corporate customers and is en route to retailers and OEMs for release at the end of January. It's going to be huge. I'm not going to support it. Here's why…

The history of Microsoft's releases is, to say the least, chequered. It has been a sequence of promises that have not been delivered. Worse than that, the promises have been deliberately designed to frighten users away from switching to other operating systems that already contained these innovations. Let's look at some:

Active Directory is an implementation of LDAP directory services by Microsoft for use in Windows environments. Active Directory allows administrators to assign enterprise-wide policies, deploy programs to many computers, and apply critical updates to an entire organization. An Active Directory stores information and settings relating to an organization in a central, organized, accessible database. Active Directory networks can vary from a small installation with a few hundred objects, to a large installation with millions of objects. [Wikipedia] Active Directory was promised in 1996 (to ship with NT 4.0). It was delivered in 2000, shipping with Windows 2000. Novell's NDS (now called eDirectory) could do this in 1993.

Windows 95 was promised in 1990, to detract from the fact that Windows 3.0 was rubbish when compared to anything else on the market (i.e. NeXT and Mac System 7). It would theoretically feature an object-oriented user interface, featuring direct manipulation of desktop objects, and an object-oriented development environment. Neither of these features actually made it into Windows 95 – in fact it was little more than a re-polish of Windows 3.11, with the addition of a 32-bit API which was kept secret from other developers until after Office (which used it) had shipped. Meanwhile NeXT already had both an object-oriented user interface and an object-oriented development environment. It also supported distributed computing to boot, and had been available since 1993.
Windows XP was set to deliver many of the same features promised for Windows 95. It arrived five years late and didn't have a range of features available in Mac OS X, which shipped seven months earlier.

Which brings us to Vista. The list of promised features that have been dropped is longer than the list of ones that remain. There is also more than a slight suspicion that the developers were given a copy of Mac OS X and told to copy it. Have a look at the hilarious video attached to this review [nytimes.com] by David Pogue. Not only that, but many of the shiny new features of Vista I've had running on my Linux box [linuxforums.org] for ages.

So good are they at promising the world and failing to deliver that several companies have filed against them in Iowa [Groklaw.net]. The problem is that Microsoft's half-baked promises, the difficulty of getting other products shipped with Windows (both technically and because of restrictions placed on OEM manufacturers), not to mention the forcing of OEMs to promise not to ship any other OS on machinery carrying Windows (heaven forfend that consumers be given a choice!), all add up to anti-competitive behaviour which has destroyed products that were better.

I've Had Enough

I'm not saying that Microsoft products, when they eventually get to market – stripped down as necessary – are actually bad. The simple fact of the matter is that you can do better. Further to that, some things which should be done better aren't, because Microsoft seems to be pursuing a deliberate tactic of bullying them out of the market. Competition is supposed to drive innovation, and it does; the problem is that innovation is being stifled by their behaviour, and consumers are settling for second best, often without even realising it.

This is the digital age. How you manage your personal data and your computer environment is important. That technology continues to be able to innovate is important.
That documents people write now can be read in the future is important. This last one is a not-so-separate issue relating to the proprietary format that MS Office documents are saved in, and Microsoft's attempts to sideline the Open Document Format with their (not) OpenXML format; see this article [regdeveloper.co.uk] for a quick overview. In short, to be guaranteed that you can read your old documents at some nebulous time in the future you can:
- Hope Microsoft will always exist and continue to release programs that can read your files.
- Save your files in a format whose specifications are open, known and published, so anyone can write a program that can correctly read them.
Microsoft is pretending that (not) OpenXML meets the latter while in fact still being the former. Why? Why not just implement the ODF standard? Because then consumers won't be locked into Microsoft Office by file compatibility, and Microsoft will have to compete on technical quality and price. Wouldn't that be awful!? No, of course it wouldn't be awful. It would drive up innovation and drive down price, which is better for consumers, isn't it?

I've had enough of supporting this behaviour. Obviously the nature of my job means I need to go on supporting MS servers in the Enterprise for the time being, although many of the key products I deal with are migrating to run on Red Hat Enterprise Linux. Outside of work, though, I'm drawing a line under Windows XP – if you get Vista I'm not going to help you get anything working with it. Sorry.

OK, so you all know I'm a Linux geek, but trust me, just try it and see. Download the Ubuntu Live CD, which will let you try it out without actually overwriting anything, and, if you like it, then go ahead and install it! Remember that this ships with a fantastic web browser, a fully featured email and calendar client, and OpenOffice 2.0, which is miles better than the previous release and even supports macros.
It'll read most MS documents, from what they've been able to decipher about the format. Or buy a Mac. Not a massive fan myself (other than of the sexy hardware), but everything you want to do on your Windows PC you can do on your Mac, only easier and better. What about games!? I hear you cry. Well, there are some you can get and play on Linux and Mac, but a fair few more you can't… to be honest my usual reply is "buy a PlayStation". But the wider the user base of an OS, the more incentive there is for companies to release games for it. When it comes time to upgrade your hardware and your OS, I simply make this one plea: Be a good consumer and choose what operating system you are going to use; compare and contrast what is on offer and pick what you think is best. Do not just take what is given – beware of Greeks bearing gifts.
OPCFW_CODE
Note: In large-scale deployments of GVP (with 10,000+ DIDs), users may experience response times of up to 15-20 seconds when editing or saving DIDs in a DID group through the GVP Reporting plug-in for GAX.

Note: DID group provisioning succeeds when you remove the conflicting entries in overlapping ranges (example: 100-200 and 150-250).

Backend filtering no longer removes the call record from a report when you use an IVR profile, Tenant, or component in the filter. Instead, certain filtering rules apply. See the GVP Reporting plug-in for GAX Release Note.

Resource Manager (RM) has three new configuration options in the CTI Connector (CTIC) Logical Resource Group (LRG) for handling CTIC failover. See the Resource Manager Release Note.

MRCP Proxy supports provisioning MRCP resources with the same URI, if the resource names or types are different.

15 December 2015

The Sip.Body value in the [vxmli] session_vars configuration option enables access to the body of SIP INVITE messages. See the MCP Release Note.

Use the configuration parameter [callmgr] enable_sip_response_in_transfer_metric to configure Media Control Platform to append the SIP response code (when it is available) to transfer_result metrics. See the MCP Release Note.

TCP Timer Setup—You can configure the wait time to keep a needed resource available when waiting to establish a TCP or TLS connection. See the MCP Release Note.

16 November 2015

You can use the GVP Reporting plug-in for GAX to "self-service" provision IVR profiles, map individual DIDs to DID groups, and map DID groups to IVR profiles. Read complete details here.

28 August 2015

The Supplementary Services Gateway now supports including custom HTTP response headers in the HTTP responses to HTTP requests. Enable this behavior with the [http]ResponseHeaders configuration parameter. Enter the value in this format: If your value includes the characters | or :, precede them with a backslash (\| or \:).
The default value for this parameter will be set to X-Frame-Options:DENY—part of Genesys software's defense against "clickjacking". This behavior also applies to accessing the SSG root page.

Now you can configure the duration of the MP3 recording buffer. Use the option [mpc] mediamgr.recordmp3audiobuffer (in the MCP application) to specify the duration of the audio buffer for MP3 recording, in milliseconds. mediamgr.recordmp3audiobuffer must be an integer of 2000 or greater; the default is 4000. Any change takes effect at start/restart.

The Media Control Platform supports MP3 compression at 8 kbps for mono recording. To enable 8 kbps mono recording, make these settings: msml.record.channels=1 OR msml.record.channels2=1. The default compression remains 16 kbps.

The Media Control Platform supports mono-channel recording during MSML GIR recording for all file formats. Use these parameters (both in the gvp.service-parameters section of the IVR profile record) to specify the channel information: the options recordingclient.channels=fixed and recordingclient.channels2=fixed can each have the value 1 (mono) or 2 (stereo, the default). If a parameter is missing from the IVR profile configuration, then MCP uses the configuration parameters msml.record.channels and msml.record.channels2 (which specify the number of channels that MCP must use for MSML recording to dest2). These two have the same possible values (1=mono and 2=stereo, the default). Any change takes effect immediately.

Use the configuration option [msml]record.amazonallowpublicaccess to enable public download access to an MSML recording file that was uploaded to Amazon S3. Set it to true to enable access to the uploaded recording file, and to false (the default) to disable access. This option grants access to both the primary recording destination (recdest) and the secondary recording destination (recdest2), if you configure both destinations to use the S3 URI format (s3:bucketname).
Any change takes effect at start/restart.

The Media Server function Call Progress Detection (CPD) now performs voice-print analysis and beep analysis to identify the specific preconnect carrier messages that occur in different countries. Media Server's configurable database of preconnect tones is initialized during installation and loaded when Media Server starts. You can update the database with different carrier messages at any time, without stopping Media Control Platform. Other features include the ability to leave postconnection messages such as voicemail. You can read about additional CPD functionality in "Appendix C: Tuning Call Progress Detection" of the GVP 8.5 User's Guide.

The Reporting Server (RS) adds support for Windows 2012 64-bit and MS SQL Server 2012, in native 64-bit mode using the 64-bit version of the Java Virtual Machine. RS also adds support for the Standard and Enterprise editions of the SQL Server 2012 database.

Resource Manager can now be configured to reject a call request if the geo-location of the targeted Logical Resource Group (LRG) does not match the geo-location attribute of the call request. This extends to all calls a behavior that previously applied only to recording-solution calls. Read about this behavior and the option that controls it, reject-on-geo-location-nomatch (new in release 8.5.1), in Locating Resources Using Geo-Location.

18 December 2014

GVP added support for Genesys Interactive Recording (GIR): Media Control Platform added two new MP3 encoding compressions—16 kbps (new default) and 24 kbps—to the existing configuration option [mpc]mp3.bitrate.

The GVP Reporting Plugin for GAX added new support and compatibility requirements: Added support for Management Framework 8.5 and Genesys Administrator Extension 8.5. Added support for Windows Server 2012 R2 64-bit. The plugin now requires GAX 8.5.0; support for GAX 8.1.4 is discontinued.

Warning: The Verbose level of debug tracing affects GVP production environment performance.
This warning applies to all mentions of the debug tracing setting verbose:

Warning: Debug tracing has a significant impact on GVP performance under load, especially at the verbose setting. Genesys recommends that you enable debug tracing in GVP production systems only when recommended by Customer Care or Engineering.

Trace or tracing is mentioned in the context of debugging or logging in several places in the documentation listed below. In each instance, readers should be aware of its performance impact: Genesys Media Server 8.5 Deployment Guide
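As an illustration, a few of the MCP options mentioned in these notes could be collected in the application's options like this (section and option names as documented above; the specific values are assumptions, not recommendations):

```ini
[mpc]
; duration of the MP3 recording audio buffer, in ms (integer >= 2000; default 4000)
mediamgr.recordmp3audiobuffer = 4000
; MP3 encoding bitrate for GIR recordings (16 kbps is the default)
mp3.bitrate = 16

[msml]
; mono recording for MSML GIR recording (1 = mono, 2 = stereo, the default)
record.channels = 1
record.channels2 = 1
; public download access for recordings uploaded to Amazon S3 (default false)
record.amazonallowpublicaccess = false
```

Per the notes above, the buffer-duration change takes effect only at start/restart, while the channel settings take effect immediately.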
OPCFW_CODE
What annotation means

The act of annotating something is defined as: 1: a note added as a comment or explanation. "The bibliography was provided with helpful annotations."

What are annotations in OData

Annotations are a potent method of extending OData services: they add information to a service that instructs clients how to interpret the service and its data.

What are annotations in SAP

An annotation enriches a definition in the ABAP CDS with metadata. It can be specified for specific scopes of a CDS object, i.e. specific locations in a piece of CDS source code. SAP uses a set of predefined SAP annotations.

What are the different types of annotations

There are four main types of annotations.

What are the types of annotations in SAP

ABAP annotations are evaluated when the object defined in the DDL source code is activated, or when the object is used in the ABAP runtime environment. SAP annotations are evaluated by SAP frameworks and can be one of the following two types: component annotations or annotations.

What is a Fiori annotation

What is semantics in CDS views

Semantic annotations inform the client which elements contain a phone number, a portion of a name or address, or information about a calendar event. They must not be tied to a specific consumption channel, for example.

What are associations in CDS views

ASSOCIATIONS are a type of join that retrieves data from multiple tables based on the conditions of the join, but they are "JOINS ON DEMAND": they are only activated when a user accesses data that necessitates the association of the tables.

What are the types of CDS views

ABAP CDS views versus HANA CDS views:
- ABAP CDS views and HANA CDS views are the two different categories of views.
- They are comparable in that they both contribute a substantial amount of ABAP code to the views, but they are different in terms of implementation and platforms.
- The following are each type's primary characteristics:

What does annotate mean in science

Annotated drawings are used in this curriculum to respond to specific scientific questions. They are composed of notes and labeled drawings that explain a scientific process.

What is the difference between an AMDP procedure and a CDS view

AMDP can be used to work with stored procedures and execute them in the HANA DB layer, whereas Open SQL and CDS cannot.

What is the difference between HANA CDS and ABAP CDS

The CDS objects created using HANA CDS are not controlled by the ABAP dictionary and therefore cannot be consumed in ABAP programs or Open SQL. HANA CDS views aim to support the development of native SAP HANA applications. ABAP CDS views are database independent, whereas HANA CDS views are database dependent.

What is the use of CDS views in HANA

ABAP CDS views enable developers to create semantically rich data models that the application services expose to UI clients, providing enhancements in terms of data modelling and enabling improved performance.

What is @AbapCatalog.compiler.compareFilter: true

compareFilter defines how the CDS compiler will translate CDS path expressions with filters to corresponding JOINs. If the annotation is not used, the compiler will create a specific JOIN statement for each CDS path expression with filters.

How to use a CDS view in an ABAP program

All the SQL-like code written in a CDS view picks the data from base tables, directly or via other CDS views, at RUNTIME. This is why CDS views are also known as VDMs, or Virtual Data Models.

What is @OData.publish: true

You can now create an OData service without using transaction SEGW, by adding the annotation @OData.publish: true to your CDS view, as described in SAP Help, Generate Service Artifacts From a CDS View.
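Several of the annotations discussed above can be seen together in a small ABAP CDS view. This is only a sketch: the view name, SQL view name and label are invented, and it selects from the classic SFLIGHT demo tables.

```abap
// Hypothetical view, for illustration only
@AbapCatalog.sqlViewName: 'ZVDEMOFLIGHT'        // generated SQL view name (assumed)
@AbapCatalog.compiler.compareFilter: true       // share JOINs for identical path filters
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Demo flight view'
@OData.publish: true                            // generate an OData service without SEGW
define view Z_Demo_Flight
  as select from sflight
  association [0..1] to scarr as _Carrier
    on $projection.carrid = _Carrier.carrid
{
  key carrid,
  key connid,
      @Semantics.currencyCode: true
      currency,
      price,
      // association: a join on demand, followed only when the client requests it
      _Carrier
}
```

The exposed association `_Carrier` illustrates the "JOINS ON DEMAND" point: the JOIN to scarr is only executed when a consumer actually follows the association.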
What is a DDL SQL view

With a DDL source, you define an entity that represents a projection onto one or more database tables. The DDL source is an ABAP development object, which you can use directly to access the standard ABAP Workbench functionality (transport, syntax check, activation).

What is AMDP in SAP HANA

ABAP Managed Database Procedures (AMDP) are the preferred method for developing SAP HANA DB procedures on the ABAP platform. Since AMDPs are implemented as methods of a global ABAP class, the editing environment for AMDP is the ABAP class editor.
OPCFW_CODE
Software Development: Think About These Critical Items Before the Construction!

We developers usually tend to jump in and create something beautiful, useful and sometimes geeky. We usually don't think much about some critical items:

What do our clients/users want to see? We usually guess it instead of asking, researching the market, reading around on Medium, etc. Creators usually over-estimate their creations, because there is sweat, flesh and blood in them from the creator's point of view. But users don't give a sh*t and also don't have to! If we want them to use our product, we should find out their needs and desires.

What will our revenue model be like? I have 4 different applications published in the Google Play Store from my very early days in programming, and in Android programming specifically. Currently, I have 65k total downloads and 7k active users with those 4 apps. All of them are badly coded, constructed as Minimum Viable Product applications, created to teach me about Android programming and to let me see the market potential. Here I'm going to share my 2 years of experience in the market and in construction.

When I started to learn Android programming, I saw a gap in the education industry in Turkey. Students have a hard time tracking their study status: "how many questions have I solved this week?". So I started to build an app with which you can track your lectures and the number of questions you have solved, with right/wrong/empty results. I hadn't done any market research to see if there were any similar products, nor had I asked students what they wanted to track in their studies. The result was https://play.google.com/store/apps/details?id=centertable.calismagunlugum&hl=tr (sorry, the only supported language for this app is Turkish).

I faced a lot of problems after publishing:

- The scope was too narrow. It was not enough for people to track what they wanted to.
- No market research; there were REALLY good-looking and more functional applications out there
- No user involvement; I never knew what users needed functionally and non-functionally (customizations, limitations, UI design…)
- No revenue model; bad, irritating, full-screen advertisements
- And more and more problems…

Currently, active users are too few and people don't use it anymore. Also, I haven't made any money from that application. Then I thought about the items above more deeply and saw some common problems. Actually, the problems are easy to solve, but you should "know" them in the first place and you should see them as "problems":

1. Identify the problem: At the beginning of a project, you have to pick something to create. You should identify a problem, and for a successful project your solution should solve that problem.

2. Do market research on that problem and solution: Are there any similar solutions? Do people really have the problem you identified? You can make a quick search on Google, the Google Play Store, other mobile stores, forums, dictionaries, …

3. How do people want to solve that problem? Is your solution in the exact scope? As I said earlier, developers tend to "guess" instead of questioning the potential users in the market. You have to get your hands dirty. If there are similar solutions or applications, you can take a look at their user comments and find out what people want and what people like about those applications. We can identify the problem, but there may be more problems waiting to be solved. Always search for them from the user's point of view.

4. What about your profits? If you want to get some revenue, you should think about your revenue model. You can consider a free app with advertisements, a demo app with reduced functionality alongside a paid full version (give users a chance to try it freely; if they like it, they can buy it), in-app purchases, and more.
In addition, think about where you place advertisements in the app to maximize revenue, and about gamification to motivate people to click your ads (points, virtual currency, etc. can be given to users). This part is really important for developers who want to make some money with their products.

In conclusion: if you want to build a successful application and make money with it, don't forget about "user involvement", "market research" and your "revenue model".

Original post from my LinkedIn profile: https://www.linkedin.com/pulse/software-development-think-critical-items-before-azmi-rutkay-biyik/

Have a nice day :)
OPCFW_CODE
HP Articles Forum

Better trig functions on HP17BII using Maclaurin series
Posted by Michael Blankenship on 10 Sept 2001, 11:35 a.m.

[This is a reprint of a message I posted in the forum]

I found a way to program the trigonometric functions into my HP 17BII business calculator. True story: my calculus professor told me on the first day of the course to ditch my business calculator and go get a trigonometric calc. Bummed, I leafed through the back of the calculus book and read about the Maclaurin power series. Click, click, click and now I have the trig functions programmed into my HP. You'd think that a certified calculus professor would have suggested this, huh? Anyway, I had a blast figuring out the nifty HP programming language, after much struggle to find the manual: searching the Internet for a source of the programming instructions, failure, and then the "turning up" of the actual manual during a physical search for something else.

Anyway, without further ado, the cosine function for the HP business calculator's SOLVER functionality:

SOLVE -> NEW -> COS=1+SIGMA(I:2:12:2:((MOD(I/2:2)x-2)+1)x(THETA^I)/FACT(I))

Note: Replace SIGMA above with the actual Greek sigma character (looks like a capital E) from the ALPHA -> WXYZ -> OTHER -> MORE menu. Replace THETA above likewise with the Greek theta character by pressing MORE twice more. The slash character above is simply the "divide by" character, and the character "x" is the times sign, not the letter. The colon is also found on the ALPHA -> OTHER menu.

Since this forum doesn't provide for good formatting I can't directly show the equivalent formula, but basically: COS(theta) = 1 + the sum of the series as I steps from 2 to 12, stepping by 2 each time and alternating signs (theta raised to the I power, over I-factorial). The modulo section is just so that the terms alternate signs; in other words:

COS(theta) = 1 - ((theta^2)/2!) + ((theta^4)/4!) - ((theta^6)/6!) + ...
Once you've typed this into the HP, press INPUT and CALC, enter 0, press the function key associated with theta, then press the COS function key. The answer should be 1.00000, depending upon your current number of decimal places (which should be about six for these kinds of calculations). Note that the angle input is in radians rather than degrees. More on that in a moment...

Without much further explanation (other than the fact that the SIN function uses the odd powers of I instead), here are the remaining functions you might need:

Notes: The "DEGREES:" text above is just a label so that you can more easily find the conversion formula. By inputting the cosine you want (1.00, for example) into the COS function and then pressing the function key associated with theta, you get the ARCCOS function, yielding 0.0 in this case. Don't forget that you can store a variety of temporary numbers here and there using the [STO + number] and [RCL + number] syntax.

I could have replaced the "((MOD(I/2:2)x-2)+1)" code with "IF(MOD(I/2:2)=1:-1:1)", but this resulted in more characters. I was a little disappointed to find that the IF() function doesn't evaluate expressions as C/C++ does; I could have saved a little more program space.

If you want more precision, increase the number of decimal places in your calculator first, and if that's not enough, increase the upper range of the sigma function to something larger than twelve (in the cosine function, for example).

P.S. - Those of you familiar with the Maclaurin series for the SIN function might say, "Hey, the first term of the series is just theta itself." Well, yes, but I was optimizing for size of program, not speed of execution. If you feel otherwise, feel free to change the code so that the series begins at 3 instead of 1 and add theta somewhere outside of the series function itself.

P.S.S. - Okay, so why didn't I optimize out the first term of the power series for the COS function?
Note that (0^0) is undefined, which would have been a problem had I started the series at the zeroth power instead of the second.

Edited: 16 Feb 2006, 8:42 p.m.
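The truncated series is easy to sanity-check outside the calculator; here is a quick Python sketch of the same sum (the function name is mine, not part of the Solver program):

```python
import math

def cos_series(theta, max_power=12):
    """Truncated Maclaurin series for cosine, mirroring the Solver formula:
    1 + sum over even I of (alternating sign) * theta^I / I!"""
    total = 1.0
    for i in range(2, max_power + 1, 2):
        # same alternation the MOD(I/2:2) trick produces: -, +, -, ...
        sign = -1.0 if (i // 2) % 2 == 1 else 1.0
        total += sign * theta**i / math.factorial(i)
    return total

print(round(cos_series(0.0), 6))          # 1.0
print(round(cos_series(math.pi / 3), 6))  # agrees with cos(60 degrees) = 0.5
```

As the post says, six decimal places is about the accuracy you can expect from stopping at the twelfth power, at least for angles up to around pi/2.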
OPCFW_CODE
With respect to the auto-reset issue, I did a detailed memory test on my board and found the following: the regions 0xA0000 - 0xBFFFF and 0xE0000 - 0xFFFFF are not writable, and every byte reads as 0xFF. Coreboot loads SeaBIOS in the address range 0xA0000 to 0x10_0000. When I read that memory range through XDP after coreboot decompressed the SeaBIOS payload, it appeared to be fine. When SeaBIOS was executing, it was resetting at or around malloc init. Later I introduced a WBINVD instruction just before the jump to the payload, and the XDP, which had been displaying program code, started reading 0xFF. This indicates that the payload was being loaded into, and executed from, the CPU cache. From this I conclude that the memory ranges 0xA0000 - 0xBFFFF and 0xE0000 - 0xFFFFF are not usable on my board and are the cause of the SeaBIOS failure. I tested the same regions after booting with the BIOS provided with the board, and found the same thing: 0xA0000 - 0xBFFFF and 0xE0000 - 0xFFFFF are not usable. Can anyone suggest how to go about this issue? My intention is to boot Win7 successfully.

On Tue, Mar 17, 2015 at 06:42:06PM +0530, Naresh G. Solanki wrote:
> Dear All,
> I'm trying to port coreboot with seabios payload on Intel Atom E6xx based
> The problem I'm facing is during malloc init, the board reboot & keeps on
> rebooting after displaying malloc init debug message.
> I tried to track it & found that it might probably be due to dummy IDT
> loaded @seabios entry point (entry_elf)
> Once it reboot I'm finding that breakpoints set are not working.
> Breakpoints are set through JTAG-XDP.
> Can anyone give an idea to how to go about this issue.

SeaBIOS loads a dummy IDT when in 32bit mode, because the code is never supposed to raise an interrupt (interrupts and NMIs are always disabled when in 32bit mode). So, if you're seeing a fault with the IDT, it likely indicates something is causing a software fault.
If a fault is occurring in malloc_init(), then I'd check that the memory between 0xc0000-0x100000 is fully read/writable. You could also try disabling CONFIG_MALLOC_UPPERMEMORY to see if that changes the behavior. malloc_init() is also called right after self-relocation, so check that the memory map that coreboot provides is accurate. You could also try disabling CONFIG_RELOCATE_INIT to see if that changes the behavior.
OPCFW_CODE
Feb 2, 2017: The Chart control is a chart object that exposes events. When you add a chart to a worksheet, Visual Studio creates a Chart object that you can Forms.DataVisualization.Charting.Chart), "ChartControl.ico")] public class Chart : System.Windows.Forms.Control, IDisposable, System.ComponentModel.

For a quick chart there's a free and easy-to-use component in Visual Studio 2010. In .NET 4 you will discover that you have a ready-made charting control.

Jun 27, 2014: This example shows how to display your data in your Windows Forms program as a bar graph or spline chart. To achieve this, you use the Chart control.

Mar 9, 2015: A chart is used to present data in a visual form. Let's begin. Step 1: Open Visual Studio (I am using Visual Studio 2012) and create a new project.

Speed your development time with 80+ .NET charts in this full-featured cross-platform chart control. Available in WinForms, WPF, UWP, ASP.NET MVC, Wijmo.

After installing, restart your Visual Studio, open a Windows Forms application and try DevExpress: right-click the toolbox and add a "New Tab".

Visual Studio > Controls > Chart Control for WinForms: a comprehensive set of 2D & 3D chart types to create attractive data visualisations, including a chart toolkit and customisable visual attributes. Get Started. About TeeChart Chart Control for WinForms: offers great charts, maps and gauges for a myriad of uses.

ChartDirector for C++ uses C linkage to the ChartDirector DLL/shared object. As a result, it is compatible with most compilers. ChartDirector has been tested with Visual Studio, Borland C++, gcc and cc. If you are using other C++ compilers, it is likely ChartDirector is compatible with them too.

Android Controls provides data grids, charts, gauges and barcode scanning. iOS Controls provides the same controls optimized for Xcode and Objective-C.
Xamarin.Forms UI controls integrate the controls provided by the Android and iOS offerings with the Xamarin cross-platform mobile development tools for C# and XAML. You have to use Visual Studio's GUI to add an assembly to your program to use Excel interop. Use the Add Reference command for this. Tip: Add the Microsoft.Office.Interop.Excel assembly by going to Project -> Add Reference. Class: First, we make a new C# class file in Visual Studio and you can call it something like ExcelInterop. "C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off" (Bjarne Stroustrup) First, I want to explain Qt: "What is Qt?" and why. Visual C++ and MFC GUI development tool with full MFC source code. 3D Plot - Chart Graph ActiveX Control with OpenGL. C++ Chart Graph Library. Factory Pattern in C++. Serial Port Communication, Read data from port Visual C++ Samples. Play MP3 File and MP3 Player with Visual C++ Source Codes. [C#]-Samples Environments for Microsoft Chart Controls/Windows Forms make this sample ready for the Windows Store or Microsoft Store, see Visual Studio A great strength of C++ is the ability to target multiple platforms without sacrificing performance. If you are using the same codebase for multiple targets, then CMake is the most common solution for building your software. You can use Visual Studio for your C++ cross-platform development when using CMake without needing to create or generate Visual Studio projects. Codejock Software, a leading provider of modern user interface components, today announced the introduction of their new product Chart Pro 2010 v14.1.0 for Visual C++ MFC and ActiveX COM. Some highlights from this release include: Bar Chart: A bar chart displays data with rectangular "bars" with lengths relative to the data they symbolize. Microsoft Visual C# is a programming environment used to create computer applications for the Microsoft Windows family of operating systems.
It combines the C# language and the .NET Framework. This website provides lessons and topics on how to create graphical applications using Microsoft Visual C#. Visual Studio dev tools & services make app development easy for any platform & language. Try our Mac & Windows code editor, IDE, or Azure DevOps for free. You use Form controls when you want to easily reference and interact with cell data without using VBA code, and when you want to add controls to chart sheets. For example, after you add a list box control to a worksheet and link it to a cell, you can return a numeric value for the current position of the selected item in the control. Visual Studio 2008 reached end of support on April 10, 2018. To aid the discovery of the latest downloads, the links are retained currently, but may be removed in the future. Download the Visual Studio 2008 Service Pack 1 (Installer). This is the latest Visual C++ service pack for Visual Studio 2008. This example demonstrates a real-time chart in which the data are acquired from a separate thread. It is based on the Real-Time Chart with Zooming and Scrolling sample code in the ChartDirector distribution, and is available in C++ (MFC, Qt), C# (.NET Windows Forms, WPF) and Java (Swing). The following only explains the multithreading part of the code. In my Control Panel > Programs and Features I have nineteen instances of Microsoft Visual C++, dating from 2005 to 2010, and installed from my in-service date of April 2010 to December 2011. Is it necessary to have all these programs/features installed, or can some be safely removed? This is a simple, lightweight 2D graph control that supports multiple plots as well as printing. Why make another graph control? This one gives the user basic functionality which would make a great oscilloscope without a lot of extra features getting in the way. Easy way to plot graphs with C# and Visual Studio 2010. Hello, friends! Here I am again with another tutorial.
"By default, the Chart control automatically sets the scale of the axes in chart areas based on its data series. You can manually set the Minimum, Maximum, Interval, IntervalOffset, IntervalType, and IntervalOffsetType properties for It can be used for building Processs Control and Circuit Diagram , and with Process Control & SCADA, it with 2010, and database print JScript solution with MFC Skin and Chart Graph of BPMN, E-XD++ with Visual Studio 2011 and full Visual C++ 2011 support.. Describes how to programmatically add controls to Windows forms at run time by using Visual C#. Also includes a code samle to explain the methods.
OPCFW_CODE
What’s New in ComponentOne 2019 v2 We’re pleased to announce the second major release of 2019—ComponentOne 2019 v2. For the 2019 v2 release, ComponentOne Studio Enterprise and ComponentOne Ultimate continue to grow and address the needs of .NET, mobile, and web developers. DataEngine for .NET Core The new ComponentOne DataEngine (C1DataEngine) for .NET Core uses in-memory caching technology to deliver faster extraction, transformation, and loading of large and complex data sets. - Fetch and query millions of records in a second or less. - Sort, filter, group, and aggregate data at runtime without needing to hit the server. - Blend data from multiple data sources into a single collection (SQL, CSV, JSON, .NET Objects). - Any .NET Core or ASP.NET Core application supported. TextParser for .NET Standard The new ComponentOne TextParser (C1TextParser) for .NET Standard enables you to efficiently extract data from plain text or HTML files that can then be stored in a table of records or transferred to another system. - Extract and integrate data from semi-structured sources such as emails and invoices, into your workflows. - Parse data using a variety of different techniques (Starts-After-Continues-Until, HTML and template-based with regular expression matching) - Extract repeated fields from HTML files to generate a data table of records. - Supported with any .NET Framework, .NET Core, ASP.NET Core, UWP or Xamarin application. C1DataEngine and C1TextParser can both be downloaded from the service components tile in the ComponentOne Studio Installer. They are licensed as part of the ComponentOne Ultimate Bundle. See ComponentOne Studio Enterprise Feature comparison & pricing here. Support for .NET Framework 4.5.2 In addition to adding new libraries for .NET Standard and .NET Core, we continue to update ComponentOne Studio components to support the latest .NET Framework. 
Microsoft has ended support for .NET 4.5.1, and based upon feedback from an earlier survey, we decided to update all ComponentOne .NET controls to .NET 4.5.2. In this release, we’ve updated all WinForms and ASP.NET (MVC and Web Forms) controls. We will finish updating WPF controls by the next release in November. Starting with 2019 v2, .NET 4.5.2 will be the lowest supported framework for the controls. This means that all new features and new controls will be exclusive to .NET 4.5.2; however, we will continue to maintain the 4.0 version of the controls for one year until 2020 v2. Office 365 Ribbon for WinForms (beta) We originally released the popular C1Ribbon control for WinForms more than a decade ago, and over the years we continued to add features to keep pace with Microsoft Office. This year we decided it was time to create a new ribbon control to cater to requests from our users. The new Ribbon for WinForms is built on .NET 4.5.2 and it’s based on UI concepts of Office 365. With it you’ll enjoy: - New simplified view when the ribbon is collapsed. - 20+ embedded controls including buttons, progress bars, updated galleries and more. - Enhanced set of embedded images for buttons along with support for font and vector-based icons. - A backstage view and status bar component. - Users of the old ribbon will be pleased to know that the new ribbon also supports the same 40+ themes, or they can customize one using the C1ThemeController. We will continue to maintain the old C1Ribbon, but please consider the first version of the new ribbon as a beta in this release, so that we can gather useful feedback from users and make necessary changes. Icon Classes for Modern Apps - The new C1Icon is a set of classes that generate monochromatic icons that can be tinted and resized easily, without all the pain points that bitmap-based icons have.
These icons are used internally in some controls, such as the new Ribbon for WinForms, where users are able to specify different icons through the API. C1Icon sources can be fonts, vectors (path or SVG), and images. C1Icon is supported in WinForms, WPF and UWP.
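The "Starts-After-Continues-Until" parsing technique that C1TextParser offers is simple to illustrate. The following Python sketch shows the idea generically; it is an illustration of the technique only, not the actual C1TextParser API (the function name and the sample invoice text are made up):

```python
import re

def extract_between(text, starts_after, continues_until):
    """Return every span that begins right after `starts_after` and runs
    up to (but not including) `continues_until` -- the core idea behind
    "Starts-After-Continues-Until" extraction from semi-structured text."""
    pattern = re.escape(starts_after) + r"(.*?)" + re.escape(continues_until)
    return [m.strip() for m in re.findall(pattern, text, flags=re.DOTALL)]

# Hypothetical semi-structured input, e.g. text lifted from an invoice email:
invoice = "Invoice No: 1042\nTotal: 99.50 USD\nInvoice No: 1043\nTotal: 12.00 USD\n"

numbers = extract_between(invoice, "Invoice No:", "\n")  # repeated field -> column
totals = extract_between(invoice, "Total:", "USD")
```

Collecting the repeated fields into parallel lists like this is how such a parser can turn a stream of semi-structured text into a table of records.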
OPCFW_CODE
Stay Informed With Our Curated List of IT Know-How for the Modern Workplace As the tech world seems to move faster than the speed of light, it’s tough to keep up to date with current technology trends. No one wants to be last to the party or feel on the spot when asked about the latest ‘thing’. At IT Lab, it’s our job - and our passion - to keep pace with the IT world. Our close relationship with Microsoft is one way we do this, and we share an enthusiasm for Modernising Workplaces, creating more productive and fulfilling environments in which to work. And through our strategic partnership with the tech behemoth, we’ve been privileged to work with them in their development of exciting new managed services, such as the Microsoft Managed Desktop (MMD). We’re available to answer any questions you may have, from the prosaic (we never judge) to those that challenge us and call for out-of-the-box thinking. Sometimes, though, it's best to take time out and learn new stuff over your morning coffee (or a glass of blended grapes in the evening). So, we’ve compiled this list of the top social media accounts to follow. Brad Anderson, Corporate Vice President at Microsoft Quick Bio: Redmond, WA-based driving force behind Microsoft’s management offerings, including the System Center suite and products such as VMM (Virtual Machine Manager). Brad’s team is responsible for deploying the Microsoft 365 Modern Workplace. Why You Should Follow Brad: A legend in the tech world, Microsoft’s very own Obi-Wan Kenobi doesn’t take himself too seriously. Top Tweet: Microsoft Defender is now being used on > 500M (half a billion) PCs around the world. I’m seeing similar share/growth in Windows 10 PCs managed by Config Manager, with > 55% now using Defender. It’s a great solution! Michael Niehaus, Principal Program Manager Modern Deployment at Microsoft Interesting Facts: Publishes a treasure trove of blogs in Out of Office Hours – Michael Niehaus’ Technology Ramblings.
Michael’s working on “the next ten years” for deployment technology. Why You Should follow Michael: He's an all-round great guy, willing to answer questions and share knowledge. Retweets witty, intriguing and moving stuff beyond tech. We liked: A possum broke into an Australian bakery and ate so many pastries it couldn't move. This is how they found him. Top Tweet: Windows Autopilot at Microsoft Ignite https://oofhours.com/2019/10/31/windows-autopilot-at-microsoft-ignite/ Bill Karagounis, General Manager at Microsoft Quick Bio: Sydney born Bill is the Redmond WA based founder and leader of the Microsoft Managed Desktop (MMD) service. He redefines and upgrades the end-user PC experience in the enterprise. Manages the tech giant’s Enterprise Mobility and Management team with the Experiences and Devices Group. Interesting Facts: Has a beer with IT Lab when he’s in the UK (we do indeed have many chums in high places). Handy with mechanical stuff; pit crew for his sons’ karting sessions. Proud owner of a Gibson guitar and enjoys Formula 1 and Italian food. Why You Should Follow Bill: Microsoft veteran, cool guy, and what he doesn’t know about Windows OS is written on a postage stamp. Top LinkedIn Post: It’s been an awesome year of growth for #microsoftmanageddesktop. Our comms folks put together this cool Xmas graphic - content courtesy of Dan Coleby of IT Lab. Happy holidays everyone, see you in 2020! #hr #windows #windows10 Per Larsen, Senior Program Manager at Microsoft Quick Bio: Belongs to the Intune Engineering Customer Acceleration Team. As a Microsoft MVP (Most Valued Professional) before joining Microsoft, he helps their largest customers adopt and deploy Intune. Per has a blog site: Mobile-First Cloud-First. Interesting Facts: Per lives in ‘Denmark’s garden’ - cosmopolitan Odense, which 164 different nationalities choose to call home. Star Wars fan and handy in the kitchen; unsurprisingly, cooks a mean bacon and eggs. 
Why You Should Follow Per: One to watch; Per joined Microsoft in April 2019 and is already making his mark. He knows Microsoft Intune inside out. Top Tweet: Very interesting article by Mark Russinovich on how Microsoft does extreme and global redundancy in Azure AD. Retire noncompliant devices. Our devs have been working hard over the holidays to get the new Apps full-screen experience out soon! Matt Shadbolt, Ian Bartlett and George Smpyrakis – The Config Manager Dogs From L-R: Matt Shadbolt, Ian Bartlett and George Smpyrakis Quick Bio: Trio hailing from Microsoft Intune’s product team, styling themselves as the Config Manager Dogs. Seattle-based Matt’s experience spans support and programme management across the entire Microsoft tech stack. Down under, Ian manages a team partnering Microsoft’s most complex and strategic enterprise customers. George, who also lives in Australia, is a cloud evangelist and Azure Technical Trainer. Interesting Facts: Matt is savvy about keeping his personal info under wraps, a skill he has in common with Ian. When George was a kid, he dreamed of becoming a professional athlete, but his dad had other ideas. Today, George runs a coding club for children. Why You Should Follow the Config Manager Dogs: Insights multiplied by three on all things Microsoft Intune and Azure. Their twitter feed includes hand-picked goodies from across the tech and business communities. Top Tweet: Office What’s New Management Preview Updates Now Available: The Office What’s New management Preview allows your organization to decide which features are shown to or hidden from end-users in the What’s New panel of Office. The IT Lab Group's Most Valued Professionals (MVPs) The Microsoft Most Valued Professionals (MVP) community is an exclusive club for Microsoft technologists at the top of their game. "The Microsoft MVP award recognises exceptional knowledge and leadership," says Dan Coleby, our Modern Workplace Product Director. 
"MVPs are at the bleeding edge and have an unstoppable urge to get their hands on new, exciting technologies. And they're equally passionate about sharing their knowledge with the wider tech community." MVPs are a rare breed; several have achieved cult status on social media and are active contributors at community events and conferences. We're proud of the three MVPs in the IT Lab group, and several colleagues are planning on joining them. You can tap into our MVPs' videos and other insightful content below. And if you'd like to learn more about Microsoft MVPs click: Microsoft Most Valuable Professional. Steve Goodman, Principal Technology Strategist at Content and Code Quick bio: Eight-time recipient of the Microsoft MVP award for Office 365, chief editor of Practical365.com and engaging co-host of All About 365 with Jay and Steve podcasts. Steve and Jason (below), also run an annual event for IT professionals - the Evolve Conference. Author of several books about Office 365 and Exchange. Interesting Facts: Namesake of the Chicago singer-songwriter who recorded 11 albums of hope, humour and emotional punch. Lives with his family in historic Warwickshire, the stomping ground of many literary giants, including William Shakespeare and JRR Tolkien. Why You Should Follow Steve: Technology evangelist with infectious enthusiasm. Has the rare gift of translating highly technical subjects into language that's accessible and meaningful. All-round cheery chap and good egg. Not precious; shares other great content too, as demonstrated by (see below). Top Tweet: The mysterious “remove built-in Windows app” setting explained. (Retweet, @Scottduf) Jason Wynn, Principal Technology Strategist at Content and Code Quick bio: Two times MVP award, specialising in Microsoft 365, Office 365, Microsoft Teams and Skype for Business. As a committed Microsoft advocate, technology is hardwired into Jason's DNA. Droll co-host of All About 365 with Jay and Steve podcasts. 
Jason and Steve, above, also run a yearly event for IT pros called the Evolve Conference. Jason is the founder and organiser of the Microsoft Cloud User Group. Interesting Facts: A great guy to have your back in a crisis; many in Jason's fan-base mention his calm and reassuring presence. Educated in the Lone Star State of Texas, describes himself as a "displaced American". Jason has met Harrison Ford twice - on the second occasion Harrison remembered him. Why You Should Follow Jason: Brilliant, personable communicator. Keeps a keen eye on the horizon and a steady hand on the tiller. Top Tweet: Loads of great conversations today at #MSIgniteTheTour London - as you can see, a few interesting podcasts in the can for @AllAbout365 and topics discussed for @Practical365. See you tomorrow! Chris O'Brien, Head of Development at Content and Code Quick bio: Twelve-time MVP award recipient. Leads a talented team with expertise in Microsoft technologies including Office 365, Azure, SharePoint, Microsoft Teams and Power Apps. Interesting Facts: Assisted Microsoft with the SharePoint Framework (SPFx) - the renewed development platform for extending SharePoint. A Mancunian, father of identical twins and a competitive cyclist in his younger days. Chris once explored the glacial peaks, deep rhododendron forests and terraced rice paddies surrounding the Annapurna Base Camp. Why You Should Follow Chris: Big on best practice, very hands-on. No airs and graces and endlessly charming. Publishes a highly regarded technical blog - The Nuts and Bolts of SharePoint. Helps organisations get more from their tech. Top Tweet: My summary of #Teams announcements from Ignite 2019. Great to see better integration with Yammer and Outlook, and private channels finally available. #MSIgnite #Office365 Top Microsoft Corporate Accounts to Follow To impress your colleagues with how current you are, here are our top three corporate accounts. After all, 580,000 combined followers (and counting) can't be wrong.
For tips, tricks and insights on transforming how your users work. Microsoft 365 combines best-in-class productivity apps with intelligent cloud services. Click here to follow Microsoft 365. Catch Microsoft’s official blog for Windows and Devices. Click here to follow Windows Blogs. Microsoft Endpoint Manager The lowdown on endpoint management solutions like Config Manager and Intune, enabling the best user experiences for secure apps on any M365 device. Click here to follow Microsoft Endpoint Manager. Bonus: More IT Insights We’re active on LinkedIn, and if you’re one of our 6,500 plus followers – we appreciate you! As well as our posts about the Microsoft Modern Workplace and Microsoft Managed Desktop, we share advice on other topics across the IT spectrum, such as the cloud, ERP, and cybersecurity. Follow us on LinkedIn here. You’ll also learn about our events and on-demand webinars. You’re bound to find something that ignites your interest, and we’d love you to join us. You may have noticed our very own Dan Coleby got a special mention from Microsoft. Dan's our Modern Workplace Product Director, guiding clients on their technology strategy, implementation and operation. His specialities include stakeholder engagement, project management, IT transformation and process change management. You can follow Dan here. He's currently helping several companies with their business cases for the Microsoft Managed Desktop (MMD) - click below to learn more about MMD.
OPCFW_CODE
I have read all the other posts on this and believe I know why my issues occur, and would like to suggest to developers that we make a small change. Suppose I have one lister open with a number of tabs in the left and right panes, and I have the setting "Shutdown Dopus when the last lister closes" selected. When I close Dopus by clicking X on the lister window, the Dopus process exits entirely. If I then restart Dopus (just run the exe), it forgets the layout/tabs I had open. I understand this happens because I close the last lister, the Dopus exe then gets told to shut down and, as it does, it sees there are no listers open, so next time it starts, it loads the Default lister. I can circumvent this by adding the Exit Dopus command to the menu and using that, i.e. Dopus shuts down whilst there are still lister(s) open. I think in a scenario like this, though, where the last lister closes triggering Dopus to exit, Dopus should save that last lister layout before terminating itself. Then it can reload that lister when it restarts. The possible reason could be that you've set Opus to load a particular layout when starting. You can check it under -> settings -> preferences -> startup -> auto start, where you should have the appropriate option checked ("open the listers that were open when the program was last closed"). (Abr's answer won't help for the reasons you said in your first post, i.e. there are no windows open when Opus exits, so there's nothing to open when it restarts with that setting.) Instead, turn on Preferences / Launching Opus / Default Lister / Update Default Lister automatically when closing a Lister and then make sure whichever method(s) you use to launch Opus are set to open the Default Lister. It's taken me a while to get my head around this, why it's not acting as expected and how to solve it. I can see your answer, Leo, is necessary given all listers are actually closed at the time that Dopus exits. It achieves the same as what I was asking for, i.e.
read the last lister closed before exiting. What bothered me about it, and I can see it would still be a problem even if done my suggested way, is that sometimes the last thing I am doing is a Find, so I have the find panel open, or indeed an entire lister set up for the find operation. Or perhaps I have a lister open with a dedicated format for viewing photos or videos. What I really want to achieve is that upon starting Dopus again, it just loads my standard lister which is set up the way I want, BUT I want all those tabs I had open to be reapplied. I can see the issue immediately. Upon closing, which tabs from which open lister am I wanting back? The answer would be the ones from my standard lister. So, if upon opening, I could tell Dopus to open the lister called MY_STD_LISTER, it would very kindly reapply all the tabs from last time. I could then open other listers for Find etc., but when Dopus closes, it notes the tabs from the lister called MY_STD_LISTER, to reapply next time. I appreciate this is another whole wishlist for how it operates but I'll throw it out there. If you turn on Preferences / Launching Opus / Default Lister / Ignore folder format of Default Lister right above the other option, that takes care of the scenario where you were using a custom format that you don't want to be used when you open a new window. That just leaves the Find panel, which you can turn off in a single click and doesn't seem like a big deal by itself. (If you really don't want to have to close it, you could do so using a User Command which Opus runs instead of opening the default lister in the normal way. I think that would be more trouble than it's worth, though.) I tried Ignore folder format but it doesn't seem to achieve the desired result. Here is my standard layout ( ) and this is the lister I open to do finds ( ). It's not as simple as turning the find panel on/off, as I have it set up to show more info on the right, the results, and it only needs the one tab.
If this opens by default when Dopus starts (i.e. it was the last closed lister), then I must open my standard lister - but lose all the folders I had open in tabs at the time of close. I'm not sure there is a way to achieve that. I think you want Opus to remember the last layout and paths/tabs sometimes, but not others. The only way to do that is make it not remember anything automatically and then explicitly save the window when you want to, and don't save it when you don't want to, using Settings -> Set As Default Lister. I can see why you say that but I don't think it's that silly. Just remember my explorer-style lister, not the special ones open for find/content view etc. If a lister is opened from a named layout, could it not, optionally, simply update that layout when it closes? Then if I open my STD layout, and open my FIND layout, then later close each, I could then tell Dopus at startup to load my STD layout and it will have remembered how it was when it closed and not get mixed up with any other layouts. It certainly could in theory, but that's not the way it works right now so it's not currently possible. Fair enough. Is there anywhere I can make a plug to have such a feature implemented? Thanks heaps. Just mentioning it here is enough, since you've got a linked account. We'll keep it in mind.
OPCFW_CODE
What is the intuitive interpretation of the transfer function of this system? If I have the system shown in the image below: I want to know the transfer function, where the external force $f$ is the input and $x_1$ is the output. The direction and positive sense of movement and force is right to left, as the image shows. Assume that the mass $m_x$ is zero, the initial conditions are zero, and the effects of gravity are null. Moreover, the two springs are taken as a single equivalent ($k=k_1+k_2$) and the same with the shock absorbers ($b=b_1+b_2$), since it is assumed that each end of them suffers the same displacement, speed and acceleration. My transfer function is: $$\frac{X_1(s)}{F(s)}=\frac{\frac{1}{m}}{s^2+\frac{k}{m}}$$ It contains a null damping factor, that is, it is purely oscillatory. However, I know that I have a mass that acquires kinetic energy, a spring that acquires potential energy, and there will be an exchange of energy between them during the oscillations, but I also have a damper dissipating energy, which I don't see in my transfer function. How can I interpret this? My derivation is attached. “but I also have a damper dissipating energy, which I don't see in my transfer function” So, this is a fun one. You have a damper that dissipates energy when the external force is applied. However, your $m_x = 0$; your connecting bar thingie on the input is massless, and you are (properly) modeling your dampers as massless, too. As a consequence, the end bar isn't anchored to anything -- it "sees" an infinite mechanical compliance. From an intuitive perspective, this means that with no force applied, whatever your main mass does, your connecting bar thingie will do the exact same thing, only with whatever offset happens to be present in the dampers. This is what leads to your mathematical result that shows the dampers having no effect. That looks correct.
The reason is that the usual way of analyzing a mass-spring-damper system, the one that yields a non-zero damping term, has the spring and the damper in parallel; here they are effectively in series through a massless link. This answer on the Physics SE site comes to a similar conclusion to yours (though uses different notation). They end up with $$m\,\ddot x_2=b\,(\dot x_1-\dot x_2)=-k\,x_1$$ where I've changed their $\sigma$ to your $b$.
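To see the cancellation explicitly, consider one consistent reading of the question's setup (a sketch of the assumed topology, not necessarily the exact labelling in the image): the force $f$ is applied to the massless input bar at $x_2$, the damper $b$ connects that bar to the mass $m$ at $x_1$, and the spring $k$ connects the mass to ground. Because the bar has no inertia, the net force on it must vanish, so it transmits the applied force through the damper unchanged:

$$f-b\,(\dot x_2-\dot x_1)=0\quad\Longrightarrow\quad b\,(\dot x_2-\dot x_1)=f$$

Newton's law for the mass then replaces the damper term wholesale by $f$:

$$m\,\ddot x_1=b\,(\dot x_2-\dot x_1)-k\,x_1=f-k\,x_1\quad\Longrightarrow\quad \frac{X_1(s)}{F(s)}=\frac{1}{m\,s^2+k}$$

so $b$ never appears in the transfer function, even though the damper still dissipates energy whenever $\dot x_2\neq\dot x_1$.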
STACK_EXCHANGE
The EDL3 Explorer utility opened automatically when you installed the John Deere Service Advisor EDL v3 Adapter drivers and utilities. To re-open the EDL3 Explorer once it has been closed, click on the Show Hidden Icons arrow in your PC’s System Tray. Then, double-click on the EDL3 icon. NOTE: You can also access the EDL3 Explorer from your PC’s Start menu. Click Start and then select All Programs > Deere EDL3 > EDL3 Explorer. The EDL3 Explorer opens. The following menu options are provided: Each menu option includes a number of features. Each of the menu options is discussed in the following sub-sections. The Configuration Tab When you click on an EDL3 in the list in the left pane, the Configuration tab is displayed. The Configuration tab provides the following information: • MAC Address • Wireless Settings • Internet Protocol (TCP/IP) Settings This information can be useful when troubleshooting network connection problems. You also use the Configuration tab when switching between the two wireless Connection Types (i.e., Wireless and Bluetooth), or when setting up a Wi-Fi connection in Infrastructure Mode. To access the Configuration tab: (1) Click on an EDL3_xxxxxx in the list in the left pane of the Explorer. Switching Modes: Mini Access Point and Infrastructure From the EDL3 Explorer Configuration tab, you can use the Mode drop-down menu under Wireless Settings to switch between Mini Access Point and Infrastructure modes. NOTE: For a graphic depiction of a typical Infrastructure Mode setup, see Settings: Mode Drop-down Menu. NOTE: You can also use the Reset Button to switch from Infrastructure Mode back to Mini Access Point Mode (the Wi-Fi default). Just push and hold the button until the wireless LED changes color from orange to white (about 3 seconds). Once you have selected Infrastructure from the drop-down menu, additional fields in the Wireless Settings portion of the screen are available.
The following Wireless Settings fields are available: • Network Name • Frequency (used to switch between 2.4 and 5 GHz) • Security (WEP, WPA/WPA2) • Key Index (only available with the WEP security selection) NOTE: The settings for connecting to your company network may differ from one installation to another. To ensure network security, your Information Technology (IT) administrator will need to oversee the installation and specify the appropriate configuration parameters. The Internet Protocol (TCP/IP) portion of the screen is also available for entering the required settings. There are two options: • Obtain an IP address automatically (i.e., a dynamic IP address) • Specify an IP address (i.e., a static IP address that does not change) — IP Address — Subnet Mask — Default Gateway NOTE: You will need to obtain this information (i.e., IP Address, Subnet Mask) from the designated IT person or network administrator for your location. Depending on how your local network is configured, you may also need to enter Default Gateway information. The File Menu The File menu has one feature, Exit. You use the Exit feature to close the EDL3 Explorer. To exit the EDL3 Explorer: (1) Select File from the EDL3 Explorer menu bar. The EDL3 Explorer closes. The Tools Menu The Tools menu provides the following features: The Ping feature uses the PING protocol to check for the presence of a device on the network. To check for a device: (1) Select Tools from the EDL3 Explorer menu bar. (3) Enter the IP address of the device you want to locate (e.g., 192.168.123.107). The EDL3 Explorer searches for the device and, if found, displays the reply. (6) Click the Close button on the dialog box. The Options feature provides the following options, which are presented as check boxes: • Start EDL3 Explorer when Windows starts • Show New EDL3 Notification Start EDL3 Explorer when Windows Starts You use this feature to manage when the EDL3 Explorer opens.
The default is to not open the EDL3 Explorer when Windows starts. To change the default, click on the check box to add the check mark. Then click OK. Show New EDL3 Notification You use this feature to manage when to display the New EDL3 notification message box. The default is to display the notification message box whenever a new EDL3 is detected. To change the default, click the box to remove the check mark. Then click OK. The Help Menu The Help menu has one feature, About. You use the About feature to display information about the EDL3 Explorer. To access the Help menu: (1) Select Help from the EDL3 Explorer menu bar.
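As the note above says, whether you need the Default Gateway field depends on how your local network is configured: the gateway is only consulted for traffic to hosts outside the local subnet defined by the IP Address and Subnet Mask. A small Python sketch of that rule (illustrative only, not part of the EDL3 software; the addresses are made up, reusing the manual's 192.168.123.107 example):

```python
import ipaddress

def needs_gateway(local_ip, subnet_mask, target_ip):
    """True if `target_ip` lies outside the local subnet, i.e. traffic
    to it must be routed through the Default Gateway."""
    network = ipaddress.ip_network(f"{local_ip}/{subnet_mask}", strict=False)
    return ipaddress.ip_address(target_ip) not in network

# A PC at 192.168.123.50 with a 255.255.255.0 mask reaching an EDL3:
print(needs_gateway("192.168.123.50", "255.255.255.0", "192.168.123.107"))  # False: same subnet
print(needs_gateway("192.168.123.50", "255.255.255.0", "10.0.0.7"))         # True: gateway required
```

This is why the manual says the Default Gateway may or may not be needed: if the EDL3 and the PC share a subnet, the mask alone is sufficient.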
ValueError: Cannot construct a ufunc with more than 32 operands

This error happens when using search mode, but the program can still run. Does it impact the compiling of the attention framework?

2016-01-21 17:59:38,114: groundhog.trainer.SGD_adadelta: DEBUG: Constructing grad function
2016-01-21 17:59:39,072: groundhog.trainer.SGD_adadelta: DEBUG: Compiling grad function
2016-01-21 18:01:21,598: theano.gof.opt: ERROR: SeqOptimizer apply <theano.tensor.opt.FusionOptimizer object at 0x3717f50>
2016-01-21 18:01:21,598: theano.gof.opt: ERROR: Traceback (most recent call last):
  File "/nfs/nlphome/nlp/tools/python27/root/usr/lib64/python2.7/site-packages/theano/gof/opt.py", line 195, in apply
    sub_prof = optimizer.optimize(fgraph)
  File "/nfs/nlphome/nlp/tools/python27/root/usr/lib64/python2.7/site-packages/theano/gof/opt.py", line 81, in optimize
    ret = self.apply(fgraph, *args, **kwargs)
  File "/nfs/nlphome/nlp/tools/python27/root/usr/lib64/python2.7/site-packages/theano/tensor/opt.py", line 5498, in apply
    new_outputs = self.optimizer(node)
  File "/nfs/nlphome/nlp/tools/python27/root/usr/lib64/python2.7/site-packages/theano/tensor/opt.py", line 5433, in local_fuse
    n = OP(C)(*inputs).owner
  File "/nfs/nlphome/nlp/tools/python27/root/usr/lib64/python2.7/site-packages/theano/tensor/elemwise.py", line 496, in __init__
    scalar_op.nout)
ValueError: Cannot construct a ufunc with more than 32 operands (requested number were: inputs = 40 and outputs = 1)

Update Theano to the development version.

— View this issue on GitHub: https://github.com/lisa-groundhog/GroundHog/issues/44

Thank you. The development version works.
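The error above comes from NumPy's hard cap on ufunc operands (NPY_MAXARGS, 32 in the NumPy builds of that era), which Theano's elemwise FusionOptimizer tripped over when it tried to fuse 40 inputs into a single op. The cap is easy to reproduce directly; a small sketch (an operand count well above the limit is used, since newer NumPy releases may raise it):

```python
import numpy as np

def add_all(*xs):
    return sum(xs)

# NumPy refuses to build a ufunc whose operand count (inputs + outputs)
# exceeds NPY_MAXARGS -- 32 at the time of this thread.
try:
    np.frompyfunc(add_all, 100, 1)  # 100 inputs + 1 output
except ValueError as err:
    print(err)
```

The development-version fix on Theano's side was to stop fusing once the operand count would exceed the limit, rather than letting ufunc construction fail.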
//! Configuration of the framework.

use crate::command::{CommandConstructor, CommandId, CommandMap};
use crate::context::PrefixContext;
use crate::group::{Group, GroupConstructor, GroupId, GroupMap};
use crate::{DefaultData, DefaultError};

use serenity::futures::future::BoxFuture;
use serenity::model::channel::Message;
use serenity::model::id::UserId;

use std::fmt;

/// The definition of the dynamic prefix hook.
pub type DynamicPrefix<D, E> =
    for<'a> fn(ctx: PrefixContext<'_, D, E>, msg: &'a Message) -> BoxFuture<'a, Option<usize>>;

/// The configuration of the framework.
#[non_exhaustive]
pub struct Configuration<D = DefaultData, E = DefaultError> {
    /// A list of static prefixes.
    pub prefixes: Vec<String>,
    /// A function to dynamically parse the prefix.
    pub dynamic_prefix: Option<DynamicPrefix<D, E>>,
    /// A boolean indicating whether casing of the letters in static prefixes,
    /// group prefixes, or command names does not matter.
    pub case_insensitive: bool,
    /// A boolean indicating whether the prefix is not necessary in direct messages.
    pub no_dm_prefix: bool,
    /// A user id of the bot that is used to compare mentions in prefix position.
    ///
    /// If filled, this allows for invoking commands by mentioning the bot.
    pub on_mention: Option<String>,
    /// An [`IdMap`] containing all [`Group`]s.
    ///
    /// [`IdMap`]: ../utils/id_map/struct.IdMap.html
    /// [`Group`]: ../group/struct.Group.html
    pub groups: GroupMap<D, E>,
    /// A list of prefixless [`Group`]s.
    ///
    /// These are invisible to the user on Discord.
    ///
    /// [`Group`]: ../group/struct.Group.html
    pub top_level_groups: Vec<Group<D, E>>,
    /// An [`IdMap`] containing all [`Command`]s.
    ///
    /// [`IdMap`]: ../utils/id_map/struct.IdMap.html
    /// [`Command`]: ../group/struct.Command.html
    pub commands: CommandMap<D, E>,
}

impl<D, E> Clone for Configuration<D, E> {
    fn clone(&self) -> Self {
        Self {
            prefixes: self.prefixes.clone(),
            dynamic_prefix: self.dynamic_prefix,
            case_insensitive: self.case_insensitive,
            no_dm_prefix: self.no_dm_prefix,
            on_mention: self.on_mention.clone(),
            groups: self.groups.clone(),
            top_level_groups: self.top_level_groups.clone(),
            commands: self.commands.clone(),
        }
    }
}

impl<D, E> Default for Configuration<D, E> {
    fn default() -> Self {
        Self {
            prefixes: Vec::default(),
            dynamic_prefix: None,
            case_insensitive: false,
            no_dm_prefix: false,
            on_mention: None,
            groups: GroupMap::default(),
            top_level_groups: Vec::default(),
            commands: CommandMap::default(),
        }
    }
}

impl<D, E> Configuration<D, E> {
    /// Creates a new instance of the framework configuration.
    pub fn new() -> Self {
        Self::default()
    }

    /// Assigns a prefix to this configuration.
    ///
    /// The prefix is added to the [`prefixes`] list.
    ///
    /// [`prefixes`]: struct.Configuration.html#structfield.prefixes
    pub fn prefix<I>(&mut self, prefix: I) -> &mut Self
    where
        I: Into<String>,
    {
        self.prefixes.push(prefix.into());
        self
    }

    /// Assigns a function to dynamically parse the prefix.
    pub fn dynamic_prefix(&mut self, prefix: DynamicPrefix<D, E>) -> &mut Self {
        self.dynamic_prefix = Some(prefix);
        self
    }

    /// Assigns a boolean indicating whether the casing of letters in static prefixes,
    /// group prefixes or command names does not matter.
    pub fn case_insensitive(&mut self, b: bool) -> &mut Self {
        self.case_insensitive = b;
        self
    }

    /// Assigns a boolean indicating whether the prefix is not necessary in
    /// direct messages.
    pub fn no_dm_prefix(&mut self, b: bool) -> &mut Self {
        self.no_dm_prefix = b;
        self
    }

    /// Assigns a user id of the bot that will allow for mentions in prefix position.
    pub fn on_mention<I>(&mut self, id: I) -> &mut Self
    where
        I: Into<UserId>,
    {
        self.on_mention = Some(id.into().to_string());
        self
    }

    fn _group(&mut self, group: Group<D, E>) -> &mut Self {
        for prefix in &group.prefixes {
            let prefix = if self.case_insensitive {
                prefix.to_lowercase()
            } else {
                prefix.clone()
            };

            self.groups.insert_name(prefix, group.id);
        }

        for id in &group.subgroups {
            // SAFETY: GroupId in user code can only be constructed by its
            // `From<GroupConstructor>` impl. This makes the transmute safe.
            let constructor: GroupConstructor<D, E> =
                unsafe { std::mem::transmute(id.0 as *const ()) };

            let mut subgroup = constructor();
            subgroup.id = *id;
            self._group(subgroup);
        }

        for id in &group.commands {
            // SAFETY: CommandId in user code can only be constructed by its
            // `From<CommandConstructor<D, E>>` impl. This makes the transmute safe.
            let constructor: CommandConstructor<D, E> =
                unsafe { std::mem::transmute(id.0 as *const ()) };

            self.command(constructor);
        }

        self.groups.insert(group.id, group);
        self
    }

    /// Assigns a group to this configuration.
    ///
    /// The group is added to the [`groups`] list.
    ///
    /// A group without prefixes is automatically added to the [`top_level_groups`]
    /// list instead of the [`groups`] list.
    ///
    /// [`groups`]: struct.Configuration.html#structfield.groups
    /// [`top_level_groups`]: struct.Configuration.html#structfield.top_level_groups
    pub fn group(&mut self, group: GroupConstructor<D, E>) -> &mut Self {
        let id = GroupId::from(group);

        let mut group = group();
        group.id = id;

        if group.prefixes.is_empty() {
            assert!(
                group.subgroups.is_empty(),
                "top level groups must not have prefixes nor subgroups"
            );

            self.top_level_groups.push(group);
            return self;
        }

        self._group(group)
    }

    /// Assigns a command to this configuration.
    ///
    /// The command is added to the [`commands`] list.
    ///
    /// [`commands`]: struct.Configuration.html#structfield.commands
    pub fn command(&mut self, command: CommandConstructor<D, E>) -> &mut Self {
        let id = CommandId::from(command);

        let mut command = command();
        command.id = id;

        assert!(!command.names.is_empty(), "command cannot have no names");

        for name in &command.names {
            let name = if self.case_insensitive {
                name.to_lowercase()
            } else {
                name.clone()
            };

            self.commands.insert_name(name, id);
        }

        for id in &command.subcommands {
            // SAFETY: CommandId in user code can only be constructed by its
            // `From<CommandConstructor<D, E>>` impl. This makes the transmute safe.
            let constructor: CommandConstructor<D, E> =
                unsafe { std::mem::transmute(id.0 as *const ()) };

            self.command(constructor);
        }

        self.commands.insert(id, command);
        self
    }
}

impl<D, E> fmt::Debug for Configuration<D, E> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Configuration")
            .field("prefixes", &self.prefixes)
            .field("dynamic_prefix", &"<fn>")
            .field("case_insensitive", &self.case_insensitive)
            .field("no_dm_prefix", &self.no_dm_prefix)
            .field("on_mention", &self.on_mention)
            .field("groups", &self.groups)
            .field("top_level_groups", &self.top_level_groups)
            .field("commands", &self.commands)
            .finish()
    }
}
Butterfly classloader module isolation causes Jackson problems

As reported in https://github.com/OpenRefine/OpenRefine/issues/1882, with analysis in https://github.com/OpenRefine/OpenRefine/pull/1889.

Butterfly implements its own ClassLoader called ButterflyClassLoader, with some custom logic to reload classes on the fly when they are modified. To do so, the method loadClass from the parent class URLClassLoader was overridden, with a home-made interpretation of the delegation model which was incorrect: the problem was that it tried to load a class first by itself before trying the parent loader (whereas the parent loader should be tried first). This led to Jackson annotations being marked as loaded by ButterflyClassLoader instead of WebappClassLoader, so the hashes of the classes would differ and the Jackson introspection code would not find them when generating serializers and deserializers for POJOs coming from extensions.

I stumbled across the commented-out classloader code (which is not associated with any PR, but is in 0de01cb325c89c7f8f1265f0ee538b183527d9e6) the other day and independently stumbled across the unlinked commentary in OpenRefine/OpenRefine#1889 just now. I'm not sure what other workarounds are possible for Jackson's particular troubles, but the Butterfly classloader organization was intentional. It wasn't a "misinterpretation" of the delegation model but, as I interpret it, an intentional firewalling mechanism to keep modules from interfering with each other. Many extension models have similar protection mechanisms, so OpenRefine may need a different mechanism than insisting that all code share a common classloader.

Yes, I understand that this custom class-loading was intentional. Yes, it could be useful to have something that enables extensions to use different versions of libraries, so to provide the same sort of isolation.
The solutions I have in mind are:
- start a big refactoring to provide our own interface to handle JSON serialization / deserialization. Extensions would need to rely on that instead of Jackson (for me, the cost/benefit ratio is way too high)
- convince Jackson to index classes not by hash of the class object but by class name (it looks very hard to me too)
- find a way to enforce that the same classloader is used for classes whose versions do not differ between main and extensions. That would amount to requiring that extensions use the exact same Jackson version, but they would still be able to override other dependency versions thanks to the custom classloading code (looks hacky to me)
- anything else?

Of course, I found out about this problem extremely late in the migration process, and that did not really help find a satisfying solution. The problem only occurs with real-world testing of extensions (unit tests are not affected, as the class loading mechanisms are different when running tests).

Yes I understand that this custom class-loading was intentional.

Thanks for the clarification. That wasn't clear to me from what I read before. I'm still not clear on what the exact problem is/was. Can you expand with a little more detail (perhaps an example)? Is it to do with preferences, or communication between the core and extensions, or the front end and back end, or something else entirely? The purpose in creating this issue wasn't to do something different immediately or even consider the issue now. It was just to get the existing history captured, since we use GitHub as our principal form of institutional memory.

Here is a summary of the issue: We use Jackson to serialize / deserialize our classes in JSON. Jackson makes it possible to specify how a class is (de)serialized by annotating it with its own set of annotations. Given a class to (de)serialize, Jackson generates a (de)serializer on the fly by looking at the annotations on the class.
In Java, annotations are classes themselves, so they are associated with the classloader that loaded them. This classloader is taken into account when checking for equality of classes (and hashCode), so two annotation classes loaded by different classloaders are considered different. If an extension declares a new operation, or overlay model, or anything else that will be serialized to JSON by the main application, this is a problem. With Butterfly's custom classloader, Jackson annotations are marked as loaded by ButterflyClassLoader instead of WebappClassLoader, so the hashes of the classes differ and the Jackson introspection code does not find them when generating serializers and deserializers for POJOs coming from extensions. The issue on Jackson's side is here: https://github.com/FasterXML/jackson-databind/issues/542

That being said, one low-hanging solution for this comes to mind as I write this: adding jackson as a provided dependency to the extensions could ensure that the ButterflyClassloader fails to find them in the extension and falls back on the main classloader. Perhaps it is that simple - I suspect I tried that back in the day, but there is a slight chance that I did not. If it does work, it would be by far the best way forward.

I have no idea what I am talking about, but perhaps the new Java module system introduced in Java 9 could help with this sort of problem. Modules are essentially a "package of packages" that can declare `requires transitive` or `requires static` as necessary, and they give strong encapsulation. One thing to note is that by default all packages are module-private, but you can use `exports ... to` to expose packages only to certain modules as necessary. Another thing to consider is whether ButterflyClassloader would itself be an Application Module, or perhaps even a Service (signified with module my.module { uses ButterflyClassloader; }), or a Service Provider that provides ?
I think only you can answer that, having some knowledge of what Butterfly currently gives to OpenRefine extensions. Good reference: https://www.baeldung.com/java-9-modularity

Revisiting this in the context of the OpenRefine extension ecosystem discussion, I think it would be useful to restore the module isolation which was disabled. While I think exposing Jackson in the extension interface is dangerous for the same reason that having org.json objects in the extension interface was dangerous, as I previously argued:

If an extension gets broken by the JSON serialization change, I'd argue that that indicates that it's not well enough isolated and the solution isn't to make the abstraction even more leaky, but, rather, to force the extensions to provide their own JSON serialization, unless it's a facility provided by OpenRefine through an opaque API.

a bandaid fix might be to declare Jackson databind a sort of bootstrap dependency that gets loaded first by Butterfly, or to otherwise give the necessary classes special treatment (effectively making Jackson a core Butterfly facility).

I took a closer look as I was preparing to implement the bootstrapping scheme described above and discovered that it's exactly as simple as @wetneb described:

adding jackson as a provided dependency to the extensions could ensure that the ButterflyClassloader fails to find them in the extension and falls back on the main classloader.

There are actually two sources of conflict: slf4j and Jackson. The Jackson conflict seems to be with ObjectMapper declarations rather than the annotations, but I pulled in both just to be safe. Fixing all the bundled extensions works, but it only takes a single rogue extension to mess things up again for all. For example, an older version of the rdf-extension bundles jena-shaded-arq-3.8.0.jar, which is a shaded "uber jar" that includes its own jackson-core (among other bundled dependencies).
Contrary to my assumption above, the Butterfly classloader doesn't actually provide intermodule classloading isolation, but merges all the modules' classpaths together into a single classloader: https://github.com/OpenRefine/simile-butterfly/blob/79296b6640ee1f61937cbd5f97b218d5a9547323/main/src/edu/mit/simile/butterfly/Butterfly.java#L644-L651 so the database module can have its Jackson loading broken by the mere presence of the rdf-extension. Also, because the order of module loading is effectively random (hashset iteration order), the priority of modules on the classpath relative to each other is random.

This discussion really encourages me to go for an established plugin system. I mean, honestly, I am really not knowledgeable on Java class loading details (as can be seen with the classloading change I made), and I don't think it's a good use of our time to become experts on that and implement something solid, when there are existing solutions we could adopt (OSGi, PF4J, maybe others). I have written more about this on the forum.

Sure! If we do decide to adopt an existing plugin system, it's not something that's going to happen overnight, so I wouldn't let that block any fixes on the current system.
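The parent-first versus child-first question at the heart of this thread can be illustrated with a toy resolver (Python here purely for illustration; real Java classloaders are the mechanism under discussion). With child-first loading, a module gets its own copy of a class the parent also provides, which is exactly how the Jackson annotation classes ended up with different identities:

```python
class ToyLoader:
    """A toy model of a classloader: maps class names to (loader, name) identities."""

    def __init__(self, name, classes, parent=None, parent_first=True):
        self.name = name
        self.classes = dict(classes)
        self.parent = parent
        self.parent_first = parent_first

    def load(self, cls):
        # Standard Java delegation: ask the parent before looking locally.
        if self.parent_first and self.parent is not None:
            try:
                return self.parent.load(cls)
            except KeyError:
                pass
        if cls in self.classes:
            return (self.name, cls)  # class identity includes the loader
        if not self.parent_first and self.parent is not None:
            return self.parent.load(cls)
        raise KeyError(cls)

webapp = ToyLoader("webapp", {"JsonProperty": object})
# Child-first (the old Butterfly behaviour): the module's own copy wins.
child_first = ToyLoader("module", {"JsonProperty": object},
                        parent=webapp, parent_first=False)
# Parent-first (standard delegation): both sides see the same class.
parent_first = ToyLoader("module", {"JsonProperty": object}, parent=webapp)

print(child_first.load("JsonProperty"))   # ('module', 'JsonProperty')
print(parent_first.load("JsonProperty"))  # ('webapp', 'JsonProperty')
```

The "provided dependency" fix discussed above works by removing `JsonProperty` from the module's own map, so even a child-first lookup falls through to the parent.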
File execute permission

lameta01dev:/tmp/test> ls -ltra
total 20
drwxrwxrwt 14 root root 12288 2016-08-29 14:21 ..
-rw-r--r--  1 root root    19 2016-08-29 14:24 a.sh
drwxr-xr-x  2 root root  4096 2016-08-29 14:24 .
lameta01dev:/tmp/test> whoami
apgimage
lameta01dev:/tmp/test> sh a.sh
hello world

Question: If I am logged in as apgimage, how am I able to execute a.sh using the "sh a.sh" command when the owner of the file is root?

To run sh a.sh you need (a) execute permission on sh and (b) read permission on a.sh. The execute permission on a.sh determines whether the following will succeed or fail: ./a.sh

Notes: As long as a user has read permission on a.sh, preventing sh a.sh does nothing for security: the user could just copy the contents of a.sh to his own file and set the execute bit on his copy. To prevent a normal user from executing a script via sh a.sh, root can remove that file's read/write/execute permissions for "other": chmod o-rwx a.sh. If the file is owned by a group that the normal user belongs to, then it will also be necessary to remove group permissions: chmod go-rwx a.sh

What kind of security is this? I am able to perform execution using the sh command, overriding the whole permission set. As long as you have read permission on a.sh, preventing sh a.sh does nothing for security: you could just copy the contents of a.sh to your own file and set the execute bit on your copy.

Can the root user override permissions of a normal user? I found that for a file which has no permissions, "---------- 1 apgimage root 19 2016-08-29 14:24 a.sh", I couldn't do anything as apgimage, but when I logged in as root I was able to read and write but couldn't execute. Yes, that works. If the script belongs to root and root denies you read permission, then you cannot execute it. If root runs chmod go-rwx filename then all permission for group and others (everyone but the owning user) to read, write, or execute the file will be gone.
But if the file belongs to a non-root user and the permission set looks like -rwx------ for that user, then root falls under the "other" permissions, which are ---; yet I was able to read and write, but not execute, the file. @user204069 Yes, root is a special case: I don't think that there is anything you can do to prevent root from reading a file.
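The distinction discussed above (read permission suffices for `sh a.sh`, while `./a.sh` needs the execute bit) can be demonstrated from Python. A sketch assuming a Unix-like system with `sh` on the PATH:

```python
import os
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as d:
    script = os.path.join(d, "a.sh")
    with open(script, "w") as f:
        f.write("echo hello world\n")
    os.chmod(script, 0o644)  # rw-r--r-- : readable but NOT executable

    # Direct execution (the ./a.sh case) requires the execute bit and fails:
    try:
        subprocess.run([script])
        direct_ok = True
    except PermissionError:
        direct_ok = False

    # Passing the file to the interpreter only requires read permission:
    out = subprocess.run(["sh", script], capture_output=True, text=True)

    print(direct_ok)            # False
    print(out.stdout.strip())   # hello world
```

Note that with no execute bits set at all, even root cannot exec the file directly, matching the "----------" observation in the question.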
One of my friends is the founder and chief data scientist at a very successful deep learning startup. 2017 was a good year for his startup, with funding and increasing adoption. However, on a Thursday evening last year, my friend was very frustrated and disappointed. The framework on which they had built everything for the last 3+ years, Theano, was calling it a day. The awesome MILA team under Dr. Yoshua Bengio had decided to stop supporting the framework. That is a setback for any startup which invests time and money in training its team and building functionality on top of a core framework. Hence, the choice of the framework you decide to spend your time learning and practicing is very important. So, when I got a few emails from some of our readers about the choice of deep learning framework (mostly Tensorflow vs Pytorch), I decided to write a detailed blog post on the choice of deep learning framework in 2018. Tensorflow + Keras is the largest deep learning library, but PyTorch is rapidly getting popular, especially in academic circles. If you are getting started with deep learning in 2018, here is a detailed comparison to help you choose a deep learning library. Let's have a look at most of the popular frameworks and libraries, like Tensorflow, Pytorch, Caffe, CNTK, MxNet, Keras, Caffe2, Torch and DeepLearning4j, and new approaches like ONNX. PyTorch is one of the newest deep learning frameworks and is gaining popularity due to its simplicity and ease of use. Pytorch got very popular for its dynamic computational graph and efficient memory usage. A dynamic graph is very suitable for certain use-cases, like working with text. Pytorch is easy to learn and easy to code. For lovers of OOP programming, torch.nn.Module allows for creating reusable code which is very developer friendly. Pytorch is great for rapid prototyping, especially for small-scale or academic projects.
Due to this, without doubt, Pytorch has become a great choice for academic researchers who don't have to worry about scale and performance. Tensorflow, an open-source machine learning library by Google, is the most popular AI library at the moment based on the number of GitHub stars and Stack Overflow activity. It draws its popularity from its distributed training support, scalable production deployment options and support for various devices like Android. One of the most awesome and useful things in Tensorflow is Tensorboard visualization. In general, during training, one has to do multiple runs to tune the hyperparameters or identify any potential data issues. Using Tensorboard makes it very easy to visualize and spot problems. Tensorflow Serving is another reason why Tensorflow is an absolute darling of the industry. This specialized gRPC server is the same infrastructure that Google uses to deploy its models in production, so it's robust and tested at scale. In Tensorflow Serving, models can be hot-swapped without bringing the service down, which can be a crucial requirement for many businesses. In Tensorflow, the graph is static and you need to define the graph before running your model, although Tensorflow has also introduced Eager execution to add dynamic graph capability. In Tensorflow, the entire graph (with parameters) can be saved as a protocol buffer, which can then be deployed to non-Pythonic infrastructure like Java, which again makes it portable and easy to deploy. Caffe is a deep learning framework with a Python interface, developed by Yangqing Jia at UC Berkeley for supervised computer vision problems. It used to be the most popular deep learning library in use. Written in C++, Caffe is one of the oldest and most widely supported libraries for CNNs and computer vision. The Nvidia Jetson platform for embedded computing has deep support for Caffe (support for other frameworks like Tensorflow has been added, but it's still not enough).
The same goes for OpenCV, the widely used computer vision library, which started adding support for deep learning models beginning with Caffe. For years, OpenCV has been the most popular way to add computer vision capabilities to mobile devices. So, if you have a mobile app which runs OpenCV and you now want to deploy a neural-network-based model, Caffe would be very convenient. The Microsoft Cognitive Toolkit (CNTK) framework is maintained and supported by Microsoft. Since we have limited experience with CNTK, we are just mentioning it here. It's not hugely popular like Tensorflow/Pytorch/Caffe. Caffe2, another framework supported by Facebook and built on the original Caffe, was also designed by Caffe creator Yangqing Jia. It was designed with expression, speed, and modularity in mind, especially for production deployment, which was never the goal for Pytorch. Recently, Caffe2 has been merged with Pytorch in order to provide production deployment capabilities to Pytorch, but we have to wait and watch how this pans out. The Pytorch 1.0 roadmap talks about production deployment support using Caffe2. Promoted by Amazon, MxNet is also supported by the Apache foundation. It's very popular in the R community, although it has APIs for multiple languages. It's also supported by Keras as one of the back-ends. Torch (also called Torch7) is a Lua-based deep learning framework developed by Clement Farabet, Ronan Collobert and Koray Kavukcuoglu for research and development into deep learning algorithms. Torch has been used and further developed by the Facebook AI lab. However, most of the force behind Torch has moved to Pytorch. Theano was a Python framework developed at the University of Montreal, run by Yoshua Bengio, for research and development into state-of-the-art deep learning algorithms. It used to be one of the most popular deep learning libraries. Official support of Theano ceased in 2017. DeepLearning4J is another deep learning framework, developed in Java by Adam Gibson.
“DL4J is a JVM-based, industry-focused, commercially supported, distributed deep-learning framework intended to solve problems involving massive amounts of data in a reasonable amount of time.” As you can see, almost every large technology company has its own framework. In fact, almost every year a new framework has risen to a new height, leading to a lot of pain and re-skilling for deep learning practitioners. The world of deep learning is very fragmented and evolving very fast. Look at this tweet by Karpathy: Imagine the pain all of us have been enduring, of learning a new framework every year. François Chollet, who works at Google, developed Keras as a wrapper on top of Theano for quick prototyping. It was later expanded to support multiple back-ends, such as Tensorflow, MXNet and CNTK. Keras is being hailed as the future of building neural networks. Here are some of the reasons for its popularity: Light-weight and quick: Keras is designed to remove boilerplate code. A few lines of Keras code will achieve much more than native Tensorflow code. You can easily design both CNNs and RNNs and run them on either GPU or CPU. Emerging possible winner: Keras is an API which runs on top of a back-end. This back-end could be either Tensorflow or Theano, and Microsoft is also working to provide CNTK as a back-end to Keras. Currently, Keras is one of the fastest growing libraries for deep learning. The power of being able to run the same code with a different back-end is a great reason for choosing Keras. Imagine you read a paper which seems to be doing something so interesting that you want to try it with your own dataset. Let's say the reference code is written in Torch and you work with Tensorflow; then you would have to reimplement the paper in Tensorflow, which obviously will take longer. But if the code is written in Keras, all you have to do is change the back-end to Tensorflow. This will turbocharge collaborations for the whole community.
Unifying effort: ONNX: Along similar lines, the Open Neural Network Exchange (ONNX) was announced at the end of 2017, aiming to solve compatibility issues among frameworks. ONNX defines an open-source standard for AI models which can be adopted or implemented by various frameworks. So, you can train a network in Pytorch and deploy it in Caffe2. It currently supports MXNet, Caffe2, Pytorch and CNTK (read: Amazon, Facebook, and Microsoft). That could be a good thing for the overall community; however, it's still too early to know. I would love it if Tensorflow joined the alliance; that would be a force to reckon with. Now, let's compare these frameworks/libraries on certain parameters:

Comparison of AI frameworks

| Criterion | Winner | Runner-up |
| --- | --- | --- |
| Community and Support | Tensorflow | Pytorch |
| Ease of Use | Pytorch | Tensorflow |
| Embedded Computer Vision | Caffe | Tensorflow |

TLDR: If you are in academia and are getting started, go for Pytorch. It will be easier to learn and use. If you are in industry and need to deploy models in production, Tensorflow is your best choice. You can use Keras/Pytorch for prototyping if you want. But you don't need to switch, as Tensorflow is here to stay.
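The static-versus-dynamic graph distinction that separates classic Tensorflow from Pytorch can be caricatured in plain Python (a toy sketch, no frameworks involved): a static graph is declared up front and executed later, while a dynamic graph is just ordinary code whose control flow can depend on the data.

```python
# Static style (TF 1.x): declare the computation first, run it later.
def run_static(graph, feed):
    env = dict(feed)
    for out_name, op, arg_names in graph:
        env[out_name] = op(*(env[a] for a in arg_names))
    return env

graph = [
    ("y", lambda a: a * 2, ("x",)),
    ("z", lambda a: a + 1, ("y",)),
]

# Dynamic style (Pytorch): the graph *is* the running program, so
# data-dependent branching is just an if-statement.
def run_dynamic(x):
    y = x * 2
    if y > 4:        # awkward to express in a purely static graph
        y = y + 1
    return y

print(run_static(graph, {"x": 3})["z"])  # 7
print(run_dynamic(3))                    # 7
```

The static form is what makes ahead-of-time optimization and protocol-buffer export possible; the dynamic form is what makes text models with variable-length control flow pleasant to write.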
I started with the assumption of 128×128 pixel diamond-square tiles — they're a power of two big (which makes them simpler to calculate), 512 bytes wide, 128 lines tall and 64KB in total size, which means two will fit in local store. So, questions: What points outside this area are required to calculate all the points within? How much space is required to render one of these tiles? (There's a smaller example pictured, again with a different colour for each iteration of the calculation.) If the tile is to be 4×4, the centre area is 5×5, with overlap between tiles. Additionally, to calculate a 4×4 tile, a 7×7 area must be fully calculated, not counting the sparsely calculated pixels around the outside. Scaling this up to 128×128 pixel tiles, it should be clear that at least a 131×131 pixel area is required, again not counting those extras around the outside. How to deal with those? One thing that is clear is that they're sparsely filled, and that calculated values only appear on certain lines. In fact, the number of lines required around the outside of a tile is (1 + log2 n), where n is the tile width. For the 4×4 tile pictured, three extra lines are needed on each side of the tile. For a tile 128 pixels across, 8 extra lines are needed. So all the values needed to calculate a given tile can be stored in (129+8+8)×(129+8+8) = 145×145 pixels. Rounding up to get quadword aligned storage gives a total of 148×145×4 bytes — 85,840 bytes per tile. 171,680 bytes for two tiles is a big chunk of local store, but it's probably better to be using it for something… Based on that, it's not particularly difficult to generate the data for a tile. Start the tile with its starting values (more on those in another post) in the corners of the squares (the blue ones in the picture), and then calculate the remaining points. There are some special cases required for each of the sides, but nothing particularly tricky — just keeping track of row/column indexes for particular iterations.
The outer points make up only a very small percentage of the total points that need to be calculated — more than 50% of the points are calculated in the last diamond and square iterations, and due to the proximity and alignment of the points accessed, their calculation can be usefully optimised. (I might write a post about that sometime…)
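The sizing arithmetic above is easy to check in a few lines of Python (a sketch; it assumes "quadword aligned" means rounding the row width up to a multiple of four 4-byte values):

```python
import math

def tile_storage_bytes(n, bytes_per_value=4):
    """Bytes needed to hold all values for one n-by-n diamond-square tile."""
    extra = 1 + int(math.log2(n))           # sparse lines needed on each side
    side = (n + 1) + 2 * extra              # 129 + 8 + 8 = 145 for n = 128
    padded_width = math.ceil(side / 4) * 4  # quadword-align the row width
    return padded_width * side * bytes_per_value

print(tile_storage_bytes(128))  # 148 * 145 * 4 = 85840, so two tiles need 171680
```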
In this video, we write our first web request. In this video, we use standard input and output in Python to process data piped to us from other applications. Some of us grew up clicking around in MS Paint on Windows. Others may have enjoyed the luxurious interface afforded by Mac OS. Still others may have been stuck with nothing more than a cell phone, or even just a TI-84 calculator. Regardless of your humble beginnings, I want to congratulate you on taking things to the next level by jumping headfirst into the world of Linux. Whatever your reason for dipping your toes in these waters, I'm sure you won't regret it! Anyone involved with computers will almost certainly encounter Linux at some point in their career, so now is the time for you to get ahead of things and figure out how to use the dang thing! All you'll need is a little patience and about 10 minutes to get started! Read on. Disclaimer: the video is 10 minutes, but the article may be a bit more verbose. :) In this video, we cover writing to files and discuss "write" mode vs. "append" mode. In this video, we take a sip out of a file - just a quick skim, printing out the contents. It's a great skill that we'll build on later. All of these examples are editable on CodePen. Just click "Edit on CodePen" in the top right corner and you can make as many changes as you want. Don't worry - the changes you make will be just for you! You don't have to worry about making mistakes, because you can always come back here to start fresh. Photo by Igor Haritanovich Click the button below to start the Genie Game! If you look at the code, you'll notice that we're using for statements to make this work. These are called control flow statements, and without them, programming would be a lot harder! They let us control which code is executed, allowing us to change what happens based on the answers given by the user (that's you!). To show messages to the user, we use the alert function.
It's easy to use - all you have to do is follow this format, and you'll see a popup box with your message: Another function that we used is the confirm function. It displays a box with a message, just like the alert function, but it has a special feature. Instead of a single "OK" button, the confirm box has both an "OK" and a "Cancel" button. If the user presses the "OK" button, a value of true is returned. On the other hand, if you hit "Cancel," a value of false is returned. We can use this to ask yes or no questions and change which messages we display based on the responses. Finally, the last major function that we use is the prompt function. This is just like the confirm function, but it gives you a text box to type in a message to the program. You can use it like this: With all of that out of the way, let's see it in action! Click the button below to play the game. Then, try changing the code and make it do something new! As a special challenge, create a new CodePen and try to make your own story from scratch. Be sure to use this one as a reference if you need it. Let's try out the snippets. Right click on this post and press "Inspect Element." Then, look for the Console. Copy each of the snippets from the beginning of the post and paste them into the Console. For bonus points, change the messages to something that you made up yourself! Continue the story after the three wishes are granted. You can include any number of for statements, if you're feeling adventurous! Hint: Start by adding a new line of code after line #32. Use the template below to create your own story from scratch! You can start by just replacing the text that is displayed with your own story. Then, try building your own logic. This code picks a random color when you press the button, and shows you the answer. If you ever need to pay the bill at a restaurant, you may need to figure out how much to tip.
This calculator takes the bill amount you provide and adds 20% so that you know how much to pay! The HTML Canvas is a special element on the page that lets you draw custom shapes, lines, and images wherever and however you want. It's often used to create games right in your browser. It's a bit of an advanced topic, so we won't delve into it for this post. Try messing around with the code to get a feel for it. In this video, we keep the user in line! With error handling, we can specify what type of input we're expecting to receive. This basic skill will also be useful for countless other situations as you continue your Python journey. In this video, we find out how to get user input into our program - a huge advantage if you're just starting out. In this video, we get our feet wet and unlock a tiny bit of the enormous power Python offers with a simple for loop. In this video, we make sure we have Python 3 installed on our Ubuntu Linux system.
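The tip calculator's logic is a single multiplication. Here is the idea as a quick Python sketch (the CodePen version runs in JavaScript, so the function name here is just illustrative):

```python
def total_with_tip(bill, tip_rate=0.20):
    """Return the bill plus a 20% tip, rounded to whole cents."""
    return round(bill * (1 + tip_rate), 2)

print(total_with_tip(50.00))  # 60.0
print(total_with_tip(12.34))  # 14.81
```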
Plex server only connects with my Vizio TV app remotely. It shows thumbnails, but times out starting a movie. FireTV and Chrome web remote and local all work well; it's just the Vizio. In the Vizio app, when I select the server name at top, it says "Remote". If I disable Remote, it can't find the server, which I believe means it's not finding the local server on the same subnet. The Vizio was working until I got a new router from my ISP. There is no DNS rebinding on the router. I swapped between their default DNS (the WAN/LAN gateway) and the Google DNS. When I use the link with my token, I see 2 local addresses for my PMS: one current, and one with the old IP address from the previous router. My NAS/PMS are on the same subnet as the Vizio, plugged into the new router. I have a Synology DS515+ NAS with only 1 IP interface, no virtual interfaces. It is on the same subnet as the router. The server is plugged directly into the router, as is the TV. I have followed the common instructions in the forums, such as reinstalling (Intel32), logging in and out, removing the devices. I followed the instructions for looking up the connections via URL with token. I have 2 local devices, including the old (non-existent) subnet. The forum said the Vizio app is Opera, which has a problem with 2 local devices. I followed the guide to try and force a new network setup to the .tv server (with the TV powered off). I deleted the devices from the .tv server, but when the media server came back up, the old IP address was right there as local, next to the legit new IP address. Something is keeping it. I followed instructions to completely wipe Plex from my server: telnetted in and rm -r'd the Plex directories and did a fresh install. Still the old "local" entry persists with the new token. All these attempts explicitly followed the instructions in the forums and FAQ (linked in their signature!). Finally I wiped Plex off the NAS again, including telnet & rm -r of the directories, and logged out of plex.tv.
I reinstalled and opened a new plex.tv account on a different email. Connected them, and the OLD LOCAL ADDRESS CAME BACK! I used telnet and netstat -a to watch the port open as soon as the package installed (before using the web client or linking the accounts, the 110 came back immediately on Plex package install). Please help, I don't know how it's remembering. The old router isn't even here.
Setting up and using your development environment# Recommended development setup# Since NumPy contains parts written in C and Cython that need to be compiled before use, make sure you have the necessary compilers and Python development headers installed - see Building from source. Building NumPy as of version 1.17 requires a C99 compliant compiler. Having compiled code also means that importing NumPy from the development sources needs some additional steps, which are explained below. For the rest of this chapter we assume that you have set up your git repo as described in Git for development. If you are having trouble building NumPy from source or setting up your local development environment, you can try to build NumPy with Gitpod. To build the development version of NumPy and run tests, spawn interactive shells with the Python import paths properly set up, etc., do one of:

$ python runtests.py -v
$ python runtests.py -v -s random
$ python runtests.py -v -t numpy/core/tests/test_nditer.py::test_iter_c_order
$ python runtests.py --ipython
$ python runtests.py --python somescript.py
$ python runtests.py --bench
$ python runtests.py -g -m full

This builds NumPy first, so the first time it may take a few minutes. If you specify -n, the tests are run against the version of NumPy (if any) found on the current PYTHONPATH. When specifying a target, extra arguments may be forwarded to the target embedded by runtests.py by passing them after a bare --. For example, to run a test method with the --pdb flag forwarded to the target, run the following:

$ python runtests.py -t numpy/tests/test_scripts.py::test_f2py -- --pdb

When using pytest as a target (the default), you can match test names using python operators by passing the -k argument to pytest:

$ python runtests.py -v -t numpy/core/tests/test_multiarray.py -- -k "MatMul and not vector"

Remember that all tests of NumPy should pass before committing your changes. runtests.py is the recommended approach to running tests.
There are also a number of alternatives to it, for example an in-place build or installing to a virtualenv or a conda environment; see the FAQ below. Some of the tests in the test suite require a large amount of memory, and are skipped if your system does not have enough. To override the automatic detection of available memory, set the environment variable NPY_AVAILABLE_MEM, for example NPY_AVAILABLE_MEM=32GB. For development, you can set up an in-place build so that changes made to .py files have effect without a rebuild. First, run:

$ python setup.py build_ext -i

This allows you to import the in-place built NumPy from the repo base directory only. If you want the in-place build to be visible outside that base dir, you need to point your PYTHONPATH environment variable to this directory. Some IDEs (Spyder for example) have utilities to manage PYTHONPATH. On Linux and OSX, you can run:

$ export PYTHONPATH=$PWD

and on Windows:

$ set PYTHONPATH=/path/to/numpy

Now editing a Python source file in NumPy allows you to immediately test and use your changes (in .py files), by simply restarting the Python interpreter. Note that another way to do an in-place build visible outside the repo base dir is python setup.py develop. Instead of adjusting PYTHONPATH, this installs a .egg-link file into your site-packages as well as adjusting the easy-install.pth there, so it's a more permanent (and magical) operation.

Other build options# Build options can be discovered by running either of:

$ python setup.py --help
$ python setup.py --help-commands

It's possible to do a parallel build with numpy.distutils; see Parallel builds for more details. A similar approach to in-place builds and use of PYTHONPATH, but outside the source tree, is to use:

$ pip install . --prefix /some/owned/folder
$ export PYTHONPATH=/some/owned/folder/lib/python3.4/site-packages

NumPy uses a series of tests to probe the compiler and libc libraries for functions. The results are stored in HAVE_XXX definitions.
These tests are run during the build_src phase of the _multiarray_umath module, in the generate_numpyconfig_h functions. Since the output of these calls includes many compiler warnings and errors, by default it is run quietly. If you wish to see this output, you can run the build_src stage verbosely:

$ python setup.py build_src -v

Using virtual environments# A frequently asked question is "How do I set up a development version of NumPy in parallel to a released version that I use to do my job/research?". One simple way to achieve this is to install the released version in site-packages, by using pip or conda for example, and set up the development version in a virtual environment. If you use conda, we recommend creating a separate virtual environment for numpy development using the environment.yml file in the root of the repo (this will create the environment and install all development dependencies at the same time):

$ conda env create -f environment.yml # `mamba` works too for this command
$ conda activate numpy-dev

If you prefer virtualenv instead, create the environment with:

$ virtualenv numpy-dev

Now, whenever you want to switch to the virtual environment, run source numpy-dev/bin/activate, and deactivate to exit from the virtual environment and back to your previous shell. Besides runtests.py, there are various ways to run the tests.
Inside the interpreter, tests can be run like this:

>>> np.test()
>>> np.test('full') # Also run tests marked as slow
>>> np.test('full', verbose=2) # Additionally print test name/file

An example of a successful test run: ``4686 passed, 362 skipped, 9 xfailed, 5 warnings in 213.99 seconds``

Or in a similar way from the command line:

$ python -c "import numpy as np; np.test()"

Tests can also be run with pytest numpy, however then the NumPy-specific plugin is not found, which causes strange side effects. Running individual test files can be useful; it's much faster than running the whole test suite or that of a whole module. This can be done with:

$ python path_to_testfile/test_file.py

That also takes extra arguments, like --pdb which drops you into the Python debugger when a test fails or an exception is raised. Running tests with tox is also supported. For example, to build NumPy and run the test suite with Python 3.9, use:

$ tox -e py39

For more extensive information, see Testing Guidelines. Note: do not run the tests from the root directory of your numpy git repo without ``runtests.py``; that will result in strange test errors. Lint checks can be performed on newly added lines of Python code. Install all dependent packages using pip:

$ python -m pip install -r linter_requirements.txt

To run lint checks before committing new code, run:

$ python runtests.py --lint uncommitted

To check all changes in newly added Python code of the current branch against the target branch, run:

$ python runtests.py --lint main

If there are no errors, the script exits with no message. In case of errors:

$ python runtests.py --lint main
./numpy/core/tests/test_scalarmath.py:34:5: E303 too many blank lines (3)
1 E303 too many blank lines (3)

It is advisable to run lint checks before pushing commits to a remote branch since the linter runs as part of the CI pipeline.
For more details, see Style Guidelines. Rebuilding & cleaning the workspace# Rebuilding NumPy after making changes to compiled code can be done with the same build command as you used previously - only the changed files will be re-built. Doing a full build, which sometimes is necessary, requires cleaning the workspace first. The standard way of doing this is (note: deletes any uncommitted files!):

$ git clean -xdf

When you want to discard all changes and go back to the last commit in the repo, use one of:

$ git checkout .
$ git reset --hard

Another frequently asked question is "How do I debug C code inside NumPy?". First, ensure that you have gdb installed on your system with the Python extensions (often the default on Linux). You can see which version of Python is running inside gdb to verify your setup:

(gdb) python
>import sys
>print(sys.version_info)
>end
sys.version_info(major=3, minor=7, micro=0, releaselevel='final', serial=0)

Next you need to write a Python script that invokes the C code whose execution you want to debug. For instance:

import numpy as np
x = np.arange(5)
np.empty_like(x)

Now, you can run:

$ gdb --args python runtests.py -g --python mytest.py

And then in the debugger:

(gdb) break array_empty_like
(gdb) run

The execution will now stop at the corresponding C function and you can step through it as usual. A number of useful Python-specific commands are available. For example, to see where in the Python code you are, use py-list. For more details, see DebuggingWithGdb. Here are some commonly used commands: list: List specified function or line. next: Step program, proceeding through subroutine calls. step: Step program until it reaches a different source line. continue: Continue program being debugged, after signal or breakpoint. Instead of plain gdb you can of course use your favourite alternative debugger; run it on the python binary with arguments runtests.py -g --python mytest.py.
Building NumPy with a Python built with debug support (on Linux distributions typically packaged as python-dbg) is highly recommended. Understanding the code & getting started# The best strategy to better understand the code base is to pick something you want to change and start reading the code to figure out how it works. When in doubt, you can ask questions on the mailing list. It is perfectly okay if your pull requests aren't perfect; the community is always happy to help. As a volunteer project, things do sometimes get dropped and it's totally fine to ping us if something has sat without a response for about two to four weeks. So go ahead and pick something that annoys or confuses you about NumPy, experiment with the code, hang around for discussions or go through the reference documents to try to fix it. Things will fall into place and soon you'll have a pretty good understanding of the project as a whole. Good Luck!
daemon reload problem with 3.0.0

After updating to 3.0.0, limits configuration seems to be broken. In the manifest I have:

systemd::service_limits { 'php7.4-fpm.service':
  limits => {
    'LimitNOFILE' => 1048576,
  }
}

Puppet applied this code and tried to perform a service restart: Exec[restart php7.4-fpm.service because limits]. But in fact, the new limits were not applied, because of the lack of a daemon-reload. From the code I see it calls: systemctl restart php7.4-fpm.service. Running it by hand after the puppet run, I got:

~# systemctl restart php7.4-fpm.service
Warning: The unit file, source configuration file or drop-ins of php7.4-fpm.service changed on disk. Run 'systemctl daemon-reload' to reload units.

So I need to reload the daemon manually, and then restart the service. OS Debian 10.9, Puppet server 7.1.0, Puppet agent 7.5.0.

I would advise you to never use restart_service => true on systemd::service_limits since, as you noticed, it doesn't do any correct ordering. If you also manage the service itself, you may end up with double service restarts as well. What I always end up doing is roughly:

systemd::service_limits { 'php7.4-fpm.service':
  limits => {
    'LimitNOFILE' => 1048576,
  },
  restart_service => false,
  notify => Service['php7.4-fpm.service'],
}
service { 'php7.4-fpm.service':
  ensure => running,
  enable => true,
}

IMHO it should default to false. However, sadly there was just a major release and this wasn't considered. Also, it would be great if the dropin had an auto collector. I took a stab at this in https://github.com/camptocamp/puppet-systemd/pull/191.

Wrong ordering and double restarts are less evil than not applying changes to the service at all. As the docs for 3.0.0 say: Typically this works well and removes the need for systemd::systemctl::daemon_reload as provided prior to camptocamp/systemd 3.0.0. So, my case shows that the mentioned typically does not apply. My code worked well on the 2.x branch.
Looks like we never finished the discussion: https://github.com/camptocamp/puppet-systemd/pull/171#discussion_r556456446 Unfortunately, that commit was already merged, and now I think it caused strange behavior in the module. Maybe the maintainer could say more, because on paper it really should work from the puppet side on 6.1.0 and later, but it doesn't.

Well, the thing is that it doesn't call the provider but just a simple exec. That's why I never use it. IMHO it should be removed or at least default to false.

So I'm facing the same issue with systemd::timer, where daemon-reload is not called when the service definition changes.

# puppet agent -t --environment production
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for [SNIPPED]
Info: Applying configuration version '1622498808'
[SNIPPED]
Notice: /Stage[main]/Borg::Service/Systemd::Timer[borgmatic.timer]/Systemd::Unit_file[borgmatic.service]/File[/etc/systemd/system/borgmatic.service]/content:
--- /etc/systemd/system/borgmatic.service 2021-05-30 04:19:31.089365703 +0000
+++ /tmp/puppet-file20210531-5640-pz2ss 2021-05-31 22:07:04.408630248 +0000
[SNIPPED]
Notice: /Stage[main]/Borg::Service/Systemd::Timer[borgmatic.timer]/Systemd::Unit_file[borgmatic.service]/File[/etc/systemd/system/borgmatic.service]/content: content changed '{sha256}ddf42ac6fb4bbfd1b74a0cee34a74a1f5f0aa96e81c3073c4c347ca55e9cc59c' to '{sha256}cbae81dba4beea3415750275e16a57ba15769d6cf2c736de24e6c49ae270e4c0'
Notice: Applied catalog in 10.35 seconds
# systemctl show --property=NeedDaemonReload borgmatic.service
NeedDaemonReload=yes

It sounds like systemd::timer should be modified to perform that reload if needed. There you can indeed see that there is no service type managing the actual borgmatic.service service, so there is no code to perform the daemon-reload.
Marker.io is the best way to collect client feedback straight into your favorite issue tracker like Trello, Jira, Asana (and more!). However, most clients who send you feedback do not have access to your internal tools. To address this problem, Marker.io has 3 key features: Guest portal: Give Guests an overview of all feedback they have reported, without giving them access to your internal project management tool. Learn more. Guest commenting: Collect comments and respond to Guest messages inside Marker.io. All comments will be logged back into your issues inside your project management tool. Learn more. Status auto-update: Change issue statuses inside your tools and automatically notify Guests that their issues have been resolved, without leaving your tools. In this article, we will be exploring the latter: status synchronisation and auto-update. Let's see how it works 👇 1. Real-time sync with your integrations. Marker.io syncs statuses from your project management tool in real time for any issues created via Marker.io in destinations like Jira, Trello, and Asana. For example, if you move a Trello card into a new list, Marker.io will pull the up-to-date list. You can even filter issues in Marker.io based on the statuses in your project management tool. 2. Resolve Guest issues, automatically. All issues reported by Guests have a custom issue status provided by Marker.io: Open, Archived & Resolved. With the Status auto-update feature, you can create rules to automatically mark an issue as resolved. Under your destination settings, go to [YOUR DESTINATION] > Settings > Status auto-update. 👀 Depending on your integration type, you might see something slightly different. More on that in the FAQ section below. Now, when someone on your team changes the status inside your destination (Trello, Jira, Asana,...), your Guest's issue will automatically be updated to Resolved. 3. Notify your Guests, automatically.
Shortly after updating your issue to the corresponding "Resolved" status, your Guests will receive a confirmation email. They are now up to date and know their issue has been taken care of. Guests with access to the Guest portal will also be able to view all statuses at a glance in real time. 4. Keep track of Guest status changes, inside your tools. Our goal is to have your developers and other team members leave your project management tool as little as possible. When you update an issue inside a tool like Trello, Jira, Github,... the Marker.io bot will add a confirmation message in your commenting sections. Frequently asked questions How do I mark an issue as archived? Simply archive or delete the issue inside your project management tool. Any issues reported by Guests will then be marked as Archived. Can I re-open issues? Yes. Just move issues inside your project management tool to a status that is NOT associated with the "Resolved" status in Marker.io. This will re-open the issue in Marker.io, and our bot will add a new comment to your project management tool to let you know. Can Marker.io change statuses of issues inside my project management tool? No. We use "read-only" permissions on your statuses. The Status auto-update feature only works one way: status changes in your integrations will be reflected in Marker.io, not the other way around. Can I update statuses manually in Marker.io? You cannot change statuses for individual issues with the status auto-update feature enabled. If you want to update statuses manually in Marker.io, you will need to disable the status auto-update feature for each specific destination. How do I enable/disable the status auto-update feature? All destinations created after April 2021 will have this setting enabled by default. You can disable it on a destination level by going into a specific destination, then Settings > Status auto-update. Which integrations support this feature?
Depending on your integration, here are the ways that issues will be marked as "Resolved":

Trello: Move a card to the specified list. By default, the last list in your Trello board is selected for the Resolved status.
Jira: Change to the specified status in the workflow (custom statuses supported). By default, any "done" status will be associated with the Resolved status.
Asana: Complete a task.
Teamwork: Complete a task.
Wrike: Complete a task.
ClickUp: Close a task.
GitHub: Close an issue.
Gitlab: Close an issue.
Clubhouse: Change to the specified status in the workflow. This integration requires manual setup. Read the guide on how to set up issue auto-update with Clubhouse.

Integrations that do not currently support status sync & auto-update are: Slack, Webhooks, Email, Bitbucket and Monday.com.
Monstra 3.0.4 has stored XSS via uploading an HTML file that has no extension.

Brief of this vulnerability: In the uploading process, Monstra's file filter allows uploading files with no extension. An HTML file with no extension can be executed in the browser as HTML, which causes stored XSS.

Test Environment: Apache/2.4.18 (Debian), PHP 5.6.38-2+ubuntu16.04.1+deb.sury.org+1 (cli)

Affected versions: <= 3.0.4

Payload: Go to http://[address]:[port]/[app_path]/admin/index.php?id=filesmanager with admin credentials. Save the HTML below with no extension, and upload it like so:

# xss
<html><head><title>Monstra XSS</title></head><body><script>alert('xss');</script></body></html>

Click the uploaded file name or go to http://[address]:[port]/[app_path]/public/uploads/[uploaded file]. Monstra CMS appends '.' to the upload file name when the file has no extension. Profit!

Reason for this vulnerability: Monstra prevents uploading PHP-style files using an extension filter in the upload process at ./plugins/box/filesmanager/filesmanager.admin.php, like below:

#./plugins/box/filesmanager/filesmanager.admin.php
if ($_FILES['file']) {
    if ( ! in_array(File::ext($_FILES['file']['name']), $forbidden_types)) {
        $filepath = $files_path.Security::safeName(basename($_FILES['file']['name'], File::ext($_FILES['file']['name'])), null, false).'.'.File::ext($_FILES['file']['name']);
        $uploaded = move_uploaded_file($_FILES['file']['tmp_name'], $filepath);
        if ($uploaded !== false && is_file($filepath)) {
            Notification::set('success', __('File was uploaded', 'filesmanager'));
        } else {
            $error = 'File was not uploaded';
        }
    } else {
        $error = 'Forbidden file type';
    }
} else {
    $error = 'File was not uploaded';
}

This filtering logic checks that the extension of the uploaded file is not in the blacklist (the $forbidden_types variable), but it does not check whether an extension exists at all. Following this logic, a file with no extension is saved with '.' appended to the end of the filename (e.g. xss -> xss.).
It can be executed in the browser (I tested in Chrome 68.0.3440.106, official build, 64-bit) as HTML and JavaScript, which causes stored XSS. Ouch. Very unfortunate to see that all these security issues are not resolved and there is no reaction from the devs. I got the CVE number for this vulnerability: CVE-2018-18694. Normally it (the vuln) should not be publicly disclosed (at first), but it seems the devs do not react, which is bad.

Only Admin can access Admin Panel

> Only Admin can access Admin Panel

How can you be sure of that? A file upload without any checks is dangerous. Anyone who can break into admin can misuse this vuln. This is an arbitrary file upload vulnerability. See https://www.owasp.org/index.php/Unrestricted_File_Upload

Okay, I will double check this for the new Monstra (Flextype): https://github.com/monstra-cms/monstra/issues/460 Thanks for your contribution!

@DanielRuf Thank you for answering instead of me. In my opinion, a file upload feature MUST have file filtering logic. There are many possibilities for stealing admin authority, so no one can be sure that the admin panel is only accessible with admin authority. So, I appreciate @Awilum for understanding the value of my report and applying it to your new project. Thank you guys! Also, an XHTML file (https://www.w3schools.com/html/html_xhtml.asp, MIME: application/xhtml+xml) could trigger the JavaScript. Please consider xhtml in your next project, Flextype. Thank you.
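The root cause is a blocklist that forgets the "no extension" case. The standard fix is an allowlist that rejects anything without a recognized extension. Here is a minimal sketch of that check in Python (the real fix belongs in the PHP filter above; the allowed set and function name here are just illustrative):

```python
# Illustrative allowlist check; extensions chosen as an example only.
ALLOWED_EXTENSIONS = {"jpg", "jpeg", "png", "gif", "pdf", "txt"}

def is_upload_allowed(filename):
    """Allow only files whose extension is explicitly on the allowlist.

    Files with no extension (the Monstra bypass, e.g. a bare "xss" file)
    are rejected, because rpartition yields no separator for them.
    """
    name, dot, ext = filename.rpartition(".")
    if not dot or not name:  # "xss" or ".htaccess": no real extension
        return False
    return ext.lower() in ALLOWED_EXTENSIONS

print(is_upload_allowed("xss"))        # False: the CVE-2018-18694 payload
print(is_upload_allowed("shell.php"))  # False: not on the allowlist
print(is_upload_allowed("photo.PNG"))  # True: case-insensitive match
```

The key design choice is inversion: instead of enumerating what is forbidden (and inevitably missing a case), enumerate what is permitted.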
Answer by Pavel Nabatchikov · Jul 20, 2016 at 09:08 PM Those are keyword phrases that usually have 2+ words (the definition might vary depending on the intent of the search performed) and most often convert better than short keyword phrases of 1 or 2 words. However, long tail terms have much lower search volumes than short terms, as people generally don't like to type long phrases when searching. It is recommended to bid on both short and long tail keywords. If you bid only on long tail terms, you might not get enough impressions, clicks and conversions. If your budget is very small, however, the long tail terms you have and their corresponding search volumes may be enough to accommodate your spend; in that case you don't need to bid on shorter keyword phrases at all. Answer by Dustin Woodard · Mar 07 at 10:07 PM The long tail of search is a difficult concept for many to fully appreciate. The long tail was first popularized by Chris Anderson, former editor of Wired Magazine. He was mostly analyzing music and products, shining a light on how users were picking and choosing songs and buying niche products that brick & mortar stores didn't offer, but sites like Amazon did. He found the old 80-20 rule was flipped on its head. The long tail is a visual concept: if you plotted out all the products or purchases on a graph by popularity, you would notice that the most popular products dominate the "head" of the graph, while the less frequently purchased products make up the tail. But the interesting finding is that the tail is very, very long, and when added together it represents 80% of all purchases (the opposite of the 80-20 rule). Having experienced the long tail of search, particularly with a recipe site I worked for, I noticed that Chris's 20-80 concept underestimated the long tail of search, because the demands people put on search engines were massive.
On the recipe site I worked for, I ranked #1 for most "head" terms like "recipes", but that head term, for example, only represented 2% of our organic search traffic (we never bought traffic). So I conducted my own study to help search marketers better understand our playing field. In 2008, Hitwise (an expensive data product at the time that used data from large ISPs to obtain search traffic info) allowed me to study their search data to determine the size of the tail from a search perspective. I published the study on the Hitwise blog as "Sizing Up the Long Tail of Search", which, unfortunately, I just discovered was lost when they updated their blog after being acquired. It was a well-referenced study, cited even by Chris Anderson himself. I will republish it on my blog soon. In that study, I discovered this about the head vs. the tail: To better illustrate how big the tail was, I stated this: "There's so much traffic in the tail it is hard to even comprehend. To illustrate, if search were represented by a tiny lizard with a one-inch head, the tail of that lizard would stretch for 221 miles." In your original question, you ask about long-tail keywords, but I think you meant keyword phrases (long-tail keywords would be misspellings or less frequently used words). Examples of "head" keyword phrases would be popular searches, like "facebook", "recipes", "auto insurance", "cheap flights" and "Jennifer Lawrence." Mid-tail might be phrases like "iphone 8", "taxes", and "italian recipes." Long-tail phrases would look like "how much money does jennifer lawrence make", "2016 tax deadline", "when does the iphone 8 come out?", and "quick and easy gluten-free italian recipe ideas with low sodium." Oftentimes when people talk about the long tail, they are referring to long-tail phrases within their own industry. The same level of granular, multi-word queries happens in every industry. The challenge of the long tail is that we often can't get data for it.
For example, Google states that it receives millions of search queries every second, yet 15% of those queries it has never seen before. And even if we had the data so we could target it, we often wouldn't have the capacity to hire writers to write about each long-tail concept or phrase--this is where popular UGC sites do well, because they basically have an army of writers and ideas on topics to cover.
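A quick way to build intuition for the head-vs-tail split described above is to model query popularity as a power law. The Zipf exponent and corpus size here are illustrative assumptions, not figures from the Hitwise study:

```python
# Sketch of the "long tail" arithmetic: with a Zipf-like popularity
# distribution, rarely searched tail terms add up to the majority of
# total traffic even though each one is individually tiny.

def zipf_weights(n_queries, exponent=1.0):
    """Relative search volume for queries ranked 1..n (power law)."""
    return [rank ** -exponent for rank in range(1, n_queries + 1)]

def head_tail_split(weights, head_fraction=0.001):
    """Share of total volume captured by the top head_fraction of queries."""
    total = sum(weights)
    head_count = max(1, int(len(weights) * head_fraction))
    head = sum(weights[:head_count]) / total
    return head, 1.0 - head

weights = zipf_weights(100_000, exponent=1.0)
head, tail = head_tail_split(weights, head_fraction=0.001)
print(f"top 0.1% of queries: {head:.0%} of volume, tail: {tail:.0%}")
```

With these assumptions the top 0.1% of queries captures only about 43% of total volume, leaving the majority scattered across the tail; flattening the exponent below 1.0 pushes even more of the volume into the tail.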
SlideShow (etc.) under Vista - I've just uploaded an image into this group's Files area showing what SlideShow looks like when running under Vista. As you can see, Vista draws a frame around the application that's much thicker than we're used to on XP, and it crops SlideShow's scroll bars. I've added a red outline to the image I've uploaded, so you can see more clearly what I'm talking about. Interestingly, Image Viewer does NOT get drawn with a thick frame, and even when you right-click on an image to bring horizontal and vertical scroll bars onto the window, it still looks fine and there are no problems. ShowMan is also fine, although the surprise there is that the title bar does NOT adopt the Vista look, nor the Windows XP look either. The minimize, maximize and close buttons look like they do when you select "Windows Classic" as your desktop theme. In a moment, I'll upload an image of that too. I've also highlighted the title bar with a red rectangle to draw attention to what I'm talking about. These aren't gigantic problems, of course, but I thought you might want to hear about them. - Interestingly, I've just noticed that I can cause the same problem with SlideShow under XP by changing the "Active Window Border" property. With XP, right-click on the desktop. Select Properties from the popup menu. Find the "Appearance" tab on the "Display Properties" dialog. Find the "Advanced" button on that tab. In the "Item" dropdown list box, select "Active Window Border". Next to it, choose something much larger than "1", like, say, "6" for the "Size" of the Active Window Border. Click OK, then OK again. Now bring up SlideShow, and load enough images into its client area that a scroll bar appears. Notice that the scroll bar is partially obscured by the non-client area's window frame, in the same way as my Vista screen shot shows. I know, I know -- why would anybody pick such a thick Active Window Border? But Vista does by default.
(In fact, I've just checked -- interestingly, that thick border is what Vista calls "1", so you can't make it any skinnier than it already is!) It's up to you of course, but I'm guessing you'll probably want to address this before Vista is released.

Brian Madsen wrote:
> These aren't gigantic problems, of course, but I thought you might
> want to hear about them.

Thanks, Brian, much appreciated. Unless the functioning of the programs is upset, I will not be making any changes in the short term. However, the comparison of the two programs will be helpful when I do come to update the software. SatSignal software - quality software written to your requirements
Oracle - User Experience/Digital Marketing: 2020 and Beyond To handle its user experience in 2020 and beyond, Oracle plans to add more cloud regions and service offerings and to implement FY20 Partner Immersion. More Cloud Regions and Service Offerings - By the end of 2020, Oracle plans to expand its Oracle Cloud Infrastructure regions and its free Cloud service tier. It intends to establish a brand-new cloud region every 23 days over the span of more than a year, reaching around 36 regions, with "dual, geographically separated regions" in Brazil, the United States, the United Kingdom, Canada, and other countries. - In line with the cloud expansion plans, it revealed a free, brand-new Cloud service tier that contains an autonomous Linux OS, an AI-powered digital voice assistant, and a limited version of Oracle's paid Autonomous Database. - According to Suhas Uliyar, VP of AI & Digital Assistant at Oracle, the AI-powered voice assistant enables customers to use voice commands to interact with enterprise applications and drive certain outcomes and actions, enhancing user experiences with conversational AI while bolstering productivity and facilitating interactions. Implementing FY20 Partner Immersion - Oracle has relaunched its Partner Immersion, an on-demand learning experience that aligns with the company's sales strategy and allows sellers to determine where shoppers "are in their journey to the cloud and how best they can take advantage of Oracle Cloud solutions now and in the future." - Under Immersion, partners will receive the FY20 Sales Immersion Badge and gain a better understanding of marketing Oracle Cloud solutions while assisting their customers in attaining new levels of accomplishment.
Oracle CX Transformation Research - With the objective of taking the pulse of the future of customer experience, Oracle conducted a survey, "2020 Vision – the Future of Customer Experience," which highlighted that virtual customer interactions are projected to rise by the year 2020, altering CX and customer relationships. - Underscoring the prime importance of powerful analytics, Oracle emphasized the need to leverage the new technology revolution, wherein the adoption of virtual reality and AI will be essential by 2020 to attain fresh insights into customer behavior and modify CX. Siebel CRM 2020 and Beyond Roadmap - Oracle has published a 'Siebel CRM 2019-2020 Statement of Direction' update that shares the company's plan to continue its central innovation themes, which are industry innovation, business agility, and autonomous CRM, informed by customer and partner feedback from the Siebel CRM Innovation Survey. - The company plans to continue improving both the customer and user experience with Open UI, which includes search as well as intuitive dashboards. - Sustaining Siebel CRM as part of its cloud or on-premises computing, the company has a continuous innovation and delivery roadmap for 2020 and beyond, with no end date for attaining autonomous CX. User Experience Lifetime Support Policy - As part of its strategy to stay committed to providing a first-rate ownership experience, the company is offering its Lifetime Support Policy for the Oracle Real User Experience technology products and solutions, along with additional offerings, under which support will continue beyond 2020. There is an indefinite period of support for the various Oracle Real User Experience Insights releases. - With this offering available under the Lifetime Support Policy, users and customers gain enhanced reliability and can drive their business and upgrade strategy with more control, choice, and certainty.
Supporting Oracle E-Business Suite (EBS) - The Oracle E-Business Suite (EBS) system is part of Oracle's on-premises suites and a means to deliver the advantages of cloud applications, including mobile capability, modern features, chatbots and AI, and improved user experiences, "without the disruption of a rip/replace." The company intends to remain devoted to promoting Oracle EBS through the year 2030. - The company plans to support Oracle EBS in the future with new functionality, including integrating Endeca into search, performing online patching to reduce downtime, and enhancing the UI along with other value-add structural improvements to advance the user experience. Enhanced Cloud Collaboration - Oracle is attempting to shift its focus towards the platform space. It entered the cloud market by collaborating with Microsoft in 2019, indicating its next steps towards 2020. Through the partnership with Microsoft, Oracle plans to offer a "highly optimized, best-of-both-clouds experience." - With the interoperability partnership, Oracle Cloud provides its customers with a "one-stop shop for all the cloud services and applications they need to run their business." Additionally, customers get unified identity and access management through a comprehensive single sign-on experience, along with automated user provisioning. - In the future, with the Microsoft partnership, Oracle plans to implement new capabilities in terms of a unified SSO user experience, cross-cloud interconnect, and a collaborative support model involving Microsoft, Oracle, and others. To obtain insights on Oracle's plans for managing its user experience and digital marketing in 2020 and beyond, we explored its official website and searched through the company's annual report and financial statements.
We also consulted user experience blog posts, press releases, and related news articles from Reuters, CBR Online, Forbes, TechCrunch, etc. Additionally, we examined cloud computing industry reports from Forrester, Gartner, and others.
$05/853F B7 65        LDA [$65],y   [$06:8689]   ;level data in ROM map
$05/8541 85 00        STA $00       [$00:0000]
$05/8543 AA           TAX
$05/8544 29 0F        AND #$0F
$05/8546 8D 2B 19     STA $192B     [$00:192B]
$05/8549 8A           TXA
$05/854A 4A           LSR A
$05/854B 4A           LSR A
$05/854C 4A           LSR A
$05/854D 4A           LSR A
$05/854E 29 07        AND #$07                   ;max 8 variables
$05/8550 AA           TAX
$05/8551 BF DB 84 05  LDA $0584DB,x [$05:84E1]
$05/8555 AE DA 0D     LDX $0DDA     [$00:0DDA]
$05/8558 10 02        BPL $02       [$855C]
$05/855C CD DA 0D     CMP $0DDA     [$00:0DDA]
$05/855F D0 02        BNE $02       [$8563]
$05/8563 8D DA 0D     STA $0DDA     [$00:0DDA]   ;$0DDA store

That would look a lot better with a monospace font... The RAM map tells me $0DDA is the music select, which happens to be written to $2142. The first ROM address in the code is in the level data, and the Y reg = 02 when accessing, so it's the third byte in the level data, I guess -- the high 4 bits of it, anyway, of which there are 8 possible values. At $0584DB are the actual music bytes themselves. The one being accessed is the boss track (05h). Changing these 8 bytes to 05h, for example, makes every ingame level have boss music. I suppose you're in for some manual pointer work since LM doesn't want to change it. EDIT: I don't think I was very clear... To change the music of a level manually: 1. Look up the 24-bit pointer according to the level number at: 2E200 $05:E000 1,536 bytes Pointer Level data pointer table (Layer 1) 2. Change the high 4 bits of the third byte at the pointed-to address (max 8 values) to your track of choice. SMW doesn't care if it's a boss level. I tried it with the first boss battle (Lemmy?) and there's no issue.
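The two-step procedure in the EDIT can be sketched in Python. This is a hypothetical helper, assuming a headered .smc image (a 512-byte copier header is what makes file offset 2E200 line up with LoROM address $05:E000) and standard LoROM bank mapping; none of it comes from Lunar Magic:

```python
# Sketch of the manual music-change procedure from the post, for a
# headered (512-byte .smc header) LoROM image. Offsets come from the
# post; the LoROM conversion is a standard assumption, not verified
# against any particular ROM.

HEADER = 0x200        # copier header size
PTR_TABLE = 0x05E000  # SNES address of the layer-1 level pointer table

def lorom_to_file(snes_addr, header=HEADER):
    """Convert a LoROM SNES address (bank:offset) to a file offset."""
    bank, addr = snes_addr >> 16, snes_addr & 0xFFFF
    return (bank & 0x7F) * 0x8000 + (addr & 0x7FFF) + header

def set_level_music(rom, level, track):
    """Set the high nibble of the level data's 3rd byte (8 tracks max,
    per the AND #$07 in the disassembly)."""
    assert 0 <= track <= 7
    p = lorom_to_file(PTR_TABLE) + level * 3
    # read the 24-bit little-endian pointer to the level data
    snes_ptr = rom[p] | (rom[p + 1] << 8) | (rom[p + 2] << 16)
    q = lorom_to_file(snes_ptr) + 2  # third byte of level data
    rom[q] = (rom[q] & 0x0F) | (track << 4)
```

For example, `set_level_music(rom, level=0, track=5)` would give the first level the boss track (05h), matching the all-boss-music experiment described above.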
There's never a good time for a short circuit in your car. When you rely on your vehicle to get to work, school or the store, any car problem can be a setback. But if you have the right skills, materials and tools to solve an electrical problem in your car, you can be back on the road in no time. What type of solder to use The first thing you'll need is the right wire to repair the problem. Not any wire will do, and finding the right kind is crucial if you want to avoid bigger problems down the road. "Be sure to use automotive-grade stranded wire." You'll want a 60-40 rosin-core solder, Popular Mechanics explained. This means it is 60 percent tin and 40 percent lead. The rosin core contains a flux, which will melt before the metal begins to. As the flux melts, it will coat the wire and allow the joint to weld together smoothly and strongly. Without it, the solder is more likely to ball up into a blob of metal that's extremely difficult to work with. How to melt the solder To turn the solder into a malleable and useful material, you'll need to apply some heat. The best tool for the job is a soldering iron, such as Master Appliance's EconoIron. Aside from your solder and soldering iron, you'll also need: - Wire strippers. - Automotive-grade stranded wire that's the same gauge as the old wire. - Electrical tape or PVC shrink tube. First, strip the sections of wire you plan on soldering together. Then twist them around each other until secure. If you're using PVC shrink tube, slip this over one of the wires. Next, get ready with your solder and iron. Before you start working on the wire, you'll need to prepare the tip. Heat up the tip until it's hot enough to steam when wet, then clean off any old solder by wiping it with a damp sponge. After it's clean, tin the tip of the iron. This means coating it with solder. You want to have some solder on the tip because it improves heat transfer, which makes the task go faster, according to Instructables.
It's best to get the wires soldered together quickly to avoid damaging them with prolonged heat. When your soldering iron is all prepped and ready to go, it's time to solder the wires. Apply the solder directly to the joined wires rather than to the tip. Be sure there's enough solder remaining on the tip throughout; you may need to re-tin at some point during the process. Once the joint is coated sufficiently, slide the shrink tube over it and heat it so it makes a snug fit around the wire. If you're using electrical tape instead, apply it once the solder has cooled naturally.
+ Post New Thread Try hovering over tabs in this example: I made a quick fix in the tab panel class... The Ext.Action example in 3.0 RC1.1 (as well as in RC 1) is broken - the buttons to disable the Action, change its icon and text are not added to the... If you create a color menu like this: Your select handler will get called... When a tab is added that requires the autoscroll arrows to be rendered to the tab display area at the top of a TabPanel with enableAutoScroll: true,... I found what I believe to be a bug in the method createResponseObject in the ext adapter. The problem code is: I am trying to achieve a flex based horizontal layout of fields and hbox is giving me issues. Here is what I am trying to achieve: You can refer to my thread in the help section -- but I will do a better job of laying out the problem in THIS thread. This may or may not be... Originally questioned here: I encounter similar problems with Ext 3.x on FF 3.0.8. ... and put it to Ext.data.GroupingStore config. But grid loads with GET method. I'm using extJS 3.0 and the jquery adapter while trying to include an HTMLEditor i get an Ext.TaskMgr is undefined error. I was able to fix this... I realize the xaction parameter is needed to support some of the new features in 3.0, but it's causing problems with my application when it is being... Hi guys... I've got what I *think* is a bug (it could be that my expectations are incorrect though)... here's some test code; Just a little thing that the builder cleans up, but for those of us with our own builders... When a Ext.tree.TreePanel is configured with the frame property set to true, the panel gets the normal blue background but the tree is rendered with... BasicForm.isDirty() may return true when using a tabbed form with deferredRender: true. This happens because Ext.form.Field's originalValue property... Form loaded in a panel. The form contains an htmleditor. 
When I try to call: In a panel I'm trying to add the following item: fieldLabel: 'Account Name', style: 'font-weight: bold;' And I get an... Row Editor Grid Example ---- can't show update textfield TabPanel Scroller Menu ---- can't show scroll Please replace the 3 occurrences of Ext.data.READ in Ext.data.DataProxy with Ext.data.Api.READ.
Let your desktop breathe! Hide your desktop icons with a single click! "HiddenMe Hides all Your Mac Desktop Icons in One Click" - Lifehacker "a button to hide all the clutter with a single click." - Business Insider Now a universal app! Ratings and Reviews Does what it says Thanks to HiddenMe, I now have a clean desktop anytime I want. To see my folders, I simply click on the HiddenMe icon in the menu bar and everything reappears on my desktop. Very functional and easy to use. I have encountered a problem, though: when I have the stuff on my desktop hidden (that is, when HiddenMe is active), my wallpaper refuses to change, and it begins to tamper with the wallpaper on my other desktops. Maybe there's a way around it, but I'd caution anybody who likes to use the "change picture" option (in System Preferences > Desktop & Screen Saver) to skip this app. As long as I don't have the "change picture" option selected, however, HiddenMe works fine. Does what it says with a caveat (Screen Recording prompt) So, it hides icons. That works just fine. The pro version works with multiple desktops, which also works fine. The implementation probably needs some work or explanation. I'm not a developer, but I'm reminded of John Siracusa talking about his app SwitchGlass (which is awesome, btw) and the implementation of grabbing the desktop background for use in the app's Preferences window. One of the options was to use Quartz Display Services, which can grab an image of your desktop background, but it requires screen-recording permission to do so. It's a messy implementation and the easy way out, and without proper notification it causes freak-outs and 1-star reviews. I haven't done a deep dive to see if it's sending data out, but it's something worth considering. So, I assume the developer is using screen recording to grab the current desktop background and then display that image on top of everything, hiding your icons.
I'm willing to give the developer the benefit of the doubt, but this is conjecture, because as a non-developer, this is how I would do it. I think the developer needs to respond to this and listen to Accidental Tech Podcast episode 356 starting at 1:32:17, where John Siracusa describes this exact problem. HiddenMe Pro prompt I use HiddenMe several times a week when I do screenshots for different meetings. It's nice to be able to quickly click on the icon and have a blank screen ready to go, with the choice of a couple of different colors. I don't like needing to click on "Later" every time I open the app and it detects two screens and asks if I want to purchase the "Pro" HiddenMe. There needs to be a switch for "Don't ask this again." Other than that, it's a great app that does what I think it should. Developer Response: Great point, Randy! We'll include an option to let the user avoid that alert. Data Not Collected The developer does not collect any data from this app. Privacy practices may vary, for example, based on the features you use or your age. Learn More - 380.6 KB - Requires macOS 10.15 or later. - Age Rating - Copyright © 2012-2022 Appersian. All rights reserved. - In-App Purchases - Upgrade to Pro $1.99
Percona Live Europe is now more than a week behind us. I left Amsterdam with a positive thought: it has been the best European event for MySQL so far. Maybe the reason is that I saw the attendance increasing, or maybe it was the quality of the talks, or because I heard others making the same comment, and I also saw a reinvigorated MySQL ecosystem. There are three main aspects I want to highlight. 1. MySQL 5.7 and the strong presence of the Oracle/MySQL team There have been good talks and keynotes on MySQL 5.7. It is a sign of Oracle's strong commitment to MySQL. I think there is an even more important point: the most interesting features in 5.7, and the projects still in MySQL Labs, derive from or are in some way inspired by features available from other vendors. Some examples: - The JSON datatype from MySQL and MariaDB -- two fairly different approaches, but definitely an interesting addition - Improvements in the optimizer from MySQL and MariaDB. There is a pretty long list of differences; this slide deck can help understand them a bit better... - Improvements to semi-sync replication from MySQL and WebScaleSQL - Automatic failover with replication from MySQL and MHA - Multi-source replication from MySQL and MariaDB 10 - Group replication in MySQL and MariaDB 10 -- here things differ quite a lot, but the concept is similar - MySQL Router in MySQL and MaxScale -- again, a different approach but similar concepts to achieve the same results My intent here is not to compare the features; I am simply pointing out that the competition among projects in the MySQL ecosystem is at least inspirational and can offer great advantages to the end user. Of course, the other side of the coin is the creation of almost identical features, and the addition of more confusion and incompatibilities among the distributions. 2.
The Pluggable Storage Engine Architecture is alive and kicking Oracle's commitment to improving InnoDB has been great so far, and hopefully InnoDB will get even better in the future. That said, the Pluggable Storage Engine Architecture was a unique feature for a long time, and there have been two recent additions to the list of storage engines that have been around for a long time. Today TokuDB, Infobright, InfiniDB, and ScaleDB share with Deep and RocksDB the advantage of being pluggable into MySQL. RocksDB is also pluggable into MongoDB, and even more important, it has been designed with a specific use case in mind. 3. Great support from the users The three aspects have similar weight in measuring the health of MySQL, but this is my favourite, because it demonstrates how important MySQL is for some of the most innovative companies on the planet. Despite Kristian Koehntopp's great keynote showing us how boring the technology is at Booking.com, nobody really thought it was true. Using a stable and mature product like MySQL is not boring; it is wise. But this was not the only presentation that we enjoyed from the end users. Many showed great use of MySQL, especially compared to the levels of scalability and performance that NoSQL databases (these two combined aspects being the number 1 reason for using a NoSQL DB) struggle to produce with certain workloads. I am looking forward to seeing the next episode, at Percona Live 2016 in Santa Clara.