Saturday, February 15, 2020

GitHub CLI brings GitHub to the command line. The command-line interface to the code hosting platform can be used for issues and pull requests

Source: https://www.infoworld.com/article/3526424/github-cli-brings-github-to-the-command-line.html
GitHub has launched GitHub CLI, a new command-line interface that promises a more seamless way to work with the code hosting platform. GitHub CLI is available immediately in a beta version.
Installable on Windows, Linux, and macOS, GitHub CLI can be used to access issues and pull requests from the terminal, where developers are already working with git and their code. GitHub CLI offers the following benefits:
  • Easy creation of pull requests and issues with no need to leave the command line.
  • Fast status checks. You can see the status of open issues and pull requests and find out what awaits review.
  • Easy navigation and filtering of issues and pull requests, which can be opened in the browser.
For pull requests, GitHub CLI automatically creates a fork when developers lack one, then pushes a branch and creates a pull request to get a change merged. Developers can later view a snapshot of what has happened since the pull request was created.
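To get a feel for the workflow, here is a minimal sketch that drives the beta from Python. It assumes gh is installed, on the PATH, and already authenticated, and it sticks to subcommands the beta documents (gh issue list and gh pr status); the subprocess wrapper itself is purely illustrative.

    # Minimal sketch: calling the GitHub CLI beta from Python via subprocess.
    # Assumes `gh` is installed, on the PATH, and already authenticated.
    import subprocess

    def gh(*args: str) -> str:
        """Run a gh subcommand and return its standard output."""
        result = subprocess.run(
            ["gh", *args], check=True, capture_output=True, text=True
        )
        return result.stdout

    print(gh("issue", "list", "--limit", "5"))  # the five most recent open issues
    print(gh("pr", "status"))                   # status of pull requests relevant to you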
GitHub is seeking user feedback on GitHub CLI. To provide it, developers can create an issue in the GitHub CLI open source repository or offer feedback on a Google form. The company wants to know what commands users want and what is clunky or missing.

Wednesday, October 9, 2019

Source: https://sdtimes.com/softwaredev/go-programming-language-teams-starts-implementing-go-2-proposals/

Go programming language team starts implementing Go 2 proposals

The Go programming language team is outlining its next steps as it makes its way to Go 2.
Go 2 has been in the works for a couple of years now. It has been about seven years since Go first reached 1.0, and the team has been working towards 2.0 ever since. However, according to the team, Go 2 won’t happen as a big release. It will be an ongoing effort of incremental releases and proposal implementations.
The team is currently working on Go 1.13, which is expected to be ready by early August. According to the team, this is the first release to include concrete language changes. The changes come from a large list of Go 2 proposals.
“We wanted our initial selection of proposals to be relatively minor and mostly uncontroversial, to have a reasonably high chance of having them make it through the process. The proposed changes had to be backward-compatible to be minimally disruptive since modules, which eventually will allow module-specific language version selection, are not the default build mode quite yet. In short, this initial round of changes was more about getting the ball rolling again and gaining experience with the new process, rather than tackling big issues,” the Go team wrote in a post.
The initial list of proposals included general Unicode identifiers, binary integer literals, separators for number literals, and signed integer shift counts. While general Unicode identifiers didn’t make the cut this time, binary integer literals were expanded, which led to an overhaul of the language’s number literal syntax, the team explained. In addition, the team added error inspection to the Go 2 draft design proposal, and it has been partially accepted.
“The goals we have for Go today are the same as in 2007: to make software development scale. The three biggest hurdles on this path to improved scalability for Go are package and version management, better error handling support, and generics,” the team wrote.
Next, the team will focus on Go 1.14. The proposals the team will be looking at for this release include a built-in Go error check function, the ability to embed overlapping interfaces, the ability to diagnose string(int) conversions in go vet, and the adoption of crypto principles.

“Unless there are strong reasons to not even proceed into the experimental phase with a given proposal, we are planning to have all these implemented at the start of the Go 1.14 cycle (beginning of August, 2019) so that they can be evaluated in practice. Per the proposal evaluation process, the final decision will be made at the end of the development cycle (beginning of November, 2019),” the team wrote. 

Sunday, September 8, 2019

IBM Trusted AI toolkits for Python combat AI bias. IBM has released Python toolkits for identifying and mitigating bias in training data and machine learning models


Source: https://www.infoworld.com/article/3434009/ibm-trusted-ai-toolkits-for-python-combat-ai-bias.html
IBM Trusted AI toolkits for Python combat AI bias
Researchers at IBM are developing ways to reduce bias in machine learning models and to identify bias in data sets that train AI, with the goal of avoiding discrimination in the behavior and decisions made by AI-driven applications.
As a result of this research, called the Trusted AI project, the company has released three open source, Python-based toolkits. The latest toolkit, AI Explainability 360, which was released earlier this month, includes algorithms that can be used to explain the decisions made by a machine learning model. 
The three IBM Trusted AI toolkits for Python are all accessible from GitHub. Some information about the three packages: 
  • AI Explainability 360, aka AIX360, provides algorithms that cover the different dimensions of explainability of machine learning models, along with proxy explainability metrics. The extensible toolkit can help users understand how these models predict labels by various means throughout the AI application lifecycle. Algorithmic research is translated from the lab into actual practice for domains including finance, human capital management, education, and healthcare. The AIX360 toolkit was introduced on August 8, 2019, and its code and API documentation are available on GitHub.
  • AI Fairness 360, or AIF360, provides metrics to check for unwanted bias in data sets and machine learning models, and contains algorithms to mitigate that bias. With this toolkit, IBM aims to prevent the development of machine learning models that could give certain privileged groups a systematic advantage. Bias in training data, due to either prejudice in labels or under-sampling or over-sampling, leads to models with biased decision-making. Introduced in September 2018, AIF360 includes nine algorithms that can be called in a standard way, and it contains tutorials on credit scoring, predicting medical expenditures, and classifying face images by gender. Code and documentation are available on GitHub; a minimal usage sketch follows this list.
  • Adversarial Robustness Toolbox is a Python library supporting developers and researchers in defending DNNs (deep neural networks) against adversarial attacks, and thus making AI systems more secure and trustworthy. DNNs are vulnerable to adversarial examples, which are inputs (say, images) deliberately modified to produce a desired response from the DNN. The toolbox can be used to build defense techniques and deploy practical defenses. The approach for defending DNNs involves measuring model robustness and hardening models, with techniques such as preprocessing DNN inputs, augmenting training data with adversarial samples, and leveraging runtime detection methods to flag any inputs that might have been tampered with by an adversary. Released by IBM Research Ireland in April 2018, the Adversarial Robustness Toolbox is also available on GitHub.
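For a flavor of how AIF360 is called, here is a minimal sketch that computes one bias metric and applies one mitigation algorithm. The class and method names (BinaryLabelDataset, BinaryLabelDatasetMetric, Reweighing) follow the toolkit’s documented API; the toy data and the choice of “sex” as the protected attribute are illustrative assumptions.

    # Minimal AIF360 sketch: measure bias in a toy data set, then reweigh it.
    # The six-row DataFrame and protected attribute are invented for illustration.
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    df = pd.DataFrame({
        "sex":   [0, 0, 0, 1, 1, 1],        # protected attribute (0 = unprivileged)
        "score": [55, 60, 70, 60, 75, 80],
        "label": [0, 0, 1, 0, 1, 1],        # favorable outcome = 1
    })
    dataset = BinaryLabelDataset(
        df=df, label_names=["label"], protected_attribute_names=["sex"]
    )
    privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

    # Disparate impact: ratio of favorable-outcome rates, unprivileged/privileged.
    metric = BinaryLabelDatasetMetric(
        dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
    )
    print("Disparate impact:", metric.disparate_impact())

    # One of the nine mitigation algorithms: reweigh examples to remove the bias.
    fairer = Reweighing(
        unprivileged_groups=unprivileged, privileged_groups=privileged
    ).fit_transform(dataset)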
Moving forward, IBM also is pondering the release of a tool for accountability of AI models. The intent is that over the lifecycle of a model, provenance and a data trail would be maintained, so the model can be trusted.

Sunday, June 9, 2019

Reinforcement learning explained. Reinforcement learning uses rewards and penalties to teach computers how to play games and robots how to perform tasks independently

Source: https://www.infoworld.com/article/3400876/reinforcement-learning-explained.html


Reinforcement learning explained
You have probably heard about Google DeepMind’s AlphaGo program, which attracted significant news coverage when it beat a 2-dan professional Go player in 2015. Later, improved evolutions of AlphaGo went on to beat a 9-dan (the highest rank) professional Go player in 2016, and the #1-ranked Go player in the world in May 2017. A new generation of the software, AlphaZero, was significantly stronger than AlphaGo in late 2017, and not only learned Go but also chess and shogi (Japanese chess).
AlphaGo and AlphaZero both rely on reinforcement learning to train. They also use deep neural networks as part of the reinforcement learning network, to predict outcome probabilities.
In this article, I’ll explain a little about reinforcement learning, how it has been used, and how it works at a high level. I won’t dig into the math, or Markov Decision Processes, or the gory details of the algorithms used. Then I’ll get back to AlphaGo and AlphaZero.

What is reinforcement learning?

There are three kinds of machine learning: unsupervised learning, supervised learning, and reinforcement learning. Each of these is good at solving a different set of problems.
Unsupervised learning, which works on a complete data set without labels, is good at uncovering structures in the data. It is used for clustering, dimensionality reduction, feature learning, and density estimation, among other tasks.
Supervised learning, which works on a complete labeled data set, is good at creating classification models for discrete data and regression models for continuous data. The machine learning or neural network model produced by supervised learning is usually used for prediction, for example to answer “What is the probability that this borrower will default on his loan?” or “How many widgets should we stock next month?”
Reinforcement learning trains an actor or agent to respond to an environment in a way that maximizes some value. That’s easier to understand in more concrete terms.
For example, AlphaGo, in order to learn to play (the action) the game of Go (the environment), first learned to mimic human Go players from a large data set of historical games (apprenticeship learning). It then improved its play through trial and error (reinforcement learning), by playing large numbers of Go games against independent instances of itself.
Note that AlphaGo doesn’t try to maximize the size of the win, like dan (black belt)-level human players usually do. It also doesn’t try to optimize the immediate position, like a novice human player would. AlphaGo maximizes the estimated probability of an eventual win to determine its next move. It doesn’t care whether it wins by one stone or 50 stones.

Reinforcement learning applications

Learning to play board games such as Go, shogi, and chess is not the only area where reinforcement learning has been applied. Two other areas are playing video games and teaching robots to perform tasks independently.
In 2013, DeepMind published a paper about learning control policies directly from high-dimensional sensory input using reinforcement learning. The applications were seven Atari 2600 games from the Arcade Learning Environment. A convolutional neural network, trained with a variant of Q-learning (one common method for reinforcement learning training), outperformed all previous approaches on six of the games and surpassed a human expert on three of them.
The convolutional neural network’s input was raw pixels and its output was a value function estimating future rewards. The convolutional-neural-network-based value function worked better than more common linear value functions. The choice of a convolutional neural network when the input is an image is unsurprising, as convolutional neural networks were designed to mimic the visual cortex.
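The paper’s agent approximated this value function with a deep network, but the underlying Q-learning update is easy to show in tabular form. Below is a minimal, self-contained sketch on a toy five-state chain; the environment and the hyperparameters are invented for illustration, not taken from the paper.

    # Tabular Q-learning on a toy chain: move right from state 0 to reach the
    # rewarding terminal state 4. (Illustrative only; DQN replaces the table
    # with a convolutional network over raw pixels.)
    import random

    N_STATES, ACTIONS = 5, (0, 1)            # action 0 = left, action 1 = right

    def step(state, action):
        nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        return nxt, reward, nxt == N_STATES - 1   # next state, reward, done

    alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount, exploration
    Q = [[0.0, 0.0] for _ in range(N_STATES)]

    for episode in range(500):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action choice (see the exploration discussion below).
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a').
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2

    print(Q)   # right-moving actions should carry the higher values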
DeepMind has since expanded this line of research to the real-time strategy game StarCraft II. The AlphaStar program learned StarCraft II by playing against itself to the point where it could almost always beat top players, at least for Protoss versus Protoss games. (Protoss is one of the alien races in StarCraft.)
Robotic control is another problem that has been attacked with deep reinforcement learning methods, meaning reinforcement learning plus deep neural networks, with the deep neural networks often being convolutional neural networks trained to extract features from video frames. Training with real robots is time-consuming, however. To reduce training time, many of the studies start off with simulations before trying out their algorithms on physical drones, robot dogs, humanoid robots, or robotic arms.

How reinforcement learning works

We’ve already discussed that reinforcement learning involves an agent interacting with an environment. The environment may have many state variables. The agent performs actions according to a policy, which may change the state of the environment. The environment or the training algorithm can send the agent rewards or penalties to implement the reinforcement. These may modify the policy, which constitutes learning.
For background, this is the scenario explored in the early 1950s by Richard Bellman, who developed dynamic programming to solve optimal control and Markov decision process problems. Dynamic programming is at the heart of many important algorithms for a variety of applications, and the Bellman equation is very much part of reinforcement learning.
A reward signifies what is good immediately. A value, on the other hand, specifies what is good in the long run. In general, the value of a state is the expected sum of future rewards. Action choices—policies—need to be computed on the basis of long-term values, not immediate rewards.
Effective policies for reinforcement learning need to balance greed or exploitation—going for the action that the current policy thinks will have the highest value—against exploration, randomly driven actions that may help improve the policy. There are many algorithms to control this balance: some explore for a small fraction ε of the time, and some start with pure exploration and slowly converge to nearly pure greed as the learned policy becomes strong.
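Here is a hypothetical sketch of the two styles just described: an epsilon-greedy selector plus a decaying-epsilon schedule. The decay rate and floor are arbitrary, assumed values.

    # Epsilon-greedy selection with a decaying schedule (illustrative values).
    import random

    def epsilon_greedy(q_values, epsilon):
        """With probability epsilon explore (random action); otherwise exploit."""
        if random.random() < epsilon:
            return random.randrange(len(q_values))
        return max(range(len(q_values)), key=lambda a: q_values[a])

    epsilon, decay, floor = 1.0, 0.995, 0.01    # start fully exploratory
    for episode in range(1000):
        # ... run one episode, choosing actions via epsilon_greedy(q, epsilon) ...
        epsilon = max(floor, epsilon * decay)   # converge toward near-pure greed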
There are many algorithms for reinforcement learning, both model-based (e.g. dynamic programming) and model-free (e.g. Monte Carlo). Model-free methods tend to be more useful for actual reinforcement learning, because they are learning from experience, and exact models tend to be hard to create.
If you want to get into the weeds with reinforcement learning algorithms and theory, and you are comfortable with Markov decision processes, I’d recommend Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto. You want the 2nd edition, revised in 2018.

AlphaGo and AlphaZero

I mentioned earlier that AlphaGo started learning Go by training against a database of human Go games. That bootstrap got its deep-neural-network-based value function working at a reasonable strength.
For the next step in AlphaGo’s training, it played against itself—a lot—and used the game results to update the weights in its value and policy networks. That made the strength of the program rise above most human Go players.
At each move while playing a game, AlphaGo applies its value function to every legal move at that position, to rank them in terms of probability of leading to a win. Then it runs a Monte Carlo tree search algorithm from the board positions resulting from the highest-value moves, picking the move most likely to win based on those look-ahead searches. It uses the win probabilities to weight the amount of attention it gives to searching each move tree.
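That weighting step can be sketched in a few lines. This is schematic, not DeepMind’s code: given hypothetical win probabilities from the value network, it splits a fixed budget of look-ahead simulations across candidate moves in proportion to those probabilities.

    # Schematic only: allocate look-ahead simulations in proportion to the
    # value network's predicted win probability for each legal move.
    def allocate_searches(move_win_probs, budget=1000):
        total = sum(move_win_probs.values())
        return {m: round(budget * p / total) for m, p in move_win_probs.items()}

    # Hypothetical value-network output for three legal moves:
    probs = {"D4": 0.52, "Q16": 0.31, "C3": 0.17}
    print(allocate_searches(probs))   # -> {'D4': 520, 'Q16': 310, 'C3': 170}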
The later AlphaGo Zero and AlphaZero programs skipped training against the database of human games. They started with no baggage except for the rules of the game and reinforcement learning. At the beginning they played random moves, but after learning from millions of games against themselves they played very well indeed. AlphaGo Zero surpassed the strength of AlphaGo Lee in three days by winning 100 games to 0, reached the level of AlphaGo Master in 21 days, and exceeded all the old versions in 40 days.
AlphaZero, as I mentioned earlier, was generalized from AlphaGo Zero to learn chess and shogi as well as Go. According to DeepMind, the amount of reinforcement learning training the AlphaZero neural network needs depends on the style and complexity of the game, taking roughly nine hours for chess, 12 hours for shogi, and 13 days for Go, running on multiple TPUs. In chess, AlphaZero’s guidance is much better than that of conventional chess-playing programs, reducing the tree space it needs to search: AlphaZero needs to evaluate only tens of thousands of positions per move, versus tens of millions for Stockfish, the strongest handcrafted chess engine.
These board games are not easy to master, and AlphaZero’s success says a lot about the power of reinforcement learning, neural network value and policy functions, and guided Monte Carlo tree search. It also says a lot about the skill of the researchers, and the power of TPUs.
Robotic control is a harder AI problem than playing board games or video games. As soon as you have to deal with the physical world, unexpected things happen. Nevertheless, there has been progress on this at a demonstration level, and the most powerful approaches currently seem to involve reinforcement learning and deep neural networks.

Sunday, April 28, 2019

Node.js vs. PHP: An epic battle for developer mindshare. How do the old guard and the upstart darling of the server-side web stack up against each other? Let’s compare

Source: https://www.infoworld.com/article/3166109/nodejs-vs-php-an-epic-battle-for-developer-mindshare.html

Node.js vs. PHP: An epic battle for developer mindshare

How do the old guard and the upstart darling of the server-side web stack up against each other? Let’s compare

It’s a classic Hollywood plot: the battle between two old friends who went separate ways. Often the friction begins when one pal sparks an interest in what had always been the other pal’s unspoken domain. In the programming language version of this movie, it’s the introduction of Node.js that turns the buddy flick into a grudge match: PHP and JavaScript, two partners who once ruled the internet together but now duke it out for the mind share of developers.
In the old days, the partnership was simple. JavaScript handled little details on the browser, while PHP managed all the server-side tasks between port 80 and MySQL. It was a happy union that continues to support many crucial parts of the internet. Between WordPress, Drupal, and Facebook, people can hardly go a minute on the web without running into PHP.
Then some clever kid discovered he could get JavaScript running on the server. Suddenly, there was no need to use PHP to build the next generation of server stacks. One language was all it took to build Node.js and the frameworks running on the client. “JavaScript everywhere” became the mantra for some.
Since that discovery, JavaScript has exploded. Node.js developers can now choose between an ever-expanding collection of excellent frameworks and scaffolding: React, Vue, Express, Angular, Meteor, and more. The list is long and the biggest problem is choosing between excellent options.
Some look at the boom in Node.js as proof that JavaScript is decisively winning, and there is plenty of raw data to bolster that view. GitHub reports that JavaScript is the most popular language in its collection of repositories, and JavaScript’s kissing cousin, TypeScript, is rapidly growing too. Many of the coolest projects are written in JavaScript and many of the most popular hashtags refer to it. PHP, in the meantime, has slipped from third place to fourth in GitHub’s ranking, and it has probably slipped even further in the count of press releases, product rollouts, and other heavily marketed moments.
But hype fades and software can live on for decades. Most of the PHP code base isn’t going to migrate and it continues to serve up large portions of the text we read each day. By some estimates, 40 percent of the pages we view begin, in some form, with PHP. Part of this is because PHP continues to be reborn. In the last few years, the guts of the systems running PHP have been completely rewritten. It’s not the same PHP code that ran your grandparent’s website.
PHP’s zippy, just-in-time compiler is delivering answers faster than ever thanks to the same smart techniques that powered the Node.js revolution. Now PHP 7.2 and the HHVM offer many of the same clever on-the-fly optimizations that V8 brought to Chrome and Node.js. Not only that, but HHVM has Hack, a clever PHP dialect that offers full support for sophisticated programming features such as lambdas, generics, and collections. So if you need these features, you don’t need to search for a more full-featured stack.
Of course, the ending isn’t written yet. For every coder crowing about the purity and youth of Node.js and the simplicity of JavaScript everywhere, there’s another who’s happy with the deep code base and long-understood stability of PHP. Will the old codger beat back the server-side upstart? Will JavaScript topple its old friend to achieve world domination? Put another batch of popcorn in the microwave and sit back.

Where PHP wins: Mixing code with content

You’re typing along, pouring thoughts into text for your website, and you want to add a branch to the process, a little if-then statement to make it look pretty, say, depending on some parameter in the URL. Or maybe you want to mix in text or data from a database. With PHP, you open up the magic PHP tags and start writing code within seconds. No need for templates—everything is a template! No need for extra files or elaborate architectures, only programmable logical power at your fingertips.

Where Node wins: Separating concerns

Mixing code with content is a crutch that can end up crippling you. Sure, it’s fun to mix code in with HTML the first two or three times you do it. But soon your code base becomes a tangled mess of logic. Real programmers add structure and separate the cosmetic layer from the logical layer. It’s cleaner for new programmers to understand and easier to maintain. The frameworks running on Node.js are built by programmers who know that life is better when the model, view, and controller are separate.

Where PHP wins: Deep code base

The web is filled with PHP code. The most popular platforms for building websites (WordPress, Drupal, Joomla) are written in PHP. Not only are the platforms open source, but so are most of their plugins. There’s PHP code everywhere, and it’s waiting for you to download, modify, and use for your needs.

Where Node wins: More modern features

Sure, there are thousands of great open source PHP files, but some are 12-year-old WordPress plug-ins hoping and praying that someone will download them. For every modern version of Symfony, there is a dusty, long-forgotten library that no one updates.
Who wants to spend hours, days, or weeks monkeying with code that hasn’t been updated in years? Node.js plug-ins are not only newer, they were built with full knowledge of the latest architectural approaches. They were built by programmers who understand that modern web apps should push most of the intelligence to the client.
And while JavaScript has many little idiosyncrasies that drive some mad, for the most part it is a modern language that sports a modern syntax and a few useful features like closures. You can reconfigure and extend it easily, making powerful libraries like jQuery possible. You can pass functions around like objects. Why limit yourself?

Where PHP wins: Simplicity (sort of)

There’s not much to PHP: a few variables and basic functions for juggling strings and numbers. It’s a thin layer that doesn’t do much except move the data from port 80 to the database and back. That’s what it’s supposed to do. A modern database is a magical tool, and it makes sense to leave the heavy lifting to it. PHP is the right amount of complexity for a job that’s not supposed to be complex.
Then again, if you’re a programmer who wants to do more than interact with a database and format the results, you can now do more with PHP without holding your nose. Facebook’s HHVM adds support for Hack, a complete language filled with modern features like type annotations, generics, and lambda expressions. Using this limits your code to running only on the HHVM, but that’s not the worst thing in the world. It’s very fast.

Where Node wins: Dozens of language options

If PHP users are happy to get access to Hack, they should consider moving to the world of Node.js because many major languages can be cross-compiled to run in JavaScript. There are well-known options like Java, C#, or Lisp and dozens of others like Scala, OCaml, and Haskell. There are even gifts for nostalgic lovers of BASIC or Pascal. This list of languages that compile to JavaScript from Jeremy Ashkenas is fairly comprehensive. Plus JavaScript cousins like TypeScript and CoffeeScript offer slightly different and improved approaches to the same game.

Where PHP wins: No client app needed

All of the talk about using the same language in the browser and on the server is nice, but what if you don’t need to use any language on the browser? What if you ship the data in HTML form? What if you’re building a spartan, static website to deliver strictly what is needed without the interactive bling? The browser pops it up, and there are no headaches or glitches caused by misfiring JavaScript threads that try to create a page on the browser from two dozen web service calls. Pure HTML works more often than anything else, and PHP is optimized to create that. Why bother with JavaScript on the browser? Build up everything on the server and avoid overloading that little browser on the little phone.

Where Node wins: Service calls are thinner than HTML-fat PHP calls

While AJAX-crazy HTML5 web apps can have too many moving parts, they are cool—and very efficient. Once the JavaScript code is in the browser cache, the only thing that moves along the wires is the new data. There’s not a ton of HTML markup, and there are no repeated trips to download the entire page. Only the data has changed. If you’re willing to put in the time to create a slick browser-side web app, there’s a big payoff. Node.js is optimized to deliver the data and only the data through web services. If your app is complex and data-rich, it’s a good foundation for efficient delivery.

Where PHP wins: SQL

PHP was built to co-exist with MySQL and its many variants, like MariaDB. If MySQL isn’t exactly right, there are other great SQL databases from Oracle and Microsoft. Your code can switch with a few changes to your queries. The vast SQL world doesn’t end at its borders. Some of the most stable, well-developed code will interface with an SQL database, meaning all that power can also be easily integrated into a PHP project. It may not be one perfect, happy family, but it’s a big one. Not only that, but the database world is slowly getting better as developers find ways to add more intelligence to the database so you don’t need to work as hard.

Where Node.js wins: JSON

If you must have access to SQL, Node.js has libraries to do that. But Node.js also speaks JSON, the lingua franca for interacting with many of the latest NoSQL databases. That’s not to say you can’t get JSON libraries for your PHP stack, but there’s something fluid about the simplicity of working with JSON when using JavaScript. It’s one syntax from browser to web server to database. The colons and the curly brackets work the same way everywhere. That alone will save you from hours of frustration.

Where PHP wins: Coding speed

For most developers, writing PHP for web apps feels faster: no compilers, no deployment, no JAR files or preprocessors—simply your favorite editor and some PHP files in a directory. Your mileage will vary, but when it comes to banging a project together quickly, PHP is a good tool to use.

Where Node.js wins: Application speed

Writing JavaScript code is a bit harder when you’re counting curly brackets and parentheses, but when it’s done, your Node.js code can fly. The callback mechanism is brilliant because it saves you from juggling the threads. The core is well-built and designed to do all that for you. Isn’t that what everyone wants?

Where PHP wins: Competition

The battle for the hearts and minds of PHP developers is still unfolding. The HHVM team and the Zend team are working hard to deliver fast code for everyone. Independent benchmarks are appearing, and everyone is pushing the code bases to the limit. This only means better performance.

Where Node.js wins: Solidarity

Do you really want two different code bases? Sure, competition helps, but fragmentation soon follows. What happens when your code runs on only one of the two? Competition doesn’t do any good if you have to spend weeks or months rewriting your code. While Node.js experienced its own splintering a few years back, with the launch of io.js, the Node.js universe has since reunited, giving it the kind of language solidarity that PHP developers may soon long for.

Where PHP wins: Basic apps

In the last few years, a few developers have started up web apps and found themselves frustrated by the sluggish behavior. The JavaScript that drives all of those moving pieces can be tens of thousands of bytes, sometimes hundreds of thousands. When all of the packets arrive, they must be parsed, compiled, and finally executed—all to deliver a few bytes like the temperature and the forecast.
The backlash against this rococo insanity can be found in the teams building static site generators (463 at this writing) and stripped-down webpages in the AMP format. PHP is a natural choice for any team that wants to concentrate the intelligence on the server so the client isn’t overburdened.

Where Node.js wins: Richness

Ludwig Mies van der Rohe, the architect of buildings, once said, “Less is more.” Robert Venturi, another architect, came along and retorted, “Less is a bore.” Our smartphones have more power than a room full of Cray computers. Our desktops have video cards with multiple fans to keep them cool during all of the processing. Why should we strip down our code and live like a Depression-era victim in a Steinbeck novel? Live it up. Big, slick websites full of JavaScript code are eye-catching, dramatic, and most of all fun. Sure, it’s kind of obscene to waste that much bandwidth on a few bits of data, but bandwidth has never been cheaper. Live a little!

Where both win: Headless

The word “headless” refers to the PHP code running on the server. Recently some of the top PHP applications like Drupal have peered across the aisle and come away amazed by the sophisticated user interfaces built by the JavaScript frameworks like React, Angular, or Vue. Instead of trying to compete with them, they’re ceding control of the client and concentrating on doing a good job with the back-end on the server.
If you’ve got quite a bit invested in PHP code running on the server, this may be a way to enjoy the best of both approaches. The old, established PHP code acts as a front door to the database, double-checking the requests, cleaning up the data, and generally providing all of the business logic. The client side is a progressive web app written with the latest JavaScript framework. When it needs information, it sends off an AJAX request to the PHP code.
This may not make sense for someone starting from scratch, but if you’ve relied upon PHP for years and you want to move forward gradually, this can be a happy compromise.

Where both win: Microservices and serverless

The rising microservice or serverless paradigms offer a way for JavaScript and PHP code to cohabit the server and get along. Both solutions split the work into dozens of smaller services or functions and these can run independently and stay in their lanes. Some parts, usually the older and most stable sections of the app, can run PHP. The other parts, often the newer ones, will be written in Node.js. The language of POST or GET can be the lingua franca that unites them all.