This continues my series of exercises in decorating GuzzleHttp calls. Recap:
- PHP: decorating async GuzzleHttp calls
- PHP: decorating async GuzzleHttp calls - handling exceptions
- PHP: decorating async GuzzleHttp calls - handling exceptions a different way
The treatment here is traditional:
- we get a request for data based on an ID;
- we check the cache for data stored against that ID;
- if we have it, we return the cached version;
- if not, we call the web service;
- and before returning the result, we cache it for "next time".
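Those steps are the classic cache-aside pattern. Before getting into the promise-flavoured version, here's a minimal synchronous sketch of the same flow; the `$fetchFromService` callable is a hypothetical stand-in for the real web service call:

```php
<?php

// Minimal cache-aside sketch. $fetchFromService is a hypothetical
// stand-in for the real web service call.
function getWithCache(array &$cache, string $id, callable $fetchFromService)
{
    if (array_key_exists($id, $cache)) {
        return $cache[$id]; // cache hit: return the stored copy
    }
    return $cache[$id] = $fetchFromService($id); // miss: fetch, then remember it
}

$cache = [];
$callCount = 0;
$fetch = function ($id) use (&$callCount) {
    $callCount++; // count how often we actually hit the "service"
    return "data for $id";
};

echo getWithCache($cache, "001", $fetch) . PHP_EOL; // goes to the service
echo getWithCache($cache, "001", $fetch) . PHP_EOL; // comes from the cache
```

Both calls return the same data, but the service only gets hit once. The rest of the article is this flow, spread across a cache service and a decorator.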
First up, a dead simple cache service:
class CachingService {
public $cache;
public function __construct(){
$this->cache = [];
}
public function contains($id){
return array_key_exists($id, $this->cache);
}
public function get($id){
echo "Getting $id from cache" . PHP_EOL . PHP_EOL;
return $this->cache[$id];
}
public function put($id, $value){
echo "Putting $id into cache" . PHP_EOL . PHP_EOL;
return $this->cache[$id] = $value;
}
}
In reality we'd use Redis or something like that, but it's the interface that matters here, not the implementation. Note I'm also outputting a message when the cache methods are hit, just so we can tell what's going on when the code is running.
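To labour the "interface, not implementation" point: anything with the same contains/get/put methods could be dropped in. Here's a hypothetical alternative of my own devising (not from the article's code), an in-memory store with a crude TTL; a Redis-backed version would present the same three methods:

```php
<?php

// A hypothetical drop-in alternative to CachingService: same
// contains/get/put interface, but entries expire after a TTL.
class ExpiringCachingService
{
    private $cache = [];
    private $ttlSeconds;

    public function __construct($ttlSeconds = 60)
    {
        $this->ttlSeconds = $ttlSeconds;
    }

    public function contains($id)
    {
        if (!array_key_exists($id, $this->cache)) {
            return false;
        }
        if (time() > $this->cache[$id]['expiresAt']) {
            unset($this->cache[$id]); // expired: evict it and report a miss
            return false;
        }
        return true;
    }

    public function get($id)
    {
        return $this->cache[$id]['value'];
    }

    public function put($id, $value)
    {
        $this->cache[$id] = [
            'value' => $value,
            'expiresAt' => time() + $this->ttlSeconds
        ];
        return $value;
    }
}

$cache = new ExpiringCachingService(60);
$cache->put("001", ["body" => "stuff"]);
var_dump($cache->contains("001")); // bool(true)
```

Nothing downstream would need to change if this was swapped in, which is the whole point of keeping the cache behind that small interface.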
Next I have a CachingGuzzleAdapter which is the decorator for a GuzzleAdapter. This has a chunk of code in it, so I'll break it down:
class CachingGuzzleAdapter {
private $adapter;
private $cache;
public function __construct($adapter, $cache) {
$this->adapter = $adapter;
$this->cache = $cache;
}
// ...
}
No surprises here. The decorator takes the adapter it's decorating, as well as an instance of the cache service it'll be talking to. I probably shoulda called that $cachingService, now that I look at it. We're taking the same approach here as we did with the other decorators (in the earlier articles): all the guzzling is offloaded to the adapter here; this class only deals with the caching side of things.
To that end, our get method is very simple:
public function get($id){
if ($this->cache->contains($id)) {
return $this->returnResponseFromCache($id);
}
return $this->returnResponseFromWebService($id);
}
If it's in the cache: use that; if not: go get it. The details of how to get the thing from cache or how to get things from the adapter are hidden in helper functions.
Here's the tricky bit: getting the response back from cache:
private function returnResponseFromCache($id) {
$p = new Promise(function() use (&$p, $id){
echo "GETTING FROM CACHE" . PHP_EOL;
$cachedResult = $this->cache->get($id);
$newResponse = new Response(
$cachedResult['status'],
$cachedResult['headers'],
$cachedResult['body']
);
$p->resolve($newResponse);
});
return $p;
}
Here we:
- get the data from the cache. This method only gets called if the data is there, so we can dive straight in;
- create a new Guzzle Response object, and put the data back into it;
- resolve a promise with that as its value. We need to return a promise here because that's what the get method we're decorating returns.
I'm not entirely comfortable with the self-referential bit around $p, but that was from the docs as the recommended way of doing this. So be it.
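For what it's worth, the mechanics of that self-reference are plain PHP, nothing Guzzle-specific: the closure is created before `$p` has been assigned, so it must capture `$p` by reference to see the promise it ends up belonging to. A contrived pure-PHP sketch of the same scoping behaviour:

```php
<?php

// The closure is defined before $p has a value. Capturing $p by
// reference means that by the time the closure actually runs, it
// sees whatever $p was subsequently assigned.
$makeSelfAware = function () use (&$p) {
    return "I can see: " . $p;
};
$p = "the value assigned after the closure was created";

echo $makeSelfAware() . PHP_EOL;
```

A by-value `use ($p)` would have baked in null instead. As an aside, guzzlehttp/promises also ships a FulfilledPromise class that wraps an already-known value, which would avoid the self-reference; the lazy Promise form used above has the advantage of deferring the cache read until wait() is actually called.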
If the stuff wasn't in cache, we need to make an actual web service call:
private function returnResponseFromWebService($id){
$response = $this->adapter->get($id);
$response->then(function(Response $response) use ($id) {
echo "PUTTING IN CACHE" . PHP_EOL;
$detailsToCache = [
'status' => $response->getStatusCode(),
'headers' => $response->getHeaders(),
'body' => $response->getBody()->getContents()
];
$this->cache->put($id, $detailsToCache);
});
return $response;
}
This is straightforward:
- make the call;
- in its resolution handler grab the data and bung it in the cache.
- Note that I'm only caching the significant bits of the response, not the response itself. My original version of this cached the entire response object: because the cache is just an in-memory array I can do that; in reality I'd need to serialise it somehow, so I wanted to make a point of only caching the data here.
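To illustrate that last point: the array of status / headers / body survives a serialisation round trip, which is exactly what a real cache backend would force on us, whereas a full Response object (with its stream-backed body) would not serialise so happily. A quick sketch using json_encode (serialize() would do just as well):

```php
<?php

// Only plain data goes into the cache, so it survives being
// serialised, e.g. for storage in Redis or on disk.
$detailsToCache = [
    'status' => 200,
    'headers' => ['Content-Type' => ['application/json']],
    'body' => '{"id":"001"}'
];

$stored = json_encode($detailsToCache);  // what the cache backend would hold
$restored = json_decode($stored, true);  // back to an array on the way out

var_dump($restored == $detailsToCache); // bool(true)
```

The sample status / headers / body values here are made up for the demo; the decorator builds the real ones from the Response it receives.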
That's it!
Of course I need to test this, so I wrote a test rig, similar to the ones I've already done:
$endPoint = "http://cf2016.local:8516/cfml/misc/guzzleTestEndpoints/getById.cfm?id=";
$guzzleAdapter = new GuzzleAdapter($endPoint);
$cache = new CachingService();
$adapter = new CachingGuzzleAdapter($guzzleAdapter, $cache);
printf("Getting not-yet cached results @ %s%s", (new DateTime())->format("H:i:s"), PHP_EOL . PHP_EOL);
makeRequests($adapter);
echo "===================================================" .PHP_EOL . PHP_EOL;
sleep(10);
printf("Getting results again (from cache) @ %s%s", (new DateTime())->format("H:i:s"), PHP_EOL . PHP_EOL);
makeRequests($adapter);
function makeRequests($adapter){
$startTime = time();
$ids = ["001", "002"];
$responses = [];
foreach ($ids as $id) {
echo "Making requests" . PHP_EOL . PHP_EOL;
echo "Requesting: $id" . PHP_EOL;
$responses[] = $adapter->get($id);
echo "Requests made" . PHP_EOL . PHP_EOL;
}
echo "Fetching responses" . PHP_EOL . PHP_EOL;
foreach ($responses as $response){
$body = (string) $response->wait()->getBody();
echo "Body: $body" . PHP_EOL . PHP_EOL;
}
echo "Responses fetched" . PHP_EOL . PHP_EOL;
$duration = time() - $startTime;
echo "Process duration: {$duration}sec" . PHP_EOL . PHP_EOL;
}
This just:
- calls the adapter for a coupla IDs,
- waits 10sec
- and gets the same data again.
For reference, the endpoint being hit is this CFML stub, which just sleeps for 5sec before responding:
<cfscript>
cfcontent(type="application/json");
writeLog(file="testApp", text="[ID: #URL.id#] request received");
sleep(5000);
writeOutput(serializeJson({"id"=URL.id, "retrieved"=now().dateTimeFormat("HH:nn:ss.lll")}));
writeLog(file="testApp", text="[ID: #URL.id#] response returned");
</cfscript>
It'll only be 5sec in total, not 10sec, because the calls are being made asynchronously, remember?
The second round of calls should take no time at all because the adapter will be able to get the results from cache. In theory. Let's see:
>php testCachingAdapter.php
Getting not-yet cached results @ 09:24:25
Making requests
Requesting: 001
Requests made
Making requests
Requesting: 002
Requests made
Fetching responses
PUTTING IN CACHE
Putting 001 into cache
PUTTING IN CACHE
Putting 002 into cache
Body: {"retrieved":"09:24:30.453","id":"001"}
Body: {"retrieved":"09:24:30.457","id":"002"}
Responses fetched
Process duration: 5sec
===================================================
Getting results again (from cache) @ 09:24:40
Making requests
Requesting: 001
Requests made
Making requests
Requesting: 002
Requests made
Fetching responses
GETTING FROM CACHE
Getting 001 from cache
Body: {"retrieved":"09:24:30.453","id":"001"}
GETTING FROM CACHE
Getting 002 from cache
Body: {"retrieved":"09:24:30.457","id":"002"}
Responses fetched
Process duration: 0sec
>
Hurrah! As predicted the first call takes 5sec; the second takes 0sec. And the intermediary telemetry confirms that the cache is being used correctly. Also note the timestamps on the returned data reflect when the initial requests were made, not when the second round of calls was made.
That was pretty easy this time. It took me a while to find the code for returning a resolved promise with the Response value in it, but wiring everything up was simple.
As an exercise I'm now gonna wire up a CachedLoggedErrorMappedGuzzleAdapter. Or perhaps a LoggedErrorMappedCacheGuzzleAdapter. Not sure. But it's all a difference in wiring, and that's it really. I'll write that up next.
Righto.
--
Adam