Spatiotemporal Collision Optimization Series · Part 2

誓约·追光者

Published on: September 22, 2020


Optimization is born from pain points (┬_┬)



Whether a pain point exists depends on what the business scenario demands; how painful it is depends on how well the current solution fits that business.



Let's start from the business scenario ① and the current solution ②, then combine ① and ② to derive the current pain point ③!



Without further ado, let's dig in.



① Business scenario:



1. We need to find pairs of records that fall within a given range in both time and space. The data can come from multiple source types: people, vehicles, codes, and so on.



2. The computation must run in parallel, processing one day of data per run, on the order of 100 million records.



② Current solution:



Expand the data 27× across the three dimensions of time, longitude, and latitude, then join.
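To make the 27× figure concrete, here is a minimal Python sketch (not the production Spark job; the function name and sample buckets are illustrative) of the neighbor-cell expansion: each record's (time, lat, lon) bucket is expanded to itself plus its ±1 neighbors in every dimension, i.e. 3 × 3 × 3 = 27 join keys per record.

```python
from itertools import product

def expand_27(time_part, lat_part, lon_part):
    """Expand one record's bucket into all 27 neighboring
    (time, lat, lon) bucket combinations (offsets -1, 0, +1)."""
    return [(time_part + dt, lat_part + dy, lon_part + dx)
            for dt, dy, dx in product((-1, 0, 1), repeat=3)]

keys = expand_27(100, 3052, 12140)
print(len(keys))  # 27 join keys per input row
```

Joining on these expanded keys guarantees that any two records whose buckets differ by at most 1 in every dimension will meet, at the cost of multiplying the data 27-fold.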



③ Current pain points:



1. 27× is simply too much. The execution plan confirms the data really is expanded 27-fold, so 100 million base records balloon to 2.7 billion, which the cluster cannot absorb.



So the question is: is there a better solution to these pain points?



I believe there is.



From pain point ③, work backwards to our expected goal ④;



From goal ④, try to derive an optimization idea ⑤;



Land idea ⑤ as the final optimization plan ⑥.



④ Expected goal



1. Split the join into stages: break the three dimensions into 2 + 1. First filter out part of the data with IN-style logic, then expand once more, then join.



⑤ Optimization idea



1. Split the 27× into a 9× step and a 3× step: handle latitude/longitude first, then time.
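A back-of-the-envelope comparison of the two approaches (the 10% survival ratio below is an assumed number, purely for illustration): the one-shot approach materializes 27× rows, while the split approach materializes 9× rows, filters them with the semi join, and only expands the survivors 3×.

```python
def one_shot_rows(n):
    # expand all three dimensions at once: 3*3*3 = 27x
    return n * 27

def split_rows(n, keep_ratio):
    # stage 1: expand lat/lon only (3*3 = 9x), then semi-join,
    # keeping only rows whose cell matches some target cell
    stage1 = n * 9
    survivors = int(stage1 * keep_ratio)
    # stage 2: expand time (3x) on the survivors only
    stage2 = survivors * 3
    return max(stage1, stage2)  # peak row count in flight

n = 100_000_000
print(one_shot_rows(n))    # 2,700,000,000 rows
print(split_rows(n, 0.1))  # 900,000,000 peak if 10% survive the semi join
```

Even in the worst case where nothing is filtered, the peak is 27× only at the very last stage; with any realistic filtering, the pipeline never comes close to it.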



⑥ Optimization plan



test: the source table holding the data to be processed



Time threshold: 10 seconds



Space threshold: 150 meters
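The bucket widths in the SQL below follow from these thresholds: time is bucketed into 10-second slots, and latitude/longitude into 0.001° cells (about 111 m of latitude per cell). A pure-Python mirror of the bucketing, with illustrative coordinates:

```python
import math

def bucket(ts, lat, lon):
    """Mirror the SQL: floor(unix_ts/10), floor(lat/0.001), floor(lon/0.001)."""
    return (math.floor(ts / 10),
            math.floor(lat / 0.001),
            math.floor(lon / 0.001))

# two points a few seconds and a few meters apart land in the same bucket
a = bucket(1600732800, 30.52001, 114.31002)
b = bucket(1600732805, 30.52008, 114.31010)
print(a == b)  # True
```

Nearby points can still straddle a bucket boundary, which is exactly why the SQL expands each bucket to its ±1 neighbors before joining.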



① Same as in the previous article



with event as (
select
aid,
data_type,
lat,
lon,
time,
time_part,
lat_part,
lon_part
from(
select
aid,
data_type,
lat,
lon,
time,
floor(unix_timestamp(time)/10) as time_part,
floor(lat/0.001) as lat_part,
floor(lon/0.001) as lon_part,
row_number() over(partition by aid,time,lat,lon order by time) as rn
from test where data_type in ('man','woman')
) total
where rn=1
)
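The row_number() over (partition by ...) ... where rn = 1 pattern above is just per-key deduplication: keep one row per (aid, time, position) key. A pure-Python equivalent of that step, on made-up sample rows:

```python
def dedup_first(rows, key):
    """Keep the first row seen for each key value,
    mimicking row_number() ... where rn = 1."""
    seen = set()
    out = []
    for row in rows:
        k = key(row)
        if k not in seen:
            seen.add(k)
            out.append(row)
    return out

rows = [
    {"aid": "a1", "time": 100, "lat": 30.52, "lon": 114.31},
    {"aid": "a1", "time": 100, "lat": 30.52, "lon": 114.31},  # duplicate
    {"aid": "a2", "time": 100, "lat": 30.53, "lon": 114.32},
]
unique = dedup_first(rows, key=lambda r: (r["aid"], r["time"], r["lat"], r["lon"]))
print(len(unique))  # 2
```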



② First expand by latitude/longitude, then use a left semi join to filter out part of the data. We avoid a plain join here because it could explode the data beyond 9×; a left semi join is equivalent to IN logic, so the result set is at most 9× the input. Once the filtering is done, expand along the time dimension.



source_pre as(
select
aid,
data_type,
lat,
lon,
time,
time_part,
lat_exp.lat_part as lat_part,
lon_exp.lon_part as lon_part
from (
select
aid,
data_type,
lat,
lon,
time,
time_part,
lat_part,
lon_part
from event
where data_type='man'
) source_original
lateral view explode(split(concat(lat_part-1,',',lat_part,',',lat_part+1),',')) lat_exp
as lat_part
lateral view explode(split(concat(lon_part-1,',',lon_part,',',lon_part+1),',')) lon_exp
as lon_part
),
target as (
select
aid,
data_type,
lat,
lon,
time,
cast(time_part as string) as time_part,
cast(lat_part as string) as lat_part,
cast(lon_part as string) as lon_part
from event where data_type='woman'
),
source as (
select
aid,
data_type,
lat,
lon,
time,
ad.time_part as time_part,
lat_part,
lon_part
from (
select
aid,
data_type,
lat,
lon,
time,
time_part,
lat_part,
lon_part
from source_pre left semi join target
on source_pre.lat_part=target.lat_part and source_pre.lon_part=target.lon_part
)source_original
lateral view explode(split(concat(time_part-1,',',time_part,',',time_part+1),',')) ad
as time_part
)
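The key property the step above relies on: a left semi join emits each left row at most once, no matter how many target rows match it, so the filtered result can never exceed the 9×-expanded input. In set terms it is an IN filter. A minimal sketch with made-up rows:

```python
def left_semi_join(left_rows, right_keys, key):
    """Keep left rows whose key appears on the right side,
    emitting each left row at most once (IN semantics)."""
    right = set(right_keys)
    return [row for row in left_rows if key(row) in right]

left = [("a1", (30520, 114310)), ("a2", (30521, 114311)), ("a3", (99999, 11111))]
# right keys may repeat; a semi join still emits each left row once
right = [(30520, 114310), (30520, 114310), (30521, 114311)]
kept = left_semi_join(left, right, key=lambda r: r[1])
print([aid for aid, _ in kept])  # ['a1', 'a2']
```

A plain inner join on the same keys would emit 'a1' twice here, which is exactly the blow-up the semi join avoids.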



③ Pair the records, compute the differences, and keep the pairs that satisfy both thresholds.



select
source_aid,
target_aid
from (select
abs(unix_timestamp(source.time)-unix_timestamp(target.time)) as time_diff,
round(6378137*2*asin(sqrt(pow(sin((source.lat*pi()/180-target.lat*pi()/180)/2),2)+cos(source.lat*pi()/180)*cos(target.lat*pi()/180)*pow(sin((source.lon*pi()/180-target.lon*pi()/180)/2),2)))) as site_diff,
source.aid as source_aid,
target.aid as target_aid
from source join target
on source.lon_part=target.lon_part and source.lat_part=target.lat_part and source.time_part=target.time_part
) diff
where time_diff<=10 and site_diff<=150
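The site_diff expression is the haversine great-circle distance in meters. The same formula in Python (using the WGS84 semi-major axis 6378137 m, which appears to be what the SQL constant intends):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, radius=6378137.0):
    """Great-circle distance in meters, same formula as the SQL site_diff."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2) - math.radians(lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return radius * 2 * math.asin(math.sqrt(a))

# 0.001 degrees of latitude is roughly one grid cell, about 111 m
d = haversine_m(30.520, 114.310, 30.521, 114.310)
print(round(d))  # 111
```

This also confirms the choice of 0.001° cells: one cell is about 111 m, so a 150 m threshold is covered by checking each cell plus its ±1 neighbors.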



Final code



with event as (
select
aid,
data_type,
lat,
lon,
time,
time_part,
lat_part,
lon_part
from(
select
aid,
data_type,
lat,
lon,
time,
floor(unix_timestamp(time)/10) as time_part,
floor(lat/0.001) as lat_part,
floor(lon/0.001) as lon_part,
row_number() over(partition by aid,time,lat,lon order by time) as rn
from test where data_type in ('man','woman')
) total
where rn=1
),
source_pre as(
select
aid,
data_type,
lat,
lon,
time,
time_part,
lat_exp.lat_part as lat_part,
lon_exp.lon_part as lon_part
from (
select
aid,
data_type,
lat,
lon,
time,
time_part,
lat_part,
lon_part
from event
where data_type='man'
) source_original
lateral view explode(split(concat(lat_part-1,',',lat_part,',',lat_part+1),',')) lat_exp
as lat_part
lateral view explode(split(concat(lon_part-1,',',lon_part,',',lon_part+1),',')) lon_exp
as lon_part
),
target as (
select
aid,
data_type,
lat,
lon,
time,
cast(time_part as string) as time_part,
cast(lat_part as string) as lat_part,
cast(lon_part as string) as lon_part
from event where data_type='woman'
),
source as (
select
aid,
data_type,
lat,
lon,
time,
ad.time_part as time_part,
lat_part,
lon_part
from (
select
aid,
data_type,
lat,
lon,
time,
time_part,
lat_part,
lon_part
from source_pre left semi join target
on source_pre.lat_part=target.lat_part and source_pre.lon_part=target.lon_part
)source_original
lateral view explode(split(concat(time_part-1,',',time_part,',',time_part+1),',')) ad
as time_part
)
select
source_aid,
target_aid
from (select
abs(unix_timestamp(source.time)-unix_timestamp(target.time)) as time_diff,
round(6378137*2*asin(sqrt(pow(sin((source.lat*pi()/180-target.lat*pi()/180)/2),2)+cos(source.lat*pi()/180)*cos(target.lat*pi()/180)*pow(sin((source.lon*pi()/180-target.lon*pi()/180)/2),2)))) as site_diff,
source.aid as source_aid,
target.aid as target_aid
from source join target
on source.lon_part=target.lon_part and source.lat_part=target.lat_part and source.time_part=target.time_part
) diff
where time_diff<=10 and site_diff<=150



That is my optimization plan. All of the SQL runs via spark.sql. Its advantages:



1. Splitting the process into stages relieves the data pressure.



That was the whole journey of this optimization, from idea to implementation. I hope you enjoyed it (≧▽≦)


